When I think about the growing role of AI at radio or anywhere else, I think about the self-serve lines at the supermarket. I’ve learned to prep for those lines a lot better, and yet in any trip through self-serve, I will usually experience some snag requiring human intervention. That person is usually busy with somebody else, but from the store’s POV, the process is probably successful because one person is replacing several staffed express lanes.
A few years ago, a radio person messaged me to let me know they had been fired along with several co-workers. The chatbot had several suggested responses for me: “That’s great!,” “Awesome!,” and a thumbs up. That’s why I don’t let the chatbot tell me what to write next. Often, when I text, AI does a great job of guessing the next word. Sometimes, though, I have what seems like an entirely instinctive next word that needs to be typed out in full. AI can’t do everything. Yet.
Recently, I was listening to the stream of a major-market radio station. I tuned in during a stopset and heard two songs inserted as streaming fill. Over the course of the next hour, I heard those same two songs in the next stopset, and again in the next. In other words, two gold titles, perhaps judged not suitable for regular programming, had played in 25-minute rotation, more than twice as fast as the hottest current.
Over the years, I’ve heard a lot of doozies from streaming stopset replacement. I’ve heard one major station where music sweeps were interrupted by extra commercial breaks in the middle of songs. I’ve also heard a stopset that included a nearly regular spotload followed by a 3-5 minute stream-only song, such that a break that was merely long on the air went into extra innings via the stream. The stream caught up at the end of the hour, meaning that an over-the-air song was bumped for a fill song.
There are other examples of radio stations not being entirely in control of their technology. I know a large-market station where the break going into a stopset is regularly cut off — that’s another regular occurrence in streaming, but this is over the air. The station in question no longer has a break between the final song and the stopset. They have changed their formatics to accommodate automation.
As somebody who schedules music for a living, I spend a lot of time editing song choices that are within the letter but not the spirit of the rules that were set. There are definitely some radio people who would tell me I just need to reset the rules, until I have a log that needs no editing. But one of the stations I sweat over had an 11 share in a very competitive market last week. I would also argue that in a world of nine-minute listening spans, a lot of stations’ logs could use a little more attention.
As we consider the role of AI in radio, I believe all of the following things to be true:
- There are people who are absolutely sincere in their belief that AI can upgrade our current air product, which is often carelessly voice-tracked or not hosted at all;
- Radio should co-opt new technology rather than fear it;
- Radio is falling down on the use of the technology it has. Streaming substitution isn’t exactly “low stakes,” particularly if it in any way plays into the average nine-minute listening occasion. But it comes with a lot fewer issues than trying to generate an AI air personality.
Whenever I’ve reached out to a client about issues on their stream, it has usually led to me being sent to an IT person with a battery of questions about my browser and my streaming software. (No one has yet asked if I’ve tried rebooting, although I have been told to clear my cache.) Often the suggestion is that I am the only one having this problem, although the “only one person” argument is cold comfort if that person has a PPM meter. Perhaps AI will help with this problem. But perhaps it’s time for fill songs to become part of the programming log, rather than handled at the server level. The admirable intent of providing a customized experience has often succeeded only in making a six-minute stopset somehow even worse.
I’ve written about radio’s stopset issues for years. You may think that I’m just seizing on a novel way to harp on it again. But as the other automation issue shows, streaming substitution isn’t the only place where technology lets us down. It’s merely the mistake I’m most likely to recognize on a regular basis from my vantage point outside the walls of a radio station. We are in a loop: streaming is still not big enough to become a digital priority, but streaming is also not big enough because it is not a digital priority.
Any discussion of AI at radio or in any field now leads back to the meme, already quoted on convention panels and everywhere else, about employees being replaced not by AI, but by the person who understands AI. So how can the relatively simple task of streaming stopsets be handled by somebody who understands radio programming, and why is hearing the same song or PSA over and over still a problem?
At this moment, the creation of “AI Ashley” at KBFF (Live 95.5) Portland is the product of open experimentation among the real-life Ashley Z, her program director, her group programmer, and likely the Futuri RadioGPT team. As the use of AI personality proliferates, along with the number of vendors offering it, it’s fair to wonder who will have their hands on the controls in the case of a midnight malfunction. A customer-service rep? A chief engineer shared with six other local stations? (To be fair, some of the people I’ve encountered for streaming issues are likely far more interested in playing with AI.)
Over the last 15 years, I’ve seen the local programmer’s job come to revolve much more around technical tasks, particularly involving automation. When I first noticed that, unpacking a station’s syndicated evening show had just replaced critiquing a night jock. Now, it goes a lot further. In many cases, we have made overseeing a station’s on-air product into an engineering job.
Radio people often feel as if their bosses (or former station owners) “don’t care” or have thrown their hands up. That reflects our often-overextended management and airstaffs, but it perhaps also reflects how we have given people at both radio stations and in our streaming infrastructure the tasks they are least inclined toward or suited for.
Perhaps radio station clusters need two different programming positions — a creative director who can turn their full attention back to content and a programming technology person who can execute programming decisions through the prism of the station’s available tools. Because now, that prism is often more like a funhouse mirror. AI is just one place where those two jobs would work together.
At some level, radio’s current travails — including not being able to support a full local airstaff to begin with — come back to its failure to achieve greater dominance in streaming. Maybe 75% of that is due to spotload, but at least 25% comes down to those bugs that make the user experience worse, and those are so much more aggravating for being unnecessary. Being the person who understands new technology should mean taking charge of the AI process now, but it also means radio understanding the importance of wresting control of the small and large places where technology is currently letting us down.
This story first appeared on radioinsight.com