For six months, listeners of a Sydney radio station tuned in to hear Thy, unaware that their host wasn’t human. Australian Radio Network’s (ARN) CADA station, broadcasting across western Sydney and online, deployed an AI-generated host for its “Workdays with Thy” slot without any disclosure.
Thy, an artificial host created using ElevenLabs’ AI audio platform, presented four hours of hip-hop each weekday. Her voice and likeness were reportedly cloned from a real employee within ARN’s finance team.
The station’s website simply stated that Thy would be playing the “hottest tracks from around the world,” leaving listeners in the dark about the AI’s true nature.
An ARN spokesperson acknowledged the trial, saying the company was exploring how new technology could enhance the listener experience. However, they also emphasized the importance of real personalities in driving compelling content.
The Australian Communications and Media Authority (ACMA) confirmed that there are currently no restrictions on the use of AI in broadcast content, nor any obligation to disclose its use. Teresa Lim, vice president of the Australian Association of Voice Actors, said CADA’s failure to disclose its use of AI reinforces how necessary legislation around AI labelling has become.
According to the Australian Financial Review, “Workdays with Thy” reached an estimated 72,000 people in the past month’s ratings.
The incident raises broader questions about transparency and authenticity in media. Lim stressed that truth and disclosure are fundamental to broadcasting, particularly where AI is involved.
Lim also noted the potential impact on minority groups, stating, “There are a limited number of Asian-Australian female presenters who are available for the job, so just give it to one of them. Don’t take that opportunity away from a minority group who’s already struggling.”
CADA isn’t the first station to use an AI host, but others, such as Disrupt Radio, have been transparent about doing so. As AI technology advances, discussions around its regulation and ethical use in media are becoming increasingly urgent.
ACMA is currently developing policies to ensure the safe and responsible use of AI, including considering mandatory transparency measures in high-risk settings.