The Listener’s Bill of Rights in the Age of AI

A proposed industry guide for AI in audio

As podcast creators and producers, we’re optimistic about the introduction of new audio production tools powered by artificial intelligence (AI). This software improves workflows, cuts down on hours spent editing tape, and opens up new avenues for creativity. However, because podcasting is a medium in which listeners cannot see the speaker, we are responsible for disclosing when and how AI is used to alter and edit speech.

Given that there are currently no industry-wide standards for using AI in audio, it is our goal to establish those standards and create an ecosystem in which the public is well-protected. These guidelines will naturally change over time as technology and consumers evolve. But starting this conversation ourselves is the best way to create a dialogue, promote transparency, and avert pre-emptive government regulation that may stifle future innovation.

With that in mind, we propose these listener’s rights:

  • The right to know when a host’s or guest’s voice has been synthesized or cloned using AI tools. Here, the words “synthesized” and “cloned” refer to the process of artificially rendering a voice to sound like a specific person based on that person’s vocal samples, with their full consent. Whether it’s used for pick-ups to correct errors and stumbles, a promo or ad, or the narration for an entire scripted episode, voice cloning in podcasting should be disclosed and consent of the voice owner should always be obtained. Cloning should not be used to imply someone conducted an interview or had a conversation that they did not have.
  • The right to know when a speaker’s words have been altered using AI tools for the sake of clarity or accuracy. AI makes it possible for podcast editors to seamlessly correct a speaker’s factual error, generate a clean word or phrase to cover a digital glitch during recording, or replace a mispronounced word with a synthetic piece of audio. This results in a higher-quality, more accurate product, but listeners should still be made aware that words have been manipulated. On the other hand, trimming stumbles and ‘ums’ out of speech, or changing the order of words to improve the flow of conversation, would not require disclosure.
  • The right to know when a large language model (LLM) such as ChatGPT has been used to generate a significant portion of a podcast script. The use of text-generating AI tools has become so common in the workplace it would be impossible to disclose every time someone involved with the production of a podcast used one. The word “significant” is therefore up for interpretation. But we believe the use of a full script generated by an LLM should be disclosed to the audience. Listeners have a reasonable expectation that the words and ideas expressed in podcasts have been attended to by human beings. If this is not the case, it should be noted.
  • The right to know when any voice heard in the content of a podcast does not come from the human associated with it, but has been generated using a text-to-voice or voice-to-voice AI platform. Digital voice generation has improved to a point where speech entirely generated by AI is virtually indistinguishable from human voices. Rather than risk audiences thinking a computer voice is a human’s, listeners should be made aware when they are hearing an entirely digitally generated voice. However, vocal filters that augment vocal tone or reduce background noise, used during recording or in post-production, are not new to podcasts or audio media and do not require disclosure.

An adequate definition of disclosure may differ between creators and companies, and the diversity of styles and formats within podcasting will likely impact how people choose to communicate with their audiences. But for the purposes of this document, here are some examples of useful disclosure language:

  • Some of the voices featured in this episode were created and/or modified using AI. We have full permission and consent from all parties involved.
  • The script for this podcast was written by generative AI tools.
  • This episode contains vocal audio that does not belong to a specific person, but is entirely generated by AI.

Finally, we the undersigned commit to not using AI-powered tools to do any of the following:

  • Purposefully misrepresent a person in order to defame them or deceive listeners.
  • Generate a synthesized vocal performance to deceive audiences into believing someone has appeared on the podcast who has not.
  • Use an AI-generated voice clone of anyone without their explicit consent and cooperation.
  • Create fake news stories or deceptively simulate real-world events.
  • Upload a raw transcript of a guest’s interview into an LLM without the guest’s consent.
  • Intentionally replicate any copyrighted material without express permission of the original creator.
  • Generally take actions that are shady or approaching Bond villain behavior.

AI tools can be incredibly useful. The people designing these tools can and should take measures to protect against malicious actors, but ultimately it’s up to us, the creators who use the tools, to establish our own set of ethical standards.

Postscript: No AI was used in the writing of this Listener’s Bill of Rights.

Sign the Pledge

Do the principles outlined in this proposal align with your commitments to your listeners? Sign up below to add your name to the pledge.

Questions, concerns, or ideas? Contact us.

18 people have signed this pledge


Bob, The Listening Tube
2 months ago
I have no need for, nor intention to use, artificial intelligence during the planning, writing, producing or editing of The Listening Tube podcast
Sarah, Alumni Podcasts
2 months ago
The authenticity of voice is crucial in the podcasts we produce for schools, universities and non profits. I welcome industry guidelines to keep listeners correctly informed.
Kathy, Washington Gardener magazine
2 months ago
The GardenDC podcast is created entirely organically.
Shyno, Pod Mirror
2 months ago
Thanks for this