
AI Transparency Coalition
Statement on AI Transparency
The spread of AI-generated audio presents serious risks to individuals, businesses, and democratic societies. Without clear transparency measures, this technology can be exploited for harmful purposes, including:

Manipulation and harm caused by deceptive AI voice agents.
Diversion of revenue intended for human music and podcast creators to AI-generated content.
Misrepresentation of AI-generated audio as human-created content.
Financial fraud via AI-powered voice scams.
Erosion of democratic discourse through AI-generated deepfakes.
We believe transparency in AI audio is non-negotiable. We urge all stakeholders to commit to labeling AI-generated audio so that people can clearly distinguish between human-created and AI-generated content. Implementing labeling will reduce these risks and foster a more trustworthy digital ecosystem.
