Global Privacy Bodies Unite to Curb Harmful AI Imagery

Last month, 61 data protection authorities released a Joint Statement on AI-Generated Imagery.

The Joint Statement addresses concerns about artificial intelligence (AI) systems that can produce lifelike pictures and videos of identifiable people without their knowledge or consent.

The Joint Statement expresses special concern about the risks that children and other vulnerable groups may face, including the potential for cyberbullying and exploitation.

“While AI can bring meaningful benefits for individuals and society, recent developments – particularly AI image and video generation integrated into widely accessible social media platforms – have enabled the creation of non-consensual intimate imagery, defamatory depictions, and other harmful content featuring real individuals. We are especially concerned about potential harms to children and other vulnerable groups, such as cyber-bullying and/or exploitation,” stated the Joint Statement.

The Joint Statement called upon organisations to engage proactively with regulators, implement robust safeguards from the outset, and ensure that technological advancement does not come at the expense of privacy, dignity, safety, and other fundamental rights – particularly for the most vulnerable of our global society.

Since the end of December 2025, Grok, X’s AI chatbot, has faced widespread backlash after many users exploited it to produce thousands of unauthorized sexual images and videos of real people based on ordinary photographs.

This has raised serious concerns about how AI tools can be misused to violate others’ privacy, dignity, and personal safety.

The Joint Statement is a move to regulate and curb these harmful uses of generative AI tools and to provide stronger safeguards around AI-generated content. It has already been signed by authorities across Europe, Canada, South Korea, the United Arab Emirates, and Mexico, among others, reflecting a growing international trend toward establishing accountability in the use of AI technologies.