Should AI-generated content be flagged? European Commission Vice-President calls for stricter regulation of AI media

Věra Jourová, European Commission Vice-President for Values and Transparency, is calling for stricter requirements for identifying AI-generated content.

Content created by artificial intelligence “needs to be recognized and clearly labeled” – at least according to Věra Jourová, European Commission Vice-President for Values and Transparency, who is calling for a broader crackdown on AI media.

Jourová called for the labeling of AI content at a meeting today with signatories to the controversial EU code of conduct on disinformation. Designed to combat disinformation online, the code of conduct, which includes “44 commitments and 128 specific measures,” was signed by its 44 signatories in June 2022.

These signatories include Adobe, Clubhouse, Twitch, TikTok, Google, Meta, Kinzen (which Spotify, a dedicated EU lobbyist, acquired in October 2022), and a number of comparatively unknown players. Each of the companies and organizations that signed the code of conduct has selected and accepted various “commitments” (the substance of which is set out in the voluminous document), ostensibly aimed at preventing the spread of disinformation.

And while the detailed code of conduct lacks a single commitment specific to artificial intelligence, it goes without saying that the technology’s reach and adoption have increased dramatically over the past year. Consequently, Jourová disclosed ahead of today’s meeting that she believes the code of conduct “should also address emerging threats such as misuse of generative AI.”

In keeping with this remark, Jourová emphasized in a tweet this morning that among the signatories’ “principal responsibilities” is “managing the risks of AI.”

When it comes to generative AI “like ChatGPT” – whose parent company could exit the EU altogether as lawmakers continue to work on the lengthy “AI law” – the government official shared that “these services cannot be used by malicious actors to generate disinformation,” adding that “such content must be recognizable and clearly labeled to users.”

“Image generators can create authentic-looking images of events that never happened,” Jourová said during a short speech. “Speech generation software can mimic a person’s voice from a sample of a few seconds. The new technologies also pose new challenges in the fight against disinformation.

“Signatories who have services that have the potential to spread AI-generated disinformation should, in turn, adopt technologies to recognize such content and clearly label it to users,” the 58-year-old continued, also addressing the imminent implementation of the Digital Services Act. “I’ve said many times that our primary responsibility is to protect freedom of expression. But when it comes to AI production, I see no right to freedom of expression for the machines.”

Although it falls outside the “disinformation” category, music of all types (authorized and unauthorized releases alike) would be affected by a regulatory requirement mandating the identification of media generated in whole or in part by AI.

Notwithstanding the relatively limited scope of the code of conduct on disinformation, time will tell whether the potential rule finds its way into the aforementioned AI law, which is due for another preliminary vote later this month. As the legislation continues to take shape, quite a few artists and industry professionals are speaking out against the perceived pitfalls of artificial intelligence, whose music-specific implications have led some financial pundits to downgrade Warner Music Group’s stock.