Meta, the parent company of Facebook and Instagram, has detailed its approach to combating the misuse of generative artificial intelligence (AI) across its platforms ahead of the European Parliament elections in June 2024.
In a recent blog post, Marco Pancini, Meta’s head of EU Affairs, emphasized that the company’s established “Community Standards” and “Ad Standards” principles will extend to AI-generated content. This includes subjecting AI-generated content to review and rating by independent fact-checking partners, with specific attention to identifying manipulated or altered media.
Meta has also announced plans to introduce features for labeling AI-generated content originating from tools by Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Users will be prompted to disclose when they share AI-generated video or audio, with potential penalties for failing to do so.
Additionally, advertisers running political, social, or election-related campaigns that use AI-altered content must disclose that usage. Enforcement of these rules has already resulted in the removal of numerous non-compliant ads across the European Union.
This initiative aligns with a broader industry trend, as evidenced by Google’s decision to restrict AI-generated responses related to elections on its platforms. Companies like OpenAI have also implemented internal standards to monitor and mitigate AI-related interference in global elections.
Major players in the AI industry, including Microsoft, Google, and Meta, have also undertaken a collective effort to address concerns about AI election interference through a formal pledge.
Governments have likewise taken proactive measures against the misuse of AI in elections: the European Commission has proposed election security guidelines, and the United States has banned AI-generated voices in robocalls to curb the spread of misinformation and protect democratic processes.