
Facebook and Instagram to Label Digitally Altered Content as ‘Made with AI’

Meta Announces Changes to Policies on Digitally Created and Altered Media

Meta, the parent company of Facebook and Instagram, has unveiled significant updates to its policies on digitally created and altered media ahead of upcoming elections. The changes will test the company's ability to combat deceptive content generated with artificial intelligence (AI).

In a blog post, Monika Bickert, Meta's vice president of content policy, announced that the social media giant would begin applying "Made with AI" labels to AI-generated videos, images, and audio posted on Facebook and Instagram starting in May. The expanded policy covers a broader range of doctored media than the company's previous rules, which addressed only a narrow subset of such content.

Bickert explained that Meta will also introduce separate, more prominent labels for digitally altered media that presents a high risk of materially deceiving the public, regardless of whether AI or other tools were used to create it. These "high-risk" labels will begin appearing immediately.

The shift marks a move away from removing a limited set of manipulated posts toward keeping such content accessible while giving viewers essential information about how it was made.

Previously, Meta announced a plan to detect images generated using third-party generative AI tools by incorporating invisible markers into the files, although no start date was specified at the time.

A Meta spokesperson confirmed that the labeling policy would extend to content posted on Facebook, Instagram, and Threads. Other Meta services, such as WhatsApp and Quest virtual-reality headsets, operate under different guidelines.

The policy changes come ahead of the US presidential election in November, in which generative AI technologies could play a significant role. Political campaigns in various regions have already begun deploying AI tools, testing the limits of guidelines set by Meta and by leading AI developer OpenAI.

In February, Meta's oversight board criticized the company's existing rules on manipulated media as "incoherent" after reviewing a video posted on Facebook last year that misleadingly altered real footage of Joe Biden. The board recommended that Meta's manipulated-media policy apply to all misleadingly altered content, whether or not AI was involved, since such content can be highly deceptive either way.