In a series of threaded posts this afternoon, Instagram chief Adam Mosseri said users shouldn't trust the images they see online, because AI is "clearly creating" content that's easily mistaken for reality. That means users need to consider the source, he says, and social platforms need to help with that.
"Our role as internet platforms is to label content generated as AI as best we can," Mosseri wrote, while acknowledging that those labels will miss "some content." Because of that, he says, platforms "must also provide context about who is sharing" so users can decide how much to trust what they post.
Just as it's wise to remember that chatbots will confidently lie to you before you trust an AI-powered search engine, checking whether a claim or image comes from a reputable account can help you weigh its veracity. At the moment, Meta's platforms don't offer much of the context Mosseri posted about today, though the company recently hinted at big changes coming to its content rules.
What Mosseri describes sounds similar to user-driven moderation efforts like X and YouTube's Community Notes and Bluesky's custom moderation filters. It's unclear whether Meta plans to introduce anything like those, but then again, the company has been known to borrow a page from Bluesky's book.