Twitter is looking to increase the value of its underperforming Community Notes with a new feature that allows Community Notes contributors to add a contextual note to an image in the app. Twitter’s system will then attach that note to matching re-shares of the same image across all Tweets.
[Image: example of a Community Note attached to an image]
As seen in this example, when a Community Notes contributor marks an image as ‘questionable’ and adds an explanatory note to it, that same note will be attached to all other Tweets using the same image. As per Twitter:
“If you’re a contributor with a Writing Impact of 10 or above, you’ll see a new option on some Tweets to mark your notes as ‘About the image’. This option can be selected when you believe the media is potentially misleading in itself, regardless of which Tweet it is featured in.”
Community Notes attached to images will include an explainer that clarifies that the note is about the image, and not about the contents of the Tweet. The option is currently only available for still images, but Twitter says that it’s hoping to expand it to videos and Tweets with multiple images soon.
It’s a decent update which, as Twitter notes, will become increasingly important as AI-generated visuals spark new viral trends across social apps. After all, who could forget amazing images like the Pope in Balenciaga drip? That’s a more light-hearted example of how such alerts could help shed light on the true origins of a picture within a Tweet itself.
More recently, we’ve also seen how AI-generated images can be harmful: a digitally created picture of an explosion outside the Pentagon sparked brief online panic before it was confirmed to be fake. That incident likely prompted Twitter to take action on this front, and using Community Notes for this purpose could be a good way to apply such context to AI-generated photos at scale.
Community Notes, for all its benefits, remains a flawed system for addressing online misinformation. One of its key issues is that notes can only be applied after images have been shared, and given the real-time nature of Tweets, the delayed turnaround means that a Tweet like the Pentagon hoax could gain significant exposure before a note is appended.
It would likely be faster for Twitter to take on moderation itself in extreme cases and remove potentially harmful content outright. However, that runs counter to Musk’s free-speech-aligned approach, which puts the decision in users’ hands to determine what is and isn’t correct. That ensures content decisions are dictated by the Twitter community, not its management, while simultaneously reducing operational costs. In the end, this is still a good addition to the broader Community Notes process, which will only become more important as the hype around generative AI continues to rise.