Given the rapid surge in the popularity and use of generative AI tools, and the subsequent flood of hyper-realistic AI-generated images, this news makes a lot of sense. Twitter is currently working on a new element of its Community Notes contextual information feature that would enable contributors to add notes to images attached to tweets. Once applied, a note would also be appended to all other versions of that image shared across the app.

Is There Even a Difference?

An example shared by app researcher Nima Owji shows that, soon, Community Notes contributors will be able to add specific notes about attached images, with a selectable checkbox to apply the same note to all other instances of that image across the app. That could come in handy for cases like this.


The image of the Pope in a very modern-looking coat is very believable, even though it's completely fake – it was created with the latest version of Midjourney. There have also been AI-generated images of former President Donald Trump being arrested, along with various other unreal visuals of celebrities shared via tweet, which look rather convincing. Remember, they're not real – and having a contextual marker on each, like a Community Note, could help to quash misinformation and potential concerns before they become issues.

Community Notes has emerged as a key foundation of Elon Musk's 'Twitter 2.0' push, with Musk hoping that crowdsourced fact-checking can provide an alternative means to let the people decide what should and shouldn't be allowed in the app. That could lessen the moderation burden on Twitter management, while also using the platform's millions of users to detect and dispel untruths, thus diluting their impact. However, crowdsourcing facts does come with some risks, with some already noting that Community Notes are, at times, used to essentially censor contradictory opinions by selectively fact-checking certain elements of tweets, thus casting doubt on the whole post.

There will likely be varying opinions on such applications, and Community Notes could be weaponized against opposing viewpoints – though Twitter is working to build in additional safeguards and approval processes to address this. If the goal is to let users weigh in on what should and shouldn't be seen, giving every user a simple way to do so – such as downvotes used as a ranking factor – would arguably be more representative. Community Notes, like downvotes, can only be applied in retrospect anyway, so the two function similarly in this context.

The Wrap

The main concern with downvotes would be that bot armies might weaponize them – though they could also provide another means to crowdsource user input on a broader scale. Musk would likely make such a feature a Twitter Blue exclusive anyway, which would limit its usefulness. There are similar concerns that the limited pool of Community Notes contributors could lessen the feature's value, in a broadly representative sense. Still, the Community Notes team is continually improving the process, and notes on images could prove to be a valuable addition.

Sources

http://bit.ly/3ziWwjd