Twitter is part enlightening and part terrifying for anyone who has borne witness to its true dynamism over the years. Those who Tweet the ‘wrong’ things soon discover that it can be a cruel and unforgiving platform, regardless of intent. Some use this to their advantage, with many media personalities and politicians now posting divisive comments as a way to boost their own presence. For others, though, the Twitter backlash can prove brutal, which is why Twitter has been working to provide more ways for users to control their in-app experience, limiting negative interactions where possible.

Filtered Response

Twitter could be releasing another new element on this front. As spotted by reverse engineering expert Jane Manchun Wong, Twitter is currently developing a new ‘Reply Filter’ option, which would enable users to reduce their exposure to Tweets that include ‘potentially harmful or offensive language’, as identified by Twitter’s detection systems.

As noted in its description, the filter would only stop you from seeing those replies – everyone else would still be able to view all the replies to your Tweets. Even so, it could be another way to avoid unwanted attention in the app, making the experience more enjoyable for those who are simply fed up with random accounts pushing all kinds of crap their way.

The system would presumably utilize the same detection algorithms as Twitter’s offensive reply warnings, which it re-launched in February last year after shelving the project during the 2020 US election. Twitter says that these prompts have proven effective, with users opting to change or delete their replies in 30% of cases where the alerts were shown. That suggests that many Twitter users don’t intentionally seek to upset or offend others with their responses, and that even a simple pop-up can have a significant effect on platform discourse, improving Tweet engagement.

Then again, it also means that around 70% of people either didn’t agree with Twitter’s automated assessment of their comment, or simply weren’t concerned about offending anyone. That rings true – as noted, Twitter can still be a pretty unrelenting platform for those in the spotlight.

The Wrap

Still, prompting a revision or deletion in 30% of flagged replies is a significant dent in potential Tweet toxicity, and this new option could build on that in another way, utilizing the same identifiers and algorithms. As such, it’s a worthy experiment from Twitter at the very least, providing yet another way for users to control their in-app experience.

There’s no word on an official release just yet, but based on Jane Manchun Wong’s track record, it’s highly likely that the option will arrive before the holidays kick into high gear.



Sources 

https://bit.ly/3QZ70ev