You’ve surely come across a Captcha verification at least once in your online life. Testing for ‘human-ness’ online can be tricky, and tech-savvy scammers routinely find ways around it. Twitter, however, under the leadership of pro-human and pro-verification advocate Elon Musk, believes it can do better, judging by a new message discovered in the app’s back-end code.

Please Identify Your Humanity

The prolific reverse engineer Alessandro Paluzzi does it again, uncovering another find from deep within Twitter’s back-end code. According to Paluzzi, Twitter appears to be working on a new process that locks users out of certain parts of the app until they can prove they’re human, apparently through engagement with their personal timeline. Passing the test lifts this gatekeeper, granting improved Tweet visibility and access to the app’s full suite of messaging tools.
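To make the gating flow concrete, here is a minimal sketch of how such a feature lock might work. Everything in it is hypothetical: the class, the flag name, and the feature set are invented for illustration and do not reflect Twitter’s actual implementation.

```python
# Hypothetical sketch of the gating flow described above; none of these
# names come from Twitter's code.
from dataclasses import dataclass


@dataclass
class User:
    handle: str
    verified_human: bool = False  # flipped once the 'human test' is passed


# Features that (per the report) stay locked for unverified accounts.
GATED_FEATURES = {"full_tweet_visibility", "direct_messages"}


def can_use(user: User, feature: str) -> bool:
    """Gated features stay locked until the user proves they are human."""
    if feature in GATED_FEATURES and not user.verified_human:
        return False
    return True


new_user = User("example_handle")
print(can_use(new_user, "direct_messages"))   # False: still gated
new_user.verified_human = True                # e.g. after timeline engagement
print(can_use(new_user, "direct_messages"))   # True: gatekeeper lifted
```

The point of the sketch is just that the gate is a per-account flag checked at feature-access time, not a one-off Captcha at sign-up.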

However, it’s what this human test entails that’s complicated. How can Twitter distinguish actions a human can take that a bot can’t? How would it even measure them? Of course, it’s safe to assume Twitter wouldn’t reveal the answer even if it knew it; doing so would simply hand scammers a roadmap. Then again, there’s really not a lot a regular user can do that a bot couldn’t imitate. Bot activity has long been a pervasive issue on the app, but Chief Twit Elon Musk has vowed to remedy the problem and finally rid Twitter of these massive, metric-distorting actors.

Before the Musk takeover, Twitter said that bots made up only about 5% of its total users, while Musk and his team pegged the figure closer to 27%, with only 7% of its human users seeing the majority of its ads. Since taking over, Musk has touted Twitter’s user counts, which are now supposedly at ‘record highs’. By his own team’s estimate, bots would still make up a significant portion of those profiles, something Musk is now fully keen on stamping out with the advent of Twitter 2.0.

Musk and his team are looking to implement new measures, including improved bot detection and removal. Musk reports that Twitter has taken down numerous bot accounts over the past week, even having to scale back the detection threshold after a small number of genuine accounts were removed along with them.
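The tradeoff behind that threshold adjustment is a classic one in classification. The following sketch, with entirely made-up scores and labels, shows why an aggressive (lower) threshold catches more bots but also sweeps up real people, forcing exactly the kind of scale-back Musk describes:

```python
# Hypothetical illustration of the bot-detection threshold tradeoff.
# Scores and ground-truth labels are invented for this example.

# (bot_score, is_actually_bot) for a handful of imaginary accounts
accounts = [
    (0.95, True),   # obvious bot
    (0.80, True),   # likely bot
    (0.70, False),  # very active human who looks bot-like
    (0.60, True),   # subtle bot
    (0.40, False),  # ordinary human
]


def flag_accounts(accounts, threshold):
    """Flag every account whose bot score meets the threshold.

    Returns (true_positives, false_positives): bots caught vs. humans
    wrongly removed.
    """
    flagged = [is_bot for score, is_bot in accounts if score >= threshold]
    true_positives = sum(1 for is_bot in flagged if is_bot)
    false_positives = len(flagged) - true_positives
    return true_positives, false_positives


# Aggressive threshold: catches all three bots but also flags a human.
print(flag_accounts(accounts, 0.5))   # (3, 1)
# Scaled-back threshold: no humans flagged, but one bot slips through.
print(flag_accounts(accounts, 0.75))  # (2, 0)
```

Raising the threshold trades recall for precision; the reported scale-back suggests Twitter judged the false positives on real accounts to be the costlier error.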

Twitter is now also leaning more on automated content moderation, which is understandable given that its staff is now less than half its former size. To offset the massive loss in manpower, Twitter has to rely more on machine-driven processes, which have already increased takedowns of violative content. While heavier reliance on automation can improve response and action times, it also inevitably produces some incorrect reports and actions. Twitter is choosing to err on the side of over-enforcement for now, with Vice President of Trust and Safety Ella Irwin saying her teams are more inclined to ‘move fast and be as aggressive as possible’ on these elements.

The Wrap

While it’s good that Twitter is exploring all these new options, only time will tell if they truly make a difference. Some third-party reports indicate that the prevalence of hate speech increased after Musk took over, while various child safety experts say his leadership has made little to no improvement on on-platform child exploitation cases. As the platform’s history has shown time and again, it won’t be easy. Twitter at least acknowledges this and is implementing new measures to see how it can better combat these concerns. Implementing a ‘human’ test is going to be tough, but perhaps Musk and Co. can come up with yet another breakthrough, changing online security and verification as we know it.

Sources 

https://bit.ly/3iIzygg