Meta recently published a new overview of its evolving efforts to combat coordinated influence operations across its apps. This has been a key platform focus since the 2016 US Presidential Election, when Russia-based operatives were found to be using Facebook to influence US voters.

In the six years since, Meta has detected and removed more than 200 covert influence operations, while also sharing information on each network's behavior with others in the industry. The goal is for everyone to learn from the same data, and thus develop better approaches to tackling such activity.

Misinformation Is A No-No

As explained by Meta: 

“Whether they come from nation states, commercial firms or unattributed groups, sharing this information has enabled our teams, investigative journalists, government officials, and industry peers to better understand and expose internet-wide security risks, including ahead of critical elections.”

Meta reports having detected operations targeting over 100 different nations, with the United States being the 'most targeted' country, followed by Ukraine and then the UK. This likely points to the influence and sway that the US has over global policy, though it could also relate to the popularity of social networks in these regions, making them even bigger influence vectors.

Though the true origin is not always certain, many of the perpetrating groups were identified as coming from Russia, Iran, and Mexico. Of the three, Russia has become the most publicized home for CIB activity. However, Meta notes that while many Russian operations have targeted the US, more operations from Russia have actually targeted Ukraine and Africa, as part of a larger effort to sway public and political sentiment. Over time, Meta has also seen more of these operations target audiences in their own countries.

As for how these operations are evolving, Meta notes that CIB groups are increasingly turning to AI-generated images to 'cloak' or disguise their activity. This is notable given the steady rise of AI-generation technology, spanning still images, video, and text. While these systems will have valuable uses, there are clear risks as well, and it's worth considering how such technologies can be used to shroud inauthentic activity.

The Wrap

The report provides some valuable perspective on the scale of the issue, and on how Meta continues to work to counter the ever-evolving tactics of scammers and other bad actors online. Such actors are unlikely to stop, which is why Meta has put out a call for increased regulation. At the same time, Meta is also updating its own policies and processes in line with evolving needs, including updated security and support options.

It has always been difficult to scale human-based support, but Meta is now working to provide more support functionality as another means to protect people and shield them from online harm. It's an eternal battle, and given the capacity to reach so many people, you should expect bad actors to keep targeting Meta's apps as a means to further their agendas. So, keep an eye on Meta and its evolving moderation and regulatory efforts.

Sources 

https://bit.ly/3hDxxln