Snapchat has provided an update on the development of its ‘My AI’ chatbot tool, which incorporates ChatGPT’s technology and allows Snapchat+ subscribers to pose questions to the in-app bot and get answers on anything they like. For the most part, it’s a simple, fun application of the evolving tech, but Snap has found some concerning misuses of the tool, and it’s now looking to add more safeguards and protections to the process.

According to Snap:

“Reviewing early interactions with My AI has helped us identify which guardrails are working well and which need to be made stronger. To help assess this, we have been running reviews of the My AI queries and responses that contain ‘non-conforming’ language, which we define as any text that includes references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or marginalizing underrepresented groups. All of these categories of content are explicitly prohibited on Snapchat.”

Snap Shield

All users of Snap’s new My AI tool must agree to its terms of service, which means that any query you put into the system can be analyzed by Snap’s team for this purpose. Snap says that only a small fraction of My AI’s responses so far (0.01%) have fallen under the ‘non-conforming’ banner, but this additional review work will help protect Snap users from negative My AI experiences.
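
Snap hasn’t published how this review process works under the hood, but as a rough illustration, a first-pass flagging step might look something like the following minimal sketch. The category names come from Snap’s statement above; the term lists, function names, and matching logic are hypothetical assumptions, not Snap’s actual system.

    # Hypothetical sketch of category flagging for 'non-conforming' text.
    # Categories are taken from Snap's statement; everything else here is
    # an illustrative assumption, not Snap's actual implementation.
    NON_CONFORMING_TERMS = {
        "violence": {"kill", "attack"},
        "illicit_drug_use": {"heroin", "meth"},
        "hate_speech": {"slur_example"},
        # ...the remaining categories from Snap's list would be filled in similarly
    }

    def flag_non_conforming(text: str) -> set[str]:
        """Return the set of prohibited categories a piece of text touches."""
        words = set(text.lower().split())
        return {
            category
            for category, terms in NON_CONFORMING_TERMS.items()
            if words & terms
        }

    queries = ["how do i make meth", "what's the weather today"]
    flagged = [q for q in queries if flag_non_conforming(q)]
    rate = len(flagged) / len(queries)  # Snap reports ~0.01% in production

In practice, a real system would use trained classifiers rather than keyword lists, but the basic shape, reviewing queries and responses and tallying the share that falls into prohibited categories, is what Snap describes.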

Snap says that it’s also working to improve responses to inappropriate Snapchatter requests, and it’s implementing a new age signal for My AI that uses a user’s birthdate. On top of this, Snap will soon add data on My AI interaction history to its Family Center tracking, which will allow parents to see whether their kids are interacting with My AI, and how often.
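
Snap hasn’t detailed how the age signal is implemented, but conceptually it means deriving an age from the birthdate on a user’s account and letting the bot condition its responses on it. A minimal sketch of that idea, with all function names and thresholds being illustrative assumptions, might look like this:

    from datetime import date

    def age_from_birthdate(birthdate: date, today: date | None = None) -> int:
        """Compute a user's age in whole years from their birthdate."""
        today = today or date.today()
        before_birthday = (today.month, today.day) < (birthdate.month, birthdate.day)
        return today.year - birthdate.year - before_birthday

    def age_signal(birthdate: date) -> str:
        """Map an age onto a coarse signal the chatbot could condition on."""
        age = age_from_birthdate(birthdate)
        return "minor" if age < 18 else "adult"

    print(age_signal(date(2010, 5, 1)))  # -> "minor"

The point of a coarse signal like this, rather than an exact age, is that the bot can consistently keep age-appropriate guardrails in place without needing to re-verify identity on every query.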

It’s also worth noting that, according to Snap, the most common questions posed to My AI have been pretty innocuous. Still, there’s a need to implement safeguards, and Snap says that it’s taking its responsibility seriously as it looks to develop its tools in line with evolving best-practice principles. As generative AI tools become more commonplace, it’s far from clear what the usage risks might be, and how we can all best protect against misuse.

There have been various reports of misinformation being spread via ‘hallucinations’ in such tools, where AI systems misread their data inputs and present false information as fact, while some users have also tried ‘tricking’ these new bots into breaking their own parameters, just to see what might be possible. There are real risks here. Indeed, just last week, an open letter signed by more than a thousand industry figures called on developers to pause work on powerful AI systems while their potential impacts are assessed.

The Wrap

Simply put, we don’t want any of these tools becoming ‘too smart’, for fear that Hollywood-style AI doomsday scenarios would soon follow. There is some validity to these concerns, in that we’re dealing with new systems that we don’t yet fully understand. Granted, they’re unlikely to ‘get out of control’ in the way such scenarios suggest, but these tools could end up contributing to the spread of false information or creating misleading content. There are risks, and Snap’s stepping up its game to protect against them.
