Riot Games and Ubisoft team up on AI research project, using player chat logs as training data

Ubisoft and Riot are collaborating on an anti-toxicity research project, which will focus on collecting in-game chat logs as training data for AI algorithms. Both companies are set to publish their findings from this data next summer, at which point future steps will be decided.

Speaking with me over Zoom, Wesley Kerr, Riot’s director of technology research, and Yves Jacquier, executive director of Ubisoft La Forge, shared their long-term goals and hopes for the project. Hailing it as the first open research collaboration of its kind in AI between two game companies, they hope the findings published next year will be a first step toward the industry’s effective use of AI as a tool to reduce toxicity.

For whatever reason, Udyr players were always deeply toxic back when I played a lot of LoL. I wonder if that’s still true after his rework.

According to Jacquier, the project has three main objectives. First, to create a network of shared datasets filled with fully anonymized player data. Second, to build an AI algorithm that can work with this data. Finally, for the partnership to act as a “prototype” for future industry initiatives against toxicity, fostering competition and further advances in the field.

It makes sense that Riot and Ubisoft are two of the companies dedicated to solving this problem, considering their popular multiplayer titles. Rainbow Six Siege chat turns ugly fast the moment team cooperation breaks down, and Riot’s troublesome twins, League of Legends and Valorant, are drenched in toxic ooze.

Both Kerr and Jacquier emphasized throughout the interview that player anonymity and compliance with regional laws and the GDPR were among their top priorities. When asked whether player data is shared between the companies, Kerr stressed that a player’s League of Legends account information would not be sent to other companies without their consent. Rather, chat logs would be stripped of identifying information before the algorithms ever touch them.
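To make that concrete, here’s a minimal sketch of what pre-share scrubbing could look like. To be clear, neither company has published its actual pipeline; the function, patterns, and pseudonym scheme below are hypothetical, just an illustration of swapping account IDs for tokens and redacting identifiers before any text leaves the building.

```python
import re

# Hypothetical sketch of pre-share anonymization for chat logs.
# The real redaction rules Riot and Ubisoft use are not public;
# these patterns and names are illustrative only.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # stray email addresses
NAME_MENTION = re.compile(r"@\w{3,16}")         # in-chat name mentions
DISCORD_TAG = re.compile(r"\w+#\d{4}")          # e.g. Discord-style handles

def anonymize(message: str, player_id: str, id_map: dict) -> dict:
    """Swap the real account ID for a stable pseudonym and scrub
    obvious identifiers from the message text."""
    # Same pseudonym per player, so conversations stay coherent
    token = id_map.setdefault(player_id, f"player_{len(id_map):05d}")
    text = EMAIL.sub("[EMAIL]", message)  # emails first, before '@' mentions
    text = NAME_MENTION.sub("[NAME]", text)
    text = DISCORD_TAG.sub("[HANDLE]", text)
    return {"speaker": token, "text": text}

id_map = {}
print(anonymize("report @XxSlayerxX, add me at mid.laner@example.com", "acct-123", id_map))
# {'speaker': 'player_00000', 'text': 'report [NAME], add me at [EMAIL]'}
```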

The most immediate issue that comes to mind when you hear about AI tackling toxicity is the persistence of players determined to let you know how trash you are. New insults are coined all the time; the lexicon of trash talk is constantly evolving within online communities. How could an AI keep up with that? The trick, according to Jacquier, is not to rely on dictionaries and static data sources. Hence the value of using current player chat logs, which reflect the current toxicity meta.
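As a rough illustration of the difference between a static word list and a model retrained on fresh logs, here’s a hedged sketch. The training snippets and labels are invented, and scikit-learn is just one plausible tool; nothing below reflects either company’s actual system. The point is that character n-grams learned from recent chat can catch coined or obfuscated insults no dictionary contains.

```python
# Sketch: periodically retrain a toxicity classifier on recent,
# moderator-labeled chat logs instead of matching a fixed word list.
# Training data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

recent_logs = [
    ("gg wp everyone", 0),
    ("good CS, friend", 0),           # banter, labeled clean by reviewers
    ("uninstall you absolute bot", 1),
    ("u r a d0gwater jungler", 1),    # obfuscated spelling, no dictionary hit
]
texts, labels = zip(*recent_logs)

model = make_pipeline(
    # Character n-grams generalize to new spellings of known insults
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Score a phrasing a static list would likely miss
print(model.predict_proba(["dogwater jungler fr"])[0][1])
```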

Then there’s the other concern of misfiring, especially in a medium where friendly banter between friends, random teammates, and even enemy players can be part of the experience. If I’m playing top lane in League of Legends and I type “good CS, friend” to my 0/3 lane opponent, that’s just a bit of a joke, right? If they do the same to me, that’s motivating; it makes me want to win more and enhances the experience. How can an AI tell the difference between genuinely harmful toxicity and a joke?

“It’s very difficult,” says Jacquier. “Understanding the context of a discussion is one of the hardest parts. For example, if a player threatens another player. In Rainbow Six, if a player says ‘hey, I’m going to take you out’, that could be part of the fantasy. Whereas in other contexts it could have a very different meaning.” Kerr followed up with some of the advantages video games have in this regard, thanks to the additional context they provide.

According to him, taking into account who you queue up with is one factor that could help an AI tell banter from genuine abuse. In theory, you shouldn’t be punished for calling your lifelong best friend trash in a LoL lobby.
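Here’s a hedged sketch of how that social signal might be folded into a score. The feature names, weights, and numbers are invented for illustration and aren’t Riot’s or Ubisoft’s actual model; the idea is simply that the same message scores differently between long-time premades and strangers.

```python
# Sketch: discount a raw text-toxicity score when sender and target
# queued together, on the theory that premades trade banter.
# Weights and thresholds are invented for illustration.
def contextual_toxicity(text_score: float,
                        same_premade: bool,
                        games_played_together: int) -> float:
    """Blend the text model's score with social context."""
    familiarity = min(games_played_together / 50, 1.0)  # saturates at 50 games
    discount = 0.6 * familiarity if same_premade else 0.0
    return max(text_score - discount, 0.0)

# Same message, very different verdicts:
print(contextual_toxicity(0.8, same_premade=True, games_played_together=200))  # ~0.2
print(contextual_toxicity(0.8, same_premade=False, games_played_together=0))   # 0.8
```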

As for the future, all eyes are on next year’s published results. For now, the project focuses solely on chat logs, but with Riot Games looking to monitor voice communications in Valorant, Kerr wouldn’t rule it out as a future area of investigation should the collaboration continue beyond 2023. For now, the partnership is a prototype: a first step on a long journey that both companies appear dedicated to travelling. While Kerr and Jacquier are hopeful that the research project will produce important findings and inspire other companies to follow suit, they don’t believe AI is the be-all and end-all of moderating toxicity.

“AI is a tool, but it is not a panacea. There are many ways to ensure player safety, so the idea is to better understand how this tool can best be used to address harmful content.”

Ultimately, this research is just one component of a larger effort, but in the minds of Jacquier and Kerr it is one that will hopefully prove critical in the future. Only time will tell whether they’re right, whether they can deliver on their promise to protect player privacy, and whether AI really is the next frontier in the battle against toxicity.


