Ubisoft and Riot Games are teaming up on a new research project that’s intended to reduce toxic in-game chats. From a report: The new project, called “Zero Harm in Comms,” will be broken up into two main phases. For the first phase, Ubisoft and Riot will try to create a framework that lets them share, collect, and tag data in a privacy-protecting way. It’s a critical first step to ensure that the companies aren’t keeping data that contains personally identifiable information, and if Ubisoft and Riot find they can’t do it, “the project stops,” Yves Jacquier, executive director at Ubisoft La Forge, said in an interview with The Verge.
Once that privacy-protecting framework is established, Ubisoft and Riot plan to build tools that use AI trained on the datasets to detect and mitigate “disruptive behaviors,” according to a press release. Traditionally, detecting harmful intent has relied on “dictionary-based technologies,” which match messages against lists of harmful words and their variant spellings to determine whether a message might be bad, according to Jacquier. With this partnership, Ubisoft and Riot are trying to use natural language processing to extract the general meaning of a sentence while taking the context of the discussion into account, he said.
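To illustrate the limitation Jacquier describes, here is a minimal sketch of a dictionary-based filter: a fixed word list expanded with regexes to catch variant spellings. The word list and substitution rules are invented for this example; this is not Ubisoft's or Riot's actual system, and it shows why such filters miss context, since they only match surface strings.

```python
import re

# Hypothetical flagged-word list for illustration only.
FLAGGED = ["noob", "trash"]

def to_pattern(word: str) -> re.Pattern:
    # Allow repeated letters and common digit substitutions
    # (e.g. "n00b" for "noob") for each character in the word.
    subs = {"o": "[o0]", "a": "[a4]", "e": "[e3]", "i": "[i1]"}
    parts = [subs.get(ch, re.escape(ch)) + "+" for ch in word]
    return re.compile("".join(parts), re.IGNORECASE)

PATTERNS = [to_pattern(w) for w in FLAGGED]

def is_flagged(message: str) -> bool:
    # Flag a message if any pattern matches anywhere in it,
    # regardless of conversational context.
    return any(p.search(message) for p in PATTERNS)

print(is_flagged("you n00b"))        # True
print(is_flagged("gg well played"))  # False
```

Because the filter has no notion of context, it would flag a friendly "nice one, you noob" between teammates just as readily as genuine abuse, which is the gap the NLP approach aims to close.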