Whether it is trolling, racism, sexism, doxing, or just general harassment, the internet has a bad behavior problem. Researchers from Caltech and video game publisher Activision Publishing are combining their expertise to address this behavior in video games.
Because this kind of toxic behavior makes the internet an unpleasant place to be, there have been many attempts over the years to make sure people behave themselves online. In the early days of the internet, websites often relied on moderators—volunteers or staff—who were trained to keep discussions and content civil and appropriate. But as the internet continued to grow and harmful behaviors became more extreme, it became apparent that moderators needed better tools at their disposal.
Increasingly, the online world is moving toward automated moderation tools that can identify abusive words and behavior without the need for human intervention. Now, two researchers from Caltech, one an expert in artificial intelligence (AI) and the other a political scientist, are teaming up with Activision on a two-year research project that aims to create an AI that can detect abusive online behavior and help the company's support and moderation teams combat it.
The sponsored research agreement involves Anima Anandkumar, the Bren Professor of Computing and Mathematical Sciences, who has trained AI to fly drones and study the coronavirus; Michael Alvarez, professor of political and computational social science, who has used machine learning tools to study political trends in social media; and Activision's data engineers, who will provide insight into player engagement and game-driven data.
Alvarez and Anandkumar have already worked together on training AI to detect trolling in social media. Their project with the team that works on the Call of Duty video games will allow them to develop similar technology for potential use in gaming.
"Over the past few years, our collaboration with Anima Anandkumar's group has been very productive," Alvarez says. "We have learned a great deal about how to use large data and deep learning to identify toxic conversation and behavior. This new direction, with our colleagues at Activision, gives us an opportunity to apply what we have learned to study toxic behavior in a new and important area—gaming."
For Anandkumar, the important questions this research will answer are: "How do we enable AI that is transparent, beneficial to society, and free of biases?" and "How do we ensure a safe gaming environment for everyone?"
She adds that working with Activision gives the researchers access not only to data about how people interact in online games but also to the company's specialized knowledge.
"We want to know how players interact. What kind of language do they use? What kinds of biases do they have? What should we be looking for? That requires domain expertise," she says.
Michael Vance, Activision's chief technology officer, says the firm is excited to work with Caltech.
"Our teams continue to make great progress in combating disruptive behavior, and we also want to look much further down the road," Vance says. "This collaboration will allow us to build upon our existing work and explore the frontier of research in this area."