AI chess engine sacrifices mastery to mimic human play

ITHACA, N.Y. – Since IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, advances in artificial intelligence have made chess-playing computers more and more formidable. No human has beaten a computer in a chess tournament in 15 years.

Now, a team of computer scientists has developed an artificially intelligent chess engine that doesn’t necessarily seek to beat humans – it’s trained to play like a human. This not only creates a more enjoyable chess-playing experience; it also sheds light on how computers make decisions differently from people, and how that difference could help humans learn to do better.

“Chess sits alongside virtuosic musical instrument playing and mathematical achievement as something humans study their whole lives and get really good at. And yet in chess, computers are in every possible sense better than we are at this point,” said Jon Kleinberg, professor of computer science at Cornell University. “So chess becomes a place where we can try understanding human skill through the lens of super-intelligent AI.”

Kleinberg is a co-author of “Aligning Superhuman AI With Human Behavior: Chess as a Model System,” presented at the Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining, held virtually in August. In December, the Maia chess engine, which grew out of the research, was released on the free online chess server lichess.org, where it was played more than 40,000 times in its first week. Agadmator, the most-subscribed chess channel on YouTube, talked about the project and played two live games against Maia.

The paper’s other co-authors are Reid McIlroy-Young, a doctoral student at the University of Toronto; Ashton Anderson, an assistant professor at the University of Toronto; and Siddhartha Sen of Microsoft Research.

As artificial intelligence approaches or surpasses human abilities in a range of areas, researchers are exploring how to design AI systems with human collaboration in mind. In this project, the researchers sought to develop AI that reduced the disparities between human and algorithmic behavior by training the computer on the traces of individual human steps, rather than having it teach itself to successfully complete an entire task. Chess – with hundreds of millions of recorded moves by online players at every skill level – offered an ideal opportunity to train AI models to do just that.

“Chess has been described as the ‘fruit fly’ of AI research,” Kleinberg said. “Just as geneticists often care less about the fruit fly itself than its role as a model organism, AI researchers love chess, because it’s one of their model organisms. It’s a self-contained world you can explore, and it illustrates many of the phenomena that we see in AI more broadly.”

Training the AI model on individual human chess moves, rather than on the larger problem of winning a game, taught the computer to mimic human behavior. It also created a system that is more adjustable to different skill levels – a challenge for traditional AI.
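To make that training objective concrete, here is a minimal, self-contained sketch of supervised learning on human moves. It is an illustration only, not the team’s implementation: the tiny network, the flat position encoding, the fixed move vocabulary and the randomly generated (position, move) pairs are all placeholder assumptions, whereas Maia itself is trained on millions of real online games.

import torch
import torch.nn as nn

# Placeholder sizes: a flat feature encoding of a chess position and a
# fixed vocabulary of possible moves (illustrative values, not Maia's).
NUM_FEATURES = 773
NUM_MOVES = 1968

# A small policy network that predicts which move a human would play from a
# given position, rather than searching for the objectively best move.
model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_MOVES),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in for (position, human_move) pairs drawn from games played at one
# skill level; real training data would be decoded from recorded games.
positions = torch.rand(1024, NUM_FEATURES)
human_moves = torch.randint(0, NUM_MOVES, (1024,))

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(positions)
    # Supervised objective: match the moves humans actually chose, move by
    # move, instead of optimizing for winning the game.
    loss = loss_fn(logits, human_moves)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

Training one such model per skill level is one natural way to get an engine that can be dialed to different strengths, since each model only ever sees moves chosen by players in its own band.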

Within each skill level, Maia matched human moves more than 50% of the time, with its accuracy growing as skill increased – a higher rate of accuracy than two popular chess engines, Stockfish and Leela. Maia was also able to capture what kinds of mistakes players at specific skill levels make, and when players reach a level of skill at which they stop making them.
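The move-matching measure reported above is straightforward to compute, as the sketch below spells out. The predict_move function and the toy test set are hypothetical stand-ins for a real engine and real positions labeled with the moves humans actually played.

def move_matching_accuracy(predict_move, test_set):
    # test_set is a list of (position, human_move) pairs; predict_move maps
    # a position to the engine's single preferred move.
    matches = sum(1 for position, human_move in test_set
                  if predict_move(position) == human_move)
    return matches / len(test_set)

# Toy example: a "predictor" that always answers 1. e4 matches the human
# move in one of the two test positions, for an accuracy of 0.5.
toy_test_set = [("startpos", "e2e4"), ("startpos", "d2d4")]
print(move_matching_accuracy(lambda position: "e2e4", toy_test_set))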

The research was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a Multidisciplinary University Research Initiative grant, a MacArthur Foundation grant, a Natural Sciences and Engineering Research Council of Canada grant, a Microsoft Research Award and a Canada Foundation for Innovation grant.

For additional information, see the Cornell Chronicle story on this research.

-30-
