

Endgame studies have long served as a tool for testing human creativity and intelligence. We find that they can serve as a tool for testing machine ability as well. Two of the leading chess engines, Stockfish and Leela Chess Zero (LCZero), employ significantly different methods during play. We use Plaskett's Puzzle, a famous endgame study from the late 1970s, to compare the two engines. Our experiments show that Stockfish outperforms LCZero on the puzzle. We examine the algorithmic differences between the engines and use our observations as a basis for carefully interpreting the test results. Drawing inspiration from how humans solve chess problems, we ask whether machines can possess a form of imagination. On the theoretical side, we describe how Bellman's equation may be applied to optimize the probability of winning. To conclude, we discuss the implications of our work for artificial intelligence (AI) and artificial general intelligence (AGI), suggesting possible avenues for future research.

"Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory, there must be a solution, a right procedure in any position." -- John von Neumann

Chess studies originally became popular in the 19th century. There are various schools of thought in chess composition, each placing different emphasis on the complexity of problems. Chess studies are a notoriously difficult kind of puzzle, involving detailed calculations and tactical motifs. As such, they provide a good benchmark for AI studies. We choose to analyse how chess engines respond to Plaskett's Puzzle, one of the most well-known endgame studies in history.
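The Bellman equation mentioned above can be sketched as follows. This is an illustrative form only, not necessarily the exact formulation used in this work: it assumes chess is modeled as a Markov decision process in which states s are positions, actions a are legal moves, and the value of a position is the optimal probability of eventually winning.

```latex
% Illustrative sketch (assumed notation): V^*(s) is the optimal
% probability of winning from position s; s' is the successor
% position reached after playing move a and the opponent replies.
V^*(s) =
\begin{cases}
  1 & \text{if } s \text{ is a won terminal position,}\\[2pt]
  0 & \text{if } s \text{ is a drawn or lost terminal position,}\\[2pt]
  \displaystyle\max_{a}\; \mathbb{E}\!\left[\, V^*(s') \;\middle|\; s, a \,\right]
    & \text{otherwise.}
\end{cases}
```

Under this reading, maximizing the probability of winning (rather than a material-based evaluation) is what ties engine search to dynamic programming: each move choice is the action attaining the maximum in the recursion above.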
