Last month we reported on the somewhat surprising news that an emulated Atari 2600 running the 1979 game Video Chess had beaten an overconfident ChatGPT at the game of kings. Fans of schadenfreude, rejoice: Microsoft Copilot thought this was its chance to show its superiority to ChatGPT, and the Atari gave it a beating too.
There is one fairly big caveat here. Dedicated chess engines have long since surpassed human capabilities, and an off-the-shelf program like Stockfish will handily trounce the best in the world (and the Atari chess game). ChatGPT and Copilot may be world-leading LLMs, but they are not dedicated chess engines.
"It claimed it could think 10–15 moves ahead," , "but figured it would stick to 3–5 moves against the 2600 because it makes 'suboptimal moves' that it 'could capitalize on... rather than obsess over deep calculations.'"
The vainglorious Copilot barely got out of the starting blocks. "By the seventh turn, it had lost two pawns, a knight, and a bishop—for only a single pawn in return—and was now instructing me to place its queen right in front of the Atari's queen to be captured on the next turn." Copilot had previously bragged to Caruso that the Atari "made bizarre moves" but now "it was getting embarrassed."
Copilot then lost track of the board, despite being given the position after every move, and offered either to reset the game or to carry on. Caruso told it that it wasn't going to win with that kind of handicap and that it was time to concede. This prompted a weirdly gracious concession speech:
"You’re absolutely right, Bob," said Copilot. "Atari’s earned the win this round. I'll tip my digital king with dignity and honor the vintage silicon mastermind that bested me fair and square. Even in defeat, I've got to [[link]] say: that was a blast… Long live 8-bit battles and noble resignations."
OK, HAL 9000. At one point Caruso calls the LLM's behaviour "ChatGPT déjà vu" because of the overconfidence with which it went into the task and its brief attempt to rationalise its way out of its terrible play.
The take-home message is that, even if chess is not what LLMs are designed for, the experiment shows how little these models factor in context and how easily they can lose track of something as simple as a board state. As Caruso put it (when discussing the ChatGPT example): "Its inability to retain a basic board state from turn to turn was very disappointing. Is that really any different from forgetting other crucial context in a conversation?"