- The number of possible, unique chess games is greater than the number of electrons in the universe.
- The Lewis chessmen are the oldest surviving complete chess set known, dating to the 12th century. Discovered on the Isle of Lewis in the Outer Hebrides of Scotland and carved from walrus ivory, they are thought to have been made in Norway.
- The word “checkmate” comes from the Persian shah mat, which means “the king is defeated”.
- The rook’s name comes from the Arabic word rukh, meaning chariot. During the Middle Ages, when chariots were no longer in use, the rook was gradually modified to resemble the turret of a castle.
- The shortest recorded stalemate took place in just 10 moves.
- The longest chess game on record took 269 moves to produce a draw.
- The longest game of chess theoretically possible is 5,949 moves.
- The shortest chess game on record produced a checkmate in only 2 moves.
- There are 318,979,564,000 possible ways of playing the first four moves for both sides in a game.
- There are approximately 169,518,829,100,544,000,000,000 ways to play the first 10 moves in a game of chess.
- IBM’s Deep Thought became the first computer to beat a chess grandmaster, defeating Bent Larsen in 1988. (Garry Kasparov defeated Deep Thought the following year.)
What does it mean to be a good chess player? Well, you could say good players win and bad players don’t, which is perfectly fair, and it serves me right for asking such a stupid question. But what happens when experts match wits with experts? What forces are at play that ultimately decide who wins and loses? When a human beats a computer, by what virtues of his game does he do so? And when a computer loses to a human, what deficiencies in its play caused its downfall? Research suggests that the gap between expert and novice players - and between human wetware and computer software - exists at a deeper level of play than previously thought. Expert (human) chess players use a failure-based model to scientifically evaluate their available moves, rejecting a move only when it proves itself untenable. By contrast, novice players (and computers) tend to reason their way through the game move by move, evaluating each possibility on rough (human) or precise (computer) statistical estimates of its probable success in the context of preceding play and projected future play.
In a study conducted by Cowley and Byrne of Trinity College, Dublin in 2006, the two researchers analyzed the playing styles of novice and expert players. They found that novice players tended to establish their strategy early in the game and stick to it even when it started to work against them. Expert players, by contrast, re-evaluated their approach every time the board changed, quickly rejecting any strategy that had become untenable. Cowley and Byrne propose that these results are evidence that domain expertise may facilitate falsification - a crisp, scientific way of saying that expert players generate move sequences that falsify their plans more readily than novice players, who tend to favor move sequences that confirm their plans.
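The contrast Cowley and Byrne describe can be sketched as a toy decision loop: the novice keeps his opening plan no matter what the board says, while the expert discards any plan the current position falsifies. Everything here - the plan names, the notion of a plan being “refuted” - is invented purely for illustration, not a model of real chess.

```python
# Toy contrast: a "novice" who sticks with his plan regardless of the board,
# versus an "expert" who rejects any plan the position has falsified.
# Plan names and the viability test are invented for illustration.

def novice_choose(current_plan, plans, still_viable):
    """Stick with the original plan, never re-evaluating it."""
    return current_plan

def expert_choose(current_plan, plans, still_viable):
    """Drop the current plan the moment the board falsifies it."""
    if still_viable(current_plan):
        return current_plan
    viable = [p for p in plans if still_viable(p)]
    return viable[0] if viable else None

plans = ["kingside attack", "queenside expansion", "central break"]
refuted = {"kingside attack"}          # suppose the last move refuted this
viable = lambda p: p not in refuted

print(novice_choose("kingside attack", plans, viable))  # kingside attack
print(expert_choose("kingside attack", plans, viable))  # queenside expansion
```

The novice’s function ignores the board entirely; the expert’s function is a miniature falsification loop, re-run after every change in the position.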
The behavior of the novice player corresponds to what is called confirmation bias: our tendency to search for evidence that reinforces our beliefs and to avoid information that contradicts what we think we know. Confirmation bias contributes to irrational decision-making and tends to reinforce prejudices, yet it appears to be the most common method of evaluation even for hypotheses amenable to easy scientific testing (say, the taste appeal of an unfamiliar food someone offers you). So why, given the fallibility of confirmation testing as a way of vetting a proposition, do we prefer it to the much more reliable method of falsification? It would seem at first glance that confirmation is easier than falsification because it’s less rigorous and requires only a perfunctory rifling through one’s memory banks rather than the formal process of scientific vetting. But actually, for confirmation to be appropriately strong, one would have to dredge up dozens, if not hundreds - or more, who really knows - of experiences that confirm, without doubt, the conjecture at hand.
Corroboration relies on a vast bank of knowledge, consisting mostly of experience, from which we select the events and outcomes that we believe are applicable to our hypothesis. However, we’re much more likely to choose events and outcomes that support our hypothesis than ones that challenge it. Why? One possibility is that we find the task of filtering our memories much easier than the task of using deductive logic to produce statements that correctly reflect the states of things in the world, and so we deliberately choose friendly occurrences in order to eliminate the need for additional hypotheses to explain the events that disprove our “rule”. Another possibility is that we trust experience over logic, or that we know (and this is true) that we are better at sorting through our experiences than at reasoning formally. We may therefore place more trust in our ability to verify claims by appealing to past experience than in our ability to formulate strong, valid rational arguments to support a claim - and rightly so. Corroboration puts us into contact with our past and our humanity and reinforces the link between what we’ve experienced and what we believe. Elimination is more difficult, more alien, less engaging, and regularly forces us to admit that a piece of information we’ve inculcated is in fact incorrect, leading to uncomfortable cognitive dissonance. So it’s easy to see why we might prefer the familiar conditions of corroboration to the antiseptic conditions of logic. But this does not mean we have a license to ignore reasoning in favor of empirical verification.
While novice players seem to rely on a single hypothesis to formulate their strategy, expert players rely on a mental “file” of hundreds (perhaps thousands) of moves and strategies to inform their play. When a game changes course, the expert player re-shapes his attack, readjusts his play and second-guesses his assumptions about the course of the game. Instead of relying on a single “good” strategy, the expert advances by sifting through a plenum of known strategies and filtering the good from the bad according to the development of the game in play. Good players, in other words, do not win games by optimistically projecting themselves ahead in the game in order to identify winning strategies, but by scientifically considering and rejecting bad strategies. Their careful hypothesis testing brings to mind the edict that good science does not attempt to corroborate theories. It attempts to destroy them, to falsify the prime theses.
Like corroboration, falsification also relies on vast banks of knowledge. However, this strategy demands that instead of selecting facts and outcomes that support a given conjecture, we seek out instances that violate it. The advantage of falsification is that, as Popper points out, it eliminates false premises, thereby disallowing further conjectures that inevitably lead to false conclusions.
While falsification is indeed better at weeding out incorrect assumptions, it relies on the assumption that all information is (theoretically) falsifiable. Most information is of the falsifiable sort (say, statements regarding historical facts or the principles of fixed systems), but not all. Some statements can be proven neither true nor false within a given set of parameters, such as the formally undecidable propositions identified by Kurt Gödel (the basis of his incompleteness theorems) and the uncomputable problems identified by Alan Turing. There’s also a whole class of scientific hypotheses that can be neither confirmed nor denied because they can’t be tested, and a host of metaphysical conjectures for which there is no real information to accept or reject. Furthermore, falsification itself provides no strategy for “moving forward” - that is, for producing a statement of intent based on one’s conclusions about a situation. It contains no instructions for making the all-important leap from choosing to acting. It eliminates bad guesses, but cannot certify good ones. Because of its inability to select a course or narrow viable options down to a manageable set, falsification is inappropriate as a general strategy. If falsification were able, as a principle, to limit our choices of action to a workable few, it would be advantageous. But, used as a general principle for determining a rational course of action, it is capable of eliminating only a few possibilities at a time, and at such mental cost that it hardly seems efficient. If there are literally thousands of possibilities open to us in a given situation, falsification is radically inefficient.
Strong chess players behave like good scientists, falsifying their own hypotheses in order to strengthen their position. Karl Popper proposes that falsification is better than confirmation for testing a hypothesis, because no matter how much evidence you collect in support of a theory, there’s always the possibility that it will be refuted by some additional information you haven’t yet encountered. Falsification sidesteps that uncertainty by seeking out facts that explicitly disprove a hypothesis. A conjecture supported only by confirmation is never absolutely certain - even when backed by a bounty of evidence - but a hypothesis canceled out by a single piece of falsifying information is unquestionably invalid. Because corroboration can lead to the acceptance of untrue ideas, it’s a less reliable strategy for building arguments to support the beliefs that guide our responses to the typical demands of the world (and help to illuminate its more obscure corners). Corroboration is irrational and tends to confirm an individual’s own biases, lending credibility to ideas that may be based on bad information or questionable interpretations. But it’s the strategy we overwhelmingly use to make sense of the world.
While falsification is not an appropriate strategy for largely unregulated decision-making with high numbers of possibilities, it is appropriate in highly regulated situations where strategies can only be formulated according to a fixed and narrow body of possibilities - which is where chess comes in. Eliminating bad guesses is only a helpful strategy when it is able to generate a range of “good” guesses small enough for the human brain to process one-by-one. Choosing the “best” from among these “good” guesses then becomes a scientific process where each in turn is tested against a fixed body of consistent information (the rules of the game) and a flexible and expandable mental “file” of mostly consistent information (game training and previous games). This file contains facts which, while still open to interpretation, are easily sortable according to context and applicability. Thus, using the very specialized information contained in these two caches, a good player can make appropriate use of the method of falsification to select workable strategies that strengthen his position in the game. The novice player, while he may grasp the rules of the game, nevertheless lacks the specialized reserve of moves and strategies that the experienced player has, and therefore falsification is not an appropriate method of play for him.
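The two “caches” described above - the fixed rules of the game and the flexible mental file of prior play - can be sketched as successive falsifying filters over a small, fixed candidate set. The candidate strategies and the two tests below are invented placeholders, there only to show the shape of the filtering.

```python
# Falsification as successive filtering over a narrow, fixed candidate set:
# strike out each strategy that either cache refutes, keep the survivors.
# Candidates and tests are invented for illustration.

candidates = {"fork the rooks", "trade queens", "push the h-pawn", "castle long"}

# Cache 1: the fixed rules of the game (is the move even legal here?)
legal = lambda s: s != "castle long"       # e.g. the king has already moved

# Cache 2: the mental file of training and prior games (refuted before?)
known_bad = {"push the h-pawn"}            # failed in previous play

survivors = {s for s in candidates if legal(s) and s not in known_bad}
print(sorted(survivors))  # ['fork the rooks', 'trade queens']
```

Note that nothing here says which survivor is *best* - exactly the limitation the essay describes. Falsification only shrinks the set; choosing among the survivors still requires the one-by-one testing described above.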
So it seems that limitation is, paradoxically, the key to chess proficiency. But this kind of limitation is categorical, not the “limited” knowledge of a chess beginner: limits that act as boundaries, distinguishing chess-specific knowledge from generalized knowledge. In other words, the stronger the distinction, the better the player. Rational thinking must go hand-in-hand with “chess thinking” in order to produce winning strategies. Computer chess foundered for a number of years because the machines relied only on algorithms to determine their moves, and were easily outwitted by players with the ability to creatively employ strategies they learned in play or by studying the strategies of others. When computers were given the ability to cross-check their opponents’ moves against a database containing thousands of actual games played by chess masters, their success rates improved dramatically. Employing simple deductive logic led to losses, but using deduction to select from among a fixed reserve of possible moves produced, if not victories, at least much better performances. Because the novice player’s repertoire is limited, he is forced to fill in the gaps in his knowledge with a mix of logical deduction, psychological probing and educated guessing, all of which are likely to produce different conclusions among which he has little ability to pick the best.
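The improvement described above - deduction constrained by a fixed reserve of known master games - can be sketched as a book lookup with a raw-calculation fallback. The positions, the tiny “book”, and the scoring function are all placeholders, not real engine logic.

```python
# Sketch of "deduction over a fixed reserve": consult a book of master
# games first; fall back to raw calculation only when the book is silent.
# Positions, book entries, and the evaluator are invented for illustration.

BOOK = {"start": "e4", "after e4 e5": "Nf3"}   # stand-in opening book

def evaluate(move):
    """Stand-in for raw algorithmic evaluation (deliberately meaningless)."""
    return len(move)

def choose_move(position, legal_moves):
    if position in BOOK and BOOK[position] in legal_moves:
        return BOOK[position]                  # the expert's "file" of games
    return max(legal_moves, key=evaluate)      # unconstrained deduction

print(choose_move("start", ["e4", "d4", "Nf3"]))    # e4  (from the book)
print(choose_move("unknown", ["e4", "d4", "Nf3"]))  # Nf3 (raw calculation)
```

The second call is the novice’s predicament in miniature: with no reserve of vetted lines to select from, the choice falls to a generic calculation with no way to tell a good conclusion from a bad one.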
The upshot? Falsification as a general strategy for confirming knowledge is inappropriate because it cannot produce a sufficiently narrow range of possible actions in a timely manner. Corroboration, despite its tendency to produce only propositions that confirm our irrational hypotheses, is oftentimes the only strategy available to us. However, I propose that we can make the most of our tendency to corroborate, and reduce error, by sorting our experiences carefully into narrow and well-defined categories, thus replicating the conditions of “expertise” that allow experienced individuals to operate successfully within a certain domain. As a strategy for determining best actions in the highly structured context of such pursuits as chess, calculus, politics or military engagement, falsification - paired with expertise - is a particularly potent instrument for picking out strategies that lead to success.