Chess is not hard enough!

In a previous post discussing Mitopia’s Carmot ontology, we stated that the worst-case requirements for an ontology designed to understand ‘everything’ would be those in support of a government intelligence or anti-terrorism agency. While we have shown it is possible to design an ontology to support such Knowledge Level (KL) and Wisdom Level (WL) activities (see here), we should not for a moment underestimate just how difficult even a portion of such an ambitious goal actually is.  The computational complexity involved makes chess seem trivial.

These kinds of government agencies face a highly distributed and changeable threat that largely falls below the detection threshold of today’s isolated intelligence organizations.  Our intelligence infrastructure struggles to combat the terrorist threat because its supporting computational architectures are based on flawed “constrained system” approaches.  To understand what a constrained system is, and why such systems fail at KL and WL problems, consider the game of chess, or more specifically the history of computer chess.

Mastery of the game of chess is deemed by many to be one of the most complex things the human mind can accomplish.  Chess grandmasters are intellectual celebrities with huge and dedicated followings.  If any one problem defines the nature of human ‘wisdom’, chess, in most people’s minds, has come to represent the intangible that makes the human mind unique and distinguishes man from machine.  Little wonder then that shortly after the invention of computers, the quest began to create a program capable of playing chess, and perhaps of one day beating a human grandmaster.  The argument was that if a computer could beat a human chess player, it would disprove the theory that human ‘wisdom’ was unattainable by machines. Given that chess is so ‘hard’, it was assumed that the HAL 9000 would be a short step from there. What programmer could resist such a challenge?

The ancestor of the modern game of chess is believed to have originated during the 6th century in northwest India, and by the 9th century the game had reached Russia and Europe.  By around 1475 the game had become much as it is known today, and the first writings on the theory of how to play chess began to appear in the 15th century.  The first true chess programs (capable of playing an entire game) were written in 1957 – just 8 years after EDSAC, one of the first stored-program computers. Development proceeded rapidly, with chess computers becoming massive machines costing millions of dollars and cumulatively involving thousands of man-years of software development.  Despite this, it was not until 1994 that the ChessGenius program finally defeated a reigning world champion (Garry Kasparov) in a tournament game, and shortly afterwards (1997) IBM’s Deep Blue won a six-game match against Kasparov.  Since then, as computer hardware has grown progressively faster, defeats of human world champions have become commonplace.

To see what it took to accomplish this feat, we can look at the Deep Blue computer. The following description is taken from the Wikipedia article:

The system derived its playing strength mainly out of brute force computing power. It was a massively parallel, RS/6000 SP Thin P2SC-based system with 30 nodes, with each node containing a 120 MHz P2SC microprocessor, enhanced with 480 special purpose VLSI chess chips. Its chess playing program was written in C and ran under the AIX operating system. It was capable of evaluating 200 million positions per second, twice as fast as the 1996 version. In June 1997, Deep Blue was the 259th most powerful supercomputer according to the TOP500 list, achieving 11.38 GFLOPS on the High-Performance LINPACK benchmark.

The Deep Blue chess computer that defeated Kasparov in 1997 would typically search to a depth of between six and eight moves to a maximum of twenty or even more moves in some situations. Levy and Newborn estimate that one additional ply (half-move) increases the playing strength 50 to 70 Elo points.

Deep Blue’s evaluation function was initially written in a generalized form, with many to-be-determined parameters (e.g. how important is a safe king position compared to a space advantage in the center, etc.). The optimal values for these parameters were then determined by the system itself, by analyzing thousands of master games. The evaluation function had been split into 8,000 parts, many of them designed for special positions. The opening book contained over 4,000 positions and 700,000 grandmaster games. The endgame database contained many six-piece endgames and all positions with five or fewer pieces. Before the second match, the chess knowledge of the program was fine-tuned by grandmaster Joel Benjamin. The opening library was provided by grandmasters Miguel Illescas, John Fedorowicz, and Nick de Firmian. When Kasparov requested that he be allowed to study other games that Deep Blue had played so as to better understand his opponent, IBM refused. However, Kasparov did study many popular PC computer games to become familiar with computer game play in general.

Writer Nate Silver suggests that a bug in Deep Blue’s software led to a seemingly random move (the 44th in the first game), which Kasparov attributed to “superior intelligence”. Kasparov’s play in the following game then suffered, reportedly because of his anxiety over that move.
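To make the idea of a parameterized, self-tuned evaluation function concrete, here is a minimal sketch in Python. It is not Deep Blue’s code: the feature names, the weights, and the tuning loop are hypothetical stand-ins for how such parameters might be fitted against a library of master games.

```python
# Toy illustration of an evaluation function with tunable weights.
# Nothing here reflects Deep Blue's real implementation; the features
# and the tuning loop are hypothetical stand-ins.

FEATURES = ["material", "king_safety", "center_control", "mobility"]

def evaluate(position_features, weights):
    """Score a position as a weighted sum of hand-crafted features."""
    return sum(weights[f] * position_features[f] for f in FEATURES)

def tune_weights(master_games, weights, learning_rate=0.01):
    """Nudge the weights so evaluations agree with master-game results.

    Each training example pairs a feature vector with the game outcome
    (+1 win for the side being evaluated, 0 draw, -1 loss).
    """
    for features, outcome in master_games:
        error = outcome - evaluate(features, weights)
        for f in FEATURES:
            weights[f] += learning_rate * error * features[f]
    return weights

# Example with made-up numbers:
weights = {f: 1.0 for f in FEATURES}
games = [({"material": 1.0, "king_safety": 0.2,
           "center_control": 0.5, "mobility": 0.3}, +1)]
weights = tune_weights(games, weights)
```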

The behavior and number of chess pieces is finite and well defined. Chess is perceived as a complex game because of the enormous combination of possible moves, counter-strategies, and outcomes: there are estimated to be around 10 to the power 47 legal positions!  If you model the game’s problem domain with a computer, the classic approach is to write a large, complex rules-based application to explore the problem tree deep into the possible futures.  We have spent half a century focused continuously on this problem, constructed supercomputers to explore it, invested thousands of man-years to develop algorithms, and eventually arrived at a point where computers can match a skilled human player.  All this for a rigidly defined game with just 6 different but tightly constrained piece types on a board, and a grand total of 64 squares they can move to.  In truth, chess is a trivially simple “constrained” system.
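The “explore the problem tree” approach described above is, at its core, minimax search with pruning. The sketch below is a minimal, illustrative alpha-beta search: the dictionary-based positions and the three helper functions are invented so the example runs on its own, and bear no relation to a real engine’s move generator or evaluation.

```python
# Minimal alpha-beta search sketch.  The dictionary positions and the
# helpers below are invented so the example is self-contained; a real
# engine supplies a proper move generator and evaluation function.

def legal_moves(position):
    return position.get("moves", [])   # each 'move' is just the next position

def apply_move(position, move):
    return move

def evaluate(position):
    return position.get("score", 0)

def alpha_beta(position, depth, alpha=float("-inf"), beta=float("inf"),
               maximizing=True):
    """Return the best score reachable within the given search depth."""
    if depth == 0 or not legal_moves(position):
        return evaluate(position)                  # leaf of the explored tree
    best = float("-inf") if maximizing else float("inf")
    for move in legal_moves(position):
        score = alpha_beta(apply_move(position, move), depth - 1,
                           alpha, beta, not maximizing)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:                          # neither side allows this line
            break
    return best

# Tiny example tree: the root has two replies worth 3 and 5 to the mover.
root = {"moves": [{"score": 3}, {"score": 5}]}
print(alpha_beta(root, depth=2))                   # -> 5
```

Deep Blue’s special-purpose hardware was pushing roughly 200 million position evaluations per second through this kind of recursion, and even then chess only yielded after half a century of accumulated engineering.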

Imagine that we change the game so that each of the opponent’s pieces can continuously acquire new behaviors and strategies, the board can change shape and size without notice, the number of pieces and squares is measured in billions, all pieces move simultaneously and with varying intents, and most of the board is hidden from us!

Now we have a reasonable model for computational complexity in the intelligence or anti-terrorist threat space. It is obvious that the classic solution we adopted to solve the “constrained” chess problem cannot possibly work for the new “unconstrained” game we are playing.  To win a game in this scenario, we need to decentralize control and empower each of our pieces to access and dynamically combine all available techniques, enabling them to rapidly detect these changes, to adopt new strategies of their own, and to communicate and coordinate these strategies and insights with other pieces on the board through a common architecture.  Moreover, each of our ‘pieces’ is actually a hierarchical organization tackling a more tactical goal (and using its own ‘system’), and ultimately at the leaf nodes each ‘piece’ is a symbiosis between our ‘system’ and a human being (see ‘The Turk‘).
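Purely as an illustration of that “decentralize and share” principle (and emphatically not a description of Mitopia’s actual architecture), the toy sketch below shows agents that publish whatever they detect to a common board and adapt their own strategy based on what the other agents have reported. Every class and method name here is invented.

```python
# Toy illustration of decentralized agents coordinating through a shared
# channel.  Entirely hypothetical; not a description of Mitopia.

class SharedBoard:
    """A common channel through which every agent publishes its insights."""
    def __init__(self):
        self.insights = []

    def publish(self, agent_id, insight):
        self.insights.append((agent_id, insight))

    def insights_from_others(self, agent_id):
        return [text for author, text in self.insights if author != agent_id]

class Agent:
    """An autonomous 'piece' that detects changes and adapts its strategy."""
    def __init__(self, agent_id, board):
        self.agent_id, self.board, self.strategy = agent_id, board, "default"

    def observe(self, local_observation):
        # Whatever one agent detects locally is shared with all the others.
        self.board.publish(self.agent_id, local_observation)

    def adapt(self):
        # Adopt a new strategy whenever the pooled picture warrants it.
        if any("new behavior" in note for note in
               self.board.insights_from_others(self.agent_id)):
            self.strategy = "countermeasure"

board = SharedBoard()
agents = [Agent(f"unit-{n}", board) for n in range(3)]
agents[0].observe("new behavior seen in sector 7")
for agent in agents:
    agent.adapt()
print([a.strategy for a in agents])   # unit-0 keeps 'default'; the others adapt
```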

Our new anti-terrorism KL/WL game is a true measure of what makes the human mind (or more accurately, large organizations full of cooperating human minds) different from a machine.  In retrospect, thinking that a board game involving 6 piece types and 64 squares was any measure of human potential was foolishness of the first order.  The real world that any human mind deals with continuously is vastly more complex than chess, and intelligence and anti-terrorism work more complex still.

No centralized strategy, especially one constrained by the pace of the government acquisition process (a cycle with a response time measured in years), can possibly succeed in winning such a match, and yet that is precisely how we attack the problem. Our enemies have shown that a distributed, devolved organization can be devastatingly effective against a centralized or informationally compartmentalized one.  This brings us back to my very first post on this site (see here) regarding government software acquisition.

One thing is abundantly clear from the history of computer chess: we cannot address complex KL and WL domains by taking existing components and gluing them together to make a system.  This means we must throw out our database models and our software design approaches, and invest the time (half a century for the simple game of chess!) to create the architecture and machinery capable of operating far above our current Information Level (IL) landscape.  Anyone who says they have a product that goes even a small way towards ‘solving’ such broad problems is either lying or deluded.  Even Mitopia, which has been in continuous development targeting such problems for well over 20 years now, is just beginning to scratch the surface of what is needed.

The bottom line is that expecting computers to ‘solve’ the intelligence or anti-terrorism problem in the near future is a mistake.  Humans are the only computational system that can do that right now, and they don’t have the bandwidth to address the data integration problem.  They need help.  Just as with chess, the solution cannot simply be shown in a ‘visualizer’: it passes through too many stages and ‘connections’ to fit on a single screen.  Graphs and charts are nice, but they don’t help solve ‘connection’ problems (though they might let you draw pictures of a human’s solution).

Instead we must create computer systems that help humans unearth indirect connections and complex sequences within massive data sets, especially those they don’t even realize are important.  This simpler requirement, it turns out, can in fact be tackled.
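As a heavily simplified illustration of what “unearthing indirect connections” means computationally, the sketch below runs a breadth-first search over an entity-relationship graph to surface a chain of links between two entities that no single source record connects directly. The graph, the entities, and the links are invented for the example.

```python
# Breadth-first search over an entity graph to surface indirect
# connections.  Data and names are invented for illustration only.

from collections import deque

def connection_chain(graph, start, target):
    """Return the shortest chain of linked entities from start to target."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None   # no connection found in the available data

# Hypothetical links extracted from many separate source records:
graph = {
    "Person A": ["Front Company X"],
    "Front Company X": ["Bank Account 42", "Person B"],
    "Bank Account 42": ["Charity Y"],
    "Charity Y": ["Person C"],
}
print(connection_chain(graph, "Person A", "Person C"))
# -> ['Person A', 'Front Company X', 'Bank Account 42', 'Charity Y', 'Person C']
```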

Doing so takes a contiguous ontology based on the scientific method (see here), and an end-to-end, fully integrated system designed and custom-built one component at a time from the ground up (not just assembled) to do it.  That is what Mitopia® is for.  Don’t worry though, Mitopia® is not an AI, and it certainly won’t be replacing human beings any time soon.  But it will help them solve problems that are an awful lot harder, and more important, than chess.