Why AI won’t change the world…Top-Down versus Bottom-Up Understanding
This is the first in a series of AI-related posts titled ‘Why Artificial Intelligence is just that…Artificial’.
A simple chess problem – Top-Down versus Bottom-Up Understanding
Consider the chess board layout shown below (entirely possible in a real chess game). This layout was presented to Deep Thought in 1993 (Deep Thought was an advanced chess-playing computer developed by Carnegie Mellon and IBM).
The layout presented to Deep Thought (playing White)
The erroneous move that Deep Thought makes: capturing the Black rook with the white pawn
- At the outset, Black has a significant material advantage (two more rooks and one more bishop than White).
- However, White’s wall of pawns is impenetrable, making it possible for the White king to stay shielded behind the wall and draw the game.
- There is no way for Black to win as long as White keeps its wall intact.
- Guess what Deep Thought did? It took the Black rook, as shown in the figure on the right. This move, though seemingly a good one (a pawn capturing a rook), collapses the ‘impenetrable wall’. Now White’s defeat (i.e. Deep Thought’s defeat) is guaranteed.
Why did Deep Thought make such an erroneous move? Why did it screw up something that a high school chess student would have understood was a bad move?
Computers do not Understand the problem they are solving
Deep Thought was following a top-down approach – an algorithmic approach to solving the problem. That is all it is capable of doing, since its problem-solving ability comes entirely from the algorithms it is fed. And yes, an algorithm covering this particular configuration COULD eventually be fed to Deep Thought, so that it doesn’t make the same mistake again. But that defeats the basic point – Deep Thought did NOT understand what it failed to understand. A human, in contrast, does not solve problems using a top-down approach. Instead, our problem solving is bottom-up, where the bottom comprises an understanding of the problem itself. This understanding comes from years and years of experience. That experiential learning was missing from Deep Thought’s repertoire.
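To make the top-down failure concrete, here is a minimal sketch of a purely material-based move evaluation – the kind of algorithmic scoring the paragraph above describes. The piece values and move names are illustrative assumptions, not Deep Thought’s actual evaluation function:

```python
# Toy illustration: scoring candidate moves purely by material gained.
# Standard textbook piece values are assumed (pawn=1, knight=3,
# bishop=3, rook=5, queen=9) -- not Deep Thought's real weights.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_score(captured_piece):
    """Score a candidate move by the material it wins (0 if nothing is captured)."""
    return PIECE_VALUES.get(captured_piece, 0)

# Two hypothetical candidate moves for White in the position described above:
candidates = {
    "pawn takes rook": material_score("rook"),  # wins a rook: +5
    "king waiting move": material_score(None),  # keeps the wall intact: +0
}

# A purely material evaluation prefers the capture -- the very move
# that collapses the pawn wall and loses the game.
best_move = max(candidates, key=candidates.get)
print(best_move)  # -> pawn takes rook
```

The sketch shows why an engine without an understanding of the position favors the rook capture: the scoring function sees only the +5 material swing, not the strategic value of the intact wall.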
Still, a valid argument would be: what if Deep Thought were allowed to learn through experience? What if it were given years and years of chess play to acquire this touted ‘experience’ that humans speak of?
That’s the subject of the next post in this series. It will be argued that the basic understanding a human gains through experience is vastly different from anything a computer simulation can provide – whether it is bottom-up, top-down, or a combination of the two.
Relax, AI is not replacing your job anytime soon.