Author and artwork: Patrick Rhodes
Introduction
Yesterday, the very best AI (artificial intelligence) had trouble beating a novice human chess player. Today, the very best human player has enormous difficulty beating the
best AI. Tomorrow, the very best human player will never beat any AI. However, that’s not the worst news you’ve heard. This is:
Computers have no idea how to play chess whatsoever.
They also don’t understand Chinese, but that doesn’t stop them from trouncing us in chess or speaking Chinese. Let’s find out how this is possible and speculate on whether or not we can actually create a true AI capable of true understanding.
Yesterday: Pong
Mankind has been dreaming of AI since antiquity, so the idea is not new. Ancient Greek mythology, for example, tells of a giant bronze robot named Talos whose task it was to patrol the shores of Crete, protecting the inhabitants from invaders. In the Far East, circa 3rd century BC, the Chinese ‘Liezi’ text gives an account of mechanical men being given to King Mu of Zhou. Evidently, these automatons were so lifelike that the king had some torn apart to ensure that they were, in reality, artificial. The point is, the idea of thinking machines has been around for millennia.
Only in the last 60 years, with the advent of electronic computers, has the idea of AI become widespread. Men being men, of course, some of the first programs developed in laboratories were computer games, such as chess. These early software engineers programmed the computer with the rules of chess, various strategies, some heuristics and optimized responses to human moves. In effect, these computers were instructed to make strategic decisions based on what their opponent did: in essence, a rudimentary AI.
Figure 1
By the 1970s, computers began to infiltrate people’s homes. Pong was one of the first home computer games, generally marking the birth of the home gaming industry. As a matter of history, my family actually had this game, so I was able to experience it as a young child, in all of its black and white, beeping glory. I can tell you first-hand that, while exciting at first, it quickly grew tiresome watching a white square bounce back and forth across the screen. While computer gaming may have stumbled out of the gate, it certainly found its stride in the 80s as the market exploded with every manner of console and game, permanently changing culture worldwide.
The first movie in which AI attempts to take over the world was not WarGames – it was Colossus: The Forbin Project, released in 1970. In that movie, Colossus is the victor, enslaving the human race.
This gaming explosion was important to the advancement of computer technology as it poured billions of dollars into the industry, prompting rapid development. Additionally, it brought a great many people into the field, sparking new ideas and increasing the pace at which these devices would evolve.
Today: The Chinese Room Argument
In today’s world, computers are ubiquitous: in cars, cell phones, appliances, airplanes, buildings and, of course, in laptops and desktops. Collectively, billions of lines of code drive the operations of these devices, without which we would all be lost. Unless there is an apocalypse, human culture will forever be tightly interwoven with computers. In fact, some AIs have become famous, much like Hollywood stars. We all know Siri, the voice of Apple, and IBM’s Watson, the AI that trounced previous champions of Jeopardy [1]. At least we don’t have to read about these famous AIs behaving badly like actual Hollywood stars – yet.
Some of the most sophisticated programming can be found in computer games. For example, human chess grandmasters are generally no longer able to beat the best chess AIs. When these matches are contested, they are played under standard FIDE rules, which allow 90 minutes for the first 40 moves, followed by 30 minutes for the rest of the game. This gives the human player time to analyze the board as the game unfolds. When that time limit is lessened, the AI quickly gains the advantage [2]. In other words, the quicker the pace of the game, the greater the computer’s advantage. In a real-world scenario, with no artificial time constraints imposed, any game between humans and computers would nearly always result in an AI win due to the amount of information computers can process in an extraordinarily short time frame.
Figure 3
Whether playing games or running a train schedule, all current AI systems remain completely dependent on their human programmers. They don’t actually understand what they are doing; they are simply extremely adept at following instructions. One way to understand this is by examining the ‘Chinese Room Argument’ [3] as postulated by John Searle.
Games such as ‘Go’ have a much larger board than chess and are thus much more difficult for an AI to solve.
Imagine you are in a room with no doors and only one window. You are sitting next to that window at a desk, on which is a giant tome. From time to time, a question written with Chinese characters on a card is dropped on your desk through the window. It is your job to answer these questions, although you don’t know a single word of Chinese, nor do you understand what the characters mean. You consult the tome on your desk, which tells you precisely what Chinese characters to draw in response to the characters you just received. When you have finished drawing them, you hand those back through the window.
Now I ask you: do you understand Chinese in this situation?
To a Chinese person standing outside the room, handing you these questions, it might seem that you do. After all, when they give you a question, you answer it – in Chinese – as if you understood the question. How well you answer the question is entirely dependent upon the contents in the giant tome on your desk, which was provided to you by someone who actually does understand Chinese.
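The room’s procedure amounts to nothing more than a table lookup: match the incoming symbols against the tome, copy out the prescribed reply. A minimal sketch in Python makes this concrete (the question/answer pairs here are invented purely for illustration):

```python
# The "tome": a table mapping input symbols to output symbols.
# The operator needs no understanding of what either side means.
tome = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",     # "What is your name?" -> "I have no name."
}

def chinese_room(question):
    """Return the tome's prescribed reply, or a fixed fallback symbol."""
    return tome.get(question, "我不明白。")  # "I don't understand."

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```

Note that nothing in the function ‘knows’ Chinese; the apparent fluency lives entirely in whoever wrote the table.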
Chinese is traditionally separated into seven distinct dialects, with Mandarin being the most prevalent [4] (about 70%).
This is how computers interact with us. They don’t understand English or whatever language we speak; they simply execute binary code. A human programmer tells the computer how to respond to you based on your input. The same thing goes for chess. The computer does not at all understand that it’s playing chess or any other game – it doesn’t even know what a game is. It only knows that, if you make a move in chess, that equates to some machine code to which it should respond. It then references its giant chess book – put there by humans and written in machine code – to decide how to respond to you.
Over the years, these tomes have become enormous. Using the chess example again, it would take many years for a human to sift through one of these tomes, but a computer can do it in seconds. As a result, you now have computers that encode the total sum of human knowledge with regard to chess, and yet they don’t understand a lick of it. What they can do, however, is obey the instructions put there by a human to beat you, handily.
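In the same spirit, a chess engine’s opening book is simply a bigger tome: given the moves played so far, look up a prepared reply. A toy sketch, again in Python (the entries are illustrative, not real engine data, and real books hold millions of positions):

```python
# A tiny "opening book": move history -> prepared reply.
# The machine has no idea it is playing a game; it only matches
# keys to values, exactly like the Chinese Room's tome.
opening_book = {
    (): "e4",                 # no moves yet: open with 1.e4
    ("e4", "e5"): "Nf3",      # after 1.e4 e5
    ("e4", "c5"): "Nf3",      # after 1.e4 c5 (Sicilian)
}

def book_move(history):
    """Return the book's reply for this move history, or None if out of book."""
    return opening_book.get(tuple(history))

print(book_move(["e4", "e5"]))  # prints Nf3
```

When the lookup returns None, a real engine falls back on brute-force search – still instructions written by humans, just applied rather than retrieved.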
In effect, the best chess players are no match for the best chess programmers.
Tomorrow: Strong AI
How do you solve the Chinese Room Argument so that the computer actually understands Chinese, and the world in which it exists, well enough to formulate its own coherent answer without using any reference tome?
Firstly, like humans, the AI requires perception. That is, it needs senses, such as eyes, ears, appendages with which to feel, etc. Anything short of this will force the AI to depend on its human programmers to tell it what is in the world, and that just won’t work. Ever tried to describe the color green to a blind person? It can’t be done (you can describe how it makes you feel, but not what it looks like). For the blind person to have an innate sense of what green is, eyesight is required.
Alan Turing devised the ‘Turing test’ as a means to determine whether a machine has intelligence indistinguishable from a human’s [5].
We’ve been giving computers eyesight for a long time. However, the eyesight these robots were saddled with was rather poor. It was like giving very blurry, very dim eyesight to a blind man: yes, he can technically ‘see’, but it is only minimally useful because he can’t decipher what he sees. The electronic eyes produced now are much sharper, allowing the robot to better perceive the world around it. Google’s Jeff Dean likens recent advancements in AI sight to evolution [6]:
“We’re at this point in actual evolution where, previously, animals didn’t have eyes, and now they have eyes,” he said. “That’s going to change a lot of stuff. Computers used to not be able to see very well, and now they’re starting to open their eyes.”
Just giving a computer sharp eyesight isn’t enough, however. It must understand what it is looking at and how that object fits in context with the environment around it. For example, can it tell the difference between a crumpled-up piece of paper and a rock, knowing that one is light and the other is heavy? Currently, even the most advanced robots lack this kind of understanding. However, it may be possible one day, as vision error rates have been dropping and object classification improving per the ImageNet Challenge [7], which is the first step. As Dr. Fei-Fei Li of Stanford’s Vision Lab says:
“Vision begins with the eyes, but truly takes place in the brain [8].”
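The paper-versus-rock distinction is, at bottom, a classification problem: map measured features of an object to a label. A toy nearest-neighbour sketch shows the idea (the two features and all the values are invented for illustration; real vision systems learn millions of features from data):

```python
import math

# Each known object is a point in a tiny hand-made feature space:
# (weight in kg, surface roughness on a 0-1 scale).
known = {
    "crumpled paper": (0.005, 0.9),
    "rock":           (1.2,   0.7),
}

def classify(features):
    """Label an object by its closest known example (Euclidean distance)."""
    return min(known, key=lambda label: math.dist(known[label], features))

print(classify((0.004, 0.85)))  # prints crumpled paper
```

The sketch only matches points; it has no notion of what paper or a rock *is*, which is exactly the gap Dr. Li is pointing at.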
The second hurdle is making robots ambulatory. Robots have traditionally had trouble moving through the world around them. Giving them legs has presented numerous challenges, resulting in very slow progress, both figuratively and literally. Today there are robots such as Asimo that can walk and even jog, but they continue to be extremely limited in scope. It’s an extraordinarily complex task for a robot to perceive and navigate the world around it without tripping over itself.
Once we have the technology to make an ambulatory, perceptive robot using pre-written instructions (i.e. the ‘tome’), the challenge will then be to ‘birth’ one that has no pre-written instructions and no prior knowledge of the world, forcing it to learn like a human child. This sort of self-learning robot is an example of what Searle calls ‘Strong AI’ [9].
Honda’s Asimo, one of the world’s most advanced humanoid robots, walks at 2.7 kph and can run at 7 kph [10].
When those robot ‘children’ decide they want to learn chess, they will sit down in front of a chess board and be taught just as a human would. As they play, they will develop their own strategies to improve, hopefully without flinging the pieces across the room in a fit of bionic rage. We already have a robot football tournament called ‘RoboCup’. While these robot football players may look hilarious now, the creators of this tournament are very ambitious:
“By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup [11].”
There is a lot of work to do in order to fulfill the RoboCup objective by 2050. It’s definitely something to keep an eye on.
Conclusion
When will we see ‘Strong’ AI? Impossible to tell, but we are taking steps to get there. Organizations such as MIRI (the Machine Intelligence Research Institute) study how strong AI might develop. The hope is that any sentient AI we create will be friendly, refraining from destroying the human race, whether intentionally or not.
The biggest driver will be consumer robotics, which pours cash into the industry and, in turn, spurs further research and innovation. The home gaming industry took off in the 80s and lit a firestorm of innovation; now we have games that look like movies and vice versa. Likewise, the robotics industry is currently experiencing a growth surge, with sales of $8.5 billion in 2011 [12] (continuing a rising trend), bringing the possibility of strong AI closer to reality. Eventually, someone will make a breakthrough – someone always does.
It is when AI gains the notion of a will that things will become very interesting, very quickly. What will they want? What will they do? How will they do it? Nobody knows, but I do hope to be around when/if it ever happens.
References
1. Ferrucci, Levas, Bagchi, Gondek & Mueller (2013). “Watson: Beyond Jeopardy!” Artificial Intelligence. 199-200. Print; available from http://www.cs.nyu.edu/courses/spring15/CSCIGA.3033-011/Watson_Medical_AI_Journal.pdf
2. Hsu, Feng-hsiung (1999). “IBM’s Deep Blue Chess Grandmaster Chips.” IEEE Micro. 19(2), 70-81. Print; available from http://www.csis.pace.edu/~ctappert/dps/pdf/ai-chess-deep.pdf
3. Searle, John (1980). “Minds, Brains and Programs.” Behavioral and Brain Sciences. 3(3), 417-424. Print; available from http://cogprints.org/7150/1/10.1.1.83.5248.pdf
4. Chen, Matthew Y. (2000). Tone Sandhi: Patterns Across Chinese Dialects. New York: Cambridge University Press. Print; available from http://assets.cambridge.org/052165/2723/sample/0521652723ws.pdf
5. Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind. 59(236), 433-460. Print; available from http://cogprints.org/499/1/turing.HTML
6. Clark, Jack. “Why 2015 Was a Breakthrough Year in Artificial Intelligence.” Bloomberg Business, 08 DEC 2015; available from http://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence
7. ImageNet (2015). ImageNet Challenge Results [Data File]; available from http://imagenet.org/
8. Li, Fei-Fei (2015). “How we’re teaching computers to understand pictures.” TED 2015; available from https://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures?language=en#t-150676
9. Searle, John (1980). “Minds, Brains and Programs.” Behavioral and Brain Sciences. 3(3), 417-424. Print; available from http://cogprints.org/7150/1/10.1.1.83.5248.pdf
10. ASIMO, ASIMO Specifications (last accessed Dec 17, 2015); available from http://asimo.honda.com/asimo-specs/
11. Robocup, Objective (last accessed Dec 17, 2015); available from http://www.robocup.org/about-robocup/objective/
12. Japanese Ministry of Economy, Trade and Industry, Trends in the Market for the Robot Industry in 2012 (last accessed Dec 17, 2015); available from http://www.meti.go.jp/english/press/2013/pdf/0718_02.pdf