What artificial intelligence can actually do today

Spoiler alert: the uprising of the machines is still a long way off.


When Elon Musk introduced the humanoid robot Tesla Bot, it seemed that a new scientific revolution was just around the corner: a little more, and artificial intelligence (AI) would surpass humans, and machines would replace us at work. However, professors Gary Marcus and Ernest Davis, both renowned AI experts, ask us not to rush to such conclusions.

In Artificial Intelligence Reboot, the researchers explain why modern technology is far from ideal. With the permission of the publishing house Alpina PRO, Lifehacker publishes an excerpt from the first chapter.

At this point there is a huge gap, a real chasm, between our ambitions and the reality of artificial intelligence. The chasm exists because three specific problems remain unsolved, each of which must be honestly confronted.

The first of these is what we call gullibility, which stems from the fact that we humans have never really learned to tell humans and machines apart, and this makes us easy to fool. We attribute intelligence to computers because we ourselves evolved and have lived among people who largely base their actions on abstractions such as ideas, beliefs, and desires. The behavior of machines is often superficially similar to that of humans, so we quickly credit machines with the same underlying mechanisms, even when they have none.

We can't help thinking of machines in cognitive terms ("My computer thinks I deleted my file"), no matter how simple the rules the machines actually follow. But conclusions that hold up when applied to humans may be completely wrong when applied to artificial intelligence programs. In a nod to a basic tenet of social psychology, we call this the fundamental overattribution error.

One of the earliest instances of this error occurred in the mid-1960s, when a chatbot named Eliza convinced some people that it really understood the things they were telling it. In fact, Eliza just picked up keywords, repeated the last thing the person had said, and, when at a dead end, fell back on standard conversational tricks like "Tell me about your childhood." If you mentioned your mother, it would ask about your family, although it had no idea what a family really is or why families matter to people. It was just a set of tricks, not a demonstration of true intelligence.
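The mechanism really is that shallow. The sketch below is not Weizenbaum's original script but a minimal illustration of the same idea: match a keyword, reflect the user's own words back, and fall back on stock phrases when nothing matches (all rules and phrasings here are invented for illustration).

```python
import random
import re

# A minimal Eliza-style responder (illustrative sketch, not Weizenbaum's
# original script): match keywords, echo the user's words back, and fall
# back on canned prompts when nothing matches.
RULES = [
    (r"\bmother\b|\bfather\b|\bfamily\b", ["Tell me more about your family."]),
    (r"\bI am (.*)", ["Why do you say you are {0}?"]),
    (r"\bI feel (.*)", ["How long have you felt {0}?"]),
]
FALLBACKS = ["Tell me about your childhood.", "Please go on."]

def eliza_reply(text: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            reply = random.choice(responses)
            # Splice the user's own words into the template, if any were captured.
            return reply.format(*match.groups()) if match.groups() else reply
    return random.choice(FALLBACKS)

print(eliza_reply("I am sad"))        # → "Why do you say you are sad?"
print(eliza_reply("nice weather"))    # one of the FALLBACKS
```

Twenty lines of pattern matching, with no representation of what a mother, a feeling, or a childhood actually is: that is the kind of machinery that users mistook for understanding.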

Even though Eliza understood nothing about people, many users were fooled by their dialogues with it. Some spent hours typing phrases to Eliza, misreading the chatbot's tricks and mistaking parroted phrases for helpful, sincere advice or sympathy.

Joseph Weizenbaum, the creator of Eliza, wrote:

People who knew perfectly well that they were talking to a machine soon forgot that fact, just as theatergoers suspend their disbelief for a while and forget that the action they are witnessing is not real.

Eliza's interlocutors often asked to be allowed to talk with the system in private, and after the conversation insisted, despite all my explanations, that the machine had really understood them.

In other cases, the error in assessing authenticity can be fatal in the literal sense of the word. In 2016, one Tesla owner relied so heavily on the apparent safety of the autopilot mode that (according to reports) he immersed himself completely in watching Harry Potter films, leaving the car to handle everything on its own.

Everything went well, until at some point it didn't. After driving hundreds or even thousands of miles without an accident, the car collided, in every sense of the word, with an unexpected obstacle: a white truck crossed the highway, and the Tesla drove right under the trailer, killing the owner on the spot. (The car appears to have warned the driver several times to take control, but the driver seems to have been too relaxed to react quickly.)

The moral of the story is clear: the fact that a device may seem "smart" for a moment or two (or even six months) does not mean it really is, or that it can cope with all the circumstances a person would handle adequately.

The second problem we call the illusion of rapid progress: mistaking progress on easy problems for progress on really hard ones. This, for example, is what happened with IBM's Watson: its progress in the game Jeopardy! seemed very promising, but in fact the system turned out to be much further from understanding human language than its developers had anticipated.

DeepMind's AlphaGo program may follow the same path. Go, like chess, is an idealized game of perfect information: both players can see the whole board at any moment and can compute the consequences of moves by brute force.

In real life, by contrast, nothing is usually known with complete certainty; our data are often incomplete or distorted.

Even the simplest cases involve plenty of uncertainty. When we decide whether to walk to the doctor or take the subway (since the day is cloudy), we do not know exactly how long we will wait for a train, whether the train will get stuck along the way, whether we will be packed into the carriage like sardines, whether we will get soaked in the rain if we skip the subway, or how the doctor will react to our being late.

We always work with the information we have. Playing Go against itself millions of times, DeepMind's AlphaGo never dealt with uncertainty; it simply does not know what missing, incomplete, or inconsistent information is, to say nothing of the complexities of human interaction.

There is another feature that makes board games like Go very different from the real world, and it again has to do with data. Even complex games (if their rules are strict enough) can be modeled almost perfectly, so the AI systems that play them can easily collect the huge amounts of data they need for training. In Go, a machine can simulate play simply by competing against itself; even if the system needs terabytes of data, it will create them on its own.

Programmers can thus obtain perfectly clean simulated data at little or no cost. In the real world, by contrast, perfectly clean data does not exist, it cannot be simulated (since the rules of the game keep changing), and it is all the harder to gather many gigabytes of relevant data by trial and error.
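The contrast can be made concrete with a toy example (this is an illustrative sketch, not DeepMind's actual pipeline): because the rules of tic-tac-toe are fixed and fully known, a simulator can mint unlimited, perfectly labeled self-play records in a fraction of a second.

```python
import random

# Toy self-play data generator for tic-tac-toe (illustrative sketch).
# Because the game's rules are fixed and fully known, the simulator can
# produce unlimited, perfectly labeled (positions, result) examples for free.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(rng):
    """Play one random-move game; return (positions seen, final result)."""
    board, player, history = ["."] * 9, "X", []
    while True:
        history.append("".join(board))
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if winner(board) or not moves:
            return history, winner(board) or "draw"
        board[rng.choice(moves)] = player
        player = "O" if player == "X" else "X"

rng = random.Random(0)
dataset = [self_play_game(rng) for _ in range(10_000)]  # 10,000 clean games
print(len(dataset))
```

No real-world process hands you ten thousand cleanly labeled trials on demand; a robot caring for the elderly gets exactly one attempt at each real interaction, which is the asymmetry the next paragraphs describe.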

In fact, we have only a few attempts to test different strategies.

We cannot, for example, repeat a visit to the doctor 10 million times, gradually adjusting our decision parameters before each visit, in order to dramatically improve our choice of transport.

If programmers want to train a robot to help the elderly (say, to help put bedridden people to bed), every bit of data will cost real money and real human time; there is no way to collect all the required data through simulation games. Even crash-test dummies are no substitute for real people.

It is necessary to collect data on real elderly people, with the different movement patterns of old age, on different types of beds, different pajamas, and different homes, and mistakes cannot be made here: dropping a person even a few centimeters short of the bed would be a disaster.

Some progress in this area, so far only the most elementary kind, has been achieved using narrow-AI methods. Computer systems have been built that play at nearly the level of the best human players in the video games Dota 2 and StarCraft II, where each participant sees only part of the game world at any given time, so every player faces a lack of information, what Clausewitz famously called the "fog of war." Yet the resulting systems remain very narrowly focused and brittle. The AlphaStar program, which plays StarCraft II, learned only one specific race out of several, and almost none of what it learned transfers to playing as any other race. And, of course, there is no reason to believe that the methods used in these programs will generalize to far more complex real-life situations. As IBM has discovered not once but twice (first with chess, then with Jeopardy!), success on problems from a closed world by no means guarantees success in an open world.

The third problem contributing to the chasm is the overestimation of reliability. Again and again we see that as soon as people find an AI solution to some problem that can run without failure for a while, they automatically assume that with a little refinement (and a little more data) it will work reliably all the time. But that is not necessarily so.

Consider driverless cars again. It is relatively easy to build a demo of an autonomous vehicle that drives correctly along a clearly marked lane on a quiet road; but humans have been doing that for over a century. It is far harder to get these systems to work in difficult or unexpected circumstances.

As Missy Cummings, director of the Humans and Autonomy Laboratory at Duke University (and a former US Navy fighter pilot), told us, the question is not how many miles a driverless car can travel without an accident, but how well these cars can adapt to changing situations. According to her (email to the authors, September 22, 2018), modern semi-autonomous vehicles "typically only operate in a very narrow range of conditions, which say nothing about how they can operate under less than ideal conditions."

Looking completely reliable on millions of test miles in Phoenix does not mean performing well during the monsoon in Bombay.

This fundamental difference between how autonomous vehicles behave in ideal conditions (such as sunny days on suburban multi-lane roads) and what they might do in extreme conditions could easily decide the success or failure of an entire industry.

With so little attention paid to autonomous driving in extreme conditions, and with current methodology not evolving toward guaranteeing that the autopilot will behave correctly in conditions only now beginning to be taken seriously, it may well soon become clear that billions of dollars have been spent on approaches to building self-driving cars that simply cannot deliver human-level driving reliability. Achieving the level of technical confidence we need may require approaches fundamentally different from the current ones.

And cars are just one example among many. In modern AI research, reliability is globally underestimated, partly because most current work in the field involves problems that tolerate errors well, such as recommending advertising or promoting new products.

Indeed, if we recommend five products to you and you like only three of them, no harm is done. But in a number of critical future applications of AI, including driverless cars, elder care, and healthcare planning, human-level reliability will be crucial.

No one will buy a home robot that can safely carry your elderly grandfather to bed only four times out of five.

Even in tasks where modern artificial intelligence should, in theory, appear in the best possible light, serious failures occur regularly, some of them looking very funny. A typical example: computers have, in principle, already learned quite well to recognize what is in (or happening in) an image.

Sometimes these algorithms work great, but often they produce utterly implausible errors. Show an image to an automated system that generates captions for photographs of everyday scenes, and you will often get an answer remarkably similar to what a human would write; for the scene below, for example, where a group of people is playing frisbee, Google's much-publicized caption-generating system produces exactly the right caption.

Fig. 1.1. A group of young people playing frisbee (plausible photo caption, automatically generated by AI)

But five minutes later you can easily get an absolutely absurd answer from the same system, as happened, for example, with a road sign on which someone had stuck stickers: the computer captioned this scene "a refrigerator filled with lots of food and drinks." The creators of the system did not explain why the error occurred, but such cases are not uncommon. We can assume that in this case the system classified the photograph (perhaps by color and texture) as similar to other pictures in its training data labeled "a refrigerator filled with lots of food and drinks." Naturally, the computer did not understand, as a person easily would, that such a caption fits only a large rectangular metal box with various (and even then not all) objects inside.

Fig. 1.2. A refrigerator filled with lots of food and drinks (totally implausible caption, generated by the same system as above)

Likewise, driverless cars often correctly identify what they "see," but sometimes seem to miss the obvious, as with the Teslas that repeatedly crashed into parked fire trucks or ambulances while on autopilot. Blind spots like these can be even more dangerous in systems that control power grids or monitor public health.

To bridge the gap between ambition and the reality of artificial intelligence, we need three things: a clear awareness of the values at stake in this game, a clear understanding of why modern AI systems do not perform their functions reliably enough, and, finally, a new strategy for developing machine thinking.

Since the stakes around artificial intelligence are genuinely high in terms of jobs, safety, and the fabric of society, all of us (AI professionals, people in related fields, ordinary citizens, and politicians) urgently need to understand the true state of the art, so that we can learn to critically assess the level and nature of today's artificial intelligence.

Just as it is important for citizens who follow news and statistics to understand how easily words and numbers can mislead, it is increasingly important to understand AI well enough to tell where it is mere hype and where it is real; what it can do now, and what it cannot do and, perhaps, never will learn.

The most important thing is to realize that artificial intelligence is not magic but simply a set of techniques and algorithms, each with its own strengths and weaknesses, suitable for some tasks and not for others. One of the main reasons we set out to write this book is that much of what we read about artificial intelligence strikes us as pure fantasy, growing out of unfounded confidence in AI's almost magical powers.

Meanwhile, this fiction has nothing to do with current technological capabilities. Unfortunately, public discussion of AI has been, and remains, heavily influenced by speculation and exaggeration: most people have no idea how hard it is to build general artificial intelligence.

Let us clarify one thing before going further. Although a realistic view of AI will require serious criticism from us, we are by no means opponents of artificial intelligence: we genuinely like this side of technological progress. We have spent much of our professional lives in this field, and we want it to advance as quickly as possible.

The American philosopher Hubert Dreyfus once wrote a book about the heights that, in his view, artificial intelligence could never reach. This book is not about that. It focuses in part on what AI currently cannot do and why that matters, but a significant part of it discusses what could be done to improve machine thinking and extend it into areas where it is now only taking its first steps.

We don't want artificial intelligence to disappear; we want it to improve, and radically, so that we can truly rely on it and use it to solve humanity's many problems. We have plenty of criticism of AI's current state, but our criticism is an expression of love for the science we practice, not a call to give up and abandon everything.

In short, we believe that artificial intelligence really can transform our world; but we also believe that many of the basic assumptions about AI must change before we can speak of real progress. Our proposed "reboot" of artificial intelligence is by no means a reason to end research (though some may read our book that way) but rather a diagnosis: where we are stuck now, and how to get out of today's situation.

We believe that the best way to move forward may be to look inward, facing the structure of our own mind.

Truly intelligent machines don't have to be exact replicas of humans, but anyone who looks at artificial intelligence honestly will see that there is still a lot to learn from humans, especially from young children, who are in many ways far superior to machines in their ability to absorb and understand new concepts.

The media often characterize computers as "superhuman" (in one respect or another), but the human brain is still vastly superior to its silicon counterparts in at least five respects: we can understand language, we can understand the world, we can flexibly adapt to new circumstances, we can learn new things quickly (even without large amounts of data), and we can reason in the face of incomplete and even contradictory information. On all these fronts, modern artificial intelligence systems are hopelessly behind humans.

Artificial Intelligence Reboot

Artificial Intelligence Reboot will interest anyone who wants to understand modern technology and to see how and when a new generation of AI can make our lives better.