
Artificial Intelligence Is a House Divided

 

A decades-old rivalry has riven the field. It’s time to move on.

 

By Michael Wooldridge

The Chronicle of Higher Education

January 20, 2021

 

 

The sun is shining on computer science right now, especially the subfield of artificial intelligence. Not a day goes by without the press breathlessly hailing some new miracle of intelligent machines. The leaders of the field are garlanded with honors, and seem to enjoy a status few academics have ever reached. Eye-watering amounts of money pour into AI, and new technology empires are being forged before our eyes. In 2014, DeepMind, a U.K. company with apparently no products, no customers, no obvious technology, and only about 50 employees, was acquired by Google for the reported sum of $600 million; today, DeepMind employs more than 1,000 people.

 

Given all this, along with the financial hard times that have hit so many other fields, AI, from the outside, must appear a happy ship. In fact, it’s hard to imagine how things could be rosier. But look a little closer, and you’ll see that all is not well in the field. AI is a broad church, and like many churches, it has schisms.

 

The fiercely controversial subject that has riven the field is perhaps the most basic question in AI: To achieve intelligent machines, should we model the mind or the brain? The former approach is known as symbolic AI, and it largely dominated the field for much of its 60-plus years of existence. The latter approach is called neural networks. For much of the field’s existence, neural nets were regarded as a poor cousin to symbolic AI at best and a dead end at worst. But the current triumphs of AI are based on dramatic advances in neural-network technology, and now it is symbolic AI that is on the back foot. Some neural-net researchers vocally proclaim symbolic AI to be a dead field, and the symbolic AI community is desperately seeking a role for its ideas in the new AI.

 

The field of AI was given its name by John McCarthy in 1956. The founder of Stanford University’s AI lab, McCarthy was the most influential and outspoken champion of the idea that the route to AI involved building machines that could reason. AI requires that we have computer programs that can compute the right thing to do at any given moment. In McCarthy’s view, computing the right thing to do would reduce to logical reasoning: An AI system, according to him, should deduce the correct course of action. (If this makes you think of a certain Mr. Spock, well, you are in good company.)

 

McCarthy’s version of AI is called symbolic AI because the reasoning involves manipulating expressions that are the mathematical equivalent of sentences. These expressions are made up of symbols that mean something in the real world. For example, a robot built according to the McCarthy model might use the symbol room451 to refer to your bedroom, and the symbol cleanUp to refer to the activity of cleaning. So when the robot decides to cleanUp(room451), we can immediately see what it is going to do: clean your bedroom.
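
To make this concrete, here is a minimal sketch, in Python, of decision-by-deduction in this spirit. The symbols room451 and cleanUp come from the example above; the specific facts, the single rule, and the function names are illustrative assumptions, not McCarthy’s actual logical formalism.

# A toy illustration of symbolic decision-making: the robot's beliefs and
# its rule are explicit symbols, and the chosen action is a deduction we
# can inspect afterwards. (Illustrative only, not McCarthy's formalism.)

facts = {("dirty", "room451")}  # what the robot currently believes about the world

def decide(facts):
    # Hypothetical rule: if a room is dirty, the right thing to do is clean it.
    return [("cleanUp", room) for predicate, room in facts if predicate == "dirty"]

print(decide(facts))  # [('cleanUp', 'room451')] -- and we can see exactly why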

 

 

There is a lot to love about McCarthy’s dream. It is simple, elegant, mathematically clean, and transparent. If we want to know why one of McCarthy’s robots cleaned your room, we can simply examine its reasoning. McCarthy’s dream of AI was at the rather extreme end of the symbolic AI spectrum; it was not even widely accepted within the symbolic AI community, many of whose members believed in slightly “weaker” (and more practical) versions of the dream. But his basic ideas formed the AI orthodoxy for 30 years, from the founding of the field through the late 1980s. And while symbolic AI is no longer center stage in academic programs today, it remains an active area of research.

 

AI excels at developing ideas that are beautiful in principle but simply don’t work in the real world, and symbolic AI is perhaps the canonical example of that phenomenon. There are many obstacles to making McCarthy’s vision a reality, but perhaps the most important is that while some problems are well suited to this version of AI (proving mathematical theorems, for example), it just doesn’t seem to work on many others. Symbolic AI has made only limited progress on problems that require perceiving and understanding the physical world. And it turns out that perceiving and understanding the physical world is a ubiquitous requirement for AI: you won’t get far in building a useful robot if it can’t understand what is around it. Knowing where you are and what is around you is by far the biggest obstacle standing in the way of the long-held dream of driverless cars.

 

By the late 1980s, the problems with the purest versions of symbolic AI caused it to drift out of favor. (McCarthy, a remarkable individual by any standards, never gave up on his dream: He remained committed to it right until his death in 2011 at the age of 84.)

 

A natural alternative to symbolic AI came to prominence: Instead of modeling high-level reasoning processes, why not instead model the brain? After all, brains are the only things that we know for certain can produce intelligent behavior. Why not start with them?

 

In AI, this approach is called neural networks. The name derives from neurons, the massively interconnected cell structures that appear in brains and nervous systems. Each neuron is an extremely simple information processing device. But when huge numbers of them are connected together in massive networks, they can produce the miracle that is human intelligence. Neural-net researchers build software versions of these networks, and while they aren’t literally trying to simulate brains, the idea is that their networks will learn to produce intelligent behavior, just as in humans.
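
As a rough illustration of the basic unit involved, the Python sketch below models a single artificial neuron as a weighted sum of inputs pushed through a simple threshold. The particular weights and numbers are arbitrary assumptions made for the example; real networks wire together millions of such units and learn their weights from data rather than having them set by hand.

# A single toy neuron: weigh the inputs, add a bias, and "fire" only if
# the total crosses a threshold. (Numbers are arbitrary, for illustration.)

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

print(neuron([1.0, 0.0], weights=[0.6, 0.4], bias=-0.5))  # fires: 1.0
print(neuron([0.0, 0.0], weights=[0.6, 0.4], bias=-0.5))  # stays silent: 0.0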

 

Neural networks are actually a very old idea — they date from the 1940s, and the work of Warren McCulloch and Walter Pitts, who realized that the natural neural networks that appear in human and animal brains resembled certain electrical circuits. However, McCulloch and Pitts had no means to actually build the structures they hypothesized, and it was not until the 1960s that the idea began to take off.

 

 

Frank Rosenblatt, a Cornell psychology professor, developed a model of neural networks that goes by the gloriously retro name of perceptrons — this was the first neural-network model to actually be built, and the model remains relevant today. But work in the nascent field was effectively snuffed out by the publication of a 1969 book, Perceptrons, by MIT professors Marvin Minsky and Seymour A. Papert, who were staunchly in favor of the symbolic approach. Their book drew attention to some theoretical limitations of Rosenblatt’s model, and it was taken to imply that neural models were fundamentally limited in what they could achieve. Rosenblatt died in a boating accident just two years later, and neural networks lost their most prominent champion. Research into neural networks went into abeyance for nearly two decades.
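
The limitation Minsky and Papert drew attention to is often summarized with the XOR function: a single perceptron can compute linearly separable functions such as AND, but no choice of weights lets one unit compute XOR. The sketch below, with hand-picked weights, is only an illustration of that point under those assumptions, not a reconstruction of their argument.

# Rosenblatt's perceptron, in toy form: a thresholded weighted sum.
def perceptron(x1, x2, w1, w2, bias):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# With these hand-picked weights it computes AND exactly...
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", perceptron(x1, x2, w1=1, w2=1, bias=-1.5))

# ...but no weights exist for which a single perceptron computes XOR,
# because XOR is not linearly separable.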

 

There is still palpable bitterness about Minsky and Papert’s book today. When it came out, the church of AI was divided, and the two sides have never quite reconciled. When symbolic AI began its slow decline in the late 1980s, neural nets swung back into favor for about a decade, as new techniques for “training” them were developed and computers at last became powerful enough to run networks big enough to do something useful. But the resurgence was short-lived. By the end of the 1990s, neural nets were once again in decline, having once more hit the limits of what the computers of the day could do. A decade later, however, the pendulum swung again, and this time the interest in neural networks was unprecedented.

 

Three ingredients came together to drive the new neural-network revolution. First were some scientific advances, called “deep learning” (basically, bigger and richer neural networks). Second, computer-processing power got cheap enough to make large neural networks affordable. Third, and just as important, was the availability of lots and lots of data: Neural networks are data hungry. And we are, of course, now in the era of “big data.”
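
To give a flavor of what “bigger and richer” buys, the sketch below hand-wires two layers of the toy neurons from earlier so that they compute XOR, the very function a single perceptron cannot. The weights here are assumptions chosen by hand purely for illustration; the point of deep learning is that networks with many such layers learn their weights automatically from large amounts of data.

# Two layers of toy neurons computing XOR: the hidden units compute OR and
# NAND, and the output unit ANDs them together. (Weights are hand-picked
# here; in deep learning they are learned from data.)

def unit(inputs, weights, bias):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0

def xor_net(x1, x2):
    h1 = unit([x1, x2], [1, 1], -0.5)     # OR
    h2 = unit([x1, x2], [-1, -1], 1.5)    # NAND
    return unit([h1, h2], [1, 1], -1.5)   # AND

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", xor_net(x1, x2))  # 0, 1, 1, 0 -- matches XOR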

 

The last decade has seen an unprecedented wave of success stories in AI, and it is these successes that have led to the current AI frenzy. In 2016, DeepMind famously demonstrated a Go-playing program that could reliably beat world-champion players. This fall, a DeepMind project called AlphaFold made a giant step forward in biology by better predicting protein structures (“‘It Will Change Everything,’” began a headline in Nature). Elsewhere, rapid progress has been made on driverless-car technology — last year, Waymo, Google’s driverless-car company, launched a completely driver-free taxi service in Phoenix.

 

Recognition for the leaders of the new AI came in 2018, when Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, three of the most prominent champions of neural networks, who had stuck with the technology throughout the lean years, were awarded the Turing Award — often described as the Nobel Prize for computing — which comes with $1 million in prize money. There could have been no clearer signal that, at last, neural networks had been accepted into the mainstream.

 

All these successes are predominantly the successes of deep learning. Symbolic AI has played a part in some of these — but strictly in a supporting role, never center stage.

 

 

While the media tend to apply the “AI” label generically to all recent advances, some members of the deep-learning community profoundly dislike the term. They identify it with a long list of failed ideas that have characterized the history of AI, of which, they believe, the symbolic AI project is the most prominent and the most egregious.

 

The successes of deep learning this century are real and exciting and deserve to be celebrated and applauded. And those researchers that stuck with neural networks through the lean years deserve our admiration for their vision and determination in the face, at times, of ridicule or scorn from their academic peers. But it is easy to get overexcited by recent progress. Deep learning alone will not take us to the ultimate dream of AI. It is surely one of the key ingredients, but there will be many others — some of which we probably cannot imagine right now. For all the progress we have made, we will not achieve the dream soon — if we achieve it at all. There is, I believe, no silver bullet for AI. Neural networks and symbolic AI each succeed with different aspects of intelligent behavior. Tribalism and mindless dogma are not the way forward: We must consider each other’s ideas and learn from them. And to do this, we must first cast away the bitterness of ancient rivalries.

 

This essay is adapted from the book A Brief History of Artificial Intelligence (Flatiron Books).


 

Michael Wooldridge is the head of the department of computer science at the University of Oxford. His new book is A Brief History of Artificial Intelligence (Flatiron Books).
