The narrower the scope of a conversation, the easier it is to have. If your interlocutor is more or less following a script, it is not hard to build a computer program that, with the help of simple phrase-book-like templates, can recognize a few variations on a theme. (“What time does your establishment close?” “I would like a reservation for four people at 7 p.m.”) But mastering a Berlitz phrase book doesn’t make you a fluent speaker of a foreign language. Sooner or later the non sequiturs start flowing.
Even in a closed domain like restaurant reservations, unusual circumstances are bound to come up. (“Unfortunately, we are redecorating the restaurant that week.”) A good computer programmer can dodge many of these bullets by inducing an interlocutor to rephrase. (“I’m sorry, did you say you were closed that week?”) In short stylized conversations, that may suffice. But in open-ended conversations about complex issues, such hedges will eventually get irritating, if not outright baffling.
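To make the phrase-book idea concrete, here is a minimal, purely illustrative sketch in Python; the patterns, responses and reply function are invented for this column, not a description of how any production system is built. A handful of regular-expression templates cover the expected variations on the reservation theme, and anything that falls outside them triggers the "please rephrase" dodge described above.

```python
import re

# Hypothetical phrase-book-style templates: each pattern recognizes one
# narrow variation on the restaurant-reservation theme.
TEMPLATES = [
    (re.compile(r"what time .*close", re.I),
     "We close at 10 p.m."),
    (re.compile(r"reservation for (\d+) (?:people|guests) at (.+)", re.I),
     "Confirmed: a table for {0} at {1}"),
]

def reply(utterance: str) -> str:
    """Return a canned response if any template matches the utterance."""
    for pattern, response in TEMPLATES:
        match = pattern.search(utterance)
        if match:
            return response.format(*match.groups())
    # Nothing fits: dodge the bullet by asking the caller to rephrase.
    return "I'm sorry, could you rephrase that?"

print(reply("What time does your establishment close?"))
print(reply("I would like a reservation for 4 people at 7 p.m."))
print(reply("Unfortunately, we are redecorating the restaurant that week."))
```

The first two sentences are handled smoothly; the third, which the templates never anticipated, gets only the stalling request to rephrase.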
To be fair, Google Duplex doesn’t literally use phrase-book-like templates. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
So what should the field of artificial intelligence do instead? Once upon a time, before the fashionable rise of machine learning and “big data,” A.I. researchers tried to understand how complex knowledge could be encoded and processed in computers. This project, known as knowledge engineering, aimed not to create programs that would detect statistical patterns in huge data sets but to formalize, in a system of rules, the fundamental elements of human understanding, so that those rules could be applied in computer programs. Rather than merely imitating the results of our thinking, machines would actually share some of our core cognitive abilities.
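For readers curious what "a system of rules" looks like in practice, here is a toy forward-chaining sketch; the facts, rules and helper functions are invented for illustration and are far simpler than any real knowledge base. The point is only that explicit knowledge lets a program draw a conclusion, such as that a restaurant being redecorated cannot take reservations that week, without ever having seen that exact sentence before.

```python
# Toy knowledge-engineering sketch: explicit facts, hand-written if-then
# rules, and a small inference loop that applies the rules until nothing
# new follows. All facts and rules here are invented for illustration.
facts = {
    ("restaurant", "is_being_redecorated", "week_of_june_10"),
}

# Each rule pairs a condition pattern with a conclusion pattern.
rules = [
    # A restaurant being redecorated during a week is closed that week.
    (("?x", "is_being_redecorated", "?w"), ("?x", "is_closed", "?w")),
    # A closed restaurant cannot take reservations for that week.
    (("?x", "is_closed", "?w"), ("?x", "cannot_take_reservations", "?w")),
]

def match(pattern, fact):
    """Bind ?-variables in a rule pattern against a concrete fact."""
    bindings = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def substitute(pattern, bindings):
    """Fill a conclusion pattern with the bindings found by match()."""
    return tuple(bindings.get(p, p) for p in pattern)

def forward_chain(facts, rules):
    """Apply rules to the facts until no new conclusions are produced."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            for fact in list(known):
                bindings = match(condition, fact)
                if bindings:
                    new_fact = substitute(conclusion, bindings)
                    if new_fact not in known:
                        known.add(new_fact)
                        changed = True
    return known

for fact in sorted(forward_chain(facts, rules)):
    print(fact)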
That job proved difficult and was never finished. But “difficult and unfinished” doesn’t mean misguided. A.I. researchers need to return to that project sooner rather than later, ideally enlisting the help of cognitive psychologists who study the question of how human cognition manages to be endlessly flexible.
Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.
By Gary Marcus and Ernest Davis
https://www.nytimes.com/2018/05/18/opinion/artificial-intelligence-challenges.html