Previous chapter: AVA in a Nutshell: Part I – Overview
In this part, we will take a deep dive into the relationship between NLU and Artificial Intelligence, with all the paradoxes and plot twists of its 80-year history.
Millennia of Evolution
Back in the day, people were trying to teach machines to speak or, at least, to understand what we are saying: to learn a natural language.
A verbal language interface. Not a text one, since text interfaces have been around almost since the very beginning of computers. Besides, even if their commands might be understandable to non-specialists, their meaning is hardcoded, not at all obvious, and leaves no space for interpretation. That wasn't Natural Language Understanding!
The first language-"understanding" program was probably STUDENT, written in 1964. It could "decode" simple natural-language input (basic algebra word problems) and throw out the answer. Other programs followed STUDENT. With better or worse results, they explored the NLU field until the first breakthrough: introducing machine learning (broadly speaking, Artificial Intelligence) techniques to human language recognition.
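To get a feel for what this kind of "decoding" amounts to, here is a toy, STUDENT-inspired sketch (not Bobrow's actual program: the grammar, function names, and supported phrasings are invented for illustration). It maps one narrow English pattern onto arithmetic, and fails on any rephrasing:

```python
import re

# Supported operations for the tiny "What is X <op> Y?" grammar.
OPS = {
    "plus": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
    "times": lambda a, b: a * b,
}

def answer(question: str):
    """Return the numeric answer, or None if the question is outside the grammar."""
    m = re.match(r"what is (\d+) (plus|minus|times) (\d+)\??$",
                 question.strip().lower())
    if not m:
        return None  # any wording the pattern doesn't cover: no "understanding" at all
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return OPS[op](a, b)

print(answer("What is 3 plus 5?"))       # 8
print(answer("Could you add 3 and 5?"))  # None: a trivial rephrasing breaks it
```

The brittleness is the point: everything outside the hardcoded pattern is invisible to the program, which is exactly the gap between pattern matching and understanding.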
From the perspective of AI studies, I'm pretty sure it went like this: "We don't know for sure what intelligence is, but speaking is one of the first things we learn as babies, so it has to be something basic, fundamental, right?"
Fundamental: that part is fine. But definitely not basic, nor easy.
We understand language before we are taught to write, read, or count. Certainly before we learn chess or Go. These two games are the usual benchmarks of artificial intelligence, said to be among the most challenging (and, conveniently, measurable) tasks humans can perform. Beating somebody at chess (or Go, depending on the culture) automatically means you are smarter, more intelligent.
So if we can write a program that beats the world champion in these games, does it mean that we have finally created "Intelligence" and, what's more, surpassed humanity? No. Deep Blue and AlphaGo cannot understand even a single word comprehensible to STUDENT. Does that mean a prototype straight from the sixties, introduced eight years after the term "Artificial Intelligence" was coined, is more intelligent? Or do we simply understand the term "intelligence" wrong?
Intelligence is something far more complicated and less well-defined than we might think at first. And for sure, human and machine intelligence should be treated differently.
Moravec's paradox says that, contrary to common sense, high-level reasoning (necessary to beat a world-class master at Go) requires far less computing power than natural, intuitive skills and basic perception.
In other words: tasks that are hard or impossible for humans are, in fact, simple for machines. Meanwhile, tasks challenging for a machine are relatively easy for us: they require minimal effort and are almost reflexive.
Honestly speaking: I'm not sure whether this excites or frightens me more.
Millennia of Training
Certainly not the only one, but the leading explanation might be the millennia of evolution standing behind these basic human skills. We underestimate the fact that every single human who has lived on Earth since the dawn of humankind influenced the development of these plain brain skills. Simultaneously, we underestimate how critical they were for humanity to excel and, finally, to bring us to a state where we don't use our brains just to survive, and we have time for such inventions as chess.
Face recognition is a perfect example of such a skill. Recognizing each other's faces comes with ease... until we try to distinguish the faces of people from another ethnic group. Our brains are well trained, but on a given data set: the faces, or types of faces, we have seen in our lives. We are still damn good at face recognition, but as you can see, once we change the environment, this skill is no longer as evident and natural as we thought.
Natural language understanding is difficult in the same way, only much, much more so.
Idioms, metaphors, irony & context
Communication itself contains verbal and non-verbal parts. According to the current state of knowledge, the non-verbal share is the more critical one for communicating efficiently. However, AVA will not use it in action (and neither does a biological customer care employee).
Human language only seems logical and unequivocal at first sight. Idioms and metaphors: we use them almost without noticing, while remaining understandable to listeners. Irony and answers that depend on context: same story. Taught by life and experience, we wield language flexibly; in case of misunderstanding, we simplify our speech, ask questions, and fill in the default gaps in sentences.
Our immersion in reality is, in this case, beneficial. We can connect words with existing objects, colors, events, feelings. Simultaneously, over the years, we encounter new uses of them daily, slightly broadening the meaning they carry. Words are just a description of reality. The process of their evolution is strictly connected to consciousness: a mysterious, still-unexplained phenomenon, and one we shouldn't expect to be explained anytime soon.
The Bane of the Virtual Assistant
There is no trap in "I want to buy a red dress." Interpretation is easy because every word in this sentence means exactly what it says, and it's clear what each one describes or refers to.
On the other hand, "I'm searching for an Isofix car seat" causes more problems: "Isofix" is not the name of the seat but a mounting type.
Catching this kind of information and using it as an essential part of the query is not a problem for biological staff. Customer care specialists know the convention and, even more importantly, thanks to context awareness, can apply it relevantly in practice.
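A minimal sketch of why "Isofix" trips up a naive system (the catalog, product names, and field names here are entirely invented for illustration, not AVA's actual data model). Matching query words only against product names finds nothing, because "Isofix" lives in a separate attribute; matching against all attribute values recovers the seat:

```python
# Hypothetical product catalog: "Isofix" appears as a mounting attribute,
# never in a product name.
CATALOG = [
    {"name": "Summer Dress", "color": "red", "category": "dress"},
    {"name": "KidSafe 500", "category": "car seat", "mounting": "Isofix"},
]

def naive_search(query: str):
    """Match query words against product names only."""
    words = set(query.lower().split())
    return [p for p in CATALOG if words & set(p["name"].lower().split())]

def attribute_aware_search(query: str):
    """Match query words against every attribute value, not just the name."""
    words = set(query.lower().split())
    results = []
    for p in CATALOG:
        values = set(" ".join(str(v) for v in p.values()).lower().split())
        if words & values:
            results.append(p)
    return results

print(naive_search("Isofix car seat"))            # [] -- "Isofix" matches no name
print(attribute_aware_search("Isofix car seat"))  # finds the KidSafe 500
```

Even the attribute-aware version only works because someone structured the data in advance; it still has no idea that "Isofix" is a mounting convention, which is precisely the knowledge the human specialist brings for free.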
For a machine, it's not that easy. Despite the common perception of Artificial Intelligence, it's not a fairy entombed inside a steel box, magically solving problems. AI lives inside a computer, which is basically a calculator.