This is the thirteenth article in a series dedicated to the various aspects of machine learning (ML). Today’s article introduces the concept of natural language processing (NLP) and covers its role in machine learning.

 

“Alexa, play Dua Lipa.”

 

“Opening DuoLingo.” 

 

“No, Dua Lipa. Please.”

 

“Opening DuoLingo.” 

 

“No! Dooh-uh, Lee-puh.” 

 

“I’m sorry, I didn’t quite catch that. Can you ask again?” 

 

“DuaLipa!” 

 

“I’m sorry, I didn’t—” [sound of Alexa being unplugged]. 

 

We’ve all had trouble at some point getting a tech product to work correctly, and many of those frustrations involve tools like Siri or Alexa mishearing us. That frustration speaks to the hidden success of AI: the otherwise solid performance of these products has built an expectation among humans that a computer can indeed hold a conversation with a human being.

The success of these products speaks to the major advances that have been made in natural language processing (NLP), the field of machine learning dedicated to programming agents to communicate using natural (that is, human) languages.

NLP is valuable because it allows computers to communicate with and learn from humans, which can make understanding and completing goals much easier. In addition, AI can help advance findings in fields like linguistics and neuroscience when scientists study how computers use and understand language.

Although mainstream products like Alexa are fairly new, NLP is not a recent development in AI. In fact, natural language processing has been a valued part of the field since the humble beginnings of artificial intelligence.

Ever heard of the Turing test? When AI trailblazer Alan Turing set out to test the intelligence of a computer, he made the benchmark the computer’s ability to use language in an “intelligent,” humanlike way.

The Turing test itself is simple: a human holds a virtual conversation with an unseen partner, which could be either another human or a computer. If the partner is in fact a computer and the human either can’t tell which it is or believes they are talking to a human being, then the computer passes the Turing test. Essentially, the test asks whether the computer can use language in a way that is convincing to a human, which also requires comprehending what the human is saying.

 

Machine Learning and Natural Language Processing

 

Machine learning is a necessity for NLP, because it would be virtually impossible for any mortal team of developers to teach a computer every word in the world, what those words mean, the numerous combinations of those words, and the different ways that the meaning of the same set of words can change depending on the context in which they appear. 
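
To make that concrete, here is a minimal sketch of the data-driven alternative. It assumes Python and the scikit-learn library (neither is mentioned in the article), and the tiny set of training sentences and intent labels is invented purely for illustration: rather than hand-coding rules for every word, a model learns from labeled examples which words tend to signal which intent.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training data: short commands labeled with the intent they express.
sentences = [
    "play some music",
    "play my workout playlist",
    "open the language app",
    "open my email app",
]
intents = ["play_music", "play_music", "open_app", "open_app"]

# Turn each sentence into word counts, then learn which words signal which intent.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)
model = MultinomialNB().fit(X, intents)

# The model can label a request it has never seen word-for-word.
print(model.predict(vectorizer.transform(["play Dua Lipa"])))  # ['play_music']
```

Even with only four example sentences, the model can label a command it has never seen verbatim, because it has learned that the word “play” tends to signal a music request rather than being told so explicitly.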

When it comes to language, most humans are on “autopilot” in the sense that they don’t question or even notice how strongly their interpretation and navigation of the world depend on language. Most humans are raised on language and grow up around it, and a baby’s first words are often seen as a major step in its development, like a first rite of initiation into the world of the verbal.

For most of us, it is a very strange notion that a nonverbal life is even possible for humans or human society, but a look around the rest of the animal kingdom shows that feeling to be biased. Most animals live nonverbal lives, or use an incredibly primitive sort of proto-language to communicate (think of birds chirping, apes screeching, dolphins singing, and so on). Even so, their means of communication is far from the vast toolbox of vocabulary that humans draw from daily to describe, communicate, and interpret things.

We say all this because we want to show that computers, despite their highly technical makeup and functions, are typically designed around “primitive” languages of their own, like binary (1s and 0s, e.g., 10101011001010) or first-order logic (e.g., A ∨ B, which in human language translates to “[whatever ‘A’ stands for] or [whatever ‘B’ stands for]”). Because of this, a computer will not “pick up” language the way a baby does without being programmed to do so.
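
As a quick illustration of that gap, here is a minimal sketch (again assuming Python, which the article itself does not use) of what a human-readable phrase looks like at the level a computer is actually built around: each character is stored as a number, and that number is ultimately just a pattern of 1s and 0s.

```python
# Each character in the phrase is stored as a number (its Unicode code point),
# and that number is ultimately held in memory as a pattern of 1s and 0s.
phrase = "Dua Lipa"

for char in phrase:
    code = ord(char)             # the number the computer stores
    bits = format(code, "08b")   # the same number written out in binary
    print(f"{char!r} -> {code:3d} -> {bits}")
```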

Additionally, the ambiguity and vagueness of language is not as easy for a computer to sift through as it is for a human. For example, Alexa won’t be able to detect whether you are being sarcastic.

Even a computer programmed with the basics of a language like English, and then some, will not become proficient on its own. It needs the help of machine learning algorithms to keep up with a human in conversation and to keep expanding its vocabulary and grammar skills.
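
As a rough sketch of what “keep expanding its vocabulary” can mean in practice (Python assumed; the sample sentences are invented for illustration), a system can simply update word counts every time it sees new text, so that words it has never encountered before automatically become part of what it knows:

```python
# A rough sketch of a vocabulary that grows with experience: every new sentence
# the system sees updates its word counts, including words it has never met.
from collections import Counter

vocabulary = Counter()

def learn_from(text: str) -> None:
    """Update the vocabulary with the words in a new piece of text."""
    vocabulary.update(text.lower().split())

learn_from("Alexa play Dua Lipa")
learn_from("please play something by Dua Lipa")

print(vocabulary.most_common(3))   # the most frequently seen words so far
print("lipa" in vocabulary)        # True: a word learned purely from the text it saw
```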

 

Machine Learning Summary

Natural language processing is a field dedicated to teaching computers how to take part in communication involving natural, or human, languages like English or Swahili. Humans pick up on and navigate the complexities of language because language is so pervasive in human society, but it is quite a task to make a computer learn a natural language, since computers are built around far more primitive languages, much as most animals are limited to primitive forms of communication. Because of this, developers need to give AI agents the tools to analyze language effectively and communicate with it.