
This is the thirty-fourth article in a series dedicated to the various aspects of machine learning (ML). Today’s article will give a brief overview of the history of machine learning.

Our previous article dealt with the capability of a machine learning agent to reach back through history (art history, to be exact) and restore works of art, detailing Google’s efforts to use machine learning tools to recover the fire-damaged paintings of renowned artist Gustav Klimt.

Portraits destroyed by fire were rendered anew through recoloration informed by a massive data search across newspapers, journals, and other sources on the details of Klimt’s paintings, including the unintuitive (for a restorer) fact that the sky in one of the paintings was painted not blue but a light sea green.

Why Klimt chose that color, we may never know. Who knows what goes on in the minds of artists? We find it easier to focus on what goes on in a machine learning agent.

So, instead of going into art history, this article will give you a concise, though by no means comprehensive, overview of some of the most significant points in the history of machine learning. Enjoy!

Baby Steps

The idea that a machine could be built to think and reason like a human had long been a dream of the philosophers. 

In the twentieth century, science began to take the dream seriously and to ask whether it could be realized. Just as early aviators took the poets’ fantasy that a person might fly through the sky like a bird and built the airplane, computer scientists took the philosophers’ musings about a “thinking machine” and set out to build one.

Among the leading figures in early artificial intelligence is Alan Turing. The Turing test has come up before in this series, but its namesake and creator, and his contributions to the field of machine learning, have not yet been detailed.

Alan Turing was a mathematician and philosopher whose work in computer science brought significant advances to the field of AI. 

Like many early innovators in the AI field, Alan Turing wanted to make a computer resemble a human in its functioning. And what better way to test that than to see whether a computer could converse like a human, with a human!

Thus, the Turing test was born: a conversational computing platform tries to hold a conversation with a human being who does not know whether they are talking to a computer or a fellow human. If the machine can keep up the “deception” often enough, it passes. The test gave computer scientists everywhere the confidence that this whole “artificial intelligence” thing might be worth pursuing after all.

Crawling into the Future

It wasn’t long before humans were doing more than just talking to computers. Arthur Samuel, a few short years after the Turing test captivated the world, tested an IBM computer’s checkers-playing ability against human opponents. It was the first domino to fall on the way to chess champion Garry Kasparov getting absolutely smoked by IBM’s Deep Blue in 1997.

The significant thing about the checkers-playing computer is that it wasn’t preprogrammed with winning strategies. Rather, it was a true machine learning agent: it paid attention to which of its attempted moves worked out best, building its own strategy through practice, game after game.
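To make that idea a little more concrete, here is a minimal, purely illustrative Python sketch of an agent that learns which moves pay off simply by playing many games and keeping score. The toy game, the move names, and the update rule are all assumptions made up for illustration; Samuel’s actual program was far more sophisticated, learning an evaluation function for checkers board positions.

```python
import random
from collections import defaultdict

# A minimal sketch of learning-from-practice, loosely in the spirit of
# Samuel's checkers player but on a made-up toy game (the game, moves,
# and update rule here are hypothetical, not from Samuel's program).
#
# Toy game: the agent picks one of three moves per turn for three turns.
# Unknown to the agent, move "b" is the strong one; a game is "won" if
# "b" was picked at least twice. The agent tracks an average payoff per
# move and gradually prefers moves that led to wins in past games.

MOVES = ["a", "b", "c"]
value = defaultdict(float)   # estimated worth of each move
plays = defaultdict(int)     # how often each move has been tried

def play_one_game(explore=0.2):
    chosen = []
    for _turn in range(3):
        if random.random() < explore:
            move = random.choice(MOVES)                 # explore
        else:
            move = max(MOVES, key=lambda m: value[m])   # exploit what was learned
        chosen.append(move)
    won = chosen.count("b") >= 2   # hidden rule the agent never sees directly
    return chosen, won

def learn(num_games=5000):
    wins = 0
    for _ in range(num_games):
        chosen, won = play_one_game()
        wins += won
        reward = 1.0 if won else 0.0
        for move in chosen:
            plays[move] += 1
            # incremental average: nudge the estimate toward this game's outcome
            value[move] += (reward - value[move]) / plays[move]
    return wins / num_games

if __name__ == "__main__":
    win_rate = learn()
    print(f"win rate over training: {win_rate:.2f}")
    print({m: round(value[m], 2) for m in MOVES})  # "b" should score highest
```

Run it and the agent ends up preferring the winning move without anyone ever telling it which move that is, which is the essence of learning through practice.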

The ’50s innovations don’t stop there, either; it was a very happening time for computers. By 1957, Frank Rosenblatt had introduced the perceptron, the earliest ancestor of the neural networks that are now a staple of deep learning.
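For the curious, here is a tiny sketch of the perceptron idea: weighted inputs, a threshold, and an error-driven weight update. The training task (learning the logical AND of two inputs), the learning rate, and the epoch count are illustrative assumptions, not anything taken from Rosenblatt’s original work.

```python
# A tiny perceptron sketch: weighted inputs, a threshold, and a simple
# error-driven weight update. The AND task and hyperparameters below are
# assumptions for illustration only.

def predict(weights, bias, x):
    # fire (output 1) if the weighted sum of inputs crosses the threshold
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # perceptron rule: nudge weights in the direction that corrects the error
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

if __name__ == "__main__":
    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train(and_samples)
    print([predict(w, b, x) for x, _ in and_samples])  # expect [0, 0, 0, 1]
```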

Walk the Walk, Talk the Talk

Further developments through the 1960s were dedicated to improving computers’ pattern-recognition skills, so that they could, for example, figure out the best way to map a travel route. By 1979, a group of Stanford students had built a self-driving “Stanford Cart” that could recognize and avoid hitting objects in a room.

The 1980s saw advancements in training agents and in conversational computing, but it wasn’t until the 1990s that data-driven methods for machine learning took off. From then on, the machine learning approach of analyzing large amounts of data to learn about environments and actions has been the modern standard.

Almost Grown

Since the 1990s, we have seen more and more impressive, and concerning, advancements in the fields of AI and machine learning. We may be amused by IBM Watson’s ability to beat human champions at Jeopardy!, but darker uses are being found for machine learning agents, such as autonomous weapons used in wars and political assassinations across the globe. We can only hope that the developers of AI and the people funding these projects have a sense of responsibility in choosing how to advance the field.