
This is the tenth article in a series dedicated to the various aspects of machine learning. Today’s article will step back from the specific types of learning algorithms and focus on the problem of overfitting and the solution of generalization, the latter of which is a goal shared by all such algorithms.

Over the past few articles, we have been teaching you about the various ways that AI agents can learn. A detail you may have picked up on is that these agents are not ready to go as soon as they are designed, but that they actually need to be trained before being let loose in the real world.  

So, Domino’s delivery robot R2 will not deliver its very first pizza (nor its second, nor its third, and so on up to some pizza n, where n is the point at which the R&D team is confident in R2’s ability to deliver pizzas) to an actual paying customer. Instead, it will practice its job in controlled environments, at least at first, before heading out into real-life ones.

We’ve mentioned that R2 will be encountering loads of input data—sights, sounds, and the like—that are processed by machine learning algorithms to decide on its next action, or simply to make sense of what the data means. Being able to quickly process that data, classify it correctly, and make predictions based on that classification is the essence of decision-making in AI; any agent that does this poorly is destined to disappoint.

 

It would be nice if an AI agent like R2 could simply be programmed to be a perfect learner so that it could go straight from the lab to the streets, but this is just not the case. They say that practice makes perfect, and the same goes for robots. Hence the need for training, and training sets of data that an agent will learn how to, well, learn from. 

 

In a training space, there is no risk that R2 will accidentally drive over the toes of an innocent pedestrian in an effort to cut 1.5 seconds off its delivery time, but there is the risk of overfitting. 

 

Divorce, Machine Learning Style

 

Overfitting is a term we’ve mentioned before, but here’s a brief definition: overfitting is what happens when an agent becomes so accustomed to the data set it was trained on that it struggles with never-before-seen data. The solution (or, better, the preventative measure) is to “divorce” the agent from its training set, breaking its obsession with that one set so it can handle other data sets.
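To make the idea a little more concrete, here is a minimal, hypothetical sketch (nothing from R2’s actual code, just an illustration using scikit-learn): a very flexible model memorizes a small training set almost perfectly, yet does far worse on fresh data drawn from the same underlying pattern.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# The underlying "world": a simple sine curve plus a little noise.
def sample(n):
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)
    return x.reshape(-1, 1), y

X_train, y_train = sample(15)   # small training set
X_test, y_test = sample(200)    # "never-before-seen" data

# A 14th-degree polynomial has enough freedom to memorize 15 points.
overfit_model = make_pipeline(PolynomialFeatures(degree=14), LinearRegression())
overfit_model.fit(X_train, y_train)

print("train MSE:", mean_squared_error(y_train, overfit_model.predict(X_train)))
print("test  MSE:", mean_squared_error(y_test, overfit_model.predict(X_test)))
# Typically: near-zero training error, much larger test error -- overfitting.
```

The numbers and model here are placeholders; the point is simply that low error on the training set says nothing by itself about how the agent will behave on data it has never seen.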

 

So, let’s say R2 was overfit to a training route in which a golden retriever charges at it, and R2 must swerve left (there’s a pedestrian to its right) to get out of the dog’s way and prevent a robot-dog collision.

 

If R2 is overfit to this route, any number of problems could occur out on the street when it encounters a golden retriever. For one, it could falsely assume the dog is charging it, as the dog always does on its training route. Or, if the dog really does charge, R2 may swerve left and hit a pedestrian when it should have gone right or simply stopped, because it learned that swerving left was the appropriate response in the original situation. Or it may conclude that only golden retrievers are threatening and that all other dogs are peaceful!

 

Now, in reality a delivery bot would likely be trained on multiple data sets with some variance in the set pieces along the route, such as having to swerve right instead of left when confronted with the dog. But the point should be clear: false, and even dangerous, “beliefs” about what to do when delivering a pizza can arise from overfitting.

 

The key to preventing overfitting is generalization, which we’ll spend the remainder of this article discussing. 

 

There’s Plenty of Data in the Sea

 

The gist of generalization is in the name: it denotes an ML agent’s general ability to work with unfamiliar data sets, as opposed to being specialized for (overfit to) just one or a few specific sets.

The goal of achieving generalization in an ML agent is for it to understand why certain actions are optimal and why others aren’t. An agent that fails to generalize is like a student who takes the same ACT practice test over and over again to prepare, but still ends up struggling with the actual test on exam day, AKA “College GameDay.” The student only learned how to get the right answers on that one practice test, rather than how to solve the kinds of problems that generally appear on the real thing.

There are various methods for bolstering generalization in an agent, but the main idea is that the agent relies on a supervised learning algorithm that helps it discover “beliefs” it can apply to new input data. So, when R2 sees a dog, golden retriever or not, it will know to swerve (if the coast is clear) or to stop if the dog begins to charge.
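As a rough illustration of how generalization can be checked and encouraged in practice, here is a hedged sketch, again using scikit-learn as an assumed toolkit rather than anything from R2’s real pipeline: hold out data the model never trains on, compare its performance there against the training set, and add a bit of regularization (ridge regression in this example) to discourage memorization.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 200)
X = x.reshape(-1, 1)

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Ridge regression penalizes extreme coefficients, discouraging memorization.
model = make_pipeline(PolynomialFeatures(degree=14), Ridge(alpha=1e-3))
model.fit(X_train, y_train)

print("train MSE:", mean_squared_error(y_train, model.predict(X_train)))
print("test  MSE:", mean_squared_error(y_test, model.predict(X_test)))
# When the two numbers are close, the model is generalizing rather than memorizing.
```

The specific model, penalty, and data are stand-ins; the takeaway is that a small gap between training performance and held-out performance is the practical signal that an agent has generalized.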

 

Machine Learning Summary

ML agents must practice the tasks they were designed to accomplish on training sets before being let loose in the real world. One of the biggest risks during training is overfitting, where an agent becomes too used to just one, or a few, training sets and thus struggles with new data. The key to preventing or overcoming overfitting is generalization: an agent’s ability to work with unseen and diverse data sets.