This is the twenty-first article in a series dedicated to the various aspects of machine learning (ML). Today’s article will explain another fundamental aspect of machine learning, which is concept learning. We’ll tell you how it is that a machine learning agent, well, learns from what it encounters in the environment, and how it chooses the concepts by which it operates as time goes on.
Most people, after they are dragged kicking and screaming into the real world, don’t make too much of an effort to keep learning about stuff that is non-pertinent to more immediate concerns like their job. Even if you aren’t hitting the books every day after a long day at the office, that doesn’t mean that you aren’t learning anymore. We learn things every day, and introduce new concepts to ourselves from our experience. You don’t need a tweed-jacket wearing professor following you everywhere in order to learn.
The things learned and concepts formed, like “The cookies that Subway sells are pretty tasty” and “Whenever I am in a Subway I should buy a cookie,” may be quite general and picayune compared to what can be learned from an organic chemistry textbook, but that doesn’t mean the process of discovery and idea formation isn’t alive within you. It is happening constantly, day in and day out, even if we don’t necessarily feel like, or realize, we are learning as we go about our day.
Why we choose to pay attention to and dwell on certain things and not others may be best explained by our own whims and psychologies, but when it comes to AI agents the aim of their learning and concept-formation is for one purpose only: To improve their efficiency at performing whatever task it is they are programmed to accomplish. It is unambiguous why a bourbon-bottling robot would note that pressing too hard on a cork will submerge it in the bourbon below the bottleneck. What this article will do is tell you how it learns such concepts on its own.
Let’s name our bourbon bottling bot BBB, for BourbonBottlingBot. A machine learning agent like BBB will usually learn in an inductive sense, meaning that it will discover general truths through specific examples.
While BBB is learning about the subtleties of bottling during its training phase, it is faced with the task of bottling 100 bottles over the course of an hour. That’s plenty of time for a robot, so the training wheels are on, so to speak. In four of the first seven bourbons it bottles, it pushes the cork way too far down, to the point where the cork floats in the bourbon and can’t be reached by either a human or a robot finger. These four mishaps will be called instances, the term for the specific input-output examples from which an agent induces a concept.
Now, BBB knows from its developers what a well-bottled bourbon looks like, so the value it assigns its performances throughout these instances is a low one. It knows that it is failing the task, and so it will begin to search for what is called an output hypothesis, which is a concept that will allow it to adequately perform its task.
To arrive at a good output hypothesis, the agent searches through a wide range of hypotheses, considers the most likely ones, and out of those picks the ones that best explain what happened in the instances.
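As a rough illustration of that search, the sketch below scores a handful of candidate hypotheses against recorded instances and keeps the one that explains the most of them. The numbers, the threshold-style hypothesis space (“a push succeeds when force is at most X”), and the field names are all invented for this example; a real agent’s hypothesis space would be far richer.

```python
# Minimal sketch: score candidate hypotheses against observed instances.
# Each instance records the force applied (hypothetical units) and
# whether the cork ended up seated correctly.
instances = [
    {"force": 9.0, "ok": False},   # a mishap: cork pushed too far
    {"force": 8.5, "ok": False},
    {"force": 4.0, "ok": True},    # cork seated correctly
    {"force": 4.5, "ok": True},
    {"force": 9.5, "ok": False},
    {"force": 5.0, "ok": True},
    {"force": 8.8, "ok": False},
]

# Candidate hypotheses: "a push succeeds when force <= threshold".
candidates = [2.0, 4.0, 6.0, 8.0, 10.0]

def agreement(threshold, data):
    """Count how many instances the hypothesis explains correctly."""
    return sum((inst["force"] <= threshold) == inst["ok"] for inst in data)

best = max(candidates, key=lambda t: agreement(t, instances))
print(best)  # → 6.0, the threshold that best explains the instances
```

Here the threshold 6.0 agrees with all seven instances, so it wins; in practice the agent would keep revising as new instances arrive.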
Keeping in mind that the overall arc of BBB’s learning method is inductive, since it is using specific instances to find a general concept, when it searches through hypotheses it starts with the most general ones and works toward more specific ideas, a deductive process. Consider the ordering of hypotheses below, written in plain English rather than a harder-to-read machine representation, to see this.
H1 – The cork fell because I pushed it
H2 – The cork fell because I pushed it hard
H3 – The cork fell because I pushed it too hard
You’ll see that H1 is the most general, because every hypothesis agrees that the cork fell because the agent pushed it. H3 is the most specific, because its conclusion is not necessarily implied by the first two.
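This general-to-specific ordering can be made concrete with the attribute-and-wildcard representation often used in concept learning, where “?” matches any value. The attributes and values below are illustrative stand-ins for H1 and H2; comparing H2 with H3 would need a finer-grained strength scale, so it is omitted here.

```python
# Sketch of the more-general-than ordering over hypotheses.
# Attributes: (action, strength); "?" is a wildcard matching anything.
h1 = ("pushed", "?")        # H1: the cork fell because I pushed it
h2 = ("pushed", "hard")     # H2: ... because I pushed it hard

def more_general_or_equal(a, b):
    """True if hypothesis a matches every instance that b matches."""
    return all(x == "?" or x == y for x, y in zip(a, b))

print(more_general_or_equal(h1, h2))  # → True: H1 covers all of H2
print(more_general_or_equal(h2, h1))  # → False: H2 is stricter
```

Searching from general to specific then means moving down this ordering, replacing wildcards with concrete values as the instances demand.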
There will be many more hypotheses, and they are unlikely to be so general. Perhaps BBB decides on a specific amount of force to use when pushing in the cork, and on how deep its robot thumb should go when pushing it down, so the ultimately chosen hypothesis would be (in our plain English) “I should exert X amount of force while pushing down the cork, and only push my thumb Y centimeters down.”
BBB would eliminate other, less likely hypotheses by comparing their assertions with the data culled from the instances. In our example, BBB would know the amount of force it exerted when the corks went too low, and the amount it exerted when the corks stayed put. It will likely choose the hypothesis that recommends something equal or close to the amount of force it exerted during its successes. And since it is a machine learning agent, it will continue to analyze its moves over the next 93 bottles to improve its method further.
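A toy version of that elimination step might look like the following: take the forces observed on successful pushes, confirm the mishaps all used more force, and propose a value inside the successful range. All numbers are made up for illustration.

```python
# Sketch: narrow the hypothesis using forces (hypothetical units)
# recorded during BBB's successes and failures.
successes = [4.0, 4.5, 5.0]        # forces when the cork seated correctly
failures = [9.0, 8.5, 9.5, 8.8]    # forces during the four mishaps

low, high = min(successes), max(successes)
# Sanity check: every mishap used more force than any success.
assert all(f > high for f in failures)

# Recommend the midpoint of the successful range as X going forward.
chosen_force = (low + high) / 2
print(chosen_force)  # → 4.5
```

The chosen value would itself be revisited as the remaining bottles supply new instances.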
Concept learning for machine learning agents is the process of taking specific instances, e.g. attempts at bottling bourbon, and finding a general concept from them. The agent does this by searching through a number of hypotheses, starting with the most general and working toward more specific ones, to ultimately choose the one that best agrees with the evidence from the instances.