This is the forty-fifth article in a series dedicated to the various aspects of machine learning (ML). Today’s article will introduce a concept that is called “Analytical Learning,” which includes a popular form of learning called “explanation-based learning,” which will be the main focus of this essay.
We begin learning almost immediately after being born, and from that point on knowledge accumulates in our brains much like a rolling snowball: a mass of information builds up, and along the way some things stick, some edges get knocked off, and other things fall away or never get picked up at all.
The bigger that mass of information becomes, the more readily we can guide ourselves through a variety of situations. It is true that knowledge is power, and that the more one knows, the easier it is to navigate life.
As is almost always the case when discussing human decision-making, there is a machine learning analogue to point to. And, since this is a series on machine learning, it is perhaps best that we jump from the realm of the human, and into that of the machine.
In machine learning, knowledge is stored in much the way we humans store knowledge in our minds, except that human memory tends to be less reliable and solid than a computer's memory, unless the computer is in some way corrupted.
Both humans and computers use their past knowledge to solve present problems, but it is not always the case that past knowledge is used in present learning, though for humans such cases are rarer.
In machine learning, many agents learn inductively, meaning that they move from specific examples to a general hypothesis about their environment, their actions, and so on.
Though widely popular, inductive learning methods have their drawbacks. Notably, these methods can run into trouble when there is simply not enough data encountered in a specific example for a satisfactory hypothesis to be formed.
That is why a certain method of learning exists to enable machine learning agents to draw on past knowledge, or their “memory,” to deal with the data presently encountered. This method is called “analytical learning.”
It may come as a surprise to learn that machine learning agents are not always drawing on prior knowledge to analyze data, largely because we humans tend to almost always carry past knowledge into new learning instances. This is a teachable moment, however, because every learning-focused process that is natural in human learning must be programmed into a machine learning agent. In other words, it does not come naturally to them (the computers).
A form of analytical learning called explanation-based learning (EBL) exemplifies the method, and the name largely defines the idea. In EBL, prior knowledge provides certain explanations that help an agent better interpret the data in front of its literal or figurative face.
With regular inductive methods, the agent focuses almost exclusively on the currently observed data in its interpretation, whereas in EBL the prior knowledge, or explanation, is treated as input data alongside what is currently observed. What this leads to is a hypothesis that is more readily applicable to real-world scenarios.
One problem EBL can help prevent is "overfitting," a word that has been mentioned many times in this series, in which an agent becomes far too attached to its input data.
Process of Explanation-Based Learning
EBL is a pretty simple, easy-to-understand process.
An agent in EBL is supplied with what is called a “domain theory,” which is a fancy term for the “explanation” mentioned previously throughout this article. The domain theory can be a highly accurate, perhaps perfect explanation of data that the agent will observe.
Once data is observed, the agent will draw on the domain theory to learn as much as it can about the data, then do its own work to form a hypothesis that contradicts neither the information given by the domain theory nor the observed data.
It forms the hypothesis by identifying the truths contained in the domain theory and determining which parts of the explanation apply generally to the various observed data.
After that work is finished, the hypothesis will be retooled to fit what has been learned.
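The steps above can be sketched in code. This is a minimal, illustrative example only: the "cup" domain theory, the feature names, and the function names are all invented for this sketch, not drawn from any particular library. The idea is that the agent proves (explains) why a training example satisfies the target concept using the supplied domain theory, and then keeps only the features the explanation actually relied on, discarding the irrelevant ones.

```python
# A toy sketch of explanation-based learning (EBL).
# The domain theory and all feature names are illustrative assumptions.

# Domain theory: each concept holds if all of its sub-conditions hold
# (a tiny Horn-clause-style rule base supplied to the agent up front).
DOMAIN_THEORY = {
    "cup":      ["liftable", "stable", "open_vessel"],
    "liftable": ["light", "has_handle"],
    "stable":   ["flat_bottom"],
}

def explain(concept, example, used):
    """Recursively prove `concept` from the example's features,
    recording every primitive feature the proof relies on."""
    if concept not in DOMAIN_THEORY:      # primitive feature
        if concept in example:
            used.add(concept)
            return True
        return False
    return all(explain(sub, example, used)
               for sub in DOMAIN_THEORY[concept])

def ebl_hypothesis(concept, example):
    """Return the generalized hypothesis: only the features the
    explanation actually used, with irrelevant ones dropped.
    Returns None if the domain theory cannot explain the example."""
    used = set()
    if explain(concept, example, used):
        return used
    return None

# One observed training example, with irrelevant attributes mixed in.
obs = {"light", "has_handle", "flat_bottom", "open_vessel",
       "red", "made_of_ceramic"}          # color/material are noise

print(sorted(ebl_hypothesis("cup", obs)))
# → ['flat_bottom', 'has_handle', 'light', 'open_vessel']
```

Note how the hypothesis contradicts neither the domain theory nor the observed data, and how the incidental features ("red", "made_of_ceramic") are dropped because the explanation never needed them, which is exactly how EBL resists overfitting to one example.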
Humans typically use previous knowledge when learning new concepts, and this may actually be inherent to the process itself. However, this aspect of learning is not inherent to machine learning agents. As it turns out, the ability to use prior knowledge to explain new data is something that must be programmed into an agent. The method for this is called analytical learning. One of its forms is explanation-based learning, where an explanation, or domain theory, is supplied to an agent, which in turn observes the data through the lens of this explanation, in an effort to find principles that can shape a better hypothesis.