This is the forty-seventh article in a series dedicated to the various aspects of machine learning (ML). Our last few essays covered the concept called “Analytical Learning,” which includes the popular form of learning called “explanation-based learning.” Today’s article moves that discussion into new territory: how analytical learning can be merged with inductive forms of learning. We will introduce why such a combination is desirable, and the goals of an algorithm that seeks to combine the two.

We tend to combine things because the result is either pleasurable, or provides a new and important use to us that we could not quite get from either element on its own. 

A sterling example of this is peanut butter and chocolate, whose perhaps definitive combination has been realized in the enduring Reese’s Peanut Butter Cup, along with Reese’s other world-famous confections.

The best combinations are not limited to gastronomy, either, for many a moviegoer may recall the grand entertainment of Godzilla vs. Kong.

But, we believe that the best combinations of them all are to be found in the field of machine learning. 

We mentioned in our past few articles, which were dedicated to the method of analytical learning (which, if you have not got the definition down by now, is a learning method where an agent is supplied with prior knowledge about the data set in order to better form hypotheses), that analytical learning exists because traditional forms of inductive learning do not involve the use of prior knowledge at all; instead, the agent forms hypotheses purely by observing the features of the presented data.

It does not take a genius to figure out that the two methods are ripe for combination, which is exactly what some computer scientist geniuses set out to do (as they say, genius is all about the perspiration, while anybody can get inspired).

Combining these two methods can, of course, produce a more powerful form of learning, but it can also compensate for the biggest disadvantage each method faces on its own.

With analytical learning, the biggest setback comes when the prior knowledge is too limited to cover the training data, leaving the agent with incomplete and ill-informed hypotheses.

With inductive learning, the setback comes when the training data is simply too limited to learn generalizable rules from.

Since we named the setbacks, we may as well refresh you on the strengths as well. 

Analytical learning methods can extract a pretty good hypothesis from a pretty limited data set, because of the agent’s available prior knowledge. 

Inductive learning, on the other hand, does not need to rely on prior knowledge to form a pretty good hypothesis, provided, of course, that the training data is sufficient for learning. 

With these setbacks and strengths in mind, let us consider what goal an algorithm that combines these methods should strive for. 

Combining the Inductive and Analytical Methods

The kind of algorithm that reaches the desired combo’s sweet spot would be classified as an independent, general-purpose algorithm that also uses prior knowledge, but as a piece of input: less a “lens” that the agent relies on to read the data, and more a resource that it considers alongside the observed data.

So, what we are basically looking for is an algorithm that can operate independently of any prior knowledge, but that can still expertly employ a good domain theory whenever one is available, just as an analytical algorithm would.

With such an algorithm, the agent will be able to learn even in the absence of any given prior knowledge; the standard to meet is that it should perform just as well as a regular inductive algorithm.

Alternatively, if the agent is working with a perfect domain theory, it should be able to perform just as well as a regular analytical algorithm.

In cases where the agent has both limited prior knowledge and limited data, the algorithm should allow the agent to perform better than a regular analytical or inductive algorithm would on its own. In addition to that, it should be able to tolerate a certain margin of error in the data set, in the prior knowledge, or in both.
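To make these goals concrete, here is a toy sketch of the idea. This is not any published algorithm; the function name, the one-dimensional threshold setup, and the weighting scheme mu are all hypothetical illustrations. The learner picks a decision threshold that fits the observed data and, if a domain theory is supplied, also penalizes hypotheses that disagree with that theory:

```python
import numpy as np

def hybrid_threshold_fit(x, y, domain_theory=None, mu=1.0):
    """Pick the 1-D decision threshold minimizing training errors,
    optionally penalized for disagreeing with a prior rule."""
    best_t, best_cost = None, np.inf
    for t in np.sort(x):                    # candidate thresholds: the data points
        pred = (x >= t).astype(int)
        cost = np.sum(pred != y)            # inductive term: fit the observed data
        if domain_theory is not None:       # analytical term: fit the prior rule
            cost += mu * np.sum(pred != domain_theory(x))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Usage: sparse, noisy data plus an imperfect prior rule.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=8)
y = (x >= 5).astype(int)
y[0] ^= 1                                   # one mislabeled example (noisy data)
theory = lambda v: (v >= 6).astype(int)     # roughly right, slightly off

print(hybrid_threshold_fit(x, y))                # data only: purely inductive
print(hybrid_threshold_fit(x, y, theory, 0.5))   # blend data and prior knowledge
```

Omitting the theory (or setting mu to zero) recovers a purely inductive learner; weighting the theory heavily makes the learner defer to it, as a purely analytical method would; intermediate weights let imperfect data and an imperfect theory correct one another, which is exactly the middle ground described above.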

Now, the big and ugly question, or set of questions, we must ask ourselves is: Does such an algorithm exist? Is it even possible to create such a thing, or is it too good to be true?

Important questions, all of them. It must be mentioned that an algorithm that can meet such high standards is a goal for many researchers, but one that has not yet been reached to full satisfaction. However, the fact that it is being researched at all ought to tell you that it is not a quixotic quest after all.

So, keep an eye out for our next article, where we will discuss the approaches researchers have taken to realize this dream.