
This is the forty-sixth article in a series dedicated to the various aspects of machine learning (ML). Today’s article continues our discussion of a concept called “analytical learning,” which includes a popular form of learning called “explanation-based learning,” the main focus of the last essay.

Our last article introduced analytical learning, a method of machine learning where an agent does not go in “blind” to new data, but rather holds in its memory a “domain theory”: an explanation of the data it will observe, which in turn helps it form a better, more general hypothesis for its tasks.

What the agent does is analyze data through the “lens” of the domain theory, gaining greater insight into its environment. This is important because many methods of machine learning, specifically inductive methods, do not employ prior knowledge in learning, but rather have the machine learning agent face the observed data independently. 

This article will expand upon our discussion of analytical learning in significant ways. We will first revisit a comment made in the last article: that some domain theories are “perfect.” The following section explains what that means.

“Perfect” Domain Theories

The qualities that make a domain theory “perfect” are correctness and completeness.

Correctness means that the domain theory’s claims about the data are accurate.

Completeness means that the domain theory covers all of the data the agent will observe in its space, though strictly speaking the agent only needs to know the things relevant to doing a good job with respect to its goal. In other words, a machine learning agent does not need to know the precise shade of a room’s wallpaper if all it needs to do is vacuum the floor.
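To make this concrete, here is a minimal, hedged sketch in Python of what a domain theory might look like, using the classic “cup” example from the explanation-based learning literature. The rule and fact names are purely illustrative, and a real system would use first-order rules rather than simple strings.

```python
# A toy, propositional domain theory for the classic "cup" example.
# Rule names and facts are illustrative only.
DOMAIN_THEORY = [
    ({"light", "has_handle"}, "liftable"),
    ({"flat_bottom"}, "stable"),
    ({"has_concavity", "concavity_points_up"}, "open_vessel"),
    ({"liftable", "stable", "open_vessel"}, "cup"),
]

def derives(theory, observed_facts, goal):
    """Forward-chain over the rules. A *correct* theory never derives the goal
    for a negative example; a *complete* theory derives it for every positive
    example -- at least with respect to goal-relevant facts."""
    known = set(observed_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in theory:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return goal in known

# A positive example. The extra facts ("red", "ceramic") are irrelevant to the
# goal -- like the wallpaper shade for the vacuuming agent above.
example = {"light", "has_handle", "flat_bottom", "has_concavity",
           "concavity_points_up", "red", "ceramic"}
print(derives(DOMAIN_THEORY, example, "cup"))  # True
```

Note that the theory says nothing about color or material; completeness is only measured against the facts that matter for the goal.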

You may be scratching your head at this point, wondering why it would even be necessary for an agent to learn from data if it already has a perfect domain theory to draw upon. 

Well, using an analogy from games (and let it be said that AI agents often do play games, as when an AI agent competed in a Go tournament), sometimes a domain theory will provide an agent with the rules, but not a strategy. So, an agent will know how an environment works and the variety of “moves” it can make, but it will still need to figure out for itself, from examples, exactly how to achieve its goal.

Also worth mentioning is the fact that, in many situations, it simply is not possible, or reasonable, to develop a perfect domain theory, so what is available to an agent is unavoidably imperfect.

So, we can see that the process of learning with a domain theory is not as “perfect” as we may imagine it to be. We will go deeper into the process of explanation-based learning to show this. 

Explanation-Based Learning: A Closer Look

We will try to keep this short and sweet. 

When the domain theory is perfect, meaning complete and correct across all examples, it more or less “proves” whether or not a hypothesis is correct.

When the domain theory is imperfect, the explanation the agent is given for data examples does not exactly prove a hypothesis, but rather makes a case for it being correct. Oftentimes, the case made is actually quite accurate, and still useful to the agent.

When it comes to the actual analysis of the data, the agent needs to decide ultimately what is relevant and what is not for forming its hypothesis. Many algorithms assign something like a ranking to elements of a domain theory’s explanation in order to identify the “weakest preimage,” which, contrary to its name, is actually of great use and interest to an agent.

The “weakest preimage” can be one or many statements in the explanation: the most general set of conditions that, according to the explanation, is still sufficient for the goal. So, these are basically the rules that can be applied most generally to the environment and the agent’s actions.
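A common way to find the weakest preimage is to “regress” the goal back through the explanation until only observable conditions remain. Below is a hedged, propositional sketch of that idea, reusing the hypothetical DOMAIN_THEORY from the earlier sketch; it assumes one rule per conclusion, whereas real systems such as Prolog-EBG regress first-order literals and track variable bindings.

```python
def regress(theory, goal):
    """Regress the goal back through the domain theory to its 'weakest
    preimage': the most general set of observable conditions that the
    explanation shows is sufficient for the goal. Propositional sketch,
    assuming one rule per conclusion."""
    rules = {conclusion: premises for premises, conclusion in theory}
    frontier = {goal}
    while any(literal in rules for literal in frontier):
        literal = next(l for l in frontier if l in rules)
        frontier = (frontier - {literal}) | rules[literal]
    return frontier

# With DOMAIN_THEORY from the sketch above:
# regress(DOMAIN_THEORY, "cup")
# -> {"light", "has_handle", "flat_bottom", "has_concavity",
#     "concavity_points_up"}
# A rule built from these conditions applies to *any* object with these
# features, not just the training example that happened to be red and ceramic.
```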

Most agents have a hypothesis that they rework as they continue to learn, rather than starting from scratch at each new move. The rules learned carry over from instance to instance, but it is important to realize that agents, or rather their algorithms, will keep singling out new and significant data and seeking to explain it, which can result in alterations to the hypothesis.
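The loop might look something like the hypothetical sketch below, which reuses the derives() and regress() helpers from the earlier sketches; the class and method names are illustrative, not any particular library’s API.

```python
class IncrementalEBLLearner:
    """Sketch of the incremental loop described above. The hypothesis is a
    growing list of learned rules; a new example is explained only when the
    current rules do not already cover it."""

    def __init__(self, domain_theory, goal):
        self.domain_theory = domain_theory
        self.goal = goal
        self.hypothesis = []  # learned rules: (conditions, goal)

    def covered(self, facts):
        # The example is covered if some learned rule's conditions all hold.
        return any(conditions <= facts for conditions, _ in self.hypothesis)

    def observe(self, facts):
        if self.covered(facts):
            return  # nothing new and significant in this example
        if derives(self.domain_theory, facts, self.goal):
            # A new, significant positive example: explain it and keep only
            # the general conditions, altering the hypothesis.
            # (In this propositional toy the explanation is always the same;
            # real systems build it from the specific example's facts.)
            self.hypothesis.append((regress(self.domain_theory, self.goal),
                                    self.goal))
```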

Summary

Domain theories are perfect if they provide accurate information about all of the data relevant to an agent’s goal. Not all domain theories are perfect, and in many situations a perfect domain theory simply cannot be built. The process of explanation-based learning consists of stacking up data against a domain theory’s explanation, seeing which parts of the explanation can be treated as general rules, and then reworking an evolving hypothesis based on what is learned from the newly observed data.