This is the forty-second article in a series dedicated to the various aspects of machine learning (ML). Today’s article continues the discussion of genetic algorithms, a form of learning that finds its basis in the biological concept of evolution. We will go into the specifics of how genetic algorithms work, along with the theories of evolution that influence them.

Our last article was a lesson in genetic algorithms, a kind of machine learning algorithm that mimics biological evolution: it creates and discards hypotheses on a “survival of the fittest” basis, where only the strongest hypotheses are kept and combined with each other to create even stronger hypotheses.

This article will dive deeper into genetic algorithms by offering a more specific account of how they function. Then, we will further illustrate the ideas behind them by detailing a few theories of evolution that influence the creation of genetic algorithms.

Genetic Algorithms: How Do They Work? 

All genetic algorithms start with a population of hypotheses. 

These hypotheses make up a population, P, and the number of hypotheses in the population tends to be maintained through each “generation” of hypotheses.

So, a certain number of hypotheses are scrapped while the rest are retained, and new hypotheses are formed to fill in the gap left by the scrapped ones.

Hypotheses are retained or scrapped according to how they measure up to what is called a “fitness function,” a function that probabilistically determines which hypotheses in the bunch are best suited to reaching the machine learning agent’s desired goal.
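
To make the selection step concrete, here is a minimal sketch in Python. It assumes hypotheses are encoded as bit strings, and the fitness function shown (simply counting 1-bits) is purely a placeholder for whatever task-specific scoring an actual agent would use.

```python
import random

def fitness(hypothesis):
    """Placeholder fitness: count the 1-bits in a bit-string hypothesis."""
    return hypothesis.count("1")

def select(population, k):
    """Probabilistically retain k hypotheses, weighted by their fitness."""
    weights = [fitness(h) for h in population]
    return random.choices(population, weights=weights, k=k)
```

Note that the selection is weighted rather than strictly greedy: fitter hypotheses are more likely to survive, but weaker ones still have a chance, which helps preserve variety in the population.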

Some hypotheses are retained because they are strong on their own, while others are retained because they will serve as a good “base” for a new hypothesis.

We mentioned that new hypotheses need to be formed to keep the population at its fixed size, and these new hypotheses are generated by recombining certain retained hypotheses.

Concepts from evolution are used in this process, such as “crossover,” which is the recombination we just mentioned, and “mutation,” where an existing hypothesis is modified, even if only slightly, to better meet a certain threshold of fitness.

In crossover, two hypotheses are combined to create two brand new ones. 
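
Here is a sketch of both operators under the same bit-string assumption: single-point crossover cuts the two parents at the same random point and swaps their tails, producing two new offspring, while mutation flips individual bits with some small probability.

```python
def crossover(parent_a, parent_b):
    """Single-point crossover: cut both parents at one point, swap tails."""
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(hypothesis, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return "".join(
        ("1" if bit == "0" else "0") if random.random() < rate else bit
        for bit in hypothesis
    )
```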

The repopulating and revising of hypotheses can be a continuous process, going on until a certain degree of fitness is achieved across the population, at which point a usable hypothesis (or several) can be selected for real-world use.
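
Putting the pieces together, here is a sketch of that loop, reusing the select, crossover, and mutate helpers above: retain the fitter hypotheses, refill the gap through crossover and mutation, keep the population size fixed, and stop once some hypothesis clears a fitness threshold. The halving ratio and mutation rate are arbitrary choices for illustration, not canonical values.

```python
def evolve(population, threshold, mutation_rate=0.01):
    """Evolve the population until some hypothesis reaches `threshold`."""
    size = len(population)
    while max(fitness(h) for h in population) < threshold:
        # Retain roughly half the population, favoring fitter hypotheses.
        survivors = select(population, size // 2)
        # Refill the gap left by the scrapped hypotheses via crossover.
        children = []
        while len(survivors) + len(children) < size:
            a, b = random.sample(survivors, 2)
            children.extend(crossover(a, b))
        # Apply occasional mutations and keep the population size fixed.
        population = (survivors +
                      [mutate(c, mutation_rate) for c in children])[:size]
    return max(population, key=fitness)
```

For example, evolve(["0110", "1010", "0011", "1100"], threshold=4) would run generation after generation until an all-ones hypothesis emerges under the placeholder bit-counting fitness.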

Biological Evolution and Machine Learning Evolution

Sometimes, what does not hold in biological science can still be useful in computer science. A good example of this is a genetic algorithm that creates “offspring” hypotheses informed by the experiences (i.e., successes and failures) of “parent” hypotheses. This way, the new hypotheses have the knowledge of past hypotheses simply passed down.

In biology, the experiences of a parent do not affect the genetic makeup of a child. For example, a person who loses an arm in a boating accident is no more likely to have a child born with one arm than any other parent is.

As another example, a parent who is an expert in physics will not have a child who is genetically ahead of the curve in terms of physics knowledge. Though this idea was put forth in early evolutionary theories, most notably by Jean-Baptiste Lamarck, it is now widely acknowledged to be demonstrably false.

But, would it not be ideal for it to be so? That what we learn in our lifetimes can be genetically passed down to our kids? Computer scientists think so, which is why genetic algorithms have “parent” hypotheses pass down information about the world and actions to new hypotheses.

Another big biological influence on machine learning is a theory called the Baldwin effect, which states that the members of a species that are better at learning are more likely to survive than those that are not. 

This may seem simple and obvious, but what is crucial about it is that it speaks to the notion of adaptability. A creature that is genetically hardwired to be physically strong may thrive in some environments, but if a stronger, deadlier predator is introduced into its environment, and the creature does not learn, along with the rest of its population, to fear the newcomer, then it will be in mortal danger and may not pass on its genes.

In machine learning, the Baldwin effect shows up in the observed principle that agents given more power (e.g., through the type of algorithm they use) to modify their hypotheses tend to perform better overall.

Summary

Genetic algorithms improve upon a population of hypotheses by discarding the hypotheses that measure up poorly against a fitness function, then repopulating the freed space with modified or combined versions of the remaining hypotheses. Machine learning evolution draws on both rejected and accepted biological principles; what matters is how useful a concept is for creating machine learning algorithms, not whether the process or theory is correct in the world of biology.