
This is the twentieth article in a series dedicated to the various aspects of machine learning (ML). Today’s article will introduce a “basic” language of many machine learning agents: First Order Logic (FOL). We’ll show how this language allows logical agents to reason and make, well, logical decisions that lead to favorable results.

Ever wish you could read minds? No? Don’t lie, everyone has wished it at least once. What goes on in the heads of others is a bit of a mystery to us, and though advances in neuroscience allow researchers to analyze when a decision is made in someone’s mind, most of us don’t have access to neuroimaging devices to scan when someone we know makes a decision. Still mysterious, however, is the nature of how exactly humans reason.

Though we often “think through” our decisions in our mind in the form of sentences that assert things and perhaps react to or revise one another, there is plenty of evidence out there that human reasoning is an unconscious process, and that perhaps we’ve made up our mind unconsciously before we’ve even begun to “think through” anything in our conscious mind. In fact, it is possible that the whole decision process may not even involve language. Additionally, it’s been speculated that the unconscious may be able to do math problems without using numbers! It’s bizarre to think about, but it goes to show just how deeply removed our conscious mind’s workings may be from our unconscious mind’s processes.

Anyone who attempts to understand how humans’ minds work will have to contend with the messy but profound decision-making hub that is the unconscious, but anyone who wants to see how computers (at least a good deal of them) think will have an easier time. Since computers are a human-made invention, their decision-making processes are accessible and readable to humans, who, in turn, designed those decision-making processes.

We’ve discussed natural language processing, where computers are taught to read and communicate in a “natural” language, i.e. a language like English that a good bulk of human beings use on a daily basis. However, many logical agents don’t need to understand a language as complex as English to complete human tasks; rather, they make use of a language that, beyond computer science, is used in certain fields of philosophy: First Order Logic.

Mind Your Ps and Qs

FOL is sometimes called the “language of thought,” which brings us back to the idea that the exact words we use in the process of thought matter less to us than we realize, and that our unconscious mind prefers representations rather than words. In FOL, the “language,” though it will use letters and numbers, is concerned with primitive representations and structures of thought, using only a handful of symbols in a single sentence.

The goal of FOL is to eliminate ambiguity. A symbol corresponds to one thing, and one thing only. Here is a typical sentence of FOL: 

Child(P, Q) <=> Parent(Q, P)

Don’t run away just yet! This weird, apparent amalgamation of math and language is not the nonsense that it appears to be, but rather a simple (once you understand it, at least) representation of an assertion: P is the child of Q if and only if Q is the parent of P. 

Without making you figure it out for yourself, here are a few of the key symbols of FOL, and their translation: 

^ = And 

V = Or

<=> = If and only if (sometimes abbreviated to “iff”)

=> = Implies (P => Q reads “if P, then Q”)

¬ = Not

With just these few basic building blocks, you can build a good deal of sentences in FOL, and have a computer understand them.
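To make this concrete, here is a minimal sketch in Python (purely illustrative, not part of any FOL library or standard) showing how the connectives above behave on plain true/false values, using the Child/Parent sentence from earlier; the function names are assumptions made for this example.

```python
# A minimal sketch of the connectives listed above, evaluated on plain
# True/False values. Illustrative only, not a full FOL engine.

def implies(p, q):
    # "=>": false only when p is true and q is false
    return (not p) or q

def iff(p, q):
    # "<=>": true exactly when both sides have the same truth value
    return p == q

# "^" corresponds to Python's `and`, "V" to `or`, and "¬" to `not`.
child_pq = True    # suppose Child(P, Q) holds
parent_qp = True   # suppose Parent(Q, P) holds

print(iff(child_pq, parent_qp))          # True: the biconditional from earlier holds
print(implies(child_pq, not parent_qp))  # False: Child(P, Q) => ¬Parent(Q, P) fails here
```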

When it comes to the use of parentheses, the symbol before them names a relation, and the items inside them are what that relation applies to. Just one thing inside the parentheses communicates something different from two or more things being inside. See below:

Parent(Q) means “Q is a parent”

Parent(Q,P) means “Q is the parent of P”

The ordering of the elements P and Q matters as well. If we want Q = John and P = John Jr., and we input Parent(P,Q) into an AI agent, then the agent will think that John Jr. is the parent of John, when it is really the other way around! Here we see that precision is key when using FOL, as a computer will not see what is obviously wrong with such a statement, but will take it on blind faith.
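To show that swapping arguments really does change the claim, here is a tiny, made-up knowledge-base sketch in Python; the structure and the tell/ask helper names are assumptions for illustration only, not a standard API.

```python
# A toy knowledge base: facts are stored as ordered tuples, so the
# order of the arguments is part of the fact itself.
kb = set()

def tell(predicate, *args):
    # Record a fact such as Parent("John", "John Jr.")
    kb.add((predicate, args))

def ask(predicate, *args):
    # Check whether this exact fact, with arguments in this order, is known
    return (predicate, args) in kb

tell("Parent", "John", "John Jr.")        # John is the parent of John Jr.

print(ask("Parent", "John", "John Jr."))  # True
print(ask("Parent", "John Jr.", "John"))  # False: swapping the arguments changes the claim
```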

However, the agent can still reason its way out of this if it has other information. For example, if it already knew that ¬Parent(P), and it was subsequently told that Parent(P,Q), it would know the new sentence is false. This is an example of assessing the truth value of a sentence, something that computers do every day in their readings of inputs.
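Continuing the same toy knowledge-base idea, a rough sketch of that contradiction check might look like the following; the data layout and the consistent function are illustrative assumptions, not how any particular agent is implemented.

```python
# The agent stores the negated fact ¬Parent(John Jr.) and rejects any
# new binary Parent fact that would contradict it. Sketch only.
negative_facts = {("Parent", ("John Jr.",))}   # ¬Parent(John Jr.): John Jr. is not a parent

def consistent(predicate, *args):
    # Parent(P, Q) implies the unary fact Parent(P), so it clashes
    # with a stored ¬Parent(P).
    return (predicate, args[:1]) not in negative_facts

print(consistent("Parent", "John Jr.", "John"))  # False: contradicts ¬Parent(John Jr.)
print(consistent("Parent", "John", "John Jr."))  # True: nothing known rules it out
```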

Much of a machine learning agent’s reasoning depends on the knowledge that it was programmed with or acquired, so it can update information accordingly, but some truths are more or less unchangeable, such as the fact that John is the parent of John Jr. 

Reasoning with FOL

To illustrate how an agent reasons using FOL, we’ll use what is perhaps the most famous syllogism of all time, a staple of introductory philosophy courses, though slightly revised so that we don’t introduce more FOL symbols than necessary:

Man(P) => Mortal(P)

Man(Socrates)

______________

Mortal(Socrates)

Our (revised) syllogism says that if P is a man, then P is mortal. We offer the information that Socrates is a man, from which the machine concludes (the conclusion is under the line) that Socrates is mortal.
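As a rough illustration of that inference step, here is a toy forward-chaining pass in Python that applies rules of the form Premise(P) => Conclusion(P) to every matching fact. It is a sketch of the idea under simplified assumptions, not a general FOL theorem prover, and the data representation is invented for this example.

```python
# Known fact and rule, written as simple tuples:
facts = {("Man", "Socrates")}
rules = [("Man", "Mortal")]   # read: Man(P) => Mortal(P)

def forward_chain(facts, rules):
    # Apply each rule to every fact whose predicate matches the premise.
    derived = set(facts)
    for premise, conclusion in rules:
        for predicate, individual in facts:
            if predicate == premise:
                derived.add((conclusion, individual))
    return derived

print(forward_chain(facts, rules))
# The result contains ("Mortal", "Socrates"): the agent has concluded that Socrates is mortal.
```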

Summary

Computers don’t really work well with ambiguous languages like English, and much prefer languages like First Order Logic, where each element in a sentence has a strict, unambiguous meaning. Though Parent(Q,P) <=> Child(P,Q) seems like a sentence having an identity crisis between English and math, it is actually a symbolic language that works to simplify and disambiguate the language used in the process of thought, making it easier for computers to read information and make decisions based on it.