
This is the thirty-first article in a series dedicated to the various aspects of machine learning (ML). Today’s article will cover confidence intervals, a key part of how a machine learning agent evaluates a learned hypothesis. Additionally, this article will cover how an ML agent derives confidence intervals.

How confident are you in your ability to walk a tightrope wire across two skyscrapers? 

How confident are you in your guess that your favorite football team will win the Super Bowl? 

How confident are you that you can eat nine McRibs in one sitting? 

These questions make it clear that we hold our estimates with varying degrees of confidence. Sometimes we are very confident, and other times we are not, like with the McRib guess.

“Confidence” is often conflated with “cockiness” in common usage, but the more scientific definition of the word is a bit more humble. “Confidence,” as we will define it, is the degree of certainty with regard to a certain claim.

So, when we ask how confident you are that you can nail a triple backflip off your living room coffee table, we are simply asking for your degree of certainty that you can nail the backflip, in light of (or in the absence of) crucial data such as your history with backflips of any kind.

In machine learning, confidence is incredibly important in decision-making. Every decision that is considered by an AI agent will be influenced by the agent’s confidence in the hypotheses it is founded on. For example, a robot vacuum will be quite confident in the hypothesis that if it cannot sense any dirt or garbage underneath it, then it is okay to move on to find another spot to vacuum.

However, not every hypothesis is one that garners high certainty, and in this case it is necessary for a machine learning agent to rely on something called a confidence interval. 

Hold on! WHAT is a CONFIDENCE INTERVAL? 

A confidence interval gives an agent a picture of its own uncertainty with regards to a hypothesis or estimate. 

Recall from our previous article that machine learning agents evaluate a hypothesis by estimating its rate of error. “How often will I make the wrong decision based on this hypothesis?” is the question an agent is posing to itself when evaluating a newly formed hypothesis. 
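As a quick illustration, here is a minimal sketch of that error estimation in Python. The names `hypothesis`, `test_inputs`, and `test_labels` are hypothetical stand-ins for this sketch, not part of any particular library:

```python
def sample_error(hypothesis, test_inputs, test_labels):
    # Fraction of test instances on which the hypothesis decides wrongly.
    wrong = sum(hypothesis(x) != y for x, y in zip(test_inputs, test_labels))
    return wrong / len(test_labels)
```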

In a confidence interval, the “interval” marks the range in which the “real” value of one such estimate is supposed to lie. Additionally, a probability is typically attached to this interval, denoting how likely the interval is to contain the “real” error value.

Your typical confidence interval would resemble the following: There is a 97% probability that the true value of the estimate is 0.25 +/- 0.12.

In easier terms, that is basically saying that there is a high chance (97% by the agent’s estimate) that the hypothesis will lead to an error somewhere between 13% and 37% of the time, with a good guess being 25% of the time. 
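To make that arithmetic concrete, here is the same interval spelled out in a few lines of Python (the numbers are this article’s illustrative ones, not the output of any real agent):

```python
estimate, margin = 0.25, 0.12   # the error estimate and its margin
lower, upper = estimate - margin, estimate + margin
print(f"97% confident the true error rate lies in ({lower:.2f}, {upper:.2f})")
# -> 97% confident the true error rate lies in (0.13, 0.37)
```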

Wait! HOW Do Machine Learning Agents Derive Confidence Intervals?

Basically, deriving a confidence interval for a hypothesis’ error estimate across a number of instances involves processes typically associated with statistics. In short, it means estimating the expected value from a sample. Here’s a more detailed version (a sketch in code follows the list):

  • Choose what is being estimated, like the “real” error value of a hypothesis 
  • Use an estimator that will predict that value, e.g. the error rate observed on a test sample
  • Figure out the probability distribution that underlies the chosen estimator
  • Lastly, find the upper and lower limits of the confidence interval
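Here is a minimal sketch of those four steps in Python, assuming the classic normal approximation to the binomial distribution of classification errors; the function name and the test numbers are illustrative, not from any particular library:

```python
import math
from statistics import NormalDist

def error_confidence_interval(n_errors, n_samples, confidence=0.97):
    # Step 1: the quantity being estimated is the "real" error rate.
    # Step 2: the estimator is the error rate observed on the sample.
    sample_error = n_errors / n_samples
    # Step 3: for large samples, the sample error is approximately
    # normally distributed around the real error rate (an assumption).
    std_err = math.sqrt(sample_error * (1 - sample_error) / n_samples)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~2.17 for 97%
    # Step 4: the lower and upper limits of the interval.
    return sample_error - z * std_err, sample_error + z * std_err

# e.g. a hypothesis that got 25 of 100 test instances wrong:
low, high = error_confidence_interval(25, 100)
print(f"97% confidence interval: ({low:.2f}, {high:.2f})")
```

Run on that example, the sketch reports an interval of roughly (0.16, 0.34): the agent is 97% confident the hypothesis will err somewhere in that range.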

That, in a nutshell, describes the step-by-step process of determining the confidence interval, without the gory, overly-technical details included. 

Summary

For a machine learning agent to trust a hypothesis, it needs to determine the level of confidence it can have in the hypothesis. Since emotionally-based self-esteem is not something the typical AI agent possesses, it must determine its self-confidence through confidence intervals. A confidence interval predicts the upper and lower limits of a hypothesis’ tendency to lead to error. The confidence interval is discovered through a method of estimation involving means, variances, and other statistical concepts, but for the most part it involves using an estimator to figure out the probability that a percentage rate of error is accurate within a lower and upper bound.