
“Intelligence” tends to denote how smart a person is, which in turn typically refers to a person’s aptitude for the traditional school subjects like math and science. 

A person’s ability to understand knowledge and, importantly, apply it is the measure of their intelligence. That is the typical answer, yet there is reason to suspect that it is a flawed, anthropocentric (i.e., human-oriented) view. 

This article will not go down a philosophical rabbit hole about the problem of conceptualizing intelligence itself, but will rather focus on one of the central problems that has cropped up due to technological advances over the past century or so, which is the difference between human intelligence and artificial intelligence. 

A faulty, but common, understanding of AI is that the “intelligence” that its developers are trying to replicate is human intelligence. 

Though it is true that researchers tend to measure AI’s capabilities against human intellectual performance (“superintelligent” AI is AI whose performance exceeds even the best human intellect), this does not necessarily mean that AI is modeled after human intelligence. 

Recent research has shown how AI differs from humans in the way it thinks. Humans love to make the “uneducated guess,” choosing statistically improbable answers to questions for which they have too little information. AI, on the other hand, does not reach in its estimates; instead, it hews closely to the data to produce more statistically probable answers. 

This does, however, shine a light on a strength of human intelligence that is lacking in AI, which is subjectivity. Think of the multitude of choices that are made to complete any goal, and consider that not all of them are totally objective and dependent on statistical analysis. Rather, a good deal of our thinking involves subjectivity, because sometimes certain choices just feel right, like determining how big the font size should be in an ad. 

Subjectivity is especially important to moral and ethical thinking, which it is fair to say is often founded on empathy and other emotions. With the development of technologies like self-driving cars or, troublingly, automated weaponry, it is perhaps not soothing to know that such AI agents only value life to the extent that they have been programmed by developers to, and that they will not develop moral sentiments on their own. 

One could argue that such worries are themselves anthropocentric, and therefore part of the intelligence-question issue raised at the beginning of this article. Even if that is so, it does not make AI’s capacity for moral thinking any less important, and it is certainly arguable that AI’s moral thinking should indeed be modeled after human thinking.

The scales tip back to AI, however, when we consider the non-moral elements of tasks. One reason it is becoming outdated to think that the endeavor of AI is simply to match human intelligence is that AI truly surpasses human intelligence in the ability to process and analyze information and make decisions based on it, and quite easily at that. And, after all, is that not one of the fundamental measures of intelligence? 

Another considerable aspect of AI is its ability to handle massive quantities of data that the human brain would simply quake before. 

Take the example of persona modeling, an AI process where cognitive insights are gained about a given company’s customer base through analysis of both company records and self-authored customer data (e.g., a tweet or Facebook post). 

Like many AI tools, the task at hand is one that a human could certainly do, but the gap in information processing capabilities between the AI agent and the human is so wide that it is much better to rely on the former than the latter. 

While a human would take days upon days to comb through thousands of customers’ social media accounts, a persona modeling tool could do that in just a portion of a single afternoon. 

Then, returning to the objectivity question, here is where AI’s objectivity, and lack of subjectivity, becomes a strength rather than a weakness. One of the many goals of persona modeling is to discover which aspects of marketing customers respond to most, including but not limited to the aesthetic qualities of advertisements. The humans on your marketing team may feel that orange ought to be the dominant color of your in-development Instagram ad, but an AI tool, after analyzing thousands upon thousands of self-authored posts from your customers, will know, objectively, that the color green recurs again and again across those posts, indicating a color preference your ads should reflect. 
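To make the color example concrete, here is a minimal, hypothetical sketch of that one persona-modeling step: tallying color mentions across customer posts to surface the most frequent one. The function name, color list, and sample posts are all illustrative assumptions; real persona-modeling tools would draw on far richer signals (image analysis, engagement data, natural language understanding) than simple word counts.

```python
from collections import Counter

# Illustrative only: a tiny color vocabulary. A real tool would use a
# much larger lexicon and would also analyze images, not just text.
COLORS = {"red", "orange", "yellow", "green", "blue", "purple"}

def dominant_color(posts):
    """Return the most frequently mentioned color across posts, or None."""
    counts = Counter(
        word
        for post in posts
        for word in post.lower().split()
        if word in COLORS
    )
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Hypothetical self-authored customer posts.
posts = [
    "loving my new green bike",
    "green tea all day every day",
    "that orange sunset though",
]
print(dominant_color(posts))  # prints "green" (2 mentions vs. 1 for orange)
```

The point of the sketch is the objectivity: the answer falls out of counting, not out of anyone’s aesthetic hunch, and the same tallying scales to thousands of posts without fatigue.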

In sum, the biggest problem AI faces is moral and ethical thinking, which humans are much more readily equipped for; but in aspects of intelligence such as information processing and calculation, AI far surpasses human abilities, and is likely to continue doing so as the field of AI grows.