
Reasoning models are a hot topic in A.I. These models take more time to answer prompts than large language models that generate instant answers, and the result ought to be a better answer. Copilot's Think Deeper feature, powered by OpenAI's o1 model, is one example of this approach.

The Five Key Takeaways from This Blog Post

  • Think Deeper is meant for extended conversations and complex prompts. 
  • For businesses that are coming to rely on A.I. for quick answers to questions, this Copilot feature can offer a bump in the quality of the A.I.'s responses. 
  • The responses from Think Deeper tend to be medium-length: not quite blog-length, but not a short summary either. That may be intentional, generating responses that are more compact than an article you'd find on the internet yet longer than a brief search-engine summary (for some users, added length may give the appearance of added depth). 
  • Hallucination, whether it takes the form of inaccuracy or outright fiction in answers, is a persistent problem in A.I. 
  • Overall, the hallucination issue makes using A.I. to answer tough questions a gamble on whether the answer will be wholly correct. 

The Significance for Business Owners

Consider the problem of hallucination. 

In a significant number of cases, this will pose nontrivial issues for users. 

Users who are looking for answers to questions they do not have the time to thoroughly answer for themselves by combing through research will basically have to gamble on the A.I. being correct. 

Is that a gamble you are willing to make? The writer of this blog post will take you through what that looks like below. 

An Example of Using Think Deeper

The writer of this blog post asked Copilot for a summary of recent developments in the hard problem of consciousness, then asked what Copilot thought the answer to the problem was. 

If you have no clue what that means, just know that it is an unsolved problem that involves philosophy and neuroscience, among other fields. In other words, Copilot was given some difficult questions to answer. 

Here was the reaction of the writer of this blog post: at the very first line of the A.I.'s response, the vagueness of its summary of the hard problem of consciousness inspired a trip to a reputable human-written source, namely a long, detailed article written by an academic. 

Already, the “is this right?” doubts started to kick in. 

That is the thing about these tools: they sort of just make you want to read something that has been fact-checked by a human being and published by a reputable source. 

And so much for deep research synthesizing tons of sources: Copilot offered just one measly link, to a Psychology Today article said to have been "updated" in 2023, yet the body text of the Copilot response references a supposed work by "Chalmers" published in 2024, unlinked and unnamed by the A.I. 

Also, the hard problem of consciousness is not even the main topic of that PT article, whereas there are plenty of standalone articles that have it as the main topic. 

As for the A.I.'s attempt to answer the question, how could this writer know whether the A.I. gave a great answer or not? Surely a human expert in the subject would have a better shot at evaluating the answer. This points to the problem of trust: how likely are you to trust that the A.I. is actually giving a supportable answer to a tough question? 

The Last (But Not Least) Key Takeaway from This Blog Post

For people like the writer of this blog post, A.I.’s responses to prompts will always be haunted by the possibility of hallucinations by the A.I. 

Truly, it was hard to get through Copilot’s responses without wondering frequently whether any of the information would be contradicted by a fact-check. 

If it was indeed all correct, kudos to Copilot. But if not, then really, who has the time to figure out whether it is or not? Plus, not being given direct links to cited sources makes things harder to quickly corroborate. 

For business owners, that is the crux of the issue: asking these deep-thinking A.I.s questions whose answers, correct or incorrect, could deeply affect your bottom line becomes a major risk when you factor in hallucination and even minor errors. 

However, A.I. companies do not need to worry about skeptical customers like this writer: so long as enough people are out there putting a lot of faith in the responses of deep-thinking A.I.s, the money will roll right in. 

But will money roll into the businesses that use these deep-thinking A.I.s? Based on this writer’s experience with Copilot’s tool, it feels more like a gamble. Hopefully the A.I. will get better, because businesses will certainly be relying on it. 

Other Great GO AI Blog Posts

GO AI, the blog, offers a combination of information about, analysis of, and editorializing on A.I. technologies of interest to business owners, with a special focus on the impact this tech will have on commerce as a whole. 

In a typical week, multiple GO AI blog posts go out. 

In addition to our GO AI blog, we also have a blog that offers important updates in the world of search engine optimization (SEO), with blog posts like "Google Ends Its Plan to End Third-Party Cookies." 