
Recently, some of the foremost A.I. companies in the world, including Microsoft, Google, OpenAI, and Anthropic, decided to join forces to found the Frontier Model Forum.

Like a group of superheroes putting their hands in a circle and promising to look out for the safety of mankind, this Silicon Valley Justice League venture is dedicated to the pursuit of A.I. safety research. 

It is an exercise in self-regulation, one with clear precedent and inspiration in recent events in the world of artificial intelligence.

Why Did These Companies Form the Frontier Model Forum?

The formation of this coalition for safety in artificial intelligence was not as sudden as it may seem, particularly to those who assume these companies are locked in a merciless dog-eat-dog (or, rather, chatbot-eat-chatbot) competition with each other.

In fact, it does not take much to recognize that this development is likely a direct consequence of these A.I. titans’ recent meet-and-greet with the President of the United States of America.

At that meeting, those companies made a promise to President Biden that they would all put in place a series of self-imposed regulations on their own A.I. development.

This meeting was necessary because A.I. development is advancing rapidly while legislation to regulate it moves far more slowly, only gradually gaining traction. With lawmakers still in the preliminary stages of drafting A.I. regulations, and A.I. companies releasing new and improved versions of their products seemingly every week, it was inevitable that the government would ask the companies to play a more prominent role in their own regulation.

Is This A.I. “Supergroup” Mutually Beneficial for the Members? 

Yes, and here is why: had each company's representatives stood up from that meeting with the P.O.T.U.S., rebuttoned their sport coats, shaken hands goodbye, and then gone back to their respective companies to draft individual policies for self-regulation, each company would have increased its chances of falling behind in the competition.

When everybody agrees to play by the same rules, instead of their own, the self-regulation commitment becomes much fairer to each individual company; otherwise, the company that implemented the bare minimum of self-imposed regulations would doubtless surpass its more restrictive competitors.

So, the mutual risk of falling behind the rest of the pack motivates the creation of something like the Frontier Model Forum. 

What Are the Specific Goals of this Group? 

With a mutual goal as broad as “promote safety in A.I.,” there is going to be a call to, you know, specify what exactly these companies will be doing.

The Frontier Model Forum has identified its core objectives as the following:

Conducting safety research on artificial intelligence

Identifying best practices for developing and deploying A.I. models

Educating the public about A.I.

Collaborating with academics, activists, legislators, and businesses to find the most effective ways to implement A.I. across society

Creating A.I. that serves the “greater good,” such as mitigating climate change, increasing the sophistication of cybersecurity systems, and aiding medical efforts to fight cancer

What Has the Frontier Model Forum Done So Far?

Since the Frontier Model Forum is still in its infancy, it is currently focused on simply getting the organization up and running.

For instance, it has yet to establish its promised Advisory Board, which will outline plans of action for the group.

However, certain areas will require attention over the next year or so. One is knowledge sharing across the industry and beyond it, so that society gains greater transparency into the risks of A.I. systems.

Another item on the agenda for the next year is to identify which open research questions are most urgent and should therefore be the focus of the Forum's efforts. Part of this involves establishing an open “forum” where technical insights and findings about specific A.I. systems are shared.

Will the Frontier Model Forum Benefit Your Business? 

It is very likely that the Frontier Model Forum will end up helping business owners.

For one, many A.I. systems should become safer to use; improved data privacy and security measures, for example, would better protect the business data you hand over to A.I. platforms.

A.I. companies will also be actively seeking input from businesses on how to make A.I. platforms safer for business use, which should in turn lead to A.I. systems that offer greater long-term advantages to your company.

As a company that offers A.I. services to our clients, Guardian Owl Digital wants to hear from you about how to improve A.I. systems for business owners. Interested in helping make A.I. work better for your business? Reach out to us today to GO AI.

 

GO AI Articles

Guardian Owl Digital is dedicated to helping businesses everywhere learn about and implement A.I. 

For continuing your AI education and keeping up with the latest in the world of AI, check out our AI blog.

New Year, New AI: Here Are the Biggest Trends in AI Coming in 2023

How AI Could Have Helped Southwest Avoid Its Holiday Disaster

IBM Watson vs. Microsoft’s ChatGPT: The AI Chat Matchup of the Century

AI on the Stand: Explaining the Lawsuit Against the Microsoft Automated Coder

AI and You: What Determines Your AI Recommendations in 2023?

How AI Could Have Foreseen the Crypto Crash—(It Already Analyzes Exchange Markets)

Google’s Response to ChatGPT: What the Tech Giant Is Doing to Improve Its Own AI Efforts