
Recently, tech leaders from seven of the biggest names in A.I. (Microsoft, Anthropic, Meta, Google, OpenAI, Amazon, and Inflection) met with President Biden to reach an agreement on how A.I. companies should operate under a set of restrictions.

The result is a set of commitments that the tech companies have verbally agreed to follow. Some of these restrictions are quite important and could have a huge impact on the future of A.I. as we know it.

Below, we run down some of the biggest restrictions that the tech companies have agreed to, as well as how each one can affect your business's operations, for better or worse.


Security Testing and Restrictions by Third Parties 

Security vulnerabilities can turn otherwise innocuous A.I. systems into huge liabilities. A sophisticated hacker who gains access to an A.I. system's code and alters it for nefarious ends can cause serious harm.

This is especially true when the military utilizes A.I., because such systems can quite literally be weaponized.

Another concern is that adversaries of the United States may access American A.I. systems and use that information to build their own.

As a result, businesses stand to enjoy a net positive here. Greater security measures mean that the A.I. you use is safer.


Checking for Bias

This one is another big positive for businesses. 

Let’s suppose that you run an insurance company that uses A.I. to screen for potential clients. 

If that A.I. system exhibits bias, it may unfairly reject deserving clients based on factors like location, race, education level, and other demographics.

Not only can this be a PR disaster, but it can cost you money in the long run by turning down potentially lucrative clients. 

When A.I. companies must actively mitigate bias as far as possible, your business gets a superior A.I. system to integrate into its operations.


Data Privacy Questions 

Many large companies do not use ChatGPT because of concerns about data privacy. Allowing ChatGPT to access company secrets exposes those secrets to potential theft in data breaches.

Again, businesses can only benefit from this. Stronger data privacy requirements mean that you can use A.I. with less worry of losing your data in a breach.


Acknowledging Risks of A.I. with the Government 

Another win for businesses. Just as you would not buy a used car without checking its Carfax report, you do not want to use A.I. without knowing the risks.

By requiring companies to share information about the risks of A.I. with the government, business owners have a better chance of learning the ways A.I. can help or hurt their business.


Transparency for Identifying A.I. 

Here is one that might end up hurting businesses that rely on A.I.-generated content. 

The method here involves creating A.I. “watermarks” that make A.I.-generated content easier to identify.

These watermarks can be subtle patterns of language in a chatbot's text output, or visual cues within an A.I.-generated photo, that make the content easy for other A.I. systems to identify.

One such system may be Google's search engine algorithm, which could be trained to identify watermarks in content. That may affect your SEO efforts if search engine algorithms are trained to rank watermarked, automated content lower than more “organic”, human-written content.


Focusing A.I. Development to Benefit Humanity 

This last one, as you can imagine, is beneficial for businesses. 

Examples here include A.I. tools designed to help solve environmental problems, or A.I. that keeps people safer, such as automotive systems that enhance safety standards.

Though such measures may not benefit business owners as directly as, say, A.I. that can produce marketing content quickly, they can help ensure a better world for business owners and their customers to live in, so we consider that a win.


Are These Restrictions Legally Binding? 

Right now, these companies are treating these measures as a sort of handshake agreement. A company that breaks them may not face legal trouble yet, but the significance of this meeting is that President Biden has essentially told these companies to adopt these restrictions now, because before long they will be instituted as federal regulations.

Though only seven A.I. companies attended the meeting with President Biden, once federal regulations turn these handshake agreements into binding rules, every A.I. company will have to follow suit.


GO AI Articles

Guardian Owl Digital is dedicated to helping businesses everywhere learn about and implement A.I. 

To continue your AI education and keep up with the latest in the world of AI, check out our AI blog.

New Year, New AI: Here Are the Biggest Trends in AI Coming in 2023

How AI Could Have Helped Southwest Avoid Its Holiday Disaster

IBM Watson vs. Microsoft’s ChatGPT: The AI Chat Matchup of the Century

AI on the Stand: Explaining the Lawsuit Against the Microsoft Automated Coder

AI and You: What Determines Your AI Recommendations in 2023?

How AI Could Have Foreseen the Crypto Crash—(It Already Analyzes Exchange Markets)

Google’s Response to ChatGPT: What the Tech Giant Is Doing to Improve Its Own AI Efforts