
The A.I. Act will bring with it a host of regulations on artificial intelligence. 

The European Union (E.U.) drafted the Act and expects it to take effect within 12 to 24 months of passing. That timeline alone is considerable, as the A.I. industry has been developing at remarkable speed; it is truly unknown what A.I. will look like in just two short years. 

Even for business owners who are not based in Europe and do not conduct significant business there, these regulations will likely have a big impact because of their influence on the international A.I. policies that figures like the Pope are calling for. 

For one, legislators in the United States will likely look over these policies when drafting similar legislation of their own. 

So, these regulations and their impact on business owners will likely serve as a sneak preview of what will happen in the United States. 

Transparency Requirements

Requiring “watermarks” for A.I.-generated content has become one of the most significant developments in recent A.I. legislation. 

The reason is that the political and commercial incentives for requiring A.I. watermarks simply outweigh the benefits of leaving content unlabeled. 

For instance, a politician does not benefit when people use deepfakes to make it seem as if that politician said or did something that never occurred. Voters who do not realize they are watching A.I.-generated content may turn against the candidate in the next election cycle.

For business owners, it is more of a toss-up. Some may mourn no longer being able to rely on generative A.I. to create marketing content for a customer base none the wiser that they are looking at A.I. Truth in advertising will become a little more true thanks to these regulations. 

A.I.-Generated Images

But there is another side to this, one that connects to the politicians’ worry: deepfakes and misinformation can do reputational damage to a business. 

Consider the fictional example of a restaurant that just opened. One night, a customer goes there and does not like the food. On top of that, the customer believes the wait staff was condescending and kept pouring lime-scented water instead of the lemon-scented water they requested. 

Now imagine that this person, despite having no photo-editing skills, generates numerous A.I. images of botched, disgusting-looking dishes, posts them online, and claims they are real meals served by the restaurant.

For one, the restaurant would need to expend a lot of energy on damage control. Does free speech cover this very unhappy customer, or could they receive a cease-and-desist for the defamatory A.I.-generated content?

Wouldn’t it be better if the customer simply could not convince others that the images are real, because a watermark tells everyone the pictures of meals are A.I.-generated?
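
To make the idea concrete, here is a minimal sketch of what a machine-readable “this is A.I.-generated” label could look like. It is written in Python using the Pillow imaging library’s PNG text metadata; the ai_generated and generator field names are purely illustrative assumptions, not part of any official E.U. watermarking standard, and real provenance schemes (such as C2PA content credentials) are far more robust than a strippable metadata tag.

# Illustrative sketch only: a plain metadata tag, not an official watermarking standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with a hypothetical 'ai_generated' flag in its PNG metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical field name
    metadata.add_text("generator", "example-model-v1")  # hypothetical field name
    image.save(dst_path, pnginfo=metadata)

def is_labeled_ai_generated(path: str) -> bool:
    """Check whether an image carries the hypothetical label."""
    return Image.open(path).info.get("ai_generated") == "true"

if __name__ == "__main__":
    label_as_ai_generated("meal.png", "meal_labeled.png")
    print(is_labeled_ai_generated("meal_labeled.png"))  # expected: True

A tag like this is trivial to strip, which is exactly why the regulatory debate centers on tamper-resistant watermarks; the sketch is only meant to show the kind of signal a disclosure requirement envisions.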


A.I. Safety

Protecting citizens from some of the more dystopian-flavored effects of A.I. is a top priority for the E.U. 

One example: ensuring that police do not rely on A.I. alone to identify suspects in camera footage and then use that identification as the sole basis for charging someone with a crime. 

Facial recognition software will be strongly regulated in general, as it is one of the areas of A.I. that has drawn the most public attention for its potential to discriminate against certain people and groups. 

ChatGPT Restrictions

OpenAI, the creator of ChatGPT, has faced regulatory action in Europe before. 

Italy, for instance, enacted a temporary ban on ChatGPT earlier in 2023. Part of the concern involved content inappropriate for children; another part involved misinformation. 

Regulators anticipate imposing transparency requirements on ChatGPT, similar to those outlined above. The extent of regulations for this chatbot and others like it remains unknown.


Training Restrictions

One of the biggest points of contention in the world of A.I. is where to draw the line on collecting data for A.I. training sets. 

For instance, creatives worldwide are filing numerous copyright lawsuits because OpenAI trained ChatGPT on copyrighted works.

The crucial question here is whether the use of copyrighted works in training an A.I. can be considered fair use or not.

The E.U. is expected to outright prohibit practices such as wide-scale image scraping, where images are pulled from across the Internet to train things like facial recognition algorithms.