The EU AI Act—What’s The Real Deal?

Few topics have generated as much buzz as artificial intelligence, with both positive and negative implications. Responding to public apprehensions surrounding AI, the European Union acted promptly to introduce new regulations.

Having released the first draft in 2021, the European Parliament adopted its negotiating position on the final text on 14 June 2023. The act must still be approved by the European Council and is expected to be adopted no later than 2026.

Much like the Digital Markets Act, the AI Act will profoundly impact businesses across various sectors. But let us explain why, and how you can prepare.

What Is The EU AI Act?

The EU AI Act will be the world's first comprehensive law on artificial intelligence, designed to ensure the safety of AI systems operating in the EU market. Its current draft aims to promote trustworthy AI systems while blocking the use of potentially harmful applications. Crucially, the act seeks to ensure that all AI developed and used in the European Union will comply with the rights and values already enshrined in EU law.

Leading AI experts have warned that unregulated AI poses a fundamental threat to democracy and humanity. Many companies have voiced similar warnings, calling for stronger regulations for these technologies. Among those asking for stricter legislation on AI are the two leaders in the field: Microsoft and Google.

During the 2023 Yale CEO Summit, 42% of surveyed CEOs agreed that AI has the potential to destroy humanity within five to ten years. At the same time, many leading figures believe that the response should not be overregulation; one worry is that rules deemed too strict may simply push AI developers to leave Europe.

What regulations are included in the EU AI Act?

The AI Act classifies AI systems into three categories:

  1. Banned are systems that pose an unacceptable risk to people, such as those that classify individuals based on sensitive characteristics or assign social scores. Such systems have faced criticism for their potential to manipulate and harm individuals.

  2. Regulated are systems that may negatively impact safety or fundamental human rights, including automated tools for assessing CVs submitted by job applicants. These systems are subject to regulatory oversight to mitigate their potential adverse effects.

  3. Permissible and unregulated are systems that are not considered risky. Examples include commonly used systems like spam filters, which do not raise concerns regarding safety or human rights violations.

Additionally, the EU AI Act aims to enhance transparency in generative AI through several measures:

  • Mandated tags to indicate AI-generated content
  • Differentiation between real and fake images
  • Protective measures for preventing the creation of illegal content
  • Required publication by AI developers of summaries of the copyrighted data used to train their models

These transparency rules are designed to apply to all AI systems: users must be made aware when they are interacting with AI. This matters most for image, audio, and video content, where the likelihood of encountering deepfakes is particularly high.

What are deepfakes?

Deepfakes are media content generated or altered by artificial intelligence (AI). In other words, they aren't real, even though they may appear real enough.

While the concept of media manipulation is not novel, advancements in machine learning have elevated the sophistication, speed, and prevalence of deepfake production. Today, creating deceptive yet convincingly realistic images, audio clips, and video footage is more accessible than ever before, amplifying the ease with which false information can be disseminated.

The implementation of the AI Act will require every EU Member State to establish a regulatory sandbox, offering companies a controlled environment for testing AI systems before their official release. Additionally, national regulators will be appointed to oversee compliance and handle complaints from local citizens.

What Impact Will The EU AI Act Have On Businesses?

Businesses that fail to comply with the EU AI Act should anticipate significant financial penalties. For instance, using prohibited AI practices or applications could result in fines of up to 40 million euros or 7 percent of the company's global annual revenue. These maximums exceed even those of the GDPR, under which Meta was recently fined 1.2 billion euros.

However, regulators recognize that excessively penalizing small and medium-sized enterprises (SMEs) or startups engaged in innovative ventures could stifle growth in an evolving market. They have signaled a willingness to exercise leniency, particularly towards smaller companies with limited market share compared to giants like GAMAM. The objective is to impose fines that are "proportionate" to the scale of the offense, safeguarding the competitive edge of European innovation while leaving room for experimentation and growth.

Outlook On AI Regulation In The Future

While the exact timeline for the enactment of the EU AI Act remains uncertain, one fact is undeniable: AI regulation is on the horizon. It is plausible that new international standards will emerge in response, akin to the developments following the implementation of the GDPR.

Given the rapid evolution of AI technology, it is expected that the current draft of the EU AI Act will undergo further refinements before its eventual adoption. Numerous amendments have already been incorporated since the initial draft was introduced two years ago.

Nevertheless, regulators maintain a steadfast objective: establishing a global framework for AI systems to ensure their contribution to enhancing human life, rather than posing a threat to our collective future.

Explore our AI-powered multi-location marketing platform