Europe’s AI Crackdown Starts This Week and Big Tech Isn’t Happy
It is a little more than four years since the European Union first proposed legislation to govern the tech companies that build AI systems and the ways users deploy them. A lot has changed since then.
In November 2022, OpenAI launched ChatGPT, which – as well as being able to write convincing poems – prompted tech businesses to question how they were going to make money from the software.
Fast forward a couple of years, and nearly every enterprise application builder is baking generative AI into its products, promising enhanced productivity. Once it became clear how models like ChatGPT were trained – including those used for visual and musical content – copyright holders began to question whether they were being rewarded for their creations, leading to a string of court cases.
At the same time, Europe and America began to diverge in their approaches to tech regulation following the re-election of Donald Trump as US president in late 2024.
The EU’s AI Act was enacted in March last year, making it the first legislation designed specifically to address the risks of artificial intelligence – including biometric categorization and the manipulation of human behavior – and to set stricter rules for the introduction of GenAI.
The legislation comes into force in stages. Later this week (August 2), a set of rules will be in place for builders of generative AI models, such as ChatGPT, meaning developers will need to evaluate models, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure cybersecurity, and report on their energy efficiency.
To help ease compliance with the new rules, the EU has launched a code of practice and guidelines for developers of large language models (LLMs) and other kinds of GenAI. Instead of directly complying with articles set out in the Act, GenAI developers can reference the code in their dealings with the EU’s AI Office, which was set up to oversee the introduction of the Act.
But not everyone is happy, or willing to take part. Social media giant Meta, which built the Llama LLM, said the guidance introduced “legal uncertainties” beyond the law’s scope, and refused to sign up to the code of practice.
Joel Kaplan, chief global affairs officer at Meta, said in a LinkedIn post that “Europe is heading down the wrong path on AI.”
The guidance was delayed – the European Commission had planned to publish it in May – and it is yet to be endorsed by EU member states and the Commission, which has left some developers concerned about how little time they have to adopt the code and whether it might change.
Nils Rauer, a partner at Pinsent Masons and joint lead of its global AI team, told The Register that makers of GenAI models have an obligation to make sure they comply with the Act, and that they are generally well prepared for it. However, there are reservations about the guidance, he said.
“It needs to be fit for purpose. It needs to be highly practical. If a high number of lawyers work on such an important document, lots of views come together, and you can see that by reading the code of practice. They’ve tried their best, but it remains quite generic on a number of issues, including copyright issues. You need to be generic to some extent, but you need to cover a lot of cases, a lot of situations, in order to be able to make that guidance precise and practical. It’s a difficult task but we had hoped for more specific, more practical guidance.”
Monika Sobiecki, partner at law firm Bindmans, said that while users might not have much sympathy with Meta and other AI builders, the fact that the guidance landed on July 10 gave them about three weeks to understand it before the law comes into effect.
“The complaint that has come from some of these big general-purpose AI producers is, ‘Well, you’ve given us all of about three weeks.’ Also the guidance hasn’t fully gone through the trilogue [discussion between the European Parliament, the Council of the European Union, and the European Commission]. Although we’ve got most of the shape of it, you would have expected it to be approved by the European Commission and parliament before it is implemented and compliance is expected,” she told us.
However, the EU was propelled by a sense of urgency to introduce AI legislation and distinguish its approach from that of the US as adoption of the technology grew.
“There’s a sense of there being two wider trends happening at the same time: one being the proliferation of AI tools, especially GenAI tools, and a sense of a regulatory lacuna around it. The EU Commission, for example, has stated that they want to see the EU market as a place of developing safe AI,” Sobiecki said.
Last week, the Trump administration introduced an AI Action Plan, which backs a more hands-off approach to regulation than the EU’s.
“When we’re talking in geopolitical terms, you’ve got the US market, which wants to move fast and break things,” Sobiecki said. “Donald Trump’s vision of US tech dominance is to deregulate everything and give AI a free rein. The EU has prioritized the idea of AI safety.”
While critics argue the AI Act only addresses transparency and bias and not the wider impacts of AI, it does at least nudge AI producers into making sure they fully explain what their AI is doing, she said.
The EU and the US announced a trade deal at the weekend. It is expected to impose 15 percent tariffs on European exports of automobiles, pharmaceuticals, and semiconductors to the US, but it is still a high-level political agreement with the details yet to be finalized.
While Pinsent Masons’ Rauer agreed that the AI Act may be dragged into trade negotiations between the EU and the US – under the current deal or later arrangements – developers should not let this sway their approach to compliance.
“If there is a huge gap in the regulatory framework between the US and the EU, this causes trade barriers, and therefore, they will negotiate in one way or another, certain elements of watering down [the AI Act],” he said. “Having said that, as we stand right now, the big US companies that are engaged in AI, they all have done their homework, and they [will not] like waiting until the last minute to see any type of appeasement [from the EU]. They cannot run their businesses [assuming] there will be some deal.”
At the same time, businesses deploying AI have been trying to understand how the law applies to them and their supply chains.
Rauer also said clients were asking his team to produce more bespoke guidance on how to comply with the Act, down to the level of applying AI to drug discovery or HR and recruitment. And there has been greater scrutiny of supply contracts.
“You have standard contracts in place that do not necessarily reflect AI being used in the supply chain,” he said. “We’ve drafted clauses which ask whether suppliers are using AI when they supply goods and services and how these AI systems have been trained.”
Others were concerned about whether they could copyright content produced by marketing and advertising agencies using AI.
“For example, big brands in the automotive sector have been confronted with lots of queries from their marketing agencies asking if they can use AI to create campaigns. The video footage using AI can be fantastic, for example, and it’s cheaper than flying down to South Africa, shooting a car, driving along the coastline. The question that those clients want to address is whether they can use AI in that context,” said Rauer.
“The tricky thing here is, if you create something by means of AI, it’s not a human creation, and therefore you do not get copyright for it. There’s a huge discussion about how much you need to do in terms of human influence, human input, in order to ensure the output of the work product is eligible for copyright… and that is highly important and affects pharmaceutical and consumer goods industries as well as automotive.”
The AI Act is being introduced in phases. In February, it banned practices including biometric categorization systems that claim to sort people into groups based on politics, religion, sexual orientation, and race. Also prohibited were the untargeted scraping of facial images from the internet or CCTV, and emotion recognition in the workplace and educational institutions.
After the rules for general-purpose AI come into force later this week, the next category will be high-risk systems. From August 2026, systems with the potential to cause significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law will need to comply.
While the introduction of GenAI guidance has been welcomed by some – and rejected by others – it is only one milestone in a long journey. As the US has chosen a different path from the EU, organizations building and deploying AI can only strive to follow developments on both sides of the Atlantic while complying with the law as it stands. ®