Trump AI Plan Rips The Brakes Out Of The Car And Gives Big Tech Exactly What It Wanted
The White House on Wednesday announced its AI Action Plan, unveiling a sweeping anti-regulatory approach that disengages the brakes from AI development and datacenter construction in the US. The plan also promises to clamp down on what it called “ideological bias” in AI models.
The document envisions AI development as a race between those on America’s side and those who aren’t, and frames domestic and foreign policy in that context.
“We need to build and maintain vast AI infrastructure and the energy to power it,” the Plan states. “To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to ‘Build, Baby, Build!'”
The plan comes seven months after President Trump revoked his predecessor Joe Biden’s Executive Order on AI. His administration has since focused on walking back regulations.
AI is “far too important to smother in bureaucracy at this early stage, whether at the state or Federal level,” the new Action Plan states.
The essence of the plan is ferreting out domestic regulations that hinder AI development and killing them with fire.
The plan extends to state-level AI rules, which Trump had attempted and failed to ban in his recent One Big Beautiful Bill Act. Now, the Office of Management and Budget (OMB) will direct federal AI funding away from states with regulations that it considers too strict. The Action Plan also calls on the Federal Communications Commission to examine whether state regulations interfere with its operations, and for the Federal Trade Commission to defang itself and sideline investigations that it sees as a burden to AI innovation.
Drawing distinct lines between the US and Europe
The call for deregulation highlights a cultural difference between the US and Europe, said Ronan Murphy, chief data strategist at cybersecurity company Forcepoint and a member of the Irish government’s AI Advisory Council.
“The [US] core philosophy is innovation first, market first, heavily deregulated. If you compare that with the European Union, it’s regulation first. It’s safety, it’s precautionary,” he said.
The focus on deregulation is equaled only by the push for adoption. The US plan calls for industry-specific regulatory sandboxes to help AI innovators experiment, and for creation of testbeds for piloting AI systems in real-world settings.
There’ll also be a push to use AI in the executive branch, including a secondment program for AI talent so remaining US federal government employees can go where they’re needed to work their AI magic.
Just as the Biden EO did, the AI Action Plan will standardize federal AI procurement. This time it will do so using a “procurement toolbox” led by the General Services Administration (GSA). This will include an OMB-run network that provides “High-Impact Service Providers” (presumably foundation model operators) with fast access to agencies.
However, the evaluation criteria for buying AI products and services will be markedly different from the risk-focused criteria specified in Biden’s Executive Order. The government will only procure LLMs that are “objective and free from top-down ideological bias” as part of what it calls a free-speech push.
“It’s impossible to get rid of bias in general,” responded Cathy O’Neil, CEO of the algorithmic auditing firm ORCAA and author of Weapons of Math Destruction. “It’s only possible to decide whether a certain way of thinking is acceptable. Which is to say, we would need to share norms and have debates and modify things over time, and even then it would be really hard, just like history is hard and social science is hard. These guys like to simplify everything to being either right or wrong, but it’s not that simple.”
Trump did his damnedest. In a speech announcing the Plan that also included remarks on transgender athletes and President Biden’s use of an autopen, he signed an executive order that in his words bans Washington from “procuring AI technology that has been infused with partisan bias or ideological agendas such as critical race theory, which is ridiculous. From now on, the US government will deal only with AI that pursues truth, fairness and strict impartiality.”
“It’s so uncool to be woke,” he added.
The Plan also calls for removing references to diversity, equity, and inclusion, and to climate change, from the National Institute of Standards and Technology's (NIST's) AI Risk Management Framework. It also specifically mandates looking for bias in Chinese models.
Mia Hoffman, research fellow at the Georgetown University’s Center for Security and Emerging Technology (CSET), warned that the elements of the EO that address bias might present practical difficulties for foundational model operators who still need to comply with EU regulation. On August 2, new transparency requirements on LLMs come into force under the EU AI Act.
“We would expect these regulations to have a pretty outsized impact on US developers, because the regulation applies at the model level,” she told El Reg, pointing out the huge expense of training a foundational model and the unlikeliness that they’ll train separate ones for each region.
“So there’s limits to how much deregulation the AI Action Plan in the US generally can have, as long as developers have an interest in having their models in the EU market,” she added.
The policy of targeting information unacceptable to the government extends to rooting out AI-generated images that the plan says could hinder legal investigations. It floats a possible NIST-controlled “Guardians of Forensic Evidence” deepfake evaluation program and a deepfake standard for the DoJ.
The government’s AI adoption push extends into the military. The DoD gets a “virtual proving ground” for AI and autonomous systems and must prioritize and migrate workflows to AI. Given the plan’s mandate to “transform both the warfighting and back-office operations” of the DoD, we can assume that some of those AI workflows might involve the pointy end of the department’s activities.
The plan also recommends the development of open financial markets for compute, unlocking what it sees as a market captured by hyperscaler providers. It will connect researchers to AI resources through a resource network and promote open-source and open-weight models among SMBs.
Build, baby, build – on federal land
The ‘build, baby, build’ language really kicks in on the infrastructure side. Datacenter operators can expect more leeway in construction, with permitting restrictions loosened for building around wetlands and other protected waters. The plan will also grease the wheels by slimming down environmental air and water regulations. Agencies with large federal land holdings will have to allow datacenter operators to build facilities there, including power generation plants.
Kate Brennan, associate director of the AI Now Institute, called the whole plan a gift for the big tech companies that will build these datacenters. “Big Tech got exactly what it wanted in this action plan, and we’re poised to see an acceleration that is built on deregulatory principles and very little consideration for the public at large,” she warned.
Trump backed up the language in the plan by signing an executive order to fast-track datacenter development.
All the electricity these datacenters chew through must come from somewhere. The plan recommends a widespread grid modernization program, bringing it all up to baseline standards for resource adequacy. It calls out geothermal and nuclear energy as focus areas.
The Action Plan also continues support for domestic semiconductor manufacturing to support the AI industry, but will strip away some of the CHIPS Act’s funding conditions. It doesn’t specifically call it out, but it mentions “saddling companies with sweeping ideological agendas,” which might refer to inclusivity requirements [PDF] for chip companies.
The plan nods to the American worker with a training program to develop more skilled workers in supporting roles such as electricians and HVAC specialists. This will extend from adult education down to the high-school level.
Us v them
The diplomacy section has a definite “with us or against us” vibe. It describes an American AI alliance (a club of allies that gets access to US AI tech stacks), supported by a set of export packages. It proposes measures to stop these from reaching countries it doesn't like, using location verification features and intelligence community monitoring.
Jacob Feldgoise, senior data research analyst at think tank CSET, put this in the context of the Biden-era AI diffusion rule, which governed chip exports according to a three-tier system. That left countries like China in the red ‘no export’ zone but created yellow and green zones for semi-trusted and fully trusted countries.
The current administration revoked that rule just before it went into effect in May this year. Feldgoise expects the new controls to stay strict on China but to loosen the controls that would have affected other parts of the world from US chip companies. “If things are relaxed the way that we’re expecting, it would mean that many of these companies can export greater quantities to more destinations than they previously would have been able to.”
Trump signed an EO promoting the export of American AI models after his Wednesday speech.
The administration expects allies to toe the line on export controls, and this will all be governed by quiet agreements between small numbers of allies. The document explicitly states that the government is backing away from broader multilateral treaties.
Hence, international AI governance gets short shrift: “Too many of these efforts have advocated for burdensome regulations, vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies attempting to shape standards for facial recognition and surveillance,” the plan states. Consequently, Washington will work with its allies to “promote innovation, and American values”.
Risk, schmisk
Aside from its deregulatory largesse and diplomatic insularity, the big takeaway from the plan is its myopic approach to risk. Many other documents including the Biden EO took a rounded approach to risk by considering issues such as civil rights, employee rights, and data protection. Bias was discussed properly in terms of its effect on individuals and the public good.
This plan’s conception of risk is more singular. It revolves mainly around bad actors co-opting AI, calls for work with frontier model providers to harden their LLMs, and makes much of the need for secure DoD AI datacenters.
On the cybersecurity side, it calls for creation of an AI-Information Sharing and Analysis Center (ISAC) that would join the existing network of such centers. There will be a DoD-led secure AI push and a standard on information assurance led by the ODNI. It will also work to fold AI-specific language into existing incident response doctrine, it says.
None of these security and protection measures are bad things. Indeed, they’re necessary. But there’s a solid corpus of existing work from across the globe that looks at the social and ethical risks of AI, not to mention the inherent power structures that enabled development of the technology and what it might mean for the future. That’s nowhere to be seen here. In a country that’s leading in the field and harboring most of the investment capital for AI, that’s concerning. ®