The United States has unveiled a bold AI Action Plan under President Donald Trump’s leadership, positioning America as a frontrunner in the global artificial intelligence race. While the strategy outlines a path to technological dominance, critics warn that it risks sidelining two essential factors – market competitiveness and safety standards.
America’s Race-Focused AI Strategy
The new US plan is structured around three primary objectives:
- Boost AI innovation through rapid research and deployment.
- Develop AI-ready infrastructure, including energy and data centres.
- Set global AI standards that allies and partners can adopt.
To achieve this, the US is using four main policy levers:
- Export controls on advanced chips and manufacturing tools to limit rivals’ capabilities.
- Regulatory easing across agencies to speed up AI integration in various sectors.
- Massive infrastructure expansion, from power grid upgrades to cloud facilities.
- Promotion of an “AI export stack” – hardware, cloud services, models, and frameworks – to friendly nations.
However, this “race-first” narrative is influencing global AI policy in ways that extend far beyond US borders. China is heavily funding AI startups, the EU is setting up large-scale AI production hubs, and global AI safety discussions are receiving less attention.
Risks to Fair Competition
High-end AI development demands enormous capital, specialised chips, and stable energy supplies. Such requirements naturally favour a handful of deep-pocketed tech giants. With Washington’s policy funnelling public resources into infrastructure and easing regulations, these incumbents may strengthen their dominance.
Big tech already leverages network effects – for example, Google integrating AI models into Workspace and Search, or X embedding its AI assistant Grok into the platform. This gives them a head start in adoption, data access, and refinement.
There’s also a geopolitical dimension. By controlling the AI supply chain and export conditions, the US indirectly shapes global tech dependencies. India, for instance, experienced constraints when advanced chip exports were restricted under earlier US rules. Under the new plan, export restrictions on “countries of concern” could tighten further, creating strategic vulnerabilities for partners.
Safety Taking a Backseat
The plan allocates funding for research on model evaluations and robustness – positive steps in theory. However, it simultaneously downplays regulatory guardrails, promoting a “test first, regulate later” culture. This shift could set a weaker global safety benchmark, especially if US-developed AI systems are widely exported.
Nations that import these AI solutions may inadvertently inherit their light-touch regulatory standards, leading to a global race to the bottom in AI governance. The EU’s stricter AI laws, for instance, are already facing pushback from European companies citing competitiveness concerns.
The Case for Open AI Models
A balanced global AI ecosystem requires more than two superpowers dominating the field. Open-source artificial intelligence models such as Meta’s LLaMA and BigScience’s BLOOM offer a viable alternative. Because these projects make their model weights, code, and research publicly accessible, researchers, companies, and smaller nations can all contribute to the advancement of artificial intelligence.
Open models encourage transparency, competition, and collaborative safety evaluations, which in turn reduces the monopolistic influence of a few powerful technology companies. Governments could support this approach by offering financial incentives and lighter-touch rules for open-source artificial intelligence ventures.
Looking Beyond Large Language Models
Achievements like large language models (LLMs) are impressive, but agentic AI could be the real game-changer. These systems can autonomously handle tasks such as arranging meetings, booking services, or coordinating work across departments, and they could transform industries like healthcare, logistics, and banking. To make this future safe and accessible, global leaders must focus on open protocols for data sharing that ensure privacy, interoperability, and accountability.
The Road to Delhi 2026
The upcoming AI Summit in Delhi in February 2026 presents an opportune chance to shift attention away from an arms race and toward responsible global AI development. Policymakers can pledge to promote open, transparent, and secure AI ecosystems, ensuring that innovation benefits society as a whole and not just a small number of powerful players.
Trump’s AI Action Plan may well give the United States an advantage in international competition; yet without controls to ensure fairness and safety, it risks producing an AI-driven future that is more unequal and less secure. The global community needs an artificial intelligence policy focused not only on winning the race, but on building an AI economy that is secure, inclusive, and sustainable.