A Race for AI: But Which One?

Chloé Goupille is Secretary General of the Paris AI Action Summit.

Arthur Barichard is French Deputy-Ambassador for Digital Affairs. This essay reflects the personal views of the authors and is not an official position of the institutions with which they are affiliated.

In 1986, historian Melvin Kranzberg offered a striking premonition in his First Law: “Technology is neither good nor bad; nor is it neutral.” Nearly 40 years later, observers around the world continue to argue over whether artificial intelligence is good or bad. One thing is certain, however: it isn’t neutral.

 

What does that mean? It means that AI and its many uses have a tangible impact on real life: it will most likely shape the future of industry, the way we approach education and learning, and our access to culture(s), and its development will not be painless for our planet.

French President Emmanuel Macron during the 2025 AI Action Summit | Source: Philemon Henry/MEAE

 

It is not neutral in the sense that AI is a reflection of human nature, in all its fundamentally good and bad aspects. It is a source of enormous potential for human progress. It could bring about considerable scientific breakthroughs at a fast pace, for instance in health research, and automate tedious routine tasks. AI will act as a game-changer in the way we work and think, bringing positive change to our daily lives.

 

However, when one trains AI with human data, the end result inevitably carries the associated biases, which no one has thus far managed to eliminate. We should remember the instance in 2018, when Reuters revealed that Amazon’s AI recruitment tool systematically downgraded résumés from women applying for technical roles, perpetuating gender biases that had marked the American company’s technical workforce for decades. Due to its profound impacts, universal nature, and potential—whose limits remain difficult to discern—AI is undeniably a political object.

 

AI is political, and this is why world leaders have moved it to the top of their agendas in recent years. No one wants to be left behind. Call it the AI effect: in February 2025, representatives from over 90 countries—including French President Emmanuel Macron, Indian Prime Minister Narendra Modi, U.S. Vice President J.D. Vance, and Chinese Vice-Premier Zhang Guoqing—along with more than 300 corporate executives and 500 representatives of civil society, gathered in Paris to discuss this very issue. This unprecedentedly broad format is highly revealing, as the interdependence between those who build the technology in the private sector and those who define the rules governing it has grown fast—and with good reason: AI has everything to do with geopolitics, strategic autonomy, clout, and power—especially in the current international circumstances.

 

It is clearly a rapidly developing technology, one that requires both hard and soft infrastructure, growing quantities of energy, and huge financial investment. All these elements easily fuel a race to the top, especially when military applications and substantial economic gains are at stake.

 

The Drivers of the U.S.-China AI Race 

The global race for dominance in the AI industry is driven by a complex interplay of geopolitical and technological factors. These factors are shaping the competitive landscape and influencing how nations and corporations strategize to gain the upper hand in AI development and deployment. The notion that there may be some kind of AI supremacy contributes to intensifying geopolitical rivalries and shifting alliances. States are compelled to adapt to a rapidly evolving international landscape, where technological capabilities are central to national strategies.

 

Regulatory evolution can rapidly alter this landscape. For example, China’s data localization policies require data generated within the country to be stored and processed domestically, which has implications for global data governance and cross-border collaboration.

 

Regulatory approaches vary significantly across different regions. The United States follows a market-driven model with minimal governmental interference, fostering rapid technological advancement through competition and venture capital investment—although recent announcements have shown the federal government’s interest in accelerating the progress of this technology as well. In contrast, China’s approach is more centralized, with the government playing a significant role in directing AI development.

 

AI is also increasingly linked to national security and military strategies. The development of AI technologies for surveillance, autonomous weapons, and cybersecurity has already become a key area of great power competition. The geopolitical implications of AI in military applications are profound, as it can alter the balance of power and lead to new forms of warfare.

 

Technological innovations—such as ever-smaller chip designs, larger data centers, and more sophisticated AI models—are driving meaningful growth in AI capabilities. This continuous infrastructure buildout, together with ongoing hardware upgrades, is expected to yield increasingly powerful AI tools.

 

The competition for AI talent and resources is intense, with platform companies from the United States and China dominating the market due to their access to data, computing power, capital, and engineering talent. Private investment in AI has reached record levels, with significant funding going into the development of new models and technologies. In 2024, private investment in AI surpassed $150 billion, indicating the high stakes involved in the race for AI dominance.

 

The demand for workers with machine learning skills has spiked, reflecting the growing importance of AI across various industries. This trend is expected to continue as AI becomes more integrated into business operations and strategic decision-making.

 

Lastly, the rapid adoption of emerging technologies, including AI and quantum computing, is complicating the cybersecurity landscape. This has profound implications for organizations and nations, as they must navigate a more complex and uncertain digital environment—one also marked by growing online disinformation (and misinformation) and persistent privacy concerns.

 

An Interdependent Field by Design 

The race has gathered pace over recent months. The figures speak for themselves: 60 countries already have a national AI strategy (OECD), and former OpenAI CTO Mira Murati’s startup, Thinking Machines Lab, is about to raise $2 billion in its seed round—less than two months after it was launched. Similarly, French President Macron announced €109 billion of private investments in France, and European Commission President Ursula von der Leyen expects €200 billion to be devoted to such projects within the EU. Meanwhile, across the pond, the Stargate project will include an investment of up to $500 billion. In this context, the question on everyone’s mind is simple: who will come out on top?

 

The answer is not immediately obvious. On the one hand, rather than a Sputnik-style episode launching a new kind of space race, the “DeepSeek moment” has opened a world of opportunities in the AI realm and sent a clear signal to underdogs that the landscape has not yet been defined.

 

The fact that a Chinese startup could rapidly achieve impressive results with far less initial investment than its giant competitors has had a profound impact on market expectations.

 

Technological leaps forward are rapidly changing the nature of the race. This race is also becoming increasingly global, with emerging technologies and models being developed in regions such as the Middle East, Latin America, and Southeast Asia. This globalization of AI development is creating a more competitive and diverse landscape, challenging the dominance of the leading powerhouses.

 

On the other hand, AI is not a zero-sum game. Increasingly open models, open datasets (though still too few), and open science are playing an instrumental role in ensuring that each new AI advance almost immediately leads to another—each actor benefiting from the latest achievements in what could become a “coopetition” rather than a fierce battle, at least in this segment of the AI value chain.

 

Nowadays, too, crucial actors within the AI production chain are European. The models developed by Mistral AI stand shoulder to shoulder with those of OpenAI, Anthropic, and Google, matching them in both prowess and sophistication. Consider also the Dutch champion ASML. Without the cutting-edge lithography machines it produces, tech giants such as TSMC, Samsung, and Intel would not be able to manufacture the advanced chips they are developing today—chips that are indispensable for the most recent AI systems. ASML itself relies on the rich ecosystem of suppliers in neighboring European countries, transforming engineering skills and a culture of entrepreneurship into a key input in the global supply chain. European companies are at the heart of this deeply interdependent network, bringing their knowledge to advance the development of this technology. And many others—such as Helsing, Wayve, or H Company—are poised to command the spotlight on the global stage in the months to come.

 

European Assets and Challenges 

With all these factors in mind, and given the progress made in AI by companies in the United States and China, it is natural to consider building a specific European pathway—different from that of the two aforementioned powers.

 

We should be clear that Europe does not hold a global leadership role in AI, but it has the means to develop it successfully, ensure its citizens know how to use it, and thus remain capable of adapting to the future developments it will bring. The continent has always been a powerhouse for innovation. Few people realize that Europe is a cradle of digital development: the World Wide Web, Bluetooth, Linux, and many open standards and norms are of European descent. From this backbone flow many of the latest and most sought-after innovations, such as social media and the Internet of Things.

 

But to remain in the race, Europe must ensure it continues to rapidly develop the right facilities to create new models—not at a national, but at a continental level. The recipe for AI includes a series of essential ingredients, which Europe has within its grasp: brains, infrastructure, energy, and data. Yet, without a conscious drive to make these key ingredients available to innovative companies, there is no path dependency encouraging Europe to pioneer the next important breakthroughs. This is why a deliberate push is necessary to boost these four success factors—and this can only be done at a continental scale.

 

Skills come first. The number of talented researchers and top executives at AI companies who have been trained in Europe leaves no doubt as to the long-standing ability of different European education systems to produce engineers and scientists capable of building the future generations of AI. Innovation is a culture—one that Europe has increasingly embraced in recent years. The fact that many leading global firms have set up offices and research laboratories on the continent is further proof of this trend. Yet Europe has not reached a critical mass of experts capable of contributing meaningfully and profoundly to the fast evolution of this technology, and much more support is needed—both to expand training opportunities in the sector and to ensure that these talents, in turn, establish their own ventures on the continent. In France, for instance, initiatives have been set in motion to double the number of professionals trained in AI—an endeavor that must be steadfastly pursued over the long term.

 

Secondly, AI relies on substantial infrastructure, such as data centers, which require space and investment. Whereas Europe benefits from its diversity in fostering many kinds of talent, it suffers from fragmentation when it comes to establishing this kind of deep-rooted structure. This is a fairly common occurrence and has been observed in other strategic fields, such as vaccine production during the pandemic or the military-industrial complex. Yet in both cases, new ways forward have emerged as Europe realized the value of being able to rely on its own production chains across different member states. Mapping strategic facilities across various territories and being conscious of dependencies is a first step toward this more intentional path.

 

Thirdly, AI requires energy—and lots of it. Energy demand from the AI sector is expected to be ten times higher in 2026 than it was in 2023, according to the latest forecasts from the International Energy Agency. Reducing the resulting carbon footprint while ensuring that new models benefit from a stable and secure electricity supply also requires a broader plan and a long-term vision for autonomous production. In this field, Europe stands to make tangible gains. Recent investments announced in France and Spain are compelling evidence of how the availability of decarbonized energy can serve as a powerful draw for industry and innovation. Europe’s efforts to both decarbonize its energy mix and strengthen its own resources since the war broke out in Ukraine—whether through renewable or nuclear electricity—are building a strong foundation for the development of a thriving ecosystem in this field.

 

Fourthly—and most crucially—AI requires data, or more specifically, high-quality datasets, which are becoming an in-demand commodity. Despite its widely recognized significance, this aspect has not received enough investment—possibly due to the sensitivity around pouring vast amounts of important information into algorithms without clarity on how that information will be used in their outputs. Yet safeguards can be identified, and the old continent is extremely data-rich, having collected all sorts of official, administrative, and scientific data over centuries, with excellent quality and consistency. Add in the language diversity across its countries, and this key resource becomes not only more interesting but also more likely to drive AI development in directions aligned with our societal fundamentals. A huge space remains vacant in this field—one that deserves much clearer focus. For building AI is not neutral; we have a key interest in injecting our own inputs into future models.

 

The recent plan presented by the European Commission to build an “AI continent” is a step in the right direction when it comes to strengthening the European landscape, and it takes these key components into account. However, we must bear in mind that its success will require strong political commitment and a consistent push to ensure the proposals are swiftly translated into action. Moreover, deeper changes will be needed in a mindset that remains far less prone to risk-taking than that found in other parts of the world.

 

A European Third Way? 

Europe has assets, but it can also offer a different worldview on how the globalization of AI should be driven. Indeed, it can contribute to building a cooperative global ecosystem, while ensuring it reinforces its own sovereignty in AI. It has the capacity—and the historical perspective—to take into account the deep interdependence of this sector, in order to move forward in a way that is compatible not only with its own values but also with those of many others.

 

As such, Europe should pave the way for a sovereign and open model based on two core principles: strengthening AI autonomy for everyone (through the adoption and use of this technology on one’s own terms), and enabling more targeted cooperation between actors. Eventually, this model could well be widely adopted, as the vast majority of countries share the same ambition: to be full-fledged actors in the AI revolution, rather than rule-takers or latecomers.

 

As we prepared for the Paris Summit over a period of more than a year, it became clear that this concept was gaining traction beyond European borders—for instance, in India, Senegal, Japan, Chile, South Korea, Nigeria, and many other places. What would this model stand for?

 

A first principle of a sovereign and open model would be to avoid hyper-concentration, to ensure the value of innovation is shared more broadly—unlike in other sectors, such as social media, where little diversity is promoted. This agenda is not uniquely European; it resonates with the current mood in the United States in support of the “Little Tech” movement, which emphasizes the importance of less-established tech actors, and it aligns with the historically rich network of small and medium-sized enterprises on the old continent.

 

A second principle—and a condition for this shared global value chain—is to ensure that critical resources are accessible to the majority. Small models have shown that the more talent, data, and computing power are available, the greater the likelihood of new and more elaborate versions of AI emerging. This principle is not the easiest to implement, as it requires targeted cooperation and the building of one’s own autonomy in a way that remains compatible with others, rather than erecting hurdles. Given the necessary commitment from the private sector, the academic community, and states themselves, one way forward would be to entrust third parties with helping to establish such a landscape. This has given rise to the ambition behind the “Current AI” foundation, created during the Paris AI Action Summit.

 

Third, a balance must be struck between innovation and risk mitigation. Allowing AI to develop entirely freely—without considering the protection of individuals, the risks associated with the use of biometric data, or the presence of a human in the loop for decisions that affect fundamental aspects of our societies, such as justice, security, or the military—is not compatible with the European political and economic model. Trust is a prerequisite for widespread AI adoption and will be achieved through appropriate safeguards for responsible AI. 

 

Others share these views: South Korea (which adopted the AI Basic Act) and Brazil (with the Brazil AI Act) are also pioneers in the field, having built early regulatory frameworks to ensure AI follows specific rules using a risk-based approach similar to that of the European AI Act. Creating these frameworks is indispensable, but it must not result in stifling creativity and innovation. It is this fine line that Europe is striving to draw in order to strike the right balance.

 

Crucially, the effort to build a sovereign and open model does not start from a theoretical vacuum. Over the past couple of years, its substance has been shaped, and its ambitions heavily advocated for, by major civil society organizations—arguably with even greater intensity during the February 2025 RightsCon in Taiwan. Supporters, enablers, and major contributors can also readily be found among the open-source community, public innovators, and academics around the world.

 

Building this eclectic coalition of states, companies, and civil society organizations could allow us to fulfill—through AI—the original promises of the Internet: to encourage individual and collective emancipation, democratize access to knowledge, education, and culture, promote entrepreneurship, and help human beings. In other words, to realize what the renowned poet Oscar Wilde anticipated in his 1891 essay The Soul of Man under Socialism: to accomplish “necessary and unpleasant tasks.”

 

Acting Swiftly

Setting up the key elements to commercially develop AI and learn how to better use it is a good start, but to have an impact on the way AI evolves globally, a clear vision is required. This vision will shape how it ought to be mainstreamed—or not—across different sectors of our economies and societies. It is a prerequisite for tailoring its use to our habits and principles, rather than the other way around. In other words, a deliberate effort is needed in order to become an active driver of this major transformation, rather than a mere consumer of rapidly emerging new applications.

 

One way to ensure our values remain aligned with technological developments is to fully embrace the interdependence of the AI market and use it as a source of cooperation. In this respect, building up the EU’s credibility in the field of AI can go hand in hand with the construction of trustworthy networks capable of promoting collective action. No time should be wasted in deploying the European Union’s own capacities. Yet this effort must be accompanied by a global push toward greater exchange—not only because these connections are already a given, but also because leaning on one another has become a condition for success. This must be done strategically, with careful consideration of which dependencies are being created, and by diversifying them to strengthen the cooperative environment.

 

The Paris AI Action Summit laid the groundwork for such a cooperative environment within a highly competitive field. A wide range of players announced new tools aimed at making AI more environmentally sustainable, increasing awareness of its impact on work, or ensuring that all continents have the capacity to build their own technologies. But the natural trajectory does not lead to diversity and inclusion, as ongoing developments remind us. As such, international gatherings like the recent one are essential to creating a shared structure and mutual understanding of our joint interests.

 

This is why it is crucial to quickly implement the various initiatives announced in Paris, while strengthening efforts to create a sovereign and strong AI continent in Europe. Without this, the chances of having a distinct European voice heard will be extremely limited. The next AI Summit in India will offer a significant opportunity to shape this new collective path toward cooperative and sovereign models in the field of AI. In the meantime, we must act decisively. This new revolution is too important for states and peoples to overlook.   
