Building the Future Federation - The Rising Calls for Sovereign AI

David L. Shrier is a Professor of Practice, AI and Innovation, with Imperial College London, where he is co-Director of the Trusted AI Alliance (a multi-university collaborative focused on building responsible & trustworthy AI), and author of Basic AI: A Human Guide to Artificial Intelligence (2024). This essay was written with the assistance of, but not exclusively by, ChatGPT. The individual ideas were originated by humans and elaborated upon by machine. It is also indebted to contributions from coauthors on the in-process white paper being developed regarding Sovereign AI: Aldo Faisal, Yves-Alexandre de Montjoye, Ayisha Piotti, and Alex Pentland.


Nation-states are increasingly interested in the concept of Sovereign AI, which is an AI system that is controlled by a government or group of governments, rather than by a single private sector company like OpenAI or Google or Meta. Billions of euros have been allocated by different countries towards these ends, potentially touching off a new arms race of intelligence during a time of strained budgets and economic uncertainty. Avenues for international cooperation are emerging as this richly complex topic takes root in the global discourse. Yet this path is neither clear nor short, and policymakers need to engage in this area with prudence and care.

An AI-generated image of Sovereign AI

Government focus on the subject is driven by a variety of strategic and ethical considerations. One of the primary concerns is the potential for the values embedded in AI systems, particularly those developed by major technology companies, to reflect the interests and ideologies of a few powerful entities rather than the diverse ethical frameworks of different nations. This has led to a desire among many governments to ensure that their national values and ethics are integrated into AI systems like Generative Pretrained Transformers (GPTs) and large language models (LLMs).

Additionally, the rise of unilateral policy actions by some countries, aimed at restricting access to these advanced AI systems, has further fueled the need for a more controlled and nationally aligned approach. As a result, various national AI policy initiatives are emerging across Europe, Asia, and North America, reflecting a global trend toward the development of sovereign AI systems that are tailored to the specific needs and values of individual countries.

The Trusted AI Alliance, anchored at Imperial College London and supported by researchers from six of the top seven universities in the world, is developing a white paper and resource toolkit for policymakers and innovators seeking to develop Sovereign AI. Key findings in the white paper are briefly summarized in this essay.

The essay discusses key components of national AI policy frameworks, the imperatives driving sovereign AI, the resource requirements for developing sovereign AI systems, and the cybersecurity measures that must be in place. Finally, it touches on strategic options for nations considering the development of sovereign AI, outlines the next steps for advancing these initiatives, and explains why these decisions need to be taken now rather than later.

 

The Fundamentals of AI and Relevance to Sovereign AI

Artificial Intelligence (AI) refers to machines that mimic human cognitive processes, enabling them to perform tasks that typically require human intelligence. One of the most advanced forms of AI is the Generative Pretrained Transformer (GPT), which is designed to generate human-like text based on the data on which it has been trained. Unlike traditional AI systems that require specific instructions for each task, GPTs are capable of learning from vast amounts of data, allowing them to understand and generate language in a way that closely resembles human communication. The initial iterations of GPTs focused primarily on text generation, but newer models have evolved into multi-modal systems, capable of processing and generating content across various media, including text, images, audio, and video. This advancement has expanded the potential applications of GPTs, making them highly versatile tools in numerous fields, from creative industries to customer service and beyond.

Notably, a very small group of individuals, typically employed by a private company, makes the decisions as to which values, ethics, and principles are embedded in an AI system such as ChatGPT (OpenAI's system, as of this writing powered by GPT-4o), Gemini (Google), or Claude (Anthropic). Over the past few years, members of these teams have raised concerns about how these values and ethics are being assigned.

Values that are appropriate in Silicon Valley or Redmond, Washington might not be equally acceptable in Frankfurt, Germany; Riyadh, Saudi Arabia; Lima, Peru; Kigali, Rwanda; or Kuala Lumpur, Malaysia. Local norms and cultures may embrace different perspectives than the United States or have different restrictions.

In addition, there is a reflexive dynamic: when a human being interacts with a large language model, the model may learn and adapt, but the thinking of the person may also change. This can be used benignly to improve learning outcomes, or deployed maliciously as a new attack vector for disinformation and for state-sponsored efforts to influence elections and sow discontent in populations.

Other political considerations have arisen. If U.S. government policy, for example, were to place export restrictions on AI technology, it might limit, or entirely cut off, access to software from Apple, Google, Microsoft, or OpenAI. Donald Trump has openly stated that if he wins the White House, he intends to engage in some form of trade war. Unilateral action limiting access to key technologies is very much in the realm of the possible, rather than the theoretical.

Governments around the world have taken note, and have begun to respond with significant resources. Some nations in Europe have committed billions of euros to developing greater local, or sovereign, AI capacity. 

 

Why Sovereign AI?

The motivations behind the push for Sovereign AI are not limited to the risk of a U.S. trade war or the imposition of American values; they reflect a wide range of strategic, cultural, economic, and security concerns. Strategic autonomy is a primary driver, as nations seek to reduce their dependency on foreign AI technologies and maintain control over their digital futures. Cultural and linguistic relevance is also crucial, as countries aim to develop AI systems that reflect their unique cultural identities and languages. Privacy protection is another major concern, with governments keen to ensure that citizens’ data is safeguarded from misuse or unauthorized access. The mitigation of biases and discrimination in AI systems is critical to ensuring fairness and equity, while addressing large-scale misinformation and deep fakes is essential for maintaining social stability and trust in information systems.

Nations are also focused on guarding against mass surveillance and economic disruptions that could arise from the unchecked deployment of AI technologies. Additionally, the potential for job displacement due to AI-driven automation is prompting governments to explore strategies for navigating this disruption. Protecting intellectual property rights is another key motivation, particularly in safeguarding innovations from being exploited by foreign entities. Building resilience against trade actions and protecting against existential risks posed by advanced AI systems are further considerations that underscore the need for a robust and well-thought-out national AI policy.

Economically, sovereign AI presents an opportunity to reap significant benefits, from enhancing productivity to creating new industries. Generative AI, particularly GPTs, is poised to revolutionize the global economy by taking over a wide range of tasks that were traditionally performed by humans. This capability allows GPTs to serve as a “force multiplier” for human labor, significantly enhancing productivity and efficiency. However, this shift also poses risks, particularly for economies that have relied heavily on outsourcing services to developing countries. As GPTs become more capable of performing high-value tasks, the economic advantages of outsourcing may diminish, leading to potential disruptions in these markets. On the other hand, GPTs offer an opportunity to democratize access to high-level skills by enabling mid-skill workers to perform at the same level as their high-skill counterparts. This could open new avenues for economic development, particularly in regions that have struggled to compete in the global marketplace. Overall, the impact of generative AI on work and the economy is profound, with the potential to both create and disrupt industries on a global scale.

 

Emerging National AI Policies

More than 100 nations around the world have implemented or are developing policy interventions with respect to AI, along with framework approaches to national AI. Developing a robust national AI policy requires drawing on insights from technology policies in other domains, as well as considering emerging AI-specific policy interventions from around the world. A comprehensive national AI strategic framework typically includes several key components.

Governance and regulatory frameworks are essential for ensuring that AI systems are developed and deployed in a manner that aligns with national interests and ethical standards. A pro-innovation AI policy is also crucial, as it encourages the development and adoption of AI technologies that can drive economic growth and societal benefits. National AI security and safety measures are necessary to protect against the potential risks associated with AI, including cyber threats and unintended consequences of AI deployment. Data sovereignty, authenticity, consent, and provenance are critical considerations for ensuring that data used in AI systems is managed in a way that respects the rights of individuals and maintains the integrity of information.

Finally, digital infrastructure requirements must be addressed to support the deployment and operation of AI systems at scale, ensuring that the necessary technological foundations are in place to enable the benefits of AI while mitigating potential risks. UN SDG 9 is particularly relevant in this regard, ensuring access and connectivity to communications systems.

 

Resource Requirements for Sovereign AI

Building and maintaining a Sovereign AI system is a resource-intensive endeavor, with significant financial and operational costs. The cost to train a state-of-the-art AI system can reach $1 billion, while the annual cost to run such a system can exceed $1 billion to support the query volume of even a modest-sized population.

These costs are driven by the need for specialized hardware, substantial energy consumption, and other critical resources. For instance, the global supply of the high-performance chips required for AI training and operation is limited, leading to fierce competition for these resources. Additionally, the energy demands of AI systems are considerable, particularly given the current global challenges in energy supply and sustainability. The carbon footprint of AI systems is a significant concern, as the high energy usage often relies on fossil fuels, potentially undermining recent gains in carbon reduction efforts. Google recently announced that rising demand on its systems from AI models had erased several years’ worth of its carbon mitigation gains. Water usage is also a factor, especially for the cooling systems in data centers where AI models are trained and run, with each query processed by an AI system consuming a significant amount of water.

Energy scarcity gets short shrift but is assuming ever-greater importance in discussions of AI and the national interest. Countries around the world were already struggling to meet rising energy demands before generative AI was introduced at scale. Now, strained power grids are quickly approaching the point where difficult decisions will have to be made about how to allocate power. Building new power generation capacity is a decision with a decade-long lead time, requiring capital investment in the hundreds of millions or billions of euros. The accelerating demand for AI far exceeds the speed with which the energy grid can be upgraded.

These resource challenges underscore the importance of strategic planning and investment in the development of Sovereign AI systems to ensure their sustainability and effectiveness. However, even with the best of planning and the best of intentions, a ‘home grown’ sovereign AI system may exceed the capacity of a mid-sized or smaller economy.

 

AI Cybersecurity and National Security

Cybersecurity is a critical component of any national AI policy, particularly in the context of Sovereign AI initiatives. Ensuring the security of AI systems is essential to protecting national interests and maintaining public trust. Key elements of cybersecurity in the context of AI include regulatory coherence, which ensures that policies and regulations are aligned and enforceable across different sectors. Building awareness of cybersecurity risks and best practices is also crucial, both within government and among the broader public. Multi-stakeholder engagement, involving collaboration between government, industry, academia, and civil society, is necessary to address the complex challenges of AI security.

Securing data pipelines and ensuring the integrity of AI models are fundamental to preventing unauthorized access and manipulation of AI systems. All too often, attention is paid to systems access (who can log into an AI or who has the ability to change its code), but not enough focus is given to the data that powers the AI model. One can fundamentally alter the output of an AI by changing the data it draws on, meaning that the AI will ‘think’ differently and express different ideas. This creates a powerful new attack vector for malicious actors.

Addressing cybersecurity threats requires a proactive approach, including the development of advanced security measures and rapid response capabilities. Capacity building within the public sector is also important, ensuring that government agencies have the knowledge and resources needed to manage AI-related cybersecurity risks. Finally, international cooperation and alignment with global standards are essential for addressing cross-border cybersecurity challenges and ensuring that AI systems are secure on a global scale.

 

Open Source Opens Options

A number of open source GPTs are being developed, and they present an attractive foundation on which a government could build Sovereign AI without starting entirely de novo and without underwriting all of the model training costs. These open source GPTs are fostering ecosystems of AI developers who are building on and enhancing the core technology. Open data, too, is growing in prominence. Of particular note is the Data Provenance Explorer, a large-scale collection of audited data repositories that can give those building AI systems greater transparency into, and confidence in, the data used in them. This avenue for public-private partnership can address a number of limitations and open greater public discourse and engagement.

 

What to do Instead of Sovereign AI

A nation-state or multilateral body creating its own GPT entails significant technological risk. The lead times to develop AI systems, and to acquire the hardware to power them, are considerable. If an incorrect decision is made about which software technology to pursue or which hardware configuration to use, the effort might fail or be doomed to obsolescence almost at its inception.

And in a time of technological disruption and change, such as the one we are in now, it is easy to make the wrong choice. Recall the debate over VHS versus Betamax, or, more recently, the mobile operating system wars that pitted Nokia’s Symbian OS (loser) and Microsoft Windows Mobile (loser) against Apple iOS (winner) and Google Android (winner), but with national competitiveness on the line. Waiting for a standard to emerge and for the pace of change to slow is, unfortunately, not an option. Alternatives are addressed in the next section of this essay.

New developments are creating additional choices in lieu of committing to a full-scale sovereign AI. For example, the proposed strategy of ‘centralized training, decentralized inference’ keeps the expensive base model training in central clusters, perhaps still on GPUs, while edge devices (even mobile phone processors) and/or lower-cost, more widely available CPUs are used for inference (answering queries). This helps ease concerns about hardware scarcity. While it does not resolve concerns such as those regarding export restrictions or values and ethics, it can help address questions around population surveillance, data privacy, and aspects of behavior manipulation and security. It may also introduce some local controls at the inference (query) layer, rather than in the training set underlying the model.

Other developments in training algorithms are being pursued that require less power and less computing to achieve results comparable to extant large-scale systems. These may help ‘democratize’ access to such powerful systems and offer countries an alternative route to harnessing the benefits of sovereign AI without the same intensity of capital investment.

 

Alternatives for Government Leaders to Consider

Countries exploring the development of Sovereign AI systems face a range of strategic options, each with its own set of advantages and challenges. Countries may choose to actively create their own Sovereign AI systems, which would involve significant investment and resource allocation but could provide greater control over AI technologies and their alignment with national interests. Addressing considerations of scale is another critical factor, as the development and deployment of AI systems require substantial infrastructure and coordination. Forming partnerships with Big Tech companies is a potential strategy, leveraging their expertise and resources while maintaining a degree of national oversight. Adapting open-source code and leveraging open-source data are cost-effective strategies that can accelerate the development of Sovereign AI by building on existing technologies and datasets. Finally, countries can harvest the benefits of other government initiatives, such as existing digital transformation programs, to support the development and implementation of Sovereign AI.

The Trusted AI Alliance advocates consideration of a ‘Federation of GPTs,’ first proposed by Professor Aldo Faisal of Imperial College London and the Alan Turing Institute. In this formulation, the resources required to build new independent GPTs are shared across several nations, with private sector assistance, but with key decisions about values and protections controlled by government, not by individual large technology companies. Open-source code projects and open data repositories provide a code base and training data set that have been reviewed and vetted by teams of independent experts, with transparency at the forefront of the design process.

We are seeing growing interest in this notion of a Federation of GPTs. One avenue is oriented around non-aligned nations looking for an alternative to U.S.-centric or China-centric AI ecosystems. Another looks to bring together common language and cultural groups in supra-national clusters, sometimes forming along existing lines of international cooperation (e.g., ASEAN), and sometimes forming new aggregations.

Each option for national AI policy requires careful consideration of the national context, international agreements and collaborations, resources, and long-term strategic goals. The time to consider these alternatives is now. A country that moves too slowly in developing a national AI policy risks not only loss of economic competitiveness but also exposure of its citizenry to new and powerful harms. Within this policy, consideration should be given to whether a sovereign AI should be developed and, if so, what form it should take, so that this new technology accrues to the benefit of a country’s citizenry and national interest.
