Neil Lawrence is the DeepMind Professor of Machine Learning at the University of Cambridge and a Visiting Professor at the University of Sheffield. This essay is partly drawn from his public policy annual lecture, delivered at the Bennett Institute in December 2024.
Despite its transformative potential, artificial intelligence risks following a well-worn path where new innovations fail to address society’s most pressing problems. In 2017, the UK Royal Society’s Machine Learning working group conducted research with Ipsos MORI to explore citizens’ aspirations for AI. It showed a strong desire for AI to tackle challenges in health, education, security, and social care, alongside an explicit lack of interest in AI-generated art. Yet eight years later, AI has made remarkable progress in emulating human creative capabilities, while demand in those other areas remains unfulfilled.
Digital technologies can be particularly problematic in serving broad population needs. We can draw lessons from major IT project failures such as the UK’s Horizon program, a misimplemented IT system that led to wrongful prosecutions of UK subpostmasters, or the Lorenzo project from the UK NHS National Programme for IT, which was cancelled with a bill of over £10 billion. These projects weren’t just technical failures, but failures of understanding: they could not bridge the gap between the underlying need and the provided solution. My concern is that without changing our approach, we face the same problem when introducing the next wave of digital capabilities, the widespread deployment of AI.
A significant part of the problem is the persistent gap between the supply of technical solutions and the demands of society. In digital technologies, traditional market mechanisms have failed to map macro-level interventions to micro-level societal needs, and conventional approaches to technology deployment continue to fall short. To counter this trend, radical changes are needed to ensure that AI truly serves citizens, science, and society.
The philosopher’s stone is a mythical substance that can convert base metals to gold. Before the emergence of modern chemistry, a major goal of alchemists was to recover the process through which rare and valuable gold could be produced. In his early career, the natural philosopher Isaac Newton dedicated some of his time to the search. In 1717, Newton was Master of the Royal Mint. He miscalculated the exchange rate between silver and gold, pushing Britain onto a de facto gold standard for currency. Both silver and gold have been widely used as currency because of their relative rarity and durability. These properties mean that they can be exchanged in lieu of goods or services. They enabled us to replace a system of barter with a system of currency.
Today we might think of the philosopher’s stone as a foolish dead end, placing it alongside perpetual motion machines or cold fusion as an exciting but misguided scientific foray. The recent debate in digital technology has focused on a new quest: the search for artificial general intelligence. This is the idea that we can develop machines that provide a substitute for human capabilities of intelligence. Some predictions suggest that we are on the cusp of producing machines that dominate us intellectually.
I view this perspective as problematic: it distracts us from the real issues in the debate. The term artificial general intelligence builds on the notion of general intelligence, which is due to Charles Spearman. Spearman’s work was part of a wider attempt to quantify intelligence in the same way we quantify height. This domain of ‘science’ was called eugenics. The ideas arose from Francis Galton’s book Hereditary Genius (1869), in which Galton suggested
“as it is easy … to obtain by careful selection a permanent breed of dogs or horses gifted with peculiar powers of running, or of doing anything else, so it would be quite practicable to produce a highly-gifted race of men by judicious marriages during several consecutive generations.”
There are general principles underlying intelligence, but the notion of a rankable form of intelligence in which one entity dominates all others is flawed. Yet it is this notion that underpins the modern idea of artificial general intelligence. To understand the flaws, we can consider a concept I call the artificial general vehicle. An artificial general vehicle is a vehicle that dominates all other vehicles in all circumstances. Whether you are travelling from Nairobi to Nyeri, from London to Lagos, or from your house to the end of your road, the artificial general vehicle would be the right vehicle to use. Now, just as there are general principles underlying intelligence, the same applies to vehicles. Transportation is subject to fluid resistance, surface friction, and the conversion of potential into kinetic energy. Different vehicles offer solutions that are a composition of ideas addressing the physics of these problems: wheels, lubrication, levers, wings, engines, and so on. In a similar respect, any single decision-making system is an amalgamation of solutions inspired by general principles that can be combined according to different recipes. Each recipe is appropriate for a different context. The idea of a single recipe that would dominate in all respects is as flawed as the philosopher’s stone or a perpetual motion vehicle.
From a societal perspective, when understanding our new AI capabilities, one challenge we face is that notions of intelligence are very personal to us. Calling a machine intelligent triggers us to imagine a human-like intelligence as the driving force behind the machine’s decision-making capabilities. We anthropomorphize, and our anthropomorphizing becomes conflated with our understanding of the undoubted strengths of the machine. Just as a Boeing 747 jet moves faster than a walking human, so can a machine perform calculations far faster than a thinking human. But the right place for walking is different from the right place for a 747. To compare intelligences we need to step back to a more fundamental measure than the eugenic notion of rankable intelligence. Instead, we need to turn to information theory.
The field of information theory was introduced by Claude Shannon, an American mathematician who worked for Bell Labs. Shannon was trying to understand how to make the most efficient use of resources within the telephone network. To do this he developed an approach to quantifying information. Comparing information from different circumstances and different contexts is challenging. Shannon’s most important idea was that information should be separated from its context. To do this he associated information with probability. Shannon suggested that a bit of information is the amount of information you gain when you learn the result of a 50/50 random event, such as a coin toss. In Shannon’s theory it doesn’t matter whether you are studying a coin toss or the outcome of an evenly matched Federer-Nadal tennis match: in both cases, when you learn the outcome you gain one bit of information.
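In symbols (a standard statement of Shannon’s measure, with notation added here for illustration rather than taken from the lecture), the information gained from observing an outcome $x$ that has probability $p(x)$ is

$$h(x) = -\log_2 p(x), \qquad \text{so for } p(x) = \tfrac{1}{2}: \quad h(x) = -\log_2 \tfrac{1}{2} = 1 \text{ bit.}$$

The formula makes no reference to what the outcome is about, which is exactly why the coin toss and the tennis match carry the same single bit.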
Removing context makes the information in Shannon’s theory fungible. You can combine different bits of information from different sources. Then you can measure how much information you can feed through a copper cable, a fiber-optic line, or across the airwaves, and compare different technologies according to how much information they can transfer. In this sense information is more tangible than intelligence: we can use it to compare across different technologies without stating the context.
Since the development of the telegraph, we have used electromagnetic waves to communicate. These waves travel at 300,000 kilometers per second. Modern computers use this speed of propagation to communicate with each other at billions of bits per second. The digital computer is the latest in a series of technologies that have changed the way we communicate. From the development of writing itself, to the emergence of a practical printing press, to the telegraph, radio, and television, all these technologies have changed the way information flows between us. They’ve disrupted our information topography.
Humans evolved using speech to communicate: sound waves instead of electromagnetic waves. But the advent of the machine-mediated information topography has significantly increased the quantity of information available. The improvement in our capability to move information is a major driver of disruption and innovation in human culture. It is this shift in our information topography that has led to a phenomenon we call the attention economy.
The attention economy was foreseen by the American computer scientist Herbert Simon, who argued in his 1971 paper “Designing Organizations for an Information-Rich World” that “What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention…”
What Simon was suggesting was that as the availability of information increases, it triggers a poverty of human attention. In the attention economy, human attention has become the bottleneck in the system. It has replaced gold and silver as the rare commodity.
The poverty of attention has driven demand for automated decision-making. Machines as well as humans now pass judgment on the basis of information. We sometimes call this artificial intelligence, but it would be less emotive to just call it automated decision-making. One example comes to us from financial trading. In high-frequency trading, machines make many millions more decisions than humans do. But the results of this can be disastrous. A flash crash is when a cascade of mistaken decisions happens so quickly that the entire trading system needs to be shut down. A famous crash in 2010 led to the Dow Jones Industrial Average index losing 9 percent of its value.
A typical human, when speaking, shares information at around 2,000 bits per minute. Two machines will share information at 600 billion bits per minute. In other words, machines can share information 300 million times faster than us. This is equivalent to us travelling at walking pace while the machine travels at the speed of light. From this perspective, machine decision-making belongs to an utterly different realm from that of humans. Consideration of the relative merits of the two needs to take these differences into account.
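The ratio behind that claim is simple arithmetic:

$$\frac{600 \times 10^{9}\ \text{bits/minute}}{2{,}000\ \text{bits/minute}} = 3 \times 10^{8},$$

roughly 300 million, which is also the order of magnitude separating the speed of light (about $3 \times 10^{8}$ meters per second) from a walking pace of a meter or so per second.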
Some commentators have warned that we face an era where machines are many orders of magnitude more intelligent than us. Such a statement is ill-defined. It is like saying one vehicle is many orders of magnitude better than another. In contrast, information transfer rates are well defined. And machines can already transfer information many orders of magnitude faster than humans can. In The Atomic Human (2024) I argue that this limit in our ability to access information characterizes the nature of our human intelligence.
This difference between human and machine underpins the revolution in algorithmic decision-making that has already begun reshaping our society.
The modern attention economy leads to a challenge where automated decision-making is combined with an asymmetry of information access. Some have more control of data than others. This power asymmetry leads to a phenomenon that Tim O’Reilly, Ilan Strauss, and Mariana Mazzucato described as “algorithmic attention rents”: excess returns over what would normally be available in a competitive market. Control of data allows for monopolistic practices. I’ve referred to this phenomenon as the “digital oligarchy.” The challenge has not gone unnoticed by legislators: the EU’s Digital Markets Act and the UK’s Digital Markets, Competition and Consumers Act are both attempts to redress the power that arises through these information asymmetries.
But the machines lack social context and the vulnerabilities that we humans share. Their decisions are not driven by an understanding of human experience. This makes our intelligence irreplaceable and renders the notion of artificial general intelligence absurd. But that irreplaceability does not mean this technology is not revolutionary. The machine’s extraordinary access to information is driving a change in our information topography with corresponding dramatic effects on human culture.
Over 5,000 years ago in the city of Uruk, on the banks of the Euphrates, Bronze Age communities developed a new approach to recording transactions involving the exchange of goods. They had access to plentiful quantities of clay, and they recorded the exchanges through impressions in the clay. Over time, their system of recording became sophisticated enough to codify their social customs: they wrote down laws. It was also able to record their stories: they wrote epic poems. Today we call this system of writing cuneiform. While initially developed to record transactions, its wider effect was to expand society’s capacity to remember, to store what today we call data. When we decipher these tablets we explore the lives of people who lived more than two millennia before Christ.
These ancient data controllers were called scribes. Because they could read and write, they gained great power as administrators of accounts, legal judgments, and civil decision-making. In our modern society, those powers have co-evolved with responsibilities. The modern scribes sit in institutions we call the professions. These institutions evolved to manage the problem that arises when information asymmetries become asymmetries of power.
In ancient societies scribes preserved their power through strict controls on the data. They decided what forms of writing were considered authoritative. Diglossia is the phenomenon in which different dialects of a language evolve for different uses; accordingly, formal written language evolved separately from spoken language. At the most extreme, distinct vestigial languages such as Latin, Hebrew, or Sanskrit were used for legal or religious practices, ensuring that access to information could be controlled by a smaller, empowered group. In Europe this form of institutional protection was eroded by the emergence of printing in the fifteenth century. This led to an increase in European literacy and a dispersion of the scribes’ power, which today is spread across our modern professions: accountancy, civil administration, law. But now that control has shifted into the digital oligarchy. The modern scribe is the software engineer, and their guild is the big tech company, yet society has not yet evolved to align the great power these entities hold with the social responsibilities needed to ensure that power is wisely deployed.
The printing press left a legacy for Europe in an educational advantage that persists today. Human capital is a measure of the educated and healthy workforce. In a 2019 World Bank report that measured human capital, 14 of the top 20 countries were European. We can think of human capital as a measure of “human attention.” In the attention economy it is the shortage of human attention that presents the bottleneck, which implies that Europe should have a significant advantage. But the advantage in human capital is a double-edged sword. Automated algorithmic decision-making means that the machine can replace human “mental labor” with machine decisions. The computer can operate in an analogous manner to the “philosopher’s stone.” In the past the machine combined the base metal of steel with steam to automate human physical labor. Our recent advances allow machines to combine silicon with electrons to automate human mental labor. Both advances allow tasks to be completed more rapidly by the machine than by the human. If Isaac Newton had discovered the philosopher’s stone, then the transmutation of base metals into gold would have led to significant inflation, undermining the value of existing gold stocks. Similarly, by automating human mental labor we trigger an inflation of human capital. The European advantage in human capital would dissipate rapidly. How should we invest to preserve our precious human resources?
In the 1970s and 1980s, significant investment in computing wasn’t accompanied by a corresponding increase in economic productivity. In a 1993 paper, Erik Brynjolfsson characterized the challenge as a “productivity paradox,” in which the benefits of computational investment lagged the investment itself. Part of the explanation for this lag was the need for an organization to adapt its corporate infrastructure to make best use of the new information infrastructure. This is a manifestation of Conway’s law, which suggests that organizations which design systems are constrained to create systems that mirror the organization’s own communication structure. In 2002, Jeff Bezos issued a memo to Amazon, known as the API mandate, that applied this principle in reverse: he reorganized Amazon’s corporate structure around the architecture of the software they were building. This change foreshadowed the dominance of today’s main approach to building digital systems, known as microservices or service-oriented architecture. The scale of this reorganization hints at the new challenges that the next wave of digital systems presents. It will not be enough for individuals to be digitally literate; to get the best from these technologies, a more complex cultural shift will be required.
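As a loose illustration of the principle (a minimal sketch of my own, with invented service names, not a description of Amazon’s actual systems), the rule is that one team’s capability is reachable only through an explicit service interface, so the shape of the software mirrors the shape of the organization:

```python
# Minimal sketch of the API-mandate idea: each team's capability is
# reachable only through an explicit service interface, never by reading
# another team's internal data. Service names here are invented.

from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    total_pence: int


class PaymentsService:
    """Owned by one team; other teams may only call its public interface."""

    def charge(self, order: Order) -> bool:
        # Internal rules stay hidden behind the interface and can change
        # without other teams needing to know.
        return order.total_pence > 0


class OrderingService:
    """Depends on PaymentsService only through its API, so the software
    boundary mirrors the organizational boundary."""

    def __init__(self, payments: PaymentsService) -> None:
        self._payments = payments

    def place_order(self, order: Order) -> str:
        if self._payments.charge(order):
            return f"order {order.order_id} confirmed"
        return f"order {order.order_id} rejected"


if __name__ == "__main__":
    service = OrderingService(PaymentsService())
    print(service.place_order(Order("A1", 1250)))
```

The point is only that the dependency crosses an interface rather than a shared database or a shared team: the organizational structure and the software structure are kept in alignment.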
Modern AI systems present us with a new productivity paradox. It is less about absolute measures of productivity and more about the distribution of the benefits of productivity. Let’s consider the problem from the perspective of a productivity flywheel. In this model a business invests in research and development (R&D) that leads to technical innovations, which in turn produce productivity gains or profits. Those gains provide economic surplus that can be reinvested in research and development. A classic case study is that of DuPont, examined by David Hounshell and John Kenly Smith Jr. in their 1989 book Science and Corporate Strategy: Du Pont R&D, 1902-1980. The book considers not just investment in R&D but also how that investment was deployed through the business.
The Innovation Flywheel | Source: Courtesy of the author
The innovation flywheel illustrates the traditional model of technological innovation and value creation. This cyclical process begins with investment in R&D, which produces technical innovations that can be deployed as productivity improvements or new products. These innovations generate economic surplus through increased efficiency or new revenue streams, which can then be reinvested to fund further R&D efforts. This self-reinforcing cycle has driven technological progress in commercial settings, but tends to prioritize innovations with clear economic returns. The model creates a potential disconnect between technological advancement and social value when societal needs don’t align with economic incentives. The flywheel’s dependence on quantifiable economic returns can leave significant public needs unaddressed, particularly in areas where value is harder to measure or monetize.
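One way to make the loop concrete (an illustrative formalization, with symbols introduced here rather than taken from the essay): if a fraction $\alpha$ of each period’s surplus $S_t$ is reinvested in R&D, and each unit of R&D investment returns $r$ units of new surplus, then

$$S_{t+1} = r\,\alpha\,S_t,$$

so the flywheel accelerates when $r\alpha > 1$ and stalls when returns cannot be captured, which is precisely the position of public value that is hard to measure or monetize.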
The ability of organizations to assimilate and deploy innovation has become known as absorptive capacity. But the productivity challenge we face now is not only about how organizations absorb this technology but also about how they redeploy it in domains of importance, what we might think of as distributive capacity.
One challenge is the misrepresentation of AI technology in the wider debate. The notion of artificial general intelligence is distracting; by focusing on the more fundamental idea of the asymmetry in information transfer rates, we can suggest more actionable solutions. This asymmetry is what has transformed our information topography, creating new power structures that our institutions are struggling to assimilate.
Past digital failures like the UK’s Horizon and Lorenzo systems reflect a recurring pattern in which macro-level policy decisions are disconnected from micro-level operational realities, with insufficient attention to how systems function in practice. These are sociotechnical systems, and many of the failures start with the social and spread to the technical.
Public dialogues can lead to consistent and nuanced opinions about AI deployments, but so far the innovation ecosystem has failed to address the public’s needs with appropriately tailored solutions. Dialogues show a desire for solutions in sectors such as healthcare, education, and social services, and yet the innovation economy has prioritized areas, such as creative content generation, that the public explicitly identified as a low priority.
This persistent misalignment is the new productivity paradox. Investments in AI are failing to map onto societal needs. Traditional market and policy mechanisms have proven to be inadequate for connecting the supply of innovation to societal demand.
How should we bridge this gap? We are inspired by colleagues on the African continent who since 2015 have been working to deploy these technologies in close collaboration with those who are directly experiencing the challenges. They founded an organization, Data Science Africa, that pursues end-to-end solutions, going from medical centers to the Ministry of Health or from the farmer’s field to the Ministry of Agriculture. To scale their approach we envisage an attention reinvestment cycle. The current productivity flywheel assumes that innovation delivers value through financial returns. In the attention reinvestment cycle we acknowledge that we can deliver efficiency by freeing up people’s time. This means we can release valuable human attention and reinvest it by building networks that share the newfound knowledge.
In Cambridge, we are putting these ideas into practice through collaborative networks that use multidisciplinary, community-centered approaches to build capacity. We are developing solutions grounded in local context and needs. In 1945, Karl Popper responded to the threat of fascism with his book The Open Society and Its Enemies. He emphasized that change in the open society comes through trusting our institutions and their members, a group he called the piecemeal social engineers. The failure to engage the piecemeal social engineers is what led to the failure of the Horizon and Lorenzo projects. We cannot afford to make those mistakes again; they would lead to a dysfunctional digital autocracy that would be difficult or impossible to recover from. The attention reinvestment cycle refocuses our attention on supporting the piecemeal social engineers in deploying these technologies.
Delivery requires a shift in our innovation mindset—away from the grand narratives of artificial general intelligence and toward the more humble but ultimately more rewarding work of deploying AI to enhance and amplify our human capabilities, particularly in the domains where our human attention is most needed but most constrained.
The gap between AI innovation and societal needs is not an inevitability. By learning from past failures, engaging directly with public expectations, and creating mechanisms that deliberately bridge technical and social domains, we can ensure that AI truly serves science, citizens, and society.