Daniel Wagner is CEO of Country Risk Solutions, an Adviser at DOGE-UN, and author of 11 books on current affairs, risk management, and our future. He is currently co-authoring a book about reforming multilateral institutions. You may follow him on X @countryriskmgmt.
The pace of transformation in the Artificial Intelligence (AI) and Machine Learning (ML) domains is nothing short of breathtaking, and it promises to continue upending conventional wisdom and surpassing some of our wildest expectations, particularly as it proceeds on what appears to be an unalterable and preordained course. Along the way, much of what we now consider “normal” or “acceptable” will continue to change. What remains indisputable, however, is that humanity is confronting an unprecedented challenge—one with deeply complex and far-reaching implications for its future. The idea that a union of man and machine may be not only inevitable, but achievable within our own lifetimes, is staggering.
Yet, it should be apparent that we are already treading the path toward this convergence—albeit in what could still be seen as mere baby steps. Only a few years ago, discussions about AI were dominated by its “promise”; today, however, the focus has shifted squarely to the potential dangers it presents. A range of new and emergent threats, coupled with an expanded array of actors capable of leveraging AI and ML for malicious purposes, has become alarmingly evident. This progression is a natural byproduct of the unparalleled efficiency, scalability, and diffusion capabilities inherent in AI systems, which in turn broaden the pool of potential perpetrators capable of launching attacks on civilian, business, and military targets using AI.
Hacking is just one among the endless array of malignant purposes for which AI is being used | Source: Shutterstock
Attacks supported and enabled by progress in AI will be especially effective, finely targeted, and difficult to attribute, as they have been in the cyber arena. Given that AI can, in a variety of respects, exceed human capabilities, attackers may be expected to conduct more effective attacks with greater frequency and on a larger scale. Attackers often face a trade-off between how efficient and scalable their attacks will be versus how finely targeted. A telling example of this tension is the potential use of drone swarms deploying facial recognition technology to identify and eliminate specific individuals within a crowd, as opposed to a broad-based mass casualty event.
The AI/Cyber Nexus
Recent breakthroughs in science, engineering, and mathematics have unlocked a new generation of intelligent systems—those capable not only of executing high-value tasks, but also of making nuanced value judgments, some of which approach the realm of human thought. As we wade through ever larger oceans of data, ML is helping us process and make sense of that information, while creating a pathway to the future. Yet ML is difficult to develop and deliver, as it requires complex algorithms devised within a framework that permits information to be interpreted and results to be produced.
The link between AI and the cyber world may not have appeared obvious, even as recently as the start of the current decade. Computers and software are not naturally self-aware, emotional, or intelligent the way human beings are. They are, rather, tools that carry out functionalities encoded in them, inherited from the intelligence of their human programmers. Software can be designed to learn in ways loosely modeled on the human brain, but teaching machines correctly is difficult; if programmers teach a machine to learn in the wrong way, it may completely defeat the purpose of the learning exercise.
We know that algorithms already play increasingly active roles in a growing range of businesses and governments. There is a lot of potential value to be gained by using AI for a seemingly endless array of purposes; however, by the same token, AI’s existence and widespread use have also opened the door for virtual terrorists to hack, steal, malign, and sow fear.
A new cyber era has begun, with AI and machines ready to fight battles, and sophisticated cyber attackers and criminal groups seizing any opportunity to take advantage of systemic vulnerabilities. The battlefield is composed of corporate and government networks, and the prize is control of the organization—whether or not the organization realizes it. The stakes are extremely high. The target of this behind-the-scenes battle is not just stolen information, nor merely the ability to embarrass a rival. It is the ability to alter IT systems, including the capacity to install kill switches that can be activated at will. These attackers are sophisticated; they use previously unknown code and silently breach boundary defenses without being seen or heard.
It didn’t take long for cyber thieves to start cashing in on the ChatGPT craze in 2023. In the first six months of the year, more than 100,000 ChatGPT accounts were hijacked by malware and offered for sale on the dark web. Conventional approaches to cybersecurity rely on understanding the nature of a threat in advance, but that approach is fundamentally flawed, since threats are constantly evolving, laws and policies are outdated, and the threat from insiders is growing. In the current cyber era, threats easily bypass legacy defense tools.
AI would appear to have the advantage over legacy defense mechanisms that have failed to keep pace with the rate of AI development. New black hat machine intelligence need only enter an organization’s IT systems a single time. From that point of entry, it listens, learns how to behave, blends in, and appears as authentic as the original devices, servers, and users. These automated attackers can hide their malicious actions among ordinary daily system tasks, sometimes with devastating results.
Today’s attacks can be so swift and severe that it is impossible for humans to react quickly enough to stop them. However, based on advances in self-learning, it is now possible for machines to rapidly uncover emerging threats and deploy appropriate, real-time responses against the most serious cyberthreats. Firewalls, endpoint security methods, and other tools are routinely deployed in some organizations to enforce specific policies and provide protection against certain threats. These tools form an important part of an organization’s cyber defense strategy, but they are quickly becoming obsolete in the new age of AI- and ML-driven cyber threats.
When ML is applied correctly, machines can make logical, probability-based decisions and undertake thoughtful tasks. It is already operating successfully in a broad range of commercial and industrial fields, such as payment processing, online video services, advertising, healthcare, and onboard computers in cars and airplanes. However, much of our existing ML is supervised, meaning that for it to operate successfully, a human must supply prior knowledge of the potential outcomes in the form of labeled examples. In an area as complex and obfuscated as cybersecurity, there cannot be complete knowledge of all forms of existing or emerging threats.
Traditional approaches to cybersecurity are based on identifying activities that resemble previously known attacks—the “known knowns.” This is usually done with a signature-based approach, whereby a database of known malicious behaviors is created, new activities are compared to those in the database, and any that match are flagged as threats. Other systems use methods based on supervised machine learning (SML), wherein a system is trained using a dataset in which each entry has been labeled as belonging to one of a set of distinct classes.
In the information security context, the security system is trained using a database of previously seen behaviors, where each set of behaviors is known to be either malicious or benign and is labeled as such. New activities are then analyzed to determine whether they more closely resemble those in the malicious class or those in the benign class. Any that are evaluated as being sufficiently likely to be malicious are again flagged as threats.
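To make the contrast concrete, the sketch below (a hypothetical illustration, not any vendor's actual product) shows both ideas side by side: a hard-coded signature list and a supervised classifier trained on behaviors labeled benign or malicious. The signatures, feature choices, and toy data are all invented for the example.

```python
# Illustrative sketch only: signature matching vs. supervised (SML) classification.
# The signatures, features, and data below are hypothetical, not drawn from a real system.
from sklearn.ensemble import RandomForestClassifier

# --- Signature-based detection: flag anything matching a known-bad pattern ---
KNOWN_BAD_SIGNATURES = {"mimikatz", "powershell -enc", "vssadmin delete shadows"}

def matches_signature(command_line: str) -> bool:
    """Return True if the observed command contains any known-bad signature."""
    lowered = command_line.lower()
    return any(sig in lowered for sig in KNOWN_BAD_SIGNATURES)

# --- Supervised ML: learn from behaviors labeled benign (0) or malicious (1) ---
# Each row is a set of observed behaviors: [logins per hour, MB uploaded, failed logins]
X_train = [
    [2, 10, 0], [3, 12, 1], [1, 8, 0], [4, 15, 2],   # labeled benign
    [40, 900, 25], [55, 1200, 30], [35, 700, 18],    # labeled malicious
]
y_train = [0, 0, 0, 0, 1, 1, 1]

classifier = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# New activity is compared against what the system has already been taught
print("Signature hit:", matches_signature("cmd.exe /c mimikatz.exe dump"))
print("SML verdict (1 = malicious):", classifier.predict([[45, 1000, 20]])[0])
```

Both branches depend entirely on what has been seen and labeled before, which is precisely the weakness described next.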
While SML has inherent benefits, it also has fundamental weaknesses: malicious behaviors that deviate sufficiently in character from those seen before will fail to be classified as such and will pass undetected. A large amount of human input is required to label the training data, and any mislabeled data can seriously compromise the system’s ability to correctly classify new activities. It is also worth noting that in a dynamic, constantly evolving threat landscape, SML can miss many important operational variables along the way.
By contrast, unsupervised machine learning (UML) presents a significant opportunity for the cybersecurity industry, with the prospect of enhanced network visibility and improved detection levels resulting from more advanced computational analysis. UML can overcome the limitations of rules- and signature-based approaches by learning what is considered normal within a network and not being dependent on prior knowledge of previous attacks. It thrives on the scale, complexity, and diversity of modern businesses—where every device, person, and operation is different—and it turns the innovation of cyber attackers against them by making any unusual activity visible.
Utilizing ML in cybersecurity technology is difficult, but when correctly implemented, it is powerful. Previously unidentified threats can be detected, even when their manifestations fail to trigger any rule set or signature. Instead, ML allows the system to analyze large sets of data and learn the patterns they contain. ML can give machines capabilities we associate with humans, such as thought (using past information and insights to form judgments), real-time information processing, and self-improvement through the integration of new information. UML therefore allows computers to recognize evolving threats without prior warning.
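As a rough sketch of the unsupervised idea, the example below (hypothetical features and values, using a generic off-the-shelf anomaly detector rather than any specific security product) learns a baseline of “normal” traffic and flags whatever deviates from it, with no labeled attacks involved.

```python
# Illustrative sketch only: unsupervised anomaly detection against a learned baseline.
# The traffic features and values are synthetic and chosen purely for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Learn what "normal" looks like from routine activity:
# columns are [MB transferred per hour, connections per hour, distinct ports contacted]
normal_traffic = rng.normal(loc=[500.0, 20.0, 5.0], scale=[50.0, 3.0, 1.0], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Score new observations against that baseline (no signatures, no labels)
new_observations = np.array([
    [510.0, 22.0, 5.0],     # resembles everyday traffic
    [9000.0, 300.0, 60.0],  # exfiltration-like burst never seen before
])
for row, verdict in zip(new_observations, detector.predict(new_observations)):
    print(row, "-> anomalous" if verdict == -1 else "-> normal")
```

Nothing in the sketch encodes what an attack looks like; it only encodes what this particular network's history looks like, which is why novel behavior still stands out.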
As information networks continue to grow in scope and complexity, the opportunities for attackers to exploit gaps have naturally increased. Walls are no longer enough to protect the content of systems; rules cannot pre-emptively defend against all possible attack vectors, and signature-based detection methods fail repeatedly. Since cyberattacks are advanced, subtle, and varied, only automated responses based on ML can keep pace with them. ML technology is the fundamental ally in the defense of systems from hackers and insider threats. Used cleverly, it offers a real opportunity to gain the upper hand in the ongoing battle for supremacy in cybersecurity.
Convergence of the Cyber and Physical Arenas
Malicious AI actors and cyberattackers are likely to rapidly evolve in tandem over the coming years—across both virtual and physical arenas—so a proactive effort is needed to stay ahead of them. There is a growing gap between attack capabilities and defense capabilities more generally, because defense mechanisms are capital-intensive, while the hardware and software required to conduct attacks are increasingly affordable and widely distributed. Unlike the digital world, where critical nodes in a network—such as Google—can play a key role in defense, physical attacks can happen anywhere in the world, and many people are located in regions with insufficient resources to deploy large-scale physical defenses. Some of the most worrying AI-enabled attacks may come from small groups or individuals whose preferences are far removed from what is conventional, and which are difficult to anticipate or prevent—much like today’s “lone-wolf” terrorist attacks.
Given the sheer breadth of potential attack surfaces and the relentless pace of advancement in both offensive and defensive capabilities, any equilibrium reached between rival states, criminal enterprises, security agencies, or competing organizations is likely to be fleeting. As technology and policy frameworks evolve, the balance of power will remain in constant flux. In this volatile landscape, major technology and media conglomerates are positioned to remain the de facto guardians of digital security for the public. Their unparalleled access to real-time, large-scale data—combined with ownership of critical platforms, communication networks, and core infrastructure—places them in a uniquely powerful position to deliver adaptive, customized protection at a scale no government or smaller entity can easily replicate.
Developed nations—particularly those at the forefront of AI and cyber capabilities—have a clear head start in establishing the control mechanisms to provide security for their citizens, but maintaining that comparative advantage requires a significant, ongoing commitment of resources. Also required, of course, is sustained, forward-thinking strategic planning, which is not in abundant supply. Much more work must be done to establish the right balance between openness and security, improve technical measures for formally verifying the robustness of systems, and ensure that policy frameworks—developed in a world that was previously less AI-infused—adapt to the new world we are creating.
The Malicious Use of AI
While AI and ML have many broadly beneficial applications, human nature ensures a plethora of potentially nefarious uses for both. Just as ammonium nitrate is both a fertilizer and a potential explosive, AI is a dual-use area of technology. AI is dual use in the same sense that human intelligence is, for it is not possible for AI researchers to avoid producing research and systems that can be directed toward harmful ends. Many tasks that would benefit from automation are themselves dual use. For example, systems that examine software for vulnerabilities have both defensive and offensive applications, and the difference between an autonomous drone used to deliver packages and one used to deliver explosives may not be all that great.
In addition, research that aims to increase our understanding of AI, its capabilities, and our degree of control over it appears to be inherently dual use in nature. AI systems are generally both efficient and scalable. For example, once it is developed and trained, a facial recognition system can be applied to many different camera feeds for much less than the cost of hiring human analysts to do equivalent work. Many AI systems can perform a given task better than any human can, as has been proven against top-ranked players in games such as chess and Go.
AI systems can also increase anonymity and psychological distance in tasks that involve communicating with other people, observing or being observed by them, making decisions that respond to their behavior, or being physically present with them. By allowing such tasks to be automated, AI systems can enable actors who would otherwise be performing the tasks to retain their anonymity, resulting in a greater degree of psychological distance from the people they impact. For example, someone who uses AI to carry out an assassination, rather than a handgun, avoids the need to be present at the scene and greatly increases the likelihood of never being caught.
While attackers may find it costly to obtain or reproduce the hardware associated with AI systems—such as powerful computers or drones—many new AI algorithms are reproduced in a matter of days or weeks, making it much easier to quickly gain access to software and resultant scientific findings. In addition, AI research is characterized by a high degree of openness, with many published papers being accompanied by source code. AI systems also suffer from a number of novel, enduring vulnerabilities, which include data poisoning attacks (introducing training data that causes a learning system to make mistakes), adversarial examples (inputs designed to be misclassified by ML systems), and the exploitation of flaws in the design of autonomous systems’ goals.
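The first of those vulnerabilities is easy to demonstrate on toy data. The sketch below (synthetic data only, no real system or dataset) flips a fraction of training labels—the simplest form of data poisoning—and compares the resulting classifier against one trained on clean data.

```python
# Illustrative sketch only: label-flipping data poisoning on a toy classifier.
# All data is synthetic; the point is the mechanism, not a real-world result.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker poisons the training set by silently flipping 30% of the labels
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
flipped = rng.choice(len(poisoned_labels), size=int(0.3 * len(poisoned_labels)), replace=False)
poisoned_labels[flipped] = 1 - poisoned_labels[flipped]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("Test accuracy with clean training labels:   ", round(clean_model.score(X_test, y_test), 3))
print("Test accuracy with poisoned training labels:", round(poisoned_model.score(X_test, y_test), 3))
```

Adversarial examples work on the deployed model rather than the training data, but the underlying lesson is the same: the learning system itself becomes part of the attack surface.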
Absent the development of adequate defenses, actors with malicious intent should be expected to expand existing threats, introduce new threats, or alter the typical character of threats. The diffusion of efficient AI systems can increase the number of actors who can afford to carry out particular attacks. Future attacks using AI technology should be expected to be more effective, finely targeted, difficult to attribute, and more likely to exploit vulnerabilities in AI systems. Increased use of AI should also be expected to expand the range of actors capable of carrying out attacks, the rate at which these actors can carry them out, and the set of plausible targets.
If the relevant AI systems are also scalable, then even actors who already possess the resources to carry out these attacks may gain the ability to do so at a much higher rate. One example of a threat likely to expand in these ways is that posed by spear phishing attacks, which use personalized messages and a superficially trustworthy facade to extract sensitive information or money from a target, or to prompt the target into action. The most advanced spear phishing attacks require a significant amount of skilled labor, as the attacker must identify suitably high-value targets, research these targets’ social and professional networks, and then generate messages that are plausible within this context. If some of the research and synthesis tasks can be automated, more actors may be able to engage in spear phishing. In doing so, attackers would no longer need to speak the same language as their target.
Attackers might also gain the ability to engage in mass spear phishing, becoming less discriminating in their choice of targets. If actors know they are unlikely to be identified, they will presumably feel less empathy toward their targets (if they have any empathy at all) and become even more willing to carry out the attack. The importance of psychological distance is easily illustrated by the fact that military drone operators who must observe their targets prior to killing them frequently develop post-traumatic stress from their work. Increases in psychological distance could plausibly have a significant impact on potential attackers’ psychologies.
AI is not the only force expanding the potential scale and scope of existing threats. Progress in robotics and the declining cost of hardware (including both computing power and robots) are contributing to the same phenomenon. Being unbounded by human capabilities implies that AI systems can enable actors to carry out attacks that would otherwise be infeasible. The proliferation of cheap hobbyist drones, which can easily be loaded with explosives, has made it possible for non-state actors to launch aerial attacks that may use AI systems to complete tasks more successfully than any human could—or take advantage of vulnerabilities that AI systems have but humans do not. While most people are not capable of mimicking others’ voices and creating audio files that resemble recordings of human speech, significant progress in developing speech synthesis systems that learn to imitate individuals’ voices has vast potential negative implications.
Using fingerprints, retinal scans, and voice or facial recognition to unlock a smartphone may be convenient for users, but there is a growing risk that biometric inputs will do more harm than good if they are stolen. The outputs of these synthesis systems could also become indistinguishable from genuine recordings in the absence of specially designed authentication measures, opening up new methods of spreading disinformation and impersonating others. Consider the implications in a political campaign or a court case. Did the candidate, official, defendant, or witness actually say what they are accused of saying? How could they prove they did not once AI technology reaches the point of perfect mimicry?
AI systems could, in addition, be used to control behavioral aspects of robots and malware that would not be feasible for humans to manually control. As an example, humans could not realistically be expected to monitor every drone in use at any given point in time, nor a virus designed to alter the behavior of a large array of air-gapped computers. The growing capability and widespread use of AI systems implies that the threat landscape will change through the expansion of some existing threats and the emergence of new threats that do not yet exist. The typical character of threats will likely shift in some distinct ways. Attacks supported and enabled by progress in AI could be particularly effective by being finely targeted, difficult to attribute, and exploitative of vulnerabilities in other AI systems.
Given AI’s efficiency and scalability, highly effective attacks will undoubtedly become more common (at least absent substantial preventive measures), with attackers facing a trade-off between the frequency and scale of their attacks and their effectiveness. For example, spear phishing is more effective than regular phishing, which does not involve tailoring messages to individuals, but it is relatively expensive and cannot be carried out en masse. By improving the frequency and scalability of certain attacks, including spear phishing, AI systems can diminish the impact of such trade-offs. The expected increase in the effectiveness of attacks follows naturally from the potential of AI systems to exceed human capabilities: attackers can be expected to conduct more effective attacks with greater frequency and at a larger scale.
Efficiency and scalability—specifically in the context of identifying and analyzing potential targets—also suggest that more finely targeted attacks will become more prevalent, such as against high-net-worth individuals or with a focus on specific political groups. Drone swarms could be programmed by AI to deploy facial recognition technology in order to kill specific members of crowds. The increasing anonymity of AI systems also suggests that difficult-to-attribute attacks will become more typical, such as an attacker who uses an autonomous weapons system to carry out an attack rather than doing so in person. We should also expect attacks that exploit the vulnerabilities of AI systems to become more typical.
Embracing or Fearing our AI Future?
If data is the new oil, then AI is the engine that extracts, refines, and weaponizes its value. Should we be awed or alarmed by the prospect that AI could become a dominant and pervasive force in human life within the span of a single decade? One of the dangers in deciding to dive in is that, as more organizations jump on the AI bandwagon, a “shoot first and ask questions later” mentality develops. The perceived need to get in the race (and do so quickly) is prompting many participants to skip steps they might otherwise have taken in the product development process.
Ironically, while data is the lifeblood that fuels AI, it remains one of the most undervalued and overlooked intangible assets—conspicuously absent from most corporate balance sheets. Very few companies treat data as a balance sheet asset, either because they do not think of it as an asset or because there is no standard methodology for attributing tangible value to data. As the race toward AI supremacy marches on, this is becoming an increasingly important omission. Yet failure to accurately quantify the enterprise value of data may woefully undervalue not only a firm’s stock and brand equity, but also the potential value of its AI-related assets and investments.
AI is already a fact of life whose potential will grow exponentially, along with its applicability and impact. So much data is now generated on a daily basis globally that only gigantic infusions of data are likely to make a difference in the growth of AI going forward. That implies that only the largest, most technically sophisticated firms with the capability to consume and process such volumes of data will benefit from it in a meaningful way in the future.
Some of the greatest thinkers of our time have already pondered what our AI future may imply. Henry Kissinger saw AI as dealing with ends rather than means, and as being inherently unstable, to the extent that its achievements are, at least in part, shaped by itself. In his view, AI makes strategic judgments about the future, but the algorithms upon which AI is based are mathematical interpretations of observed data that do not explain the underlying reality that produces them. He worried that, by mastering some fields more rapidly and definitively than humans, AI may diminish human competence and the human condition over time, as it turns them into mere data.
AI makes judgments regarding an evolving, as-yet-undetermined future, and Kissinger argued that its results are imbued with uncertainty and ambiguity, which leads to unintended outcomes and a danger that AI will misinterpret human instructions. By achieving intended goals, AI may change human thought processes and values, or be unable to explain the rationale for its conclusions. By treating a mathematical process as if it were a thought process—and either trying to mimic that process ourselves or merely accepting the results—we are in danger of losing the capacity that has been the essence of human cognition. While the Enlightenment began with philosophical insights being spread by new technology, the period in which we are living is moving in the opposite direction, for it has generated a potentially dominating technology in search of a guiding philosophy.
There are currently no “rules of the road” for AI. While AI remains in an embryonic state, now would be a perfect time to establish rules, norms, and standards by which AI is created, deployed, and utilized, and to ensure that it enhances globally shared values and elevates the human condition in the process. While there will probably never be a single set of universal principles governing AI, trying to understand how to shape the ethics of a machine forces us, at the same time, to think more about our own values—and about what is really important. If the debacle that social media became from a governance perspective is any guide, we should not be optimistic.
Attempting to govern AI will not be an easy or pretty process, for there are overlapping frames of reference, and many of the sectors in which AI will have the most impact are already heavily regulated. New norms are emerging alongside existing regulation, but how will the two be merged? It will take a long time to work through the various questions now being raised. Many are straightforward questions about technology, but many others concern what kind of societies we want to live in and what values we wish to adopt in the future. If AI forces us to look at ourselves in the mirror and tackle such questions with vigor, transparency, and honesty, then its rise will be doing us a great favor. History suggests, however, that the issues that ought to matter most are often distorted, overlooked, or discarded entirely along the way.
We may see a profound shift in agency away from man and toward machine, wherein decision-making could become increasingly delegated to machines. If so, our ability to implement and enforce the rule of law could prove to be the last guarantor of human dignity and values in an AI-dominated world. Yet, as we continue to grapple with such fundamental issues as equality and gender bias with great difficulty, what should be at the top of the AI “values” pyramid?
Achieving anything close to our potential in scientific discovery will require a much larger effort than teams of people sequestered in a room to think it all through. Only those organizations and countries that commit massive resources today to solving problems and creating a competitive edge have a chance of achieving AI supremacy in the next decade. That implies adopting a mindset that makes AI an integral part of the long-term planning process, with clear objectives and benchmarks in view. It will be much easier said than done, no matter how large the organization or how committed the government.