Artificial Intelligence: Terrorism and International Relations

Author:
Mateja Nikolic
Senior Security Analyst at Accenture, Master's in Diplomacy

Center for International Relations and Sustainable Development
April 10th, 2024

Introduction


The development of Artificial Intelligence (AI) is revolutionizing not only the field of Information Technology (IT) but most aspects of human life, from forecasting consumer habits to improving disease diagnosis. The expansion of AI is heralded as the 5th Industrial Revolution, drastically changing our interaction with the world and leading to unprecedented political, social, and economic developments.


As a field of computer science, AI is dedicated to the theory and production of computer systems that perform tasks such as visual perception, speech recognition, translation, and problem-solving (Russell & Norvig, 2010). Its most prominent approach is machine learning, together with its subfield, deep learning. Machine learning refers to the creation of algorithms that learn from data by extracting patterns and inferring implicit rules from examples in a database. These algorithms do not require explicit, step-by-step instructions from humans; instead, they improve their own performance from experience. Deep learning works with a narrower family of algorithms, neural networks, which learn from large amounts of data by performing a task repeatedly and making minor adjustments to their internal parameters to improve the outcome, whatever it may be. Together, machine learning and deep learning are central to AI, enabling systems that can sense, reason, act, and adapt (Goodfellow et al., 2016). These systems are leading the way toward automation and are being used by companies, governments, and likely criminal organizations, potentially offering new ways to evade the law. As this technology develops and becomes more available to different actors, it can also be used by terrorist organizations to the detriment of both global and national security.
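
To make the idea of learning implicit rules from examples concrete, the short Python sketch below (a hypothetical illustration, not drawn from the sources cited here) fits a trivial model to a few made-up data points by repeatedly nudging two internal parameters to reduce prediction error. Deep learning applies the same basic principle to neural networks with millions or billions of parameters.

    # Minimal sketch: a model "learns" by repeatedly adjusting its internal
    # parameters to reduce its error on example data. The data points and
    # learning rate are invented purely for demonstration.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # (input, desired output)
    w, b = 0.0, 0.0          # internal parameters, initially uninformed
    learning_rate = 0.01     # size of each corrective adjustment

    for step in range(1000):                  # repeat the same task many times
        for x, y in data:
            prediction = w * x + b            # the model's current guess
            error = prediction - y            # how far off the guess is
            w -= learning_rate * error * x    # minor adjustment that shrinks the error
            b -= learning_rate * error

    print(f"learned rule: y is roughly {w:.2f} * x + {b:.2f}")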


AI is a neutral tool: it can be used to help humanity or to cause destruction and instability. The purpose of this paper is to explore how AI could be used by terrorist and other criminal or insurgent groups to create insecurity for political and financial gain. Although there is no clearly documented use of AI by terrorists to date, such use is likely in the future. By employing machine learning and deep learning, terrorists could work out how to exploit weaknesses in both physical and cyber security. This paper therefore explores how AI affects physical security and cybersecurity, how it shapes the international system, and what countries are doing to regulate it.


Physical Attacks


As machines become increasingly automated and integrated with AI, they can also be turned to malicious purposes. This is particularly true of vehicles and drones: AI-enabled cars, trucks, and drones can be used to target individuals and institutions.
Because of their simplicity and the carnage they can cause in densely populated areas, terrorists have long used vehicles in their attacks, from hijacking planes and crashing them into buildings full of civilians in the September 11 attacks, to driving armored trucks filled with explosives, piloted by suicide bombers, on the battlefields of Syria, Iraq, and Afghanistan. More recent high-profile examples include the 2016 Berlin Christmas market attack, the 2016 truck attack in Nice, France, and the 2017 Barcelona attacks (UNICRI, 2021). All of these assaults have one thing in common: terrorists driving vehicles into pedestrians to cause mass casualties. AI could make such attacks easier to carry out by removing the need for a driver and by using facial recognition systems to target certain individuals or groups of people.


The application of AI could help terror organizations carry out such assaults by removing the human factor. If all that is needed is an algorithm, there is less reliance on radicalized and motivated individuals willing to kill themselves and others to achieve their ideological aspirations. Utilizing AI may also reduce human error: factors such as fear and fatigue can be minimized when the primary tool of an attack is a machine, or a combination of man and machine. The spread of automation and digitalization is thus reshaping not only industries but also terror strategies. This may be the case for autonomous vehicles (AVs), which can be weaponized in various ways and to great effect.

AVs are becoming more common every day. These vehicles can operate without a human driver: AI embedded in the vehicle's computer system uses deep learning techniques that mimic the decision-making of a driver to steer, accelerate, and brake. Companies such as Google and Tesla are leading the way in developing autonomous cars, which are increasingly being used in the US and internationally. Currently, the top US cities using driverless cars are Austin, San Francisco, and Phoenix (Muller, 2023). In these cities, such cars operate as taxis: individuals hail the car through an application and are then driven autonomously from point A to point B. The problem with this arrangement is that the software in these cars can be hacked, and depending on the scale and nature of the hacking, lives could be endangered; there is no such thing as a perfectly secure software system, since every advance in cybersecurity leads to new ways of undermining it, and vice versa. With potentially thousands of AVs soon operating in cities around the world, new security problems may appear, from traffic jams to car accidents, and perhaps even kidnappings carried out by seizing control of a vehicle.
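
As a rough illustration of the driving pipeline described above, the hypothetical Python sketch below shows the sense-decide-act cycle an AV runs many times per second. The trained deep learning model is replaced by a simple stand-in rule (steering_model is an invented name) so the example is self-contained; it is a sketch of the general idea, not any company's actual system.

    # Minimal sketch of an autonomous vehicle's perception-control cycle.
    # A real system would call a trained deep learning model here; this stub
    # rule merely stands in for it so the example runs on its own.
    def steering_model(obstacle_distance_m, speed_kmh):
        """Map sensor readings to driving decisions (steer, accelerate, brake)."""
        if obstacle_distance_m < 15:
            return {"steer": 0.0, "accelerate": 0.0, "brake": 1.0}  # emergency stop
        if speed_kmh < 40:
            return {"steer": 0.0, "accelerate": 0.3, "brake": 0.0}  # speed up
        return {"steer": 0.0, "accelerate": 0.0, "brake": 0.0}      # hold speed

    # One simulated cycle with made-up sensor values
    decision = steering_model(obstacle_distance_m=12.0, speed_kmh=55.0)
    print(decision)  # {'steer': 0.0, 'accelerate': 0.0, 'brake': 1.0}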


At an annual hacking contest in 2023, a Tesla Model 3 was hacked in less than two minutes (Vijayan, 2023). Synacktiv, a French penetration testing firm, demonstrated in two separate attempts that it could successfully penetrate the Model 3's cybersecurity systems. The first attack gave Synacktiv access to components controlling the vehicle's safety and other systems, such as opening the doors or the trunk while the Model 3 was in motion, alongside 22 other vulnerabilities the researchers uncovered. During the second attack, the researchers exploited a vulnerability in the car's Bluetooth chipset to break into the Tesla's infotainment system; from there, they gained access to other subsystems and greater control over the vehicle (Vijayan, 2023). This showcases the dangers of AVs, especially fully independent ones that rely on AI, as they are vulnerable to being hacked and controlled without a human driver present. If a security firm can hack into a car in less than two minutes, one can imagine state and non-state actors doing the same, taking full control of a vehicle to achieve destructive ends.


Like cars and other vehicles, drones are being paired with AI technologies and could be used to devastating effect. Drones are employed in most major conflicts today, by both state and non-state actors. Their introduction to battlefields across the globe, from Yemen to Syria and Ukraine, has seen commercial drones weaponized not only for reconnaissance but also as strike weapons, equipped with explosives and sent on search-and-destroy missions. Although such commercial drones still require a person to fly them, they could be upgraded with AI features that make them deadlier.


The most widely used of these are First-Person View (FPV) drones, which give the operator a pilot's-eye view of the area and the ability to strike from afar without direct personal risk. FPV drones cost around 500 dollars each and have a range of 8-10 km. This low cost and long range give terror groups unprecedented capabilities in asymmetric warfare, allowing them to narrow the technological gap with traditional militaries. Ukraine has used FPV drones to devastating effect in its war against Russia. The Russo-Ukrainian War has been ongoing for close to two years, and despite Russia's larger military and population, Ukraine's asymmetric capabilities, especially its drone use, have evened out the battlefield. There are hundreds of videos online of cheap Ukrainian FPV drones, armed with explosives, destroying Russian tanks that cost millions of dollars each (Melkozerova, 2023). More recently, Hamas employed similar drones in its October 7th attacks against Israel, using cheap commercial drones to overwhelm Israeli guard posts and defenses and allowing its fighters to storm Israeli positions. Combining AI technologies with such drones would likely make them more lethal through automation and features such as facial recognition.


Using machine learning, drones already employ facial recognition and other identification features to pick out both objects and people. Their cameras can track specific individuals and even identify emotions. The US Air Force recently developed such a system, claiming that the drone can fly by itself and distinguish friend from foe (Brodsky, 2023). If such technology falls into the wrong hands or becomes more widely available, it could cause havoc. It is easy to imagine hundreds or thousands of FPV drones using AI features such as facial recognition or independent flight to target civilian and military personnel in kamikaze fashion. With AI development, drones are changing the concept of security, making the environment both safer and more dangerous at the same time; a similar dynamic is unfolding in cybersecurity.


Cyber Terrorism and AI


Cyberspace is one of the most crucial aspects of modern life. It connects everyone to everything, forming the foundation of communication and production. Cybersecurity, the body of technologies and processes designed to protect these networks, is therefore one of the most important spheres of security; the US military counts cyberspace as one of its five operational domains, alongside land, sea, air, and space (Kreuzer, 2021). Unlike the other four, it is not a physical domain; instead, it cuts across all of them, playing a crucial role in each by maintaining communication and operations. With digitalization a global phenomenon, cybersecurity also plays a crucial role internationally, underpinning safe transactions among different actors. AI is changing how networks are secured, requiring new strategies to combat evolving AI-enabled cyber threats. By the same token, AI may help terrorists and other groups develop more sophisticated and dangerous cyber-attacks, with potentially global ramifications.

Cyber-terrorism can be defined as the politically motivated use of computers and information technology to cause disruption and fear in society, targeting individuals, organizations, and governments (Oxford Languages). Terrorists may use AI to automate such attacks, applying machine learning to probe for cybersecurity weaknesses. With machine learning, attackers can develop algorithms that grow more sophisticated and accurate over time; every failed attempt to enter a system becomes data to learn from, so that, through self-learning and automation, the next attack is more dangerous. This gives cyber terrorists enormous potential to cause damage.

The building blocks for such attacks already exist: highly sophisticated malware and open-source AI research projects that put valuable information in the public domain (Dixon et al., 2019). One example is the Emotet Trojan, a type of malware targeting the banking sector. Emotet's main function is spam phishing, using invoice scams to trick users into clicking on malicious email attachments. Other versions of Emotet can steal email data from infected victims; the malware sends emails at scale and inserts itself into preexisting email threads, giving the phishing message more context and making it appear legitimate enough for the receiver to click on (Dixon et al., 2019). With AI's ability to learn and replicate natural language, such emails can be tailored to specific individuals; this customization lends credibility to the sender and creates a more dangerous cyber environment.

That dangerous cyber environment is already here. The 2017 WannaCry ransomware attack hit organizations in over 150 countries around the world, causing mass disruption through the kind of automated, self-propagating behavior that AI stands to amplify. The attack was unprecedented in scale. WannaCry impacted hundreds of thousands of computers, spreading across networks by exploiting vulnerabilities in Windows machines. It moved laterally through an organization in seconds, paralyzing hard drives and inspiring copycat attacks (Dixon et al., 2019). One of the largest institutions affected was the National Health Service (NHS), the public health care provider in the United Kingdom. Around 70,000 devices were impacted, including not only computers but also MRI scanners, blood-storage refrigerators, and other equipment. In some cases, ambulances and patients had to be diverted because services could not be provided (BBC, 2017). Cyber-attacks, in other words, can have life-threatening consequences: what happens in the digital realm can have drastic ramifications for millions of people, making cybersecurity one of the most important aspects of both national and international security. WannaCry shows the havoc that cyber criminals and terrorists can cause with such attacks, and AI promises to make them more potent.

WannaCry spread by exploiting a flaw in the Microsoft Windows implementation of the Server Message Block (SMB) protocol, which handles communication between machines on a network. Unpatched (unprotected) implementations could be tricked by crafted packets into executing arbitrary code, an exploit known as EternalBlue that was developed by the National Security Agency (NSA). The cybercriminals encrypted (locked) data and then demanded payment in Bitcoin to decrypt it. The ransomware spread by itself, encrypting data across networks without the hackers' direct control. This autonomous spread foreshadows AI-enabled attacks in both its independence and the speed at which it propagates. One can imagine how much deadlier cyber-attacks supported by AI will become, and how urgent the need is for better, ever-evolving security measures.

It is widely believed that the NSA knew of the flaw in Windows but, instead of notifying Microsoft, developed EternalBlue to exploit it (Fruhlinger, 2022). The tool was later stolen from the NSA and leaked by a hacking group called the Shadow Brokers, after which it was used to great effect. Microsoft's then president, Brad Smith, criticized the US government for not sharing its knowledge of the weakness and called for closer collaboration between private and governmental institutions (Smith, 2017). This points to a failure of communication between governments and companies, where knowledge and information are not shared. In cyberspace, where networks are interconnected across borders, a better structure is needed to facilitate cooperation among all the different actors, not just within countries but globally. Had such a system existed before this attack, with companies and intelligence agencies communicating more effectively, WannaCry could have been avoided or at least its impact mitigated.


International Factor


It has been said that the September 11th attacks were able to occur partly because of poor communication and collaboration among US intelligence agencies. The intelligence gathered before the attacks, which pointed to terrorists hijacking airplanes and crashing them into buildings, could have prevented them. That intelligence was not shared, however; each agency kept it to itself in a competitive security environment, making it difficult to see the full picture (9/11 Commission, 2003). Cyber terrorists and criminals continue to exploit this lack of partnership at the transnational level. As cyber-attacks evolve and become more lethal with the advent of AI, greater international cooperation is needed to prevent and mitigate them. The United Nations (UN) currently runs a Global Programme on Cybercrime within its Office on Drugs and Crime. Given the scale of cybercrime, this may not be enough to coordinate a global response among all international actors, both private and governmental.


Corporations play a major role in cyberspace, arguably an even greater one than governments, with companies such as Google, Microsoft, and Accenture pioneering new technologies. This is especially true in the field of AI. Yet unlike nations, these companies are not represented in international institutions, even though they are transnational and have a global impact. A global security structure that includes private institutions is lacking, and this creates problems for cybersecurity, as the WannaCry incident showed: a lack of coordination at the national level affected cybersecurity globally. If there were a global cybersecurity structure linking software developers and governments, such incidents could be avoided or their impact mitigated. Many countries, for example, use software developed by the companies above, but one can only guess at the relationship between them. If AI continues to develop exponentially and to be weaponized, a more formal relationship between public and private institutions may become necessary, since global problems require global solutions.


More recently, on November 1, 2023, China, the US, and the UK signed an AI safety pledge highlighting the need for international action on AI development. These three countries, along with 25 others and the European Union, signed the Bletchley Declaration, agreeing on "the urgent need to understand and collectively manage potential risks through a new joint global effort…" to ensure that AI is developed safely and benefits the global community (France 24, 2023). According to the organizers, the summit was not designed to produce a blueprint for international law on AI development but rather to outline future actions. Its agenda was to identify AI risks, build a shared scientific consensus, and create risk-based policies across countries while taking national circumstances and legal frameworks into account. The countries also agreed on a non-binding code of conduct for companies developing AI systems. The non-binding nature of these agreements, however, while useful for building consensus, may result in a lack of real action.


One of the greatest difficulties with international agreements is making them binding, holding those who break them accountable, and deciding how the issues they cover should be regulated. A good example is the Paris Agreement (2015), a largely non-binding accord in which countries agreed on how to address climate change. Even though the agreement was signed eight years ago by many countries, most have failed to implement its provisions or to meet their targets and deadlines for reducing greenhouse gas emissions. The same may happen with any future agreements on AI: if they follow the same non-binding model, without any regulatory power, they will remain words on paper. Though the Bletchley Declaration outlines positive goals for AI security, it makes no concrete plans for how to achieve them. This lack of tangible action in creating international law on AI, and of concrete collaboration, is something that terrorists and criminals will likely exploit.


Conclusion


As AI continues to develop at an incredible pace, institutions need to change and develop with it. AI's impact on physical security and cybersecurity is already significant and will only grow. Terrorists and other criminals are likely to use it to their advantage, as AI offers great potential for attacks and gives them an asymmetric edge over governmental institutions. Because AI development and its impact are transnational, a stronger institutional response is needed at the global level to regulate its development and keep it out of the hands of terrorist organizations. Ultimately, AI's potential to create instability and cause destruction remains vast, and it must be better regulated at both the national and international levels.
Works Cited

 
