Fu Ying is Founding Chair of the Center for International Security and Strategy at Tsinghua University, China’s former Vice Minister of Foreign Affairs, and former Chairperson of the Foreign Affairs Committee of the National People’s Congress. She previously served as China’s ambassador to the United Kingdom, Australia, and the Philippines.
In November 2019, Dr. Henry Kissinger came to Beijing to attend the New Economy Forum. During the event, he specifically requested to participate in a sub-forum on artificial intelligence. I joined the discussion as a guest alongside former Google CEO Eric Schmidt. In his remarks, Kissinger noted that the development of AI would bring profound transformations to humankind. He raised a key question to which there is still no clear answer: what impact will AI have on the world? Kissinger argued that technologists and policy experts had to discuss the issue together.

Henry Kissinger during the 2019 New Economy Forum in Beijing | Photo: Guliver Image
Two years later, The Age of AI: And Our Human Future was published, co-authored by Kissinger, Schmidt, and Daniel Huttenlocher, Dean of the Schwarzman College of Computing at MIT. By then, discussions surrounding AI had grown even more active, marked by the emergence of the world’s first trillion-parameter language model, the Switch Transformer. AI was already permeating nearly every aspect of daily life, and powerful natural language systems such as ChatGPT were sparking global debates about the future of AI. Anxiety became so widespread that even primary school students began to worry whether AI would replace human skills and prevent them from realizing their dreams. The idea that machines could evolve from tools into partners, or even competitors, is no longer science fiction; we are stepping into a reality that was once only imagined in literature.
Will AI Transform Humanity’s Future?
When examining serious issues such as the future of AI, Kissinger approached topics from a philosophical angle and often said that such thinking was inspired by conversations with Chinese leaders. In The Age of AI: And Our Human Future, he examines AI from the perspective of human existence and destiny: What does AI mean for the future of humanity? Alongside Kissinger, the co-authors (Schmidt and Huttenlocher) are influential figures in their fields. They highlight the fact that the book does not seek to define an era. Instead, it aims to capture this moment, when the implications of AI are still somewhat understandable. Additionally, the authors seek to discuss how technology reshapes human thought, knowledge, cognition, and reality. They also urge humanity to respond collectively to the issues created by AI by engaging in dialogue with one another.
The birth of AI marks the emergence of a powerful new tool for humankind’s inquiry into reality. The book, therefore, raises questions that require serious reflection: How will AI influence and reshape human beings and our living conditions? What kind of future will it bring? To provide context, one chapter illustrates how human rationality gradually attained its supreme status across history. The authors invoke Plato’s allegory of the cave, where the philosopher is akin to a prisoner breaking free from his chains to perceive the real world in the sunlight. Guided by this rational inquiry into the nature of reality, humanity has long relied on reason to accumulate knowledge and enhance wisdom. At the same time, the integration of (and conflicts between) different societies and cognitive frameworks has continually expanded the bounds of human experience and accelerated scientific advancement.
For a long time, humanity believed itself to be the sole intelligent agent in the world—a belief now shaken by the rapid emergence of AI. Previous technological innovations were typically extensions of existing human forms: film was moving photography, the telephone was conversation across a distance, and the automobile was a faster horse carriage. AI, however, is not an extension of the known. Tasks once performed only by humans—such as reading, research, shopping, conversation, record-keeping, surveillance, and military planning—are now being digitized and executed in cyberspace. Where information once became knowledge through human context, and knowledge became wisdom through human reasoning, AI is beginning to alter the fundamental notion that humans are the only thinkers and decision-makers in the world.
AI has entered daily life almost unnoticed, offering us convenience by freeing us from trivial tasks. I recall working at the International Labour Organization in Rome over 30 years ago, relying on a foldable map to visit the city’s historical sites in my free time. Today, few people use paper maps; we simply navigate with our smartphones. We rarely consult physical dictionaries anymore, instead looking up unfamiliar words online.
But these are only basic, everyday encounters. In scientific research and medicine, AI has produced revolutionary changes at the very frontiers of human knowledge. For example, by learning the molecular properties associated with antibacterial effects, AI has discovered entirely new antibiotic structures, some of them beyond prior human awareness. Similarly, the AI pilot in DARPA’s AlphaDogfight Trials has performed maneuvers beyond human capability, defeating an experienced fighter pilot in virtual combat.
Yet AI’s powerful learning abilities also evoke confusion and fear. The book highlights a major concern: AI can reach correct conclusions without displaying its reasoning, thus making its processes opaque to us. This opacity raises the possibility of a future shaped by AI that is no longer fully intelligible to humans. If AI adopts cognitive processes different from human rationality, then our longstanding belief in reason as the primary means of understanding reality may be challenged. This leads the authors to ask: When we are encased in AI-filtered information, are we approaching knowledge—or is knowledge slipping away?
Cogito, ergo sum (I think, therefore I am). If AI can think—or nearly think—what does this mean for the very definition of a human being? Although some theorists predict that AI may one day achieve consciousness, it currently has no emotional or moral capacity. It does not reflect, fear, or forgive. This makes it dangerous to entrust AI with risk-taking decisions that involve elements of human nature. AI can follow rules and optimize outcomes, but it cannot experience the moral reckoning that allows humans to reject war after witnessing suffering.
The book cites works such as Homer’s Iliad, which recounts the duel between Hector and Achilles, and Picasso’s Guernica, which depicts civilian suffering during the Spanish Civil War. These examples illustrate that humans, having experienced warfare and suffering, can reflect on the tragedies that they engender and strive to prevent their recurrence. AI, however, cannot do this. It must strictly follow rules, provide excellent performance, and secure the most advantageous outcomes, but it is incapable of generating the moral or philosophical impulse to reflect on good and evil.
The book outlines several key risks of AI: its lack of common sense, its rigid data-based judgment, its inability to self-correct, and its potential to amplify human bias. AI is prone to absorbing the biases of its designers and data, which can lead to flawed judgments or responses that do not align with human values. A stark example occurred in 2016, when Microsoft’s chatbot, Tay, had to be shut down after it rapidly began replicating hate speech that it encountered online. The authors warn that if humanity’s reliance on AI becomes a form of unchecked delegation, the risks will be immeasurable.
Preventing the AI Arms Race
Kissinger, Schmidt, and Huttenlocher place significant emphasis on the geopolitical and strategic security risks that may arise from the AI revolution. While there is no clear evidence of an active AI-weapons arms race, the authors warn that combining AI with cyber or nuclear weapons could dramatically increase both their destructive power and accessibility, heightening the risk of conflict escalation. A further danger arises if AI begins to operate at strategic decision-making levels, particularly in ways beyond human rationality. In that scenario, the processes, scope, and ultimate implications of its actions could cease to be transparent.
Humanity faced a similar fear and uncertainty when nuclear weapons first emerged. During the Cold War, the United States and the Soviet Union operated under the logic of nuclear deterrence. Both sides raced to develop weapons capable of annihilating humankind, all while claiming that this buildup was necessary to prevent war. This created a paradoxical cycle that continuously heightened the risk of conflict. Strategic discussions at the time centered on whether nuclear weapons could ever be reconciled with political objectives in the context of total war and mutual destruction.
In other words, how could we prevent nuclear development from inevitably leading to a universal catastrophe? After years of scholarly debate—including contributions from Kissinger—and prolonged negotiations between the United States and the Soviet Union, arms control and verification mechanisms were eventually established. More importantly, a global consensus emerged among major powers that nuclear war was an event that had to be avoided at all costs.
Therefore, Kissinger, Schmidt, and Huttenlocher’s core concern is how to prevent and contain the threats posed by AI’s military application. First, AI technology is inherently opaque and unconstrained by physical boundaries. Unlike physical weapons, which leave traces of their manufacture and allow detection of deployment, AI programs can be developed and operated anywhere a network-connected computer exists. As a result, implementing international oversight and imposing restrictions on such programs is exceedingly difficult.
Moreover, the integration of AI could substantially enhance military capabilities, potentially beyond the limits of human cognition. This, in turn, would make it harder for adversaries to anticipate each other’s intentions, greatly increasing the risks of miscalculation and escalation. The danger would be even greater if autonomous offensive AI weapons were to emerge. Although many states have pledged not to deploy lethal autonomous weapons, the secrecy and unpredictability surrounding AI development make it impossible to be certain that such weapons will not proliferate.
While the developed world has incorporated nuclear weapons into a broad framework of international security and arms control, no comparable strategic framework exists for AI or cyber weapons. Kissinger, Schmidt, and Huttenlocher argue that technological development cannot—and should not—be halted. However, they warn that the prospect of AI weaponization challenges the fundamental security consensus that humanity has achieved. They call for rationality, wisdom, and cooperation to create a strategic framework for AI militarization, one even more sophisticated than that which governs nuclear arms. In particular, they question whether the United States and China—both at the forefront of AI development and application—can cooperate on this issue, which is profoundly consequential to the future of humanity.
The U.S.-China AI Challenge
During a discussion at the New Economy Forum in Beijing in 2019, I remarked that China and the United States, as the two foremost nations in AI development, bear significant governance responsibilities. The increasingly strained relations between them will inevitably affect humanity’s capacity to address the challenges posed by emerging technologies.
I raised the question: Can the two sides work together to achieve a symbiosis between humanity and technology, or will we diverge and even use technology to weaken or harm one another? Kissinger noted at the time that the fundamental challenge facing the United States and China is whether they are entering a truly adversarial relationship. Can cooperation still be found to address shared problems? He warned that if the two countries diverge, they would compete across the entire world, leading to global division.
In their book, Kissinger and his co-authors wrote that while absolute trust between nations is impossible, a degree of mutual understanding is still achievable. They argued that both countries must recognize—regardless of how their relationship evolves—that neither should embark upon a technological war at the frontier of innovation. Therefore, initiating dialogue on cyber and AI issues at multiple levels is crucial. At a minimum, this would help establish a shared strategic vocabulary and enhance awareness of each other’s red lines. The authors also acknowledge a more fundamental problem: defining AI may prove more difficult than any challenge previously encountered in human history.
Kissinger and his co-authors are not alone in expressing deep concern about the future of AI. Geoffrey Hinton, a former Google Vice President and researcher, has recently argued that humanity may be entering a transitional stage in the evolution of intelligence. He believes AI already demonstrates elementary reasoning and may eventually acquire capabilities that enable it to influence or even control human beings. This concern is mirrored by regulatory action. Europe, for instance, is advancing the AI Act to impose strict regulations, prohibiting systems deemed to pose “unacceptable risks” to human safety. Kissinger, however, maintained that if clear boundaries can be drawn and rationality upheld above fear and confusion, the future of humanity remains full of potential. Otherwise, humanity may fall into peril.
Over the last decade, the United States has increased its investment in AI and has sought to maintain its lead in the AI race. In August 2018, the U.S. government established the National Security Commission on Artificial Intelligence. In March 2021, the Commission released a report laying out strategic guidelines and action plans to prevent other nations from gaining a competitive advantage in the AI era. More recently, in May 2023, the U.S. government convened executives from major technology companies to discuss AI safety.
However, the sitting U.S. administration not only discourages high-technology cooperation with China but also imposes continuous restrictions designed to impede China’s technological development. Furthermore, it has shown little willingness to engage with China on international AI governance. Thus, the proposals and appeals for cooperation made by Kissinger and others do not reflect the U.S. government’s actual position.
Nevertheless, academic and civil exchanges between China and the United States in AI have not ceased. This cooperation persists even as China’s scientific and industrial communities accelerate their own progress, creating a dynamic atmosphere of global competition. For instance, Stanford University’s AI Index 2022 shows that between 2010 and 2021, Sino-American collaboration produced more co-authored AI research papers than any other bilateral partnership. Foundational AI research worldwide also continues to exhibit a degree of openness, with open-source models helping to restrain monopolistic control.
A key example of this difficult yet productive collaboration is the joint research project on the governance of AI-enabled weapons, launched in October 2019 by Tsinghua University’s Center for International Security and Strategy (CISS) and the Brookings Institution. Despite rising strategic divergence and declining mutual trust, these academic discussions have persisted. Scholars from both sides agree that the governance of AI-enabled weapons—particularly autonomous systems—is of paramount importance. The project focuses on building shared norms, such as embedding international humanitarian law into AI weapons and prohibiting their use against civilian infrastructure such as nuclear power plants and large dams. While this project demonstrates the difficulty of academic cooperation on the frontier of scientific research amid great-power competition, the progress that has been made remains encouraging.
China and the United States bear a significant responsibility in shaping humanity’s future, as their choices will profoundly influence the trajectory of technological development. Neither country can monopolize global technological progress. If the two adopt complementary and constructive attitudes, the prospects for AI will be brighter; if cooperation fails, both sides will suffer, and humanity will pay the price. Major powers must not allow narrow self-interest—as seen in certain current U.S. policies—to override the shared interests of humankind. Stubbornly proceeding in a Cold War mentality and zero-sum logic will only lead humanity further away from a shared, positive future.
It should be noted, however, that The Age of AI: And Our Human Future has its limitations. Its arguments are grounded in Western intellectual traditions, and its historical references are largely drawn from Western experiences. Consequently, it demonstrates insufficient awareness of the history and cultural thought of developing countries, paying limited attention to how AI development affects societies outside Euro-American settings.
In addition, the book raises more questions than it answers and does not always offer deeper analysis or concrete solutions. However, it remains an accessible work on a cutting-edge and widely debated topic. It helps to initiate serious, multidimensional discussion—even constructive disagreement—and enables readers to better understand the global discourse surrounding artificial intelligence. We must, therefore, continue to study diverse international perspectives, articulate China’s own ideas and positions, and actively contribute to solutions for major global challenges. Through these actions, we can promote the building of a community that seeks a better shared future for humankind.
China’s Responsible AI Governance
Guided by President Xi Jinping’s Thought on Diplomacy, China has taken a responsible approach in its discourse and actions regarding AI governance. In February 2019, the Ministry of Science and Technology established the National Governance Committee for the New Generation of Artificial Intelligence, which issued eight principles for AI governance: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, safety and controllability, shared responsibility, openness and collaboration, and agile governance.
In September 2020, the Chinese government proposed the Global Data Security Initiative, emphasizing mutual respect and deeper dialogue and cooperation, with the aim of jointly building a peaceful, secure, and open community with a shared future in cyberspace. China then released position papers in December 2021 and November 2022 on regulating the military application of AI and strengthening AI ethics, calling on all parties to uphold established ethical standards for artificial intelligence.
Afterwards, in May 2023, Wan Gang, President of the China Association for Science and Technology, stated at the 7th World Intelligence Congress that China should work to break down “data silos” and promote lawful open-source development. He argued that this would provide the foundational impetus for the AI industry’s stable growth.
More recently, President Xi Jinping’s proposed Global Governance Initiative provides important guidance for global AI governance, emphasizing five principles: sovereign equality, international rule of law, multilateralism, people-centeredness, and action-orientation. AI governance is a key part of this initiative. The Initiative calls for countries to increase dialogue and cooperation, respect each other’s development paths, jointly formulate rules and standards, and ensure that technological development benefits humanity.
In his congratulatory letter to the 2024 World AI Conference, President Xi pointed out that next-generation AI is injecting new momentum into economic and social development and is profoundly changing how people live and work. He noted that continuous technological breakthroughs, new business models, and expanding applications have become an important driving force for a new round of technological revolution and industrial transformation. At the same time, he acknowledged that AI also faces a series of new challenges in areas such as law, security, employment, and moral ethics.
China has gained some experience in balancing development and safety in AI governance. It has built a relatively comprehensive regulatory system, including the “Interim Measures for Generative AI Service Management,” the “Algorithm Recommendation Service Management Regulations” and the “AI-Generated Synthetic Content Identification Methods.” This framework was recently updated with the “AI Safety Governance Framework 2.0” in September 2025.
Through these measures, China has established a rules-based governance system that regulates AI’s ultimate uses, with the aim of preventing systematic abuse in the nuclear, biological, chemical, and missile domains. To pave the way for industrial development, the 2024 “National AI Industry Comprehensive Standardization System Construction Guide” aims to issue more than 50 standards and participate in drafting more than 20 international standards by 2026. Furthermore, China has widely applied AI in smart cities, transportation, and healthcare, forming the world’s largest AI application ecosystem and accumulating rich scenario-based governance experience.
In international cooperation, China actively promotes the construction of a global AI governance system. It established the China AI Safety and Development Association (CNAISDA) to foster international exchange and conduct bilateral dialogues with the U.S., UK, and Singapore. China has also proposed several key frameworks, including the 2023 Global AI Governance Initiative (the same year it signed the Bletchley Declaration), the 2024 Shanghai Declaration on Global AI Governance, and the 2025 Global AI Governance Action Plan. These efforts reflect China’s sense of responsibility as a major country in AI advancement and contribute Chinese wisdom to global AI governance.