Humans lead the 'AI triad'
In a world increasingly driven by technology, artificial intelligence (AI) stands out as a transformative force, much like fire in ancient times. AI has the power to propel scientific breakthroughs, enhance our daily lives, and even revolutionize entire industries. However, it also carries risks, from deepening societal divisions to escalating geopolitical tensions. The metaphor of fire captures this duality perfectly, illustrating both the potential benefits and dangers inherent in this rapidly evolving technology.
Andrew Imbrie, Associate Professor of Practice at Georgetown University's School of Foreign Service, delves into these themes in his latest book, "The New Fire", co-authored with Ben Buchanan (published in Italian by Bocconi University Press, 2024, 368 pages, €28.50). Whether AI becomes a tool of progress or a source of destruction will depend on how humanity chooses to use it. As Imbrie succinctly puts it, "The way we think and speak about AI matters. It shapes our judgments and conditions our sense of the possibilities."
The title of your book, "The New Fire", evokes a powerful metaphor. Can you explain how AI represents a new kind of fire for humanity?
While metaphors are imperfect devices for grappling with a fast-changing technology, the book argues that fire is an apt guide for understanding the near- and medium-term future of AI. Just like fire, AI can warm our societies and fuel breakthrough advances in science and innovation. But, as with fire, AI can also be harnessed as a weapon of war or even blaze out of control if people do not use it responsibly. The range of potential outcomes is vast, and the benefits and risks are hard to disentangle, which is why strategic foresight, civic engagement and close partnerships between governments, industry and academic institutions are so important.
In the book, you talk about the three sparks of AI: data, algorithms and computing power. How do these three elements combine to fuel technological innovation?
Modern AI capabilities require data for training, algorithmic innovations to improve efficiency, and massive computing power to execute calculations. By some measures, these underlying components have been growing exponentially in recent years, though there is debate today about whether the supply of high-quality training data and the returns to additional computing power are diminishing for the largest models and, if so, what will drive progress next. These debates are a reminder of an important truth: underlying all three components of the so-called “AI triad” are the people who design, develop and deploy the technology. It is people who are making choices about the future course of this technology, and it is people – policymakers, legislators, executives, and citizens – who must confront and manage the risks so that we can ultimately benefit from its responsible use.
AI has the potential to transform democracy, but also to strengthen autocracy. What do you see as the main risks and opportunities in this dualism?
Whether AI can work for democracy is a proposition that citizens must prove together through concerted effort and wise policymaking. If managed poorly, AI could entrench divisions in our societies, fuel polarization, disrupt labor markets, and fan the flames of misinformation and disinformation, thereby undermining a vital element of the principle of self-government: trust. If managed responsibly, AI could widen access to opportunity, drive innovations in science, reinvigorate our education systems, and enable more people to participate in the democratic process. It is no surprise that AI has become the fulcrum of geopolitical competition. Some worry that AI will prove to be an arrow in the authoritarian quiver by accelerating the centralization of control at home and providing new tools for authoritarian regimes to press their advantage abroad.
There is no question that democracies today are under stress. But we shouldn’t underestimate the power of democracies to harness their dynamic innovation ecosystems, adapt to changes in technology, and shape the trajectory of AI in ways that uplift and empower people. One of the core strengths of democracies is that they are ultimately accountable to and govern in the interest of their citizens. That means that while democracies can and do make mistakes, they can self-correct and benefit from a diversity of voices and perspectives in the policymaking process. They are stronger because of their commitments to human rights, transparency and broad public participation. And they can partner with other democracies in ways that are more enduring and less transactional than has often proven to be the case with more authoritarian forms of government. How democracies manage the risks and seize the opportunities of AI will come down to the choices we make and the willingness of citizens to stay engaged in the democratic process.
You mentioned how democracies might lag behind autocracies in adopting AI. What strategies can democracies adopt to avoid falling behind?
Already, democracies are taking action to shape the trajectory of AI in ways that are conducive to democratic values. They are investing in basic and applied research, supporting innovations in semiconductor manufacturing, forging creative partnerships around the world to shape norms and standards for responsible use, and widening access to shared data and computing resources. The risk of AI fueling misinformation and disinformation is already apparent, but governments, academic researchers and industry can work together to adopt content authenticity tools, invest in digital watermarking and deepfake detection, and promote the longer-term work of digital media literacy and civic renewal that will be at the heart of any effort to shore up the resilience of democratic societies. Democracies are also working together to invest in safety training, incident reporting, and test and evaluation methods so that AI can be developed responsibly and we can anticipate and mitigate the risks while also staying adaptable to changes in the field. There is no silver bullet, and ultimately the solutions will need to be tailored to local realities and then shared with others so that we can learn from one another.
You talk about AI evangelists, warriors and Cassandras. Can you briefly describe these categories and explain how they intersect in the AI debate?
It is important to center the human dimension in debates around artificial intelligence: we are making choices every day that will shape the future course of this technology. Some of these choices reflect the view that AI will be, on balance, a net good for societies – that it will inspire innovations in science and help us advance medical diagnosis and drug discovery that will make our societies healthier and more productive. Others are quick to point out that technology cannot be separated from geopolitics and that innovations today will soon appear on the battlefield and could decide the wars of the future. Still others focus on the risks of AI – its propensity to fail and the mix of uncertainty and exuberance that may lead to dangerous outcomes. The boundaries between these three perspectives overlap in practice: you can believe in the potential of AI to advance science and still see the risks, and you can focus on what AI will mean for national and international security and yet support investments in testing, evaluations, and safety practices. What’s important to recognize is that all three perspectives matter. All three viewpoints are legitimate and help to enrich the debate in our societies. How we strike the balance between them and manage the complex tradeoffs will define the landscape of risks and benefits that all of us must navigate in the years to come.
How has your academic and professional background influenced your views on AI and geopolitics?
I grew up the son of a diplomat, so I was always interested in the state of the world and how issues looked from the vantage point of different countries and cultures. AI is a general-purpose technology, which means that no one country can command all its benefits or shelter from the potential risks. Instead, we will need to invest in wise diplomacy so that even as nations compete over technologies like AI, they can also cooperate to promote stability, widen access to opportunity, and solve global problems, from climate change to food security to nonproliferation. That will require a complex geometry of diplomacy and development investments, and it will require countries to engage not just bilaterally and plurilaterally, but also in multilateral fora and with leaders in governments, industry and civil society. The stakes are high, and there is not a moment to waste for the next generation to make their voices heard in these debates.
Artificial intelligence is revolutionizing the modern world. It is ubiquitous – in our homes and offices today, and even more so in the years to come. We encounter AI as our distant ancestors once encountered fire. If we manage it well, it will become a force for good, lighting the way to many transformative inventions. If we deploy it thoughtlessly, it will advance beyond our control, as Ben Buchanan and Andrew Imbrie show in "Il nuovo fuoco" (Egea, 2024, 368 pages, €28.50).