The Future of Defense Task Force report, presented recently to the House Armed Services Committee, lays out an assessment of national security challenges facing the US and a set of recommendations for the future of US defense. While the report contains some welcome suggestions, one of its key recommendations strikes an unexpectedly dissonant note as we all try to think more critically about what the future of conflict may look like.
In highlighting a geotechnological competition between the US and China over “AI,” the report proposes that “Using the Manhattan Project as a model, the United States must undertake and win the artificial intelligence race.” Referencing the Manhattan Project is an intuitive and easily understood move: it connotes national priority and even urgency, the sense that we need to assemble our great minds to accomplish important things. Past the initial rallying cry, however, we may want to abandon the atomic bomb analogy in favor of others that offer better ways to think about the futures of conflict, AI, and our “competition” with other world powers.
Forecasting by analogy can be a valuable approach to thinking about the future. Assessing new phenomena through historical analogies offers intellectual handholds for exploring what something is, how it might change, and how it might impact society. But the approach can also backfire by interfering with a clear-eyed assessment of emerging phenomena; use the wrong historical analogy and you may entirely misperceive important changes. In this case, framing “AI” as the atomic power of its day could distort our understanding of important changes reshaping our world and blind us to important possibilities for the futures of conflict, competition, and cooperation.
The analogy with the atomic bomb is problematic on a couple of levels:
It assumes that, like nuclear weapons, there will be a singular thing called “AI.” There is no “AI” in the way there was a bomb or a nuclear reactor. Despite the imagination of even the great Isaac Asimov, the future did not end up being defined by a civilization of nuclear-enabled technologies, with nuclear power riding unobtrusively on belts and encapsulated in hand-held devices. Machines today, in contrast, come in a variety of forms designed for different applications. Is this a contest that one actor wins, the way someone can win a race to the Moon or be the first to develop an atomic bomb? Is there really a finish line somewhere? To how many different possible ends might current and future developers deploy machines, and how many different forms might those machines take?
It presumes “AI” will have the same clear, delineated impact on warfare, foreign policy, and diplomacy that atomic weapons did. Outside of weapons of mass destruction and limited use as an energy source, atomic energy has had few widespread applications. In contrast, given how thoroughly we are weaving machines into everything we do, and how conflict and competition are fast becoming wickedly boundary-spanning, future ecosystems of machines will shape, and be shaped by, billions of human beings. When virtually every perception and action related to competition and conflict is in some way mediated by machines of diverse types, how difficult will it be to determine exactly how they have changed warfare, policy making, and relationships?
Does it matter who is the first actor to achieve a specific capability, such as practical and effective swarming for autonomous combat machines? It absolutely might, for a specific situation and for a limited period of time. But given what seems to be a rising difficulty in maintaining an absolute lead in digital technologies, such first-mover advantages may be fleeting. Additionally, since our societies are frantically weaving automation and machine autonomy into virtually every aspect of daily life, and because conflict and competition are evolving to span all of these domains, a military-centric view of AI accomplishments would seem to impose dangerous strategic blinders.
This is not to say that reasoning or forecasting by analogy can’t be useful here. There may well be useful analogies to apply to the question of “AI,” as well as more compelling ways to think about it.
For starters, let’s talk about this in terms of machines rather than AI. I know, this seems semantic or retro or something, but hear me out. Talking about “AI,” particularly to American audiences, too often conjures up the image of Skynet and sentient killer robots. Let’s just not go there. It also makes people think of some singular phenomenon, as if it were some specific and final achievement. The term “machines,” by contrast, prompts a much wider variety of images in audiences, everything from dishwashers to factories to the fictional robot WALL-E.
Second, it really might pay for more of us to explicitly talk about AI (ahem, machines) as a general-purpose technology, as suggested by Erik Brynjolfsson and Andrew McAfee in their book Race Against the Machine. Machine learning alone is used in everything from facial recognition to supply chain management to healthcare. The comparison to previous general-purpose technologies like steam power or electricity is compelling. And by talking about it this way, we might help audiences see the machines all around them, especially the invisible ones that already shape so much of our daily digital lives.
Third, and riffing off the previous point, perhaps a better analogy than the atomic bomb in the short term would be industrialization. The historical experience of industrializing the economy is closer to what we are living through now. During the industrial revolution we steadily and completely overhauled economic life with new power sources, electrification, and mechanization. In recent years we have steadily been rewiring economic and social life through the internet, the Web, and mobile connectivity, and we are now weaving automation and machine intelligence through all of it.
While industrialization might be the more useful analogy in the short term, over the longer term the best analogy might come from biology. After some initial “installation” phase for our machine helpers, thinking about machines more like a biosphere might open our perceptual filters wide enough to really see what is evolving. Unlike the technologies and infrastructure built during industrialization, what we are building today is closer to a set of pre-Cambrian machine ecosystems: all manner of machines, some of which evolve as a matter of their design, some of which compete and co-evolve, and many of which interact in direct and indirect ways that resemble natural ecosystems more than 19th century coal-powered infrastructure.
One of the primary ways we make sense of the future is by referencing past experience. It can be an extremely valuable approach, so long as we periodically stop to think about how our analogies influence our expectations for change. We also need to be aware of how they shape our responses to change; strategies and policies pursued with apparent success in history aren’t always the best ones for tomorrow. For the US today, we want to be very careful about our marked tendency to understand and anticipate the future through analogies drawn from World War II and the Cold War. We need to push away from our “glory days” and work a little harder to make sense of this rapidly changing world.