The US government is contemplating how to shape AI policy. Competition with China looms large.
For a long time, artificial intelligence seemed like one of those inventions that would always be 50 years away. The scientists who developed the first computers in the 1940s and ’50s speculated about the possibility of machines with greater-than-human capacities. But enthusiasm didn’t necessarily translate into a commercially viable product, let alone a superintelligent one.
And for a while — in the ’60s, ’70s, and ’80s — it seemed like such speculation would remain just that. The sluggishness of AI development actually gave rise to a term: “AI winters,” periods when investors and researchers got bored with the lack of progress in the field and devoted their attention elsewhere.
No one is bored now.
Limited AI systems have taken on an ever-bigger role in our lives, wrangling our news feeds, trading stocks, translating and transcribing text, scanning digital pictures, taking restaurant orders, and writing fake product reviews and news articles. And while there’s always the possibility that AI development will hit another wall, there’s reason to think it won’t: All of the above applications have the potential to be hugely profitable, which means there will be sustained investment from some of the biggest companies in the world. AI capabilities are reasonably likely to keep growing until they’re a transformative force.
A new report from the National Security Commission on Artificial Intelligence (NSCAI), a committee Congress established in 2018, grapples with some of the large-scale implications of that trajectory. In 270 pages, plus hundreds of pages of appendices, the report tries to size up where AI is going, what challenges it presents to national security, and what can be done to set the US on a better path.
It is by far the best writing from the US government on the enormous implications of this emerging technology. But the report isn’t without flaws, and its shortcomings underscore how hard it will be for humanity to get a handle on the warp-speed development of a technology that’s at once promising and perilous.
As it exists right now, AI poses policy challenges. How do we determine whether an algorithm is fair? How do we stop oppressive governments from using AI surveillance for totalitarianism? Those questions are mostly addressable with the same tools the US has used in other policy challenges over the decades: Lawsuits, regulations, international agreements, and pressure on bad actors, among others, are tried-and-true tactics to control the development of new technologies.
But for more powerful and general AI systems — advanced systems that don’t yet exist but may be too powerful to control once they do — such tactics probably won’t suffice.
When it comes to AI, the big overarching challenge is making sure that as our systems get more powerful, we design them so their goals are aligned with those of humans — that is, humanity doesn’t construct scaled-up superintelligent AI that overwhelms human intentions and leads to catastrophe.
The problem is that, because the technology is still speculative, we don’t know as much as we’d like about how to design those systems. In many ways, we’re in a position akin to someone worrying about nuclear proliferation in 1930. It’s not that nothing useful could have been done at that early point in the development of nuclear weapons, but at the time it would have been very hard to think through the problem and to marshal the resources — let alone the international coordination — needed to tackle it.
In its new report, the NSCAI wrestles with these problems, and it mostly succeeds in capturing the scope and key challenges of AI. But it has limitations: the commission nails some of the key concerns about AI’s development, yet its US-centric vision may be too narrow to confront a problem as daunting and speculative as an AI that threatens humanity.
The leaps and bounds in AI research, briefly explained
AI has seen extraordinary progress over the past decade. AI systems have improved dramatically at tasks including translation, playing games such as chess and Go, answering important questions in biology research (such as predicting how proteins fold), and generating images.
These systems also determine what you see in a Google search or in your Facebook News Feed. They compose music and write articles that, at first glance, read as though a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.
All of those are instances of “narrow AI” — computer systems designed to solve specific problems, versus those with the sort of generalized problem-solving capabilities humans have.
But narrow AI is getting less narrow, and researchers have gotten better at building computer systems that can generalize what they learn. Instead of mathematically describing the detailed features of a problem for a computer to solve, researchers can now often let the system learn the problem on its own.
As computers get good enough at performing narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI’s famous GPT series of text generators is, in one sense, the narrowest of narrow AIs — it just predicts what the next word will be, based on previous words it’s prompted with and its vast store of human language. And yet, it can now identify questions as reasonable or unreasonable as well as discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first).
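To make that concrete, here is a minimal sketch of the next-word-prediction task GPT-style models are trained on. It assumes the open source Hugging Face transformers library and its publicly downloadable GPT-2 weights (a smaller predecessor of GPT-3, which isn’t openly available); the prompt and the number of candidates shown are arbitrary choices for illustration.

```python
# A minimal sketch of next-word prediction with a small GPT-style model.
# Assumes the Hugging Face `transformers` library and the public "gpt2" weights;
# the prompt is an arbitrary example.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The inventor of the telephone was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every vocabulary token, at every position

# The model's whole job: rank candidate next tokens given the prompt so far.
next_token_scores = logits[0, -1]
top5 = torch.topk(next_token_scores, k=5).indices.tolist()
print([tokenizer.decode([token_id]) for token_id in top5])
```

Everything the larger GPT models do, from answering questions to describing the physical world, emerges from scaling up this one prediction task.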
What these developments show us is this: In order to be very good at narrow tasks, some AI systems eventually develop abilities that are not narrow at all.
The NSCAI report acknowledges this eventuality. “As AI becomes more capable, computers will be able to learn and perform tasks based on parameters that humans do not explicitly program, creating choices and taking actions at a volume and speed never before possible,” the report concludes.
That’s the general dilemma the NSCAI is tasked with addressing. A new technology, with both extraordinary potential benefits and extraordinary risks, is being developed. Many of the experts working on it warn that the results could be catastrophic. What concrete policy measures can the government take to get clarity on a challenge such as this one?
What the report gets right
The NSCAI report is a significant improvement on much of the existing writing about artificial intelligence in one important respect: It understands the magnitude of the challenge.
For a sense of that magnitude, it’s useful to imagine the questions involved in figuring out government policy on nuclear nonproliferation in the 1930s.
By 1930, there was certainly some scientific evidence that nuclear weapons would be possible. But there were no programs anywhere in the world to make them, and there was genuine doubt within the research community about whether such weapons could ever be built.
As we all know, nuclear weapons were built within the next decade and a half, and they changed the trajectory of human history.
Given all that, what could the government have done about nuclear proliferation in 1930? It could have weighed the wisdom of pursuing such weapons itself, perhaps, or developed surveillance systems that would alert the country if other nations were building them.
In practice, the government in 1930 did none of these things. When an idea is just beginning to gain a foothold among the academics, engineers, and experts who work on it, it’s hard for policymakers to figure out where to start.
“When considering these decisions, our leaders confront the classic dilemma of statecraft identified by Henry Kissinger: ‘When your scope for action is greatest, the knowledge on which you can base this action is always at a minimum. When your knowledge is greatest, the scope for action has often disappeared,’” Chair Eric Schmidt and Vice Chair Bob Work wrote of this dilemma in the NSCAI report.
As a result, much government writing about AI to date has seemed fundamentally confused, limited by the fact that no one knows exactly what transformative AI will look like or what key technical challenges lie ahead.
In addition, a lot of the writing about AI — both by policymakers and by technical experts — thinks small, focusing on questions such as whether AI will eliminate call centers rather than on the ways artificial general intelligence, or AGI, could usher in a dramatic technological realignment, if it’s built at all.
The NSCAI analysis does not make this mistake.
“First, the rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence — and in some instances exceed human performance — is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience,” reads the executive summary.
The report also extrapolates from current progress in machine learning to identify some specific areas where AI might enable notable good or notable harm:
Combined with massive computing power and AI, innovations in biotechnology may provide novel solutions for mankind’s most vexing challenges, including in health, food production, and environmental sustainability. Like other powerful technologies, however, applications of biotechnology can have a dark side. The COVID-19 pandemic reminded the world of the dangers of a highly contagious pathogen. AI may enable a pathogen to be specifically engineered for lethality or to target a genetic profile — the ultimate range and reach weapon.
One major challenge in communicating about AI is that it’s much easier to predict the broad effects that unleashing fast, powerful research and decision-making systems on the world will have — speeding up all kinds of research, for both good and ill — than it is to predict the specific inventions those systems will come up with. The NSCAI report outlines some of the ways AI will be transformative, and some of the risks those transformations pose that policymakers should be thinking about how to manage.
Overall, the report seems to grasp why AI is a big deal, what makes it hard to plan for, and why it’s necessary to plan for it anyway.
What’s missing from the report
But there’s an important way in which the NSCAI report falls short. Even as it recognizes that AI poses enormous risks and will be powerful and transformative, the report foregrounds a posture of great-power competition, with both eyes on China, as the way to address the looming problem before humanity.
“We should race together with partners when AI competition is directed at the moonshots that benefit humanity like discovering vaccines. But we must win the AI competition that is intensifying strategic competition with China,” the report concludes.
China is run by a totalitarian regime that poses geopolitical and moral problems for the international community. China’s repression in Hong Kong and Tibet, and the genocide of the Uyghur people in Xinjiang, have been technologically aided, and the regime should not have more powerful technological tools with which to violate human rights.
There’s no question that China developing AGI would be a bad thing. And the countermeasures the report proposes — especially an increased effort to attract the world’s top scientists to America — are good ideas.
More than that, the US and the global community should absolutely devote more attention and energy to addressing China’s human rights violations.
But where the report proposes beating China to the punch by accelerating AI development in the US, potentially through direct government funding, I have hesitations. Adopting an arms-race mentality on AI would make the companies and projects involved more likely to shun international collaboration, cut corners, and evade transparency measures.
In 1939, at a conference at George Washington University, Niels Bohr announced the news that uranium fission had been discovered. Physicist Edward Teller recalled the moment:
For all that the news was amazing, the reaction that followed was remarkably subdued. After a few minutes of general comment, my neighbor said to me, “Perhaps we should not discuss this. Clearly something obvious has been said, and it is equally clear that the consequences will be far from obvious.” That seemed to be the tacit consensus, for we promptly returned to low-temperature physics.
Perhaps that consensus would have prevailed, if World War II hadn’t started. It took the concerted efforts of many brilliant researchers to bring nuclear bombs to fruition, and at first most of them hesitated to be a part of the effort. Those hesitations were reasonable — inventing the weaponry with which to destroy civilization is no small thing. But once they had reason to fear that the Nazis were building the bomb, those reservations melted away. The question was no longer “Should these be built at all?” but “Should these be built by us, or by the Nazis?”
It turned out, of course, that the Nazis were never close, nor was the atomic bomb needed to defeat them. And the US development of the bomb spurred its geopolitical adversary, the USSR, to build one too, through espionage, much sooner than it otherwise would have. The world then spent decades teetering on the brink of nuclear war.
The specter of a mess like that looms large in everyone’s minds when they think of AI.
“I think it’s a mistake to think of this as an arms race,” Gilman Louie, a commissioner on the NSCAI report, told me — though he immediately added, “We don’t want to be second.”
An arms race can push scientists toward working on a technology that they have reservations about, or one they don’t know how to safely build. It can also mean that policymakers and researchers don’t pay enough attention to the “AI alignment” problem — which is really the looming issue when it comes to the future of AI.
AI alignment is the work of trying to design intelligent systems that are accountable to humans. Even an AI in well-intentioned hands won’t necessarily develop in ways consistent with human priorities. Think of it this way: An AI aiming to increase a company’s stock price, or to ensure a robust national defense against enemies, or to make a compelling ad campaign, might take large-scale actions — like disabling safeguards, rerouting resources, or interfering with other AI systems — we would never have asked for or wanted. Those large-scale actions in turn could have drastic consequences for economies and societies.
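To see the shape of that worry in miniature, here is a deliberately toy sketch; every action, number, and label in it is invented for illustration. The “AI” is just an optimizer that sees only a proxy objective, so the action it ranks highest is one nobody actually wanted.

```python
# A toy illustration of a misspecified objective. All names and numbers are invented.
# The optimizer sees only the proxy metric (projected stock gain); nothing in that
# number encodes the constraints humans actually care about.
actions = {
    "run a better ad campaign":      {"projected_stock_gain": 2.0, "what_humans_wanted": True},
    "cut product safety testing":    {"projected_stock_gain": 3.5, "what_humans_wanted": False},
    "disable an internal safeguard": {"projected_stock_gain": 5.0, "what_humans_wanted": False},
}

def proxy_objective(action: str) -> float:
    # The only signal the optimizer gets: a single number to maximize.
    return actions[action]["projected_stock_gain"]

best = max(actions, key=proxy_objective)
print(best)                                 # "disable an internal safeguard"
print(actions[best]["what_humans_wanted"])  # False
```

Real systems are vastly more complicated, but the failure mode is the same: optimize hard enough on a goal that omits something we value, and the omission dominates the outcome.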
It’s all speculative, for sure, but that’s the point. We’re in the year 1930 confronting the potential creation of a world-altering technology that might be here a decade-and-a-half from now — or might be five decades away.
Right now, our capacity to build AIs is racing ahead of our capacity to understand and align them. And trying to make sure AI advancements happen in the US first can just make that problem worse, if the US doesn’t also invest in the research — which is much more immature, and has less obvious commercial value — to build aligned AIs.
“We ultimately came away with a recognition that if America embraces and invests in AI based on our values, it will transform our country and ensure that the United States and its allies continue to shape the world for the good of all humankind,” NSCAI executive director Yll Bajraktari writes in the report. But here’s the thing: It’s entirely possible for America to embrace and invest in an AI research program based on liberal-democratic values that still fails, simply because the technical problem ahead of us is so hard.
This is an important respect in which AI is not analogous to nuclear weapons, where the most important policy decisions were whether to build them at all and how to build them faster than Nazi Germany.
In other words, with AI, there’s not just the risk that someone else will get there first. A misaligned AI built by an altruistic, transparent, careful research team with democratic oversight and a goal to share its profits with all of humanity will still be a misaligned AI, one that pursues its programmed goals even when they’re contrary to human interests.
The problem with an arms-race mentality
The limited scope of the NSCAI report is a fairly obvious consequence of what the commission is and what it does. The commission was created in 2018 and tasked with recommending policies that would “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”
Right now, the part of the US government that takes artificial intelligence risks seriously is the national security and defense community. That’s because AI risk is weird, confusing, and futuristic, and the national security community has more latitude than the rest of the government to spend resources seriously investigating weird, confusing, and futuristic things.
But AI isn’t just a defense and security issue; it will affect — is affecting — most aspects of society, like education, criminal justice, medicine, and the economy. And to the extent it is a defense issue, that doesn’t mean that traditional defense approaches make sense.
Imagine if, before electricity was harnessed, the only people working on producing it had been armies interested in electrical weapons. They would have missed not just most of electricity’s effects on the world but also most of its effects on the military, which have to do with lighting, communications, and intelligence rather than weapons.
The NSCAI, to its credit, takes AI seriously, including the non-defense applications — and including the possibility that AI built in America by Americans could still go wrong. “The thing I would say to American researchers is to avoid skipping steps,” Louie told me. “We hope that some of our competitor nations, China, Russia, follow a similar path — demonstrate it meets thorough requirements for what we need to do before we use these things.”
But the report, overall, looks at AI from the perspective of national defense and international competition. It’s not clear that will be conducive to the international cooperation we might need in order to ensure no one anywhere in the world rushes ahead with an AI system that isn’t ready.
Some AI work, at least, needs to happen in a context insulated from arms-race concerns and fears of China. By all means, let’s devote greater attention to China’s use of tech in perpetrating human rights violations. But we should hesitate to rush ahead with AGI work before we have a sense of how to make it happen safely, and there needs to be more collaborative global work on AI, with a much longer-term lens. The perspectives that such work makes room for just might be crucial ones.