The A.I. Scare (I&W)

With the release of ChatGPT in November 2022, the regulation of artificial intelligence suddenly entered the realm of popular politics and media commentary. By May 2023, AI experts were circulating a letter that referred to “mitigating the risk of extinction from AI.” In the same month, the EU Parliament was refining what it called “the first ever rules for Artificial Intelligence.” This wasn’t really true—China got there first—but it did reflect the high level of political energy surrounding the topic. By late June, US President Biden was in San Francisco warning tech leaders about AI’s dangers. The American Enterprise Institute, in line with news-cycle tradition, raised the alarm about excessive regulation.

SIG’s view is that there is both more and less here than meets the eye: “more” in the sense that state regulation of AI is indeed coming on fast; “less” because its effects are not likely to be very dramatic, at least not in the United States.

The Regulatory Wave

China, which early on saw AI as both a strategic technology and a threat to state control of speech and opinion, led the way on AI regulation in 2017 (the “New Generation Artificial Intelligence Development Plan”) and hoped that its standards would gain international adoption. The powerful Standardization Administration of China (SAC) issued the “Guidelines for the Construction of the National New Generation Artificial Intelligence Standard System” in 2020, meeting the deadline set by the 2017 AI Plan. As typically happens in China in moments of political enthusiasm, different bureaucracies began to compete for the new turf, and the Communist Party needed to pick some winners and assert its authority. In March 2023, at the annual “Two Sessions,” the Party reshuffled bureaucratic authorities and centralized control over domestic AI policy, very much in step with the “rectification” of Chinese big-tech power that began in 2021. The extent of Party concern about AI regulation can be measured by the fact that China actually agreed with the United States on some non-binding standards for military AI use in February 2023.

The US, also typically, has been much slower in developing regulations, although it was quick to develop desiderata that don’t seem to have had much real force: Donald Trump’s 2019 Executive Order and a November 2020 Memorandum, followed by Biden’s Blueprint for an AI Bill of Rights in October 2022. The National Institute of Standards and Technology (NIST) took 2 years to consult stakeholders before issuing the “AI Risk Management Framework” in January 2023. The EU was on a similar schedule.

In general, China’s regulatory framework seeks to control information and remind Chinese tech companies that they operate at the pleasure of the Party. The EU aims at identifying and eliminating potential AI harms without constraining innovation. China and the EU both shape much of their efforts with the goal of minimizing dependency on US companies. The US aims at maximizing American innovation and minimizing harms to individual rights.

The 3 efforts reflect very different political cultures, suggesting that they will not be synthesized into broader international standards. In all 3 cases, a principal motivation has been to develop standards that will help each player improve its competitive position against one or both of the other players. It’s hard to see how competition will turn into cooperation any time soon.

So tech regulation is coming, in increasing volume. Unlike at earlier moments of technological revolution, however, there is no free infrastructural (the open Internet), commercial (unrestrained use of apps), or political (no data sovereignty, no privacy) global platform on which AI can build before that freedom is regulated away. The AI platform is being pre-regulated.

Meanwhile, Back at the Startup

Actual tech regulation, as distinct from statements of principle about what tech should and shouldn’t do, has traditionally been led by industry. AI will not be much different.

So far, AI innovation has tended to come from smaller companies connected to the open-source community. Broadly, this has been the pattern for tech innovation for decades. It’s possible that AI innovation, too, will follow the pattern that led much of humanity to use the same search engine and a handful of social-media apps: a small company gains a technical advantage, is well run, has the capital to scale its platform without having to generate profits, eats its competitors, and wins big. But the conversation among AI industry leaders today is about whether or not AI innovation will grow based on these same “network effects” rooted in tech, capital, scale, and quantity of data.

The answer might very well be “no.” 

Why? Mainly because AI development teams are increasingly turning toward “synthetic data”: training sets that are generated or heavily curated and edited to increase the chances of the AI system arriving at a desired result. Not a specific result, of course, but a result within set parameters, a usable result. This means that the advantage of holding huge controllable datasets, which just a few years ago was thought to give China and US big tech a decisive edge, is not necessarily so important. It also means that AI development is not a prisoner of the need for scale that so shaped the development of search and social media.
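
The mechanism is easier to see in miniature. The sketch below is purely illustrative, not drawn from any particular lab’s practice, and every name, file, and parameter in it is invented: it builds a small, generated training set whose answers are correct by construction, so coverage and quality come from design rather than from harvesting data at scale.

    # Illustrative sketch only: a tiny synthetic training set of templated
    # question/answer pairs, built rather than scraped. All names here
    # (make_example, build_dataset, N_EXAMPLES) are hypothetical.
    import json
    import random

    N_EXAMPLES = 1000  # chosen for illustration; real sets vary widely

    def make_example(rng: random.Random) -> dict:
        """Build one synthetic record whose answer is correct by construction."""
        a, b = rng.randint(1, 99), rng.randint(1, 99)
        return {
            "prompt": f"What is {a} + {b}?",
            "completion": str(a + b),  # the "desired result" is set by the designer
        }

    def build_dataset(seed: int = 0) -> list[dict]:
        rng = random.Random(seed)  # seeded so the set is reproducible and auditable
        return [make_example(rng) for _ in range(N_EXAMPLES)]

    if __name__ == "__main__":
        # Write JSON Lines, a common format for fine-tuning data.
        with open("synthetic_train.jsonl", "w") as f:
            for record in build_dataset():
                f.write(json.dumps(record) + "\n")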

Under these circumstances, AI regulation will be harder to write in any detail, because innovators will be small and quick and the use cases for their products hard to predict.

The exception to this small-size advantage is computing power, known in the jargon as “compute.” Large companies with deep pockets have the advantage in compute. That said, compute is itself a product, as Amazon Web Services discovered and proved. And size can be a burden: the biggest companies tend to innovate in ways that exploit their size (e.g., abundant compute), which can lead them to build things that don’t matter to the actual market. The history of big-tech failures, from Google Wave to Facebook Beacon, is suggestive.