Many experts have considered the United States the dominant world power since World War II, a timeframe spanning the lives of all current Harvard undergraduates.
But this status may change within our lifetimes. It increasingly appears that China might displace the U.S. as the world’s leading superpower. This isn’t shocking news; it’s a story that has been unfolding over several decades and is top-of-mind for many American political and economic commentators — as well as students here, in courses tracing China’s recent ascent.
There are two tailwinds that continue to support China’s climb.
The first is a fundamental economic fact: China’s economy is growing faster than the U.S.’s. Today, the American GDP is still bigger by roughly $5.6 trillion, but this might change soon. According to the Centre for Economics and Business Research, China will overtake the United States as the world’s largest economy by 2036 — just around a decade after most of us graduate.
While the first tailwind is well understood, the second is not. Technology is an equally powerful force reinforcing China’s growth, but not in the way you might expect.
China has invested heavily in artificial intelligence — and has state surveillance feeds that can provide the large datasets needed to improve their models. Some have started to see an “AI race” between China and the United States, with similar “races” happening across other fields, such as biotechnology and cybersecurity.
But what might prove to be more impactful than China’s use of technology is its non-use of it. China’s authoritarian government tightly controls technology on a large scale: restricting some kinds (gaming) and outright banning others (cryptocurrency).
Why? In our last piece, we explored the history and impact of the printing press, which, though centuries older, reflects many of the same patterns as today’s new technologies. The printing press undeniably did great good, increasing knowledge transfer and enabling monumental leaps in scientific discovery, but it also supercharged the spread of disinformation and spurred political and religious conflict.
It’s always difficult to prognosticate, but today’s technological waves — including artificial intelligence, decentralized ledgers, and biotechnology, to name a few — despite their appealing applications, could have similarly destabilizing effects.
To take a topical example, large language models (like ChatGPT) could at some point do human-quality knowledge work, leading some to predict large-scale layoffs in white-collar service industries in the next five years. This would leave millions needing to “re-skill,” potentially translating to widespread unrest. It would put our institutions — and our social fabric — to the test.
Further, as technologies become increasingly powerful, the possible harm that can be caused by bad actors using those technologies also increases. This “dual-use” dynamic of technology appears in many cases. For example, synthetic biology has been used to advance breakthrough treatments in blood cancer research, but also to develop pox viruses from scratch. In a future where the barrier to creating a deadly virus might be dangerously low, how can society be protected?
China seemingly has the answer. Its strictly enforced bans, restrictions, and regulations appear to minimize both the dual-use risks and the destabilizing forces of technology. This extent of control is a lever that the United States realistically does not have. This raises an important question for the United States: Is the Chinese model better suited for the 21st century? Perhaps restricting freedom can lead to prosperity, as Lee Kuan Yew argued for many decades in Singapore. Should governments emulate China in response to potentially destabilizing frontier technologies?
The answer is no, for two reasons.
First, aggressively restricting technology can mean missing out on progress. The United States prides itself on its experimental ethos, a core value that has historically led us to a better future. We’d be mistaken to turn our back on it now.
Second, and more importantly, trying to manage the adverse effects of technology through heavy governmental interference creates a new danger: an uber-powerful government capable of using these sophisticated technologies maliciously. It’s unclear what the future of advanced technologies will look like — whether highly positive, negative, or, more likely, falling somewhere in between — but regardless, we should prioritize maintaining our current liberties.
From biotechnology labs to classrooms at the Harvard Kennedy School, we’re fortunate to be part of an institution that plays multiple roles in the unfolding story of technological advancement and U.S.-China relations. But as we think particularly about these so-called technology races with China, it’s important to remember that technological competitiveness is not all that matters.
Given the American inclination towards free experimentation over authoritarian regulation, the work we do here, both technological and governmental, to protect ourselves against the ugly side of technological progress is crucial to the national interest. The challenge is more than simply building the better mousetrap; it’s also making sure the trap doesn’t snap shut on its maker.
Roman C. Ugarte ’24 is an Applied Math in Economics concentrator in Eliot House. K. Oskar Schulz ’22 is currently on leave founding a startup in New York City. Their column, “Under-indexed,” runs on alternate Wednesdays.