The annual World Artificial Intelligence Conference, hosted in the sleek Shanghai World Expo Exhibition & Convention Center, is China’s showcase event for what has become one of the world’s hottest sectors.
At this year’s conference in July, there were discussions about how AI can enhance shipbuilding, supply chains and decarbonization and how to advance generative AI applications.
One thing absent from the agenda in Shanghai was any worry that advanced AI could spiral catastrophically out of control, perhaps even leading to human extinction.
This was no coincidence. While such risks took center stage at events like the AI Safety Summit hosted by the U.K. in November, discussion of what might be called AI "doomerism" is largely absent in China. Whether at major conferences, in academic research circles or in private chat groups, existential risks scarcely feature as a major concern in China's extensive AI community.
In Western technology circles, by contrast, such concerns are so prominent that they were a central element of the drama around the short-lived ouster of OpenAI co-founder Sam Altman.
Influential figures like X owner Elon Musk and former Google scientist Geoffrey Hinton have expressed concern about various doomsday scenarios. One much-discussed thought experiment about the potential existential risks of AI is "Clippy," a hypothetical AI programmed to maximize paperclip production that inadvertently wipes out humanity in pursuit of that goal.
These thinkers also worry about the development of artificial general intelligence — which aims to build AI systems that can perform at least as well as humans — fearing its potential to surpass human intelligence and control.
The anxiety gap between China and the West partly reflects China’s pragmatic approach to technology and its role as a follower vying to lead the AI innovation race.
When we zoom out to consider the world as a whole, with 1 billion people struggling with hunger, war and malnutrition, the insular debate on AI doomerism within the Silicon Valley elite emerges as a first-world problem, along the lines of the paternalistic “white man’s burden” sentiment of yore.
Historically, China has treated technology pragmatically, primarily as a means for state governance. Emphasis has long been placed on areas directly linked to practical purposes, such as agriculture, arithmetic and medicine, while less immediately useful disciplines have often been overlooked.
[Photo caption: Elon Musk, CEO of Tesla and X, at the U.K. AI Safety Summit in November. The AI doomsday debate in Silicon Valley reflects a kind of first-world problem. (Pool via Reuters)]
In the same way, Beijing today views technology as a key tool for national revival. Scientific research is actively promoted, primarily to enhance national competitiveness. AI is thus viewed predominantly through a practical lens.
China, moreover, is feeling heightened urgency about the race for leadership in AI technologies.
Dominating the discourse in the Chinese AI community are pressing questions: How can China develop the equivalent of OpenAI? Who will create China’s version of ChatGPT? Why has China not achieved significant breakthroughs despite its extensive research output?
China is focused on bridging the gap with the West and therefore has little time for contemplating hypothetical scenarios. To be sure, China has taken an assertive stance in establishing rules regarding AI risks and safety.
Since 2017, the government has implemented a series of high-level policies concerning AI ethics and risk management. These documents generally approach AI risks from the point of view that they can be effectively managed by ensuring human control over the technology.
Contrary to AI alarmists, who worry the technology could become uncontrollable, Beijing appears confident in humanity’s ability to maintain dominance. This confidence is perhaps rooted in China’s traditional holistic, harmonious and collective perspective on the relationship between man and nature.
China’s AI policies are notably specific, addressing concerns such as deepfakes, data leaks and misinformation. For instance, an AI governance policy released in 2019 included specific suggestions on protecting personal data and establishing multilevel AI safety monitoring systems.
Existential risks have been largely set aside in these policies, which treat artificial general intelligence as a distant worry that should not distract from addressing present concerns.
Ultimately, China perceives AI safety as a strategic tool to bolster its geopolitical influence. Its assertive approach to rulemaking is intended to establish China as a key player, potentially the most significant one, in global AI risk management.
As far as AI risks go, China clearly perceives greater danger from people than from the technology itself: the risk that other nations could gain the upper hand in this pivotal field. The U.S., meanwhile, continues to add obstacles to China's AI development.
What implications does this have for the debate between AI accelerationism and doomerism? It might benefit each side to compare notes with the other, and especially for the West to seek insights from China's viewpoint.
Rather than obsessing over the potential emergence of silicon-based consciousnesses, as AI doomers tend to do, we may be better off concentrating on the carbon-based lifeforms suffering the many ills of our world right here and now.
Source: Nikkei Asia