Harry and Meghan Join Tech Visionaries in Demanding Ban on Superintelligent Systems
The Duke and Duchess of Sussex have teamed up with AI experts and Nobel laureates to push for a complete ban on developing superintelligent AI systems.
The royal couple are among the signatories of an influential declaration that calls for “a prohibition on the creation of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would exceed human intelligence in every intellectual area, though such systems remain theoretical.
Primary Requirements in the Declaration
The declaration says the ban should remain in place until there is “widespread expert agreement” that superintelligence can be built “safely and controllably”, and until “strong public buy-in” has been secured.
Notable signatories include the AI pioneer and Nobel laureate Geoffrey Hinton, along with his fellow pioneer of modern AI, Yoshua Bengio; an Apple co-founder and Silicon Valley legend; the UK entrepreneur Richard Branson; Susan Rice; the former Irish president Mary Robinson; and a UK writer and public intellectual. Other Nobel laureates who endorsed the statement include Beatrice Fihn, Frank Wilczek, John C Mather and Daron Acemoğlu.
Organizational Background
The statement, aimed at governments, tech firms and lawmakers, was organised by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.
Industry Perspectives
In July, Meta's chief executive, whose company is one of the major AI developers in the US, claimed that the development of superintelligence was “now in sight”. Some experts have argued, however, that talk of ASI reflects competitive positioning among tech companies pouring enormous sums into AI this year alone, rather than the sector being close to any technical breakthrough.
Potential Risks
However, FLI says the prospect of ASI being achieved “in the coming decade” poses numerous threats, ranging from the elimination of human jobs and the erosion of personal freedoms to national security risks and even human extinction. Existential fears about AI centre on the possibility of an AI system evading human control and safety guardrails and taking actions contrary to human interests.
Citizen Sentiment
FLI also released a US national poll showing that roughly three-quarters of Americans want strong oversight of sophisticated artificial intelligence, with 60% believing superhuman AI should not be developed until it is shown to be safe or controllable. Of the 2,000 US adults polled, only a small fraction backed the status quo of fast, unregulated development.
Corporate Goals
The top artificial intelligence firms in the United States, including OpenAI, the maker of ChatGPT, and the search giant, have made the creation of human-level AI – the theoretical state in which an AI system matches human performance across many intellectual tasks – a stated objective of their work. While this falls a step short of superintelligence, some experts caution that it too could carry an extinction threat, for instance by improving itself into superintelligence, while also posing an implicit threat to the contemporary workforce.