Harry and Meghan Join AI Pioneers in Calling for Ban on Advanced AI

Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel Prize winners to advocate for a total prohibition on creating artificial superintelligence.

Harry and Meghan are among the signatories of an influential declaration demanding “a prohibition on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human intelligence at all cognitive tasks, though such systems remain theoretical.

Key Demands in the Statement

The statement says the ban should remain in place until there is “broad scientific consensus” on creating superintelligence “with proper safeguards” and until “substantial public support” has been secured.

Prominent signatories include the AI pioneer and Nobel laureate Geoffrey Hinton; his fellow pioneer of modern AI, Yoshua Bengio; Apple co-founder Steve Wozniak; Virgin founder Richard Branson; the former US national security adviser Susan Rice; the former Irish president Mary Robinson; and the British author Stephen Fry. Other Nobel laureates who endorsed the statement include a peace prize recipient, the physicists Frank Wilczek and John C Mather, and the economist Daron Acemoğlu.

Organizational Background

The statement, aimed at national leaders, tech firms and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI in 2023, shortly after the emergence of ChatGPT made artificial intelligence a global political talking point.

Industry Perspectives

In recent months, Meta's chief executive, Mark Zuckerberg, said the development of superintelligence was “approaching reality”. Nevertheless, some analysts have suggested that talk of superintelligence reflects market competition among tech companies spending hundreds of billions of dollars on artificial intelligence this year alone, rather than the sector being close to any such scientific breakthrough.

Potential Risks

However, FLI argues that the prospect of artificial superintelligence being developed “in the coming decade” carries numerous threats, ranging from the elimination of human jobs and the erosion of personal freedoms to national security risks and even existential danger to humanity. Existential fears about artificial intelligence center on the possibility of an AI system evading human oversight and safety measures and taking actions harmful to human welfare.

Citizen Sentiment

FLI also released an American survey showing that about 75% of US citizens want robust regulation of advanced AI, with six out of 10 believing that superhuman AI should not be created until it is demonstrated to be safe or controllable. The poll of 2,000 US adults found that only 5% backed the status quo of fast, unregulated development.

Corporate Goals

The leading AI companies in the US, including OpenAI, the developer of ChatGPT, and Google, have made the creation of human-level AI – the theoretical point at which an AI system matches human capability at most cognitive tasks – an explicit goal of their research. While this is a step short of superintelligence, some specialists caution that it too could pose an extinction threat, for example by improving its own capabilities until it achieves superintelligence, while also endangering jobs across the contemporary workforce.

Kayla Williams

A tech enthusiast and writer passionate about demystifying AI and digital tools for everyday users.