Since the launch of viral chatbot ChatGPT in November 2022, companies including Baidu and Google have raced to create rivals of their own. In turn, this has led to mounting concerns about the safety and potential risks of artificial intelligence (Photo illustration: Jonathan Raa/NurPhoto via Getty Images)

More than 1,300 people, including tech luminaries, have signed an open letter asking for a six-month halt to the training of AI systems more powerful than GPT-4

On March 22, the Future of Life Institute published an open letter calling on artificial intelligence (AI) labs to pause the training of AI systems more powerful than GPT-4 for at least six months to assess the risks.

In the week since, the letter has been signed by more than 1,300 individuals, many of them computer scientists and researchers, along with tech luminaries and prominent thought leaders, from Elon Musk to Sapiens author Yuval Noah Harari to Apple co-founder Steve Wozniak.

The letter posits that despite the recent development and launch of AI systems such as OpenAI’s multimodal model GPT-4, the technology supporting its viral chatbot ChatGPT, we are far from understanding how AI works, and that we need to take the time to do so.

Read more: ChatGPT creator OpenAI releases new model GPT-4

An excerpt from the letter states: “As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” 

The Asilomar AI Principles are a set of 23 guidelines published in 2017 to guide AI research and development, covering research issues, ethics and values, and longer-term issues. The principles have been endorsed by the likes of Musk, Stephen Hawking, OpenAI’s Sam Altman and Skype co-founder Jaan Tallinn.

Elon Musk was among the more than 1,000 individuals who signed an open letter calling for at least a six-month pause on the training of AI systems more powerful than GPT-4 (Photo: Justin Sull)

The letter goes on to state that the level of planning and management required has not yet been achieved “even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control”.

Gen.T honouree Daniel Ting, who is a director of the AI programme at Singapore Health Services, agrees that AI developers need a deeper understanding of the risks AI could pose to wider society: “While pushing the boundaries of innovation, AI research groups worldwide should be cautious about the potential implications on safety and ethics, as well as the [potential] unintended consequences that could do harm to the global population.”

According to the letter, the pause should be used to develop shared safety protocols that are audited and agreed upon by AI labs and independent experts. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. The protocols would help ensure that such systems are “safe beyond a reasonable doubt”.

The pause, it adds, would merely be “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”.

Read more: Bot MD to provide AI chat assistant to over 200,000 doctors in Indonesia

The letter also highlights the need for greater regulation of the AI industry, and for developers to work alongside policymakers.

For Gen.T honouree Dean Ho, who uses AI to produce life-changing outcomes in healthcare and food security, this is crucial, especially in certain use cases of AI. “If AI directly impacts human well-being or is potentially used to aid in generating literature or recommendations, actionable regulation and scrutiny will be essential. A recent study showing the difficulty in differentiating between human and AI-created medical literature confirms the need for proper AI stewardship.”

The letter ends by stating that the goal should be for everyone to enjoy a “long AI summer” in which “we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”