
AI risks ‘human extinction’ as ex-ChatGPT creators warn of ‘loss of control’

05 June 2024, 16:14
Greedy AI firms care more about money than people, say accusers

THE godfather of AI and ChatGPT experts have warned that out-of-control artificial intelligence systems could lead to “human extinction”.

Geoffrey Hinton and 12 AI specialists have signed an open letter demanding stronger government oversight to save us all.

An open letter has been written by 13 people who have worked at OpenAI, Google DeepMind, and Anthropic, warning of the dangers of advanced artificial intelligence. Credit: Getty

The former Google employee, who quit the company last year, is one of 13 signatories - six of whom are anonymous - to the Doomsday-style letter.

"We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity," the letter started.

"We also understand the serious risks posed by these technologies.


"These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.

"AI companies themselves have acknowledged these risks as have governments across the world and other AI experts."

NUKE CATASTROPHE

Just last month, business magnate Warren Buffett warned that the threat posed by artificial intelligence was comparable to that of nuclear weapons.

And the US State Department has warned that the accelerating speed at which AI is being developed could be "catastrophic" for humanity.

There is also an existential risk that an artificial general intelligence (AGI) surpassing human intelligence could be developed.

Such advanced AI systems could prove catastrophic for humans.

There is also a risk that AI will be harnessed for warfare, which could see autonomous weapons controlled entirely by machines.

MONEY MATTERS

In their letter published on June 4, the AI experts suggested that risks could be mitigated with guidance from the scientific community, policymakers, and the public.

"However, AI companies have strong financial incentives to avoid effective oversight," they added.

Some of us reasonably fear various forms of retaliation.

AI-extinction threat letter signatories

The letter pointed out that AI firms are fully aware of the harm they can wreak upon humanity.

"However, they currently have only weak obligations to share some of this information with governments, and none with civil society," the letter said.

"So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public.

"Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.

"Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.

"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry."

The letter was signed by Geoffrey Hinton alongside ex-OpenAI engineer Daniel Ziegler and co-organizer Daniel Kokotajlo, who quit OpenAI earlier this year.

Hinton worked on AI technology at Google for more than a decade, but since quitting the company in 2023 he has been speaking out about the dangers posed by his life's work.

Others to sign the letter were Ramana Kumar, formerly of Google DeepMind; Neel Nanda, currently at Google DeepMind and formerly of Anthropic, an AI safety and research firm; and William Saunders and Carroll Wainwright, both ex-OpenAI employees.

FEARS OF REVENGE

The letter lists four principles for advanced AI companies to commit to in order to protect humanity.

The first is protection of free speech: firms should not ban “disparagement” or negative comments about them, nor retaliate against those raising risk-related concerns.

The second principle calls for independent organizations which current and former employees can approach with their fears, without worrying about their livelihoods or retaliation from their bosses.

Thirdly, the letter said there must be protection for whistleblowers and that firms should be open to criticism.

"The company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed," was its fourth principle.

ELON MUSK

Even Elon Musk has raised concerns over the speed at which AI is being developed and the lack of safeguards.

He previously signed an open letter that called for a pause on creating new systems "more powerful" than current bots like ChatGPT.

ChatGPT is a chatbot that provides quick, in-depth answers just like a human.

Debbie White
