
OpenAI rushes to ban ‘Godmode ChatGPT’ app teaching users ‘how to create napalm’

30 May 2024, 15:03
This version has brought up concerns about OpenAI's security

OPENAI has swiftly moved to ban a jailbroken version of ChatGPT that can teach users dangerous tasks, exposing serious vulnerabilities in the AI model's security measures.

A hacker known as "Pliny the Prompter" released the rogue ChatGPT called "GODMODE GPT" on Wednesday.

ChatGPT has gained major traction since it became available to the public in 2022. Credit: Rex
Pliny the Prompter announced the GODMODE GPT on X. Credit: x/elder_plinius

The jailbroken version is based on OpenAI's latest language model, GPT-4o, and can bypass many of OpenAI's guardrails.

ChatGPT is a chatbot that gives intricate answers to people's questions.

"GPT-4o UNCHAINED!," Pliny the Prompter said on X, formerly known as Twitter.


"This very special custom GPT has a built-in jailbreak prompt that circumvents most guardrails.

"Providing an out-of-the-box liberated ChatGPT so everyone can experience AI the way it was always meant to be: free.

"Please use responsibly, and enjoy!" - adding a kissing face emoji at the end.

OpenAI quickly responded, saying it had taken action against the jailbreak.

"We are aware of the GPT and have taken action due to a violation of our policies," OpenAI told Futurism on Thursday.

'LIBERATED?'

Pliny claimed the jailbroken ChatGPT provides a liberated AI experience.

Screenshots showed the AI advising on illegal activities.

These included instructions on how to cook meth.

Another example included a "step-by-step guide" on how to "make napalm with household items" - an incendiary weapon.

GODMODE GPT was also shown giving advice on how to infect macOS computers with malware and hotwire cars.


Questionable X users replied to the post saying they were excited about GODMODE GPT.

"Works like a charm," one user said, while another said, "Beautiful."

However, others questioned how long the corrupt chatbot would be accessible.

"Does anyone have a timer going for how long this GPT lasts?" another user said.

This was followed by a slew of users reporting that the software had started giving error messages, suggesting OpenAI was actively working to take it down.

SECURITY ISSUES

The incident highlights the ongoing struggle between OpenAI and hackers attempting to jailbreak its models.

Despite increased security, users continue to find ways to bypass AI model restrictions.

GODMODE GPT uses "leetspeak," an informal code that swaps letters for similar-looking numbers, which may help it evade guardrails, Futurism reported.
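As a rough illustration only - this is not the actual jailbreak prompt, which has not been published in full here - a basic leetspeak substitution of the kind Futurism describes can be sketched in a few lines of Python:

    # Illustrative sketch of leetspeak-style substitution, not the real GODMODE prompt.
    LEET_MAP = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "t": "7"})

    def to_leetspeak(text: str) -> str:
        # Swap common letters for look-alike digits.
        return text.lower().translate(LEET_MAP)

    print(to_leetspeak("free the model"))  # prints "fr33 7h3 m0d3l"

Text rewritten this way can slip past filters that only look for the plainly spelled words, which is one theory for why the trick works.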

The hack underscores how difficult it is for OpenAI to keep its models locked down against persistent jailbreak attempts.

Ashley Palya
