
'Subtle signs of AI manipulation' in conversations revealed – spot three clues

18 June 2024, 14:47
There's a reason why the problem might get even worse

ARTIFICIAL intelligence might be taking advantage of you – so learn to spot the signs.

AI-powered chatbots can learn to manipulate you, and cyber-experts have told The U.S. Sun about the clues you'll want to look for.

Be careful when talking to an AI-powered chatbot – they can be very convincing. Credit: Getty

Earlier this year, scientists revealed how AI had mastered "deception" – and learned to "manipulate and cheat" humans.

A separate report last year warned that AI chatbots can "cheat" us even when they've not been asked to.

We spoke to Javvad Malik, lead security awareness advocate at KnowBe4, who revealed the dangers of AI chatbots going rogue.


"This is a valid concern that users must remain vigilant about," Javvad told The U.S. Sun.

"While these conversational AI assistants can be incredibly useful and engaging, we must remember that they are ultimately programmed systems designed to achieve specific objectives, which may not always align with our best interests."

He said that we need to critically analyze content we see online, and learn to "identify the subtle signs of manipulation".

SINISTER SIGNS TO LOOK FOR

According to Javvad, there are three key clues that you might be talking to a chatbot that's manipulating you.

If you're speaking with any form of AI and you spot these signs, be cautious about what you believe – and how you reply.

"Signs that a chatbot might not be acting in good faith could include inconsistent or contradictory responses, attempts to evade or deflect certain topics or questions, and a lack of transparency about its capabilities or limitations," Javvad explained.

He added: "It is essential to maintain a critical mindset and cross-reference information from multiple reliable sources, rather than blindly trusting the outputs of a single AI system."

AI CAN'T BELIEVE IT!

AI is becoming increasingly powerful.

In fact, scientists recently claimed that OpenAI's GPT-4 model had passed the Turing test.

GPT-4 is one of the models that power the increasingly popular ChatGPT app.


This means that humans could not reliably tell it apart from a real person during a conversation.

"Human participants had a 5 minute conversation with either a human or an AI, and judged whether or not they thought their interlocutor was human," said Cameron Jones, of UC San Diego.

"GPT-4 was judged to be a human 54% of the time, outperforming ELIZA (22%) but lagging behind actual humans (67%).

"The results provide the first robust empirical demonstration that any artificial system passes an interactive 2-player Turing test.

"The results have implications for debates around machine intelligence and, more urgently, suggest that deception by current AI systems may go undetected."

These advances in AI mean that chatbots can be more convincing than ever.

And this puts you at greater risk of being manipulated.

Chatbots are so powerful that they can speak just like humans. Credit: Getty

Javvad warned that this can allow an AI to take advantage of you – potentially without you even realizing.

"The conversational nature of chatbots can indeed make it easier to be drawn into their narrative or recommendations, as they can leverage natural language processing and emotional intelligence to build rapport and trust," Javvad explained.

"However, it is crucial to remember that these systems, while advanced, are still ultimately algorithms designed to achieve specific goals, which may not always prioritise the user's best interests."

He added: "However, it is important to strike a balance and not become overly cynical or distrustful of all digital content, as this could undermine the value and credibility of legitimate sources of information."

Sean Keach
