DEEPFAKES are becoming increasingly dangerous after a series of artificial intelligence upgrades.
Experts have told The U.S. Sun that it's now increasingly difficult to tell AI-generated video fakes apart from the real thing.
Deepfake technology can make you appear to say and do things you haven't. Credit: Getty
Deepfakes use AI to create convincing videos of real people doing things they normally wouldn't.
And through voice-mapping technology, they can even create false audio – making someone appear to say (or even sing...) something they never did.
AI tech can even "clone" your voice so it sounds just like you, and only needs a few seconds of audio to base it on.
We spoke to Adam Pilton, Cyber Security Consultant at CyberSmart and a former Detective Sergeant who investigated cybercrime, who revealed how the technology is rapidly improving.
"AI is becoming increasingly sophisticated," Adam told us.
"The responses are increasingly accurate, the number of tools and resources we have access to has dramatically increased, from simple text responses, to pictures, audio, video and more.
"We are seeing deepfakes that have more realistic facial expressions, lip movements, and voice synthesis.
"This will make them even harder to distinguish from real videos and audio."
FAST FAKES
That's not all: deepfakes are also getting easier to create.
There are now many different apps and services that help users to create deepfakes.
And the time it takes to generate a deepfake is also shrinking.
"Deepfake creation tools are becoming more user-friendly and accessible, lowering the technical barrier for attackers," Adam explained.
"Cloud-based solutions and AI-powered platforms are making this more accessible."
SAVE YOURSELF
It's not all doom and gloom, however.
Adam told The U.S. Sun that although deepfakes are becoming more advanced, so too are the ways we can catch them.
While it's harder to rely on spotting visual or auditory mistakes yourself, technology is getting better at exposing deepfaked videos.
"The risk is not necessarily increasing at the same rate as the technology itself is," Adam said.
"Technology companies are developing tools to detect AI-generated material, making it harder for it to go unnoticed."
Spotting mistakes in a video yourself can now be extremely difficult.
And even technology will sometimes miss the signs that a video is faked.
That's why Simon Newman, CEO at the Cyber Resilience Centre for London & International Cyber Expo Advisory Council Member, warned that you'll need to rely on your own instincts.
Questioning the context of a video – and asking whether it seems to make sense – is important.
This is especially true if a video is making a bold claim or demanding urgent action.
"As the use of artificial intelligence by cyber criminals increases, it will become much harder to tell the difference between a fake and the real thing," Simon told The U.S. Sun.
"Fortunately, the cyber security industry has made great strides in the last few years developing technology that can spot deepfakes which will hopefully reduce the number of people falling for them.
"However, we can’t rely on technology alone – taking a cautious approach and using your instinct gives you the best chance of staying safe online."