Over the past decade, Russian disinformation has become far more dangerous, largely due to the rise of AI-generated content that can look and sound completely real.
Social media platforms like TikTok are being flooded with AI-generated content, and people – or in this case, countries – can generate and publish videos faster than any fact-checker can report them and have them taken down.
Russia can spread disinformation quickly and discreetly, using deepfakes and voice-cloning technology, in the hope of shifting people's views of world events. The tactics behind these propaganda videos have changed over the years: rather than telling one outrageous lie, they now target people with many different pieces of manipulated content, designed to slowly wear away at their sense of what is true.
The BBC recently uncovered an example: a video that appeared to feature a UK emergency call handler. Its creators tried to weaponise the credibility of a real frontline responder's voice to promote the ideas and beliefs they chose – in this case, spreading fear of a terrorist incident ahead of Poland's presidential election. You can see the full report here: https://www.bbc.co.uk/news/videos/c3dpeyrx1kyo.
We chose to test this dangerous, real-world example with our own AI detector.
This is one of the many scenarios we built our detection engine to handle. Whether it's a live broadcast, an on-screen Zoom meeting, or a video circulating on social media, our AI detector can analyse the audio and text involved, deciding in real time whether it is genuine or AI-generated.
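To make that concrete, here is a minimal, hypothetical sketch of how voice-clone detection can work in general. This is not AI Aware's actual engine or code: the file names, feature choices, and classifier below are all illustrative assumptions. The basic idea is to summarise a clip's acoustic fingerprint (here, MFCC statistics extracted with librosa) and score it with a binary classifier trained on labelled real and synthetic speech.

```python
# Hypothetical sketch only: illustrates the general approach of scoring
# audio as real vs. AI-generated. It is NOT AI Aware's actual system.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def audio_features(path: str, sr: int = 16000) -> np.ndarray:
    """Summarise a clip as the mean and variance of its MFCCs."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Placeholder file names standing in for a labelled training set:
# 0 = real human speech, 1 = AI-generated (cloned) speech.
real_paths = ["real_call_1.wav", "real_call_2.wav"]
fake_paths = ["cloned_voice_1.wav", "cloned_voice_2.wav"]

X = np.stack([audio_features(p) for p in real_paths + fake_paths])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new clip; a probability near 1.0 flags likely synthetic speech.
score = clf.predict_proba(audio_features("suspect_clip.wav").reshape(1, -1))[0, 1]
print(f"P(AI-generated) = {score:.2f}")
```

A production system would of course use far richer features and models, and would score streaming audio continuously rather than whole files, but the real-vs-synthetic classification step works along these lines.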
In an internal test, we ran the video through our detector. The result? A clear, unambiguous flag for AI-generated content. AI Aware confirmed what the BBC investigation found: the voice was fake.