Katja Bego

‘Deepfake’ videos get weaponised

Repost from 2018: 2019 will be the year that a malicious ‘deepfake’ video sparks a geopolitical incident, says Katja Bego. [I am archiving some of my writing in the spirit of digital preservation - this is a repost of a blog that originally appeared on the Nesta website in December 2018]



Today, we’re used to treating visual evidence as truth; “Seeing is believing,” as the saying goes. But deepfakes, a new AI-based technique for creating fake videos of individuals that are nearly indistinguishable from the real thing, might soon change that. In 2017, researchers at the University of Washington in the US trained a deep learning algorithm to mimic President Obama’s facial expressions and voice, allowing the team to create videos of the former president appearing to deliver speeches assembled from words spoken in old interviews given in very different contexts. More recently, China’s state news agency Xinhua introduced a news anchor generated entirely by AI. Though uncanny to look at, both of these examples are still relatively innocuous.

But there is a real risk of deepfakes being weaponised for more nefarious purposes. The technology is already used to create revenge pornography, in which a subject’s face is swapped onto existing footage, and there are myriad other ways disgruntled exes, vengeful employees or cybercriminals could abuse deepfakes to do significant harm. The stakes on the global stage are even higher.

We predict that within the next 12 months, the world will see the release of a highly convincing malicious fake video that could cause substantial damage to diplomatic relations between countries.

Rapid development

Though deepfakes are still a relatively new technology, they are evolving incredibly fast, making it harder and harder for the naked eye (or even digital forensics tools) to identify them. At the same time, they are becoming ever easier and cheaper to create. Where lifelike computer-generated imagery was until recently something we would only see in blockbuster movies, deepfakes can now be created by anyone with a consumer-grade computer and some technical skill. This broadening of access means we could soon be moving towards a future where millions of fake videos bombard us from every direction. In a world where a viral YouTube video—fake or real—can reach an audience of millions within a matter of hours, how do we stop particularly incendiary deepfakes from escalating conflict? How do we as a society protect the truth?

Diplomatic incidents

Imagine a deepfake video purporting to show a leading opposition politician talking about committing election fraud, a false declaration of war by a world leader, or a staged assassination of that same head of state (a 21st-century Franz Ferdinand moment?). Deepfakes not only open the door to conflict by misleading the public and discrediting our leaders, but also hand those same leaders plausible deniability over every controversial video, even those that are genuine. Already, over the past couple of years, we have seen conflict arise over the provenance and credibility of videos showing, for example, the sarin attacks in Syria or the recent fracas between CNN journalist Jim Acosta and the White House. When the existence of deepfakes allows us to dismiss just about anything as fake or altered, how will we hold our leaders and rogue actors to account?

The end of truth as we know it

Many researchers, from universities to the military, are working on tools that can spot AI-manipulated video. Digital forensics techniques are already under development that can, for example, rapidly detect whether a video contains altered pixels, or spot biological tells such as off-pace heartbeats or unnatural blinking patterns that suggest a video’s subject is not actually human (a rough illustration of the blinking check follows at the end of this post).

Though these solutions are promising, experts fear they are unlikely to keep up in the arms race with deepfake generation. Focusing only on technological fixes also risks ignoring the political and social dynamics underpinning deepfakes’ development. It is unfortunate, if perhaps unsurprising, that deepfakes have emerged at a particularly contentious and polarised time; they are a perfect exemplar of our post-truth age. Given that their power lies in sowing confusion, we need to make sure we do not repeat the mistakes of the “fake news” debate, which has become so politicised that we cannot even begin to find consensus on how the problem should be addressed. This is why we need action now. In 2019, perhaps after a deepfake sparks a real geopolitical incident, we will be forced to ask deeper questions about whether we can trust what we see, and to confront the consequences for society. I suspect we will only see real action once the damage has already been done.
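As a rough illustration of the blinking tell mentioned above, here is a minimal sketch in Python; it is not any particular lab’s detector. It assumes OpenCV, dlib and dlib’s publicly available 68-point facial landmark model, and uses the eye aspect ratio heuristic from Soukupová and Čech’s blink-detection work. The threshold value and the idea that an implausibly low blink rate flags synthetic footage are simplifying assumptions.

```python
# Minimal sketch: estimate a speaker's blink rate as a crude deepfake "tell".
# Assumes: pip install opencv-python dlib numpy, plus dlib's 68-point
# landmark model file downloaded locally (the path below is an assumption).
import cv2
import dlib
import numpy as np

EAR_THRESHOLD = 0.21  # eye aspect ratio below this ~= "eye closed" (heuristic)
MODEL_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed local path

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(MODEL_PATH)

def eye_aspect_ratio(eye):
    """Ratio of vertical eye openness to horizontal eye width (6 landmarks)."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blink_rate(video_path):
    """Return blinks per minute for the first detected face in the video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    blinks, eye_closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            continue
        shape = predictor(gray, faces[0])
        pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
        # Landmarks 36-41 and 42-47 outline the left and right eyes.
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2
        if ear < EAR_THRESHOLD:
            eye_closed = True
        elif eye_closed:  # eye has reopened: count one completed blink
            blinks += 1
            eye_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People typically blink roughly 15-20 times a minute on camera; a rate near
# zero is one weak signal (easily defeated) that footage may be synthetic.
```

This kind of heuristic also illustrates why experts expect an arms race: as soon as a tell like blinking is published, generators can be retrained on footage that includes it, and the signal disappears.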
