All opinion articles are the opinion of the author and not necessarily of American Military News. If you are interested in submitting an Op-Ed, please email [email protected].
_______
“In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear missiles, and long-range missiles. Today, you just need access to our Internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply.” (US Senator Marco Rubio).
In today’s world, national power is measured by a nation’s capacity to take advantage of technological change. In other words, the ability to innovate and to adopt new technology is now the cornerstone of growth, and a constituent of national power.
It should come as no surprise, then, that in assessing the military technology of its main competitors, China and Russia, a US congressional report has placed heavy emphasis on artificial intelligence (AI).
A deepfake is an AI-based simulation of reality: a deep-learning application digitally replaces one person’s face, or voice, with another’s in recorded video or audio.
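For readers curious about the mechanics, the sketch below illustrates, in deliberately minimal form, the shared-encoder/dual-decoder autoencoder idea behind early face-swap deepfakes. All class names, layer sizes, and tensors here are illustrative assumptions, not a working deepfake system:

```python
# Minimal conceptual sketch of a face-swap autoencoder (illustrative only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 face crop to a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Identity-specific decoder: renders one person's face from the latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real face crop
# The "swap": encode person A's pose and expression, decode with B's decoder,
# yielding person B's face performing A's expression.
swapped = decoder_b(encoder(face_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

In real systems the shared encoder is trained jointly with both decoders on many face crops of each person; the swap works because the encoder learns identity-agnostic features such as pose, expression, and lighting, while each decoder learns to render one specific face.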
The problem is that deepfake videos have been largely political, raising concerns about the negative impact they may have on democratic processes. Nasu (2022) cautions that when used to deceive the media and the general public, a deepfake can be considered a tool of information warfare. It is not difficult to see that deepfake technology widens the gap between the authority of global and national institutions and the extent to which they are “contested by the others” (Fioretos and Tallberg, 2021).
Many leaders have already been targeted by deepfake technology: US President Joe Biden, former US President Donald Trump, US Speaker of the House Nancy Pelosi, and Malaysian Minister of Economic Affairs Azmin Ali, to name a few.
Loualalen and Aelbrecht (2021) warn that deepfakes are widely used not only to interfere with electoral processes, but also to create political chaos or to conduct a particular kind of military action based on psychological operations (PSYOPS), with serious consequences for national security.
From the perspective of military doctrine, it is hard to imagine a tool more effective than a deepfake for targeting an adversary’s decision-making, halting the advance of troops, or showing a leader collaborating with the enemy or giving fake orders. Likewise, a deepfake may undermine the success of a military operation by feeding it inaccurate intelligence. A deepfake may also support a plan to feign surrender and then attack the receiving force, the ultimate aim being to influence individuals and thereby the behaviour of (military) organisations or governments.
The use of deepfakes may also lend new perspectives to the relationship between war and globalisation, suggest ways in which the nature of warfare may change, and supply further arguments that classical deterrence theory is “badly flawed”. It might also cast fresh light on the relationship between authoritarian governments and technological innovation.
Analysts have pointed out that misleading information can harm the relationship between military forces and a foreign civilian population. A deepfake video designed to damage that relationship and inflame a local population might falsely depict the military assaulting and killing civilians or committing mass atrocities. Furthermore, using unknown actors to produce a deepfake video impersonating military or intelligence officers, ordering the sharing of sensitive information or some action that would leave forces vulnerable, can cause serious military harm. Hostile foreign regimes may also use deepfake videos to create political or military instability by showing leaders shouting offensive phrases or ordering atrocities, as was the case in Gabon in 2019, when the mere rumour that a presidential address was a deepfake first created political instability and eventually led to military unrest.
The military dimension involves both the challenge this emerging disruptive technology poses and the impact it may have on nuclear weapons decision-making: on the decision makers themselves, on Nuclear Command, Control and Communications (NC3), and on nuclear doctrine and the signalling associated with it. Hence it seems logical to classify deepfakes as a weapon of mass distortion, given their inherent ability to degrade a state’s NC3 system considerably by reducing its capacity to assess a situation objectively.
Used in this way, deepfake technology serves to undermine confidence among states by making them distrust the analysis and outputs offered by digital security platforms, which in turn raises concerns over nuclear weapons decision-making.
With the credibility of a country’s intelligence community tarnished, political and military leaders might be prompted to make decisions with fatal outcomes. During the ongoing war in Ukraine, deepfake videos of Russia’s Vladimir Putin and Ukraine’s Volodymyr Zelenskyy emerged in March 2022, carrying deceptive messages of support for the adversary. Although clearly false, they spread almost instantly online. In a similar vein, Chesney and Citron (2018) describe a scenario in which a highly authentic deepfake audio file purports to capture a private conversation between President Putin and President Trump at their Helsinki meeting, with Trump promising that the US military would not intervene to protect NATO allies in the event of Russian subversion.
As a final point, malicious deepfakes have the potential to seriously undermine national and global security, in peacetime as well as in war. There are four particularly important and sensitive areas of engagement: autonomous nuclear weapons systems; nuclear command, control, and communications systems; cybersecurity; and disinformation. Even laws narrowly tailored to anticipate, accommodate, and mitigate the harms deepfakes can produce would leave much to be desired.
Now would be a good time to reconsider Barry Posen’s advice to states: “They must assume the worst because the worst is possible.”
Vladimir Krulj is a Fellow at the Institute of Economic Affairs in London and Senior Advisor at FIPCOR France. This Op-Ed is an extract from Dr. Krulj’s King’s College London essay.