American University School of International Service | Security, Innovation, and New Technology

Technology

Deepfake Technology: Assessing Security Risk

By Jack Cook

Imagine scrolling through your favorite social media feed when something catches your eye: a short video clip of a familiar face. Businessman turned celebrity Elon Musk is promoting a new cryptocurrency investment. All you need to do is transfer funds to a crypto wallet and the returns will be guaranteed. After all, you've heard stories from friends who have made money from Musk's other endorsements.

This situation occurred recently, and a small number of investors jumped at the opportunity after seeing the interview clip of Elon Musk. Unfortunately for them, it was a deepfake. Deepfakes, fabricated videos which imitate the likeness of an individual, can take on many forms. Often, these include creating an image of a person that does not exist, creating a video of someone saying or doing something they have never done, or synthesizing a person's voice in an audio file. Although deepfake technology is relatively primitive, bad actors have increasingly used it for malicious purposes. As the technology progresses, people will likely continue to use it for reputation tarnishing, financial gain, and harming state security. Additionally, academics and policymakers show varying levels of concern about how deepfakes could harm society. It remains to be seen whether social media giants and governments will holistically address the misuse of deepfake technology; however, some efforts are underway.

Deepfakes are potentially threatening to the individual and to the state. Both types of threats use the same communication vector and the same technology. They also provoke similar societal responses. However, differences appear when thinking through the implications of misuse. Solutions to the deepfake problem will likely differ between the two categories as governments and social media platforms weigh its ultimate impact.

The vast majority of threats to the individual involve nonconsensual deepfake pornography. In fact, the term "deepfake" originated from a Reddit user with the same username. This user introduced the technology to the mainstream through the creation and sharing of fabricated pornographic videos. Usually, these videos contain the false likeness of celebrity women. Although counterfeit, these forged pornographic videos have real consequences. Often, they inflict psychological harm on the victim, reduce employability, and damage relationships. Bad actors have also used this technique to threaten and intimidate journalists, politicians, and other semi-public figures.

Furthermore, cyber criminals use deepfake technology to conduct online fraud. In one reported case, criminals utilized artificially generated audio to match an energy company CEO's voice. When the fake "CEO" called an employee to wire money, his slight German accent and voice cadence matched perfectly. The employee wired $243,000 to the cybercriminal before realizing his mistake. Whether deepfake fraud presents itself as the Elon Musk video mentioned earlier or the phone call described above, the result is the same: real people are losing money to deepfake-enabled fraud online.

Threats to national security are less frequent, though in theory they may occur in peacetime or war. Distinguishing a threat to national security comes down to understanding the creator's intentions. For a wartime example, consider what occurred during the early stages of Russia's invasion of Ukraine. Supposed Russian actors disseminated a deepfake video that showed Ukrainian President Volodymyr Zelensky telling his military to stand down. Social media companies quickly removed the video from circulation; however, its immediate impact is unknown. At the very least, it contributed to the barrage of misinformation spread across Ukraine as Russia invaded the country. Like other forms of misinformation, peacetime deepfake threats to national security could take the form of political deception. Academics and policymakers have asserted state-sponsored deepfakes could attempt to sway public opinion about a politician, stoke violence, or erode public trust. For example, during the 2020 U.S. election, observers warned of a potential for deepfake video proliferation on social media. Fortunately, this did not seem to occur.

Academics who study deepfake technology are split regarding its overall impact on society. Those who are more concerned about the technology's potential for misuse study how deepfakes directly impact consumers' actions and attitudes, while those who are less concerned study how the technology contributes to the larger misinformation space.

Academics who are more concerned argue deepfake videos are capable of swaying public opinion when deployed with the right message to the right audience. A recent study highlighted how microtargeted deepfakes (fake videos deployed to reach a specific demographic) could impact groups' political attitudes. The research showed deepfake videos were more apt than other types of online disinformation to sway consumers' political attitudes. Another study illustrates that those who hold controversial views aligning with the content of a deepfake are more likely to share the content online. The researchers found that a "single brief exposure to a deepfake can influence implicit attitudes, explicit attitudes, and sharing intentions." Overall, these studies show deepfakes have the capability to change consumer perceptions when shown to a targeted audience, and the capability to reinforce existing perceptions. Research is still underway to understand the extent to which deepfakes could cause consumers to change voting habits and potentially disrupt democratic elections.

Conversely, academics who are less concerned with deepfakes argue the technology is just as disruptive as other forms of misinformation online, but no more so. They claim deepfakes much more often harm individuals (through pornography) than governments or greater society. Counter to other studies, these scholars have been unable to prove deepfakes are more manipulative than other forms of fake news. For example, one study found no increase of false memories in consumers who viewed deepfakes versus other types of misinformation (like simple text or images). Additional studies found deepfakes are no more effective at tarnishing a politician's reputation than other forms of misinformation.

On a different note, some academics believe deepfakes and other forms of misinformation contribute to the problem of the so-called liar's dividend (if anything can be faked, nothing has to be real). Professor and deepfake expert Hany Farid refers to the liar's dividend as his "biggest concern" when it comes to widespread usage of deepfakes. Additionally, researchers have quantitatively shown that deepfakes sow uncertainty and, in turn, reduce trust in news seen online.

Despite a lack of consensus on how deepfakes impact society, policymakers and social media giants have attempted to quell the technology's negative repercussions. Technology companies like Facebook and Google are spending resources to detect deepfakes through efforts such as Facebook's Deepfake Detection Challenge and Google's release of datasets for training detection models. Additionally, some researchers believe a form of online content authentication (a way to verify all posted content) could solve problems associated with deepfake dissemination.
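To make the content-authentication idea concrete, here is a minimal sketch of the verification flow, assuming a publisher signs a media file's hash at creation time so platforms can later check whether the content was altered. The key name and functions are hypothetical illustrations, not any specific standard; real provenance systems use public-key signatures and signed metadata rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical publisher signing key, for illustration only.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag binding the key to the content's hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches its original provenance tag."""
    return hmac.compare_digest(sign_content(content), tag)

# A platform receiving the original bytes and tag can confirm authenticity;
# any tampering (e.g. a deepfaked substitute) breaks verification.
original = b"authentic video bytes"
tag = sign_content(original)
print(verify_content(original, tag))            # prints True
print(verify_content(b"deepfaked bytes", tag))  # prints False
```

The design point is that authentication flips the burden of proof: instead of trying to detect fakes after the fact, platforms would verify provenance on upload, and unverifiable content would be treated with suspicion by default.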

Policymakers are also attempting to reduce the impact of harmful deepfakes. For example, several states have passed laws to provide legal recourse for victims of deepfake pornography. The Department of Homeland Security (DHS) has conducted threat assessments and other forms of research on deepfake technology. Congress has also introduced legislation which would create a National Deepfake and Digital Provenance Task Force to monitor deepfakes and bring together academia, government, and industry experts. Although promising, the bill has yet to become law or garner serious support.

Overall, deepfake technology is still in its nascent stage. As the technology improves, we will likely begin to see more solutions for addressing its misuse. If additional evidence reveals deepfakes are more manipulative and harmful than other types of misinformation, then government intervention to stop their spread online may be necessary. Ultimately, policy solutions offered today are unlikely to be effective due to social media's complex and fast-changing environment. Without concrete and repeatable quantification of deepfake technology's impact on society, we are unlikely to see the issue properly addressed.


About the Author:

Jack Cook is a current graduate student in the School of International Service's Global Governance, Politics, and Security program. Regionally, he is interested in the geopolitics of the Middle East and North Africa. Academically, his interests include cyber operations, counter-terrorism, and 21st century authoritarianism.


*THE VIEWS EXPRESSED HERE ARE STRICTLY THOSE OF THE AUTHOR AND DO NOT NECESSARILY REPRESENT THOSE OF THE CENTER OR ANY OTHER PERSON OR ENTITY AT AMERICAN UNIVERSITY.
