Deepfakes: A Nightmare Scenario for Public Administration

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Tyler Sova
October 28, 2020

“The party told you to reject the evidence of your eyes and ears. It was their final, most essential command.” This prescient line from George Orwell’s 1984 is edging closer to reality. But instead of a single, nebulous government authority, today’s misinformation comes from everywhere around the world.

Deepfakes are generally manipulated videos in which the face of a politician, celebrity or other public figure is transposed onto someone else’s. Deepfakes are created using machine learning. On a basic level, a computer is fed hours and hours of footage of someone’s face or voice, and AI is then used to create a composite of that face or voice. It gets complicated quickly; advanced algorithms called generative adversarial networks (GANs) can create fake faces indiscernible from real ones. The technology is complex and could be expounded upon for pages, but this article focuses on the immediate uses of deepfakes. For example, in one well-known deepfake, Jim Carrey’s face is transposed onto Jack Nicholson’s character in The Shining. It would be difficult to tell it was fake if we didn’t already know that Jack Nicholson originally played the role. The lines will only become more blurred.

For now, deepfakes have mostly been used as a form of entertainment: goofy ideas like putting actor Nicolas Cage’s face on characters in shows and movies in which he never appeared. In true internet form, almost immediately after deepfakes appeared around 2017, they were put to nefarious purposes, including porn. It began with easy targets like celebrities, who generally have thousands of images and hours of video all over the internet, but it has since moved on to the disturbing territory of ordinary people. In the age of social media, many women have hundreds, if not thousands, of pictures of themselves online. AI software can use those pictures to create lifelike video of any person who has enough images for it to accurately replicate their face. This type of porn almost exclusively targets women. It is just as upsetting that many of the online tools needed to create deepfakes are free, which is democratizing the deepfake space. As the tools and knowledge spread, deepfakes are predicted to multiply exponentially.

With video, at least, there are still signs and tells that the human eye can spot to raise suspicion about a clip’s origin. There are also audio deepfakes. These work much like the video variety, but instead machine learning is applied to hours of voice recordings to create an eerily accurate, and totally fake, voice recording. For example, a company in the UK was scammed out of $243,000 when its CEO thought he was talking to the chief executive of the parent company, who asked for urgent funds. It turned out to be a sophisticated AI reproduction of the executive’s voice. Voice deepfakes may be more lucrative for criminals and those looking to create chaos, since it is easier to dupe people with a faked voice. They can access accounts that use voice verification, call people who may not be able to tell the difference or create messages that sound exactly like a politician but aren’t. These tools, if not mitigated, will leave many people questioning what is real and what is fake, with disastrous effects.

Without new technologies to fight this new technology, deepfakes will become a crisis for public administrators. Criminals and creeps have adopted the technology first, but others with different motivations will follow. Whether for politics or some other reason, people will create deepfakes that are more sophisticated and more damaging. Imagine a deepfake of a president issuing a video proclamation of an attack, fake phone calls spreading misinformation about elections or a perfect blend of video and audio depicting scenarios that never happened. What happens to accountability? Public figures can use deepfakes as a way to deny troubling things they have said or done. They can simply say, “That’s not me, it’s a deepfake,” and only trained tech professionals will be able to tell. All of this is developing at a time when it is becoming more and more difficult to figure out what is true and what is false.

So far, we have been spared any significant deepfake debacles in 2020—at least any that we know of. This seems to be largely because old methods of disinformation are working better than ever: clever memes, photoshopped images and edited videos are effective enough, while deepfakes remain technologically challenging and time consuming to create. It is important that everyone remain vigilant about the media they consume and treat new information with healthy skepticism.

Author: Tyler Sova received his Bachelor’s in Finance from Duquesne University and his MPA from West Chester University. He is a Financial Management Analyst for the Federal Government and is involved with the Pennsylvania Keystone Chapter of ASPA. He can be reached at [email protected]
