Deepfake is the term used when artificial intelligence is applied to the creation and manipulation of images, audio and text. The results are called ‘deepfakes’: face-swapping videos, but also generated voices and generated text. It falls under the heading of ‘generative software’. Read all about deepfakes in this blog.
What is Deepfake?
Deepfakes are images, sounds and texts created or manipulated by artificially intelligent software. In this article you can read all about them: how they are made, what the examples are and what the risks of deepfakes are.
Image manipulation is not limited to our digital era. There are many examples in world history where it later became apparent that images had been manipulated. US President Lincoln, for example, had an engraving made that showed his head on the body of John C. Calhoun, the US Vice President in the first half of the 19th century. Reportedly, this was done to make Lincoln’s appearance look more “presidential.”
In 1990 Adobe released the Photoshop program, and photoshopping has since become a verb. Nowadays almost everyone owns a smartphone with a good built-in camera, complete with photo filters. This makes it more likely that you will encounter a manipulated photo online than the original.
Realistically edited videos were until recently reserved for Hollywood studios, but nowadays they are within everyone’s reach. That is what makes deepfake such a prominent development. In recent months you have read more and more news reports about deepfake technology. So what is the definition of a deepfake?
“Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.” (wikipedia.org)
What are Deepfakes?
According to Wikipedia’s definition, deepfakes are texts, images, videos and audio edits made by artificially intelligent software. The term deepfake combines the English words ‘deep’ and ‘fake’: ‘deep’ refers to the artificially intelligent deep learning networks, ‘fake’ to the fact that the result is not real. Deepfake content can be a lot of fun.
Deepfake technology is also a threat. It can be used in many ways to manipulate opinions, blackmail people or damage reputations. We are entering an online era in which we can no longer trust our eyes and ears.
Why do we hear so much about these deepfake videos? How can this development move so fast? Why is the number of deepfake videos increasing so much? The answer is relatively simple: all signals are green for this development to accelerate. The videos are relatively easy to create (nowadays even with smartphone apps), easy to distribute (via social media and WhatsApp), and there is a large audience willing to share crazy, high-profile, juicy videos.
Deepfake technology takes the concept of fake information to the next level. Information is fully available digitally and deepfake software is increasingly easy to use (now also via smartphone apps). Video, audio and texts are easy to manipulate. What was previously only made in Hollywood is now within everyone’s reach. That is why we must quickly learn to recognize what deepfake is, especially now that we are sharing (fake) information with each other worldwide at lightning speed.
Generative Adversarial Networks
How are deepfakes made? Usually by using generative adversarial networks. In recent decades, the development of artificial intelligence has advanced very quickly. A deepfake technique that is rapidly emerging is the generative adversarial network, or GAN. A GAN consists of two networks. The first neural network, the generator, creates new digital content. The other, the discriminator, judges whether that new content appears real. They work together in mutual competition.
In this way the two systems push each other to ever greater heights. The generator keeps creating content until the discriminator says, “That looks right.” The content that the discriminator ultimately approves can sometimes no longer be distinguished from the real thing; the dividing line between real and fake keeps getting thinner.
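The adversarial loop described above can be illustrated with a deliberately tiny sketch. Real GANs used for deepfakes are deep neural networks trained on images; here, purely as an illustration of the generator-versus-discriminator idea, the “real data” is just numbers from a one-dimensional Gaussian, the generator only learns an offset, and the discriminator is a single logistic unit. All names and parameters are our own illustrative choices, not part of any deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: a 1-D Gaussian.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator: g(z) = z + b. For simplicity it only learns the offset b,
# starting far away from the real distribution.
b = 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the estimated probability
# that a sample x is real rather than generated.
w, c = 0.0, 0.0

lr, batch = 0.05, 32
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    x_fake = rng.normal(0.0, 1.0, batch) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # For a logistic unit, d(loss)/d(logit) = prediction - label.
    w -= lr * np.mean((d_real - 1.0) * x_real + d_fake * x_fake)
    c -= lr * np.mean((d_real - 1.0) + d_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    x_fake = rng.normal(0.0, 1.0, batch) + b
    d_fake = sigmoid(w * x_fake + c)
    # Gradient of -log D(x_fake) with respect to b, chained through D.
    b -= lr * np.mean(-(1.0 - d_fake) * w)

print(f"generator offset b = {b:.2f} (real mean = {REAL_MEAN})")
```

After training, the generator’s samples sit roughly on top of the real distribution, at which point the discriminator can no longer tell them apart; that equilibrium is exactly the “can no longer be distinguished from real” point the text describes.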
Types of Deepfake
Of all forms of generative deepfake software, most attention goes to deepfake images. Generative AI software can, for example, swap faces in face-swap videos or turn an ordinary photo into a nude photo. As the technology develops, it becomes increasingly difficult to distinguish fake from real.
Not only images can be faked; sound, such as music and speech, can be too. Recently, corporate financial fraud was committed with a cloned director’s voice: a successful order for a transaction of around $243,000.
Voice cloning, the replication of someone’s voice, is becoming increasingly convincing.
Generative AI systems can also handle text. Digital text is widely available, from online newspaper archives to complete libraries full of e-books, and systems are trained on it. A system does not need to understand the text in order to learn from it and generate new texts. In 2019, the generated texts are not yet of perfect quality, certainly not the longer passages. But the technology is getting better.
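The point that a system needs no understanding of text to generate new text can be made with the simplest possible model: a word-level bigram chain that only records which word tends to follow which, then samples from those counts. This toy corpus and the function names are our own illustration; real text generators use far larger corpora and neural language models.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; real systems train on huge text archives.
corpus = ("the system does not need to understand the text "
          "the system only learns which word tends to follow which").split()

# For every word, record which words were observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

The output is locally plausible word order with no meaning behind it, which is exactly why longer generated passages fall apart sooner than short ones.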
Risks of Deepfake
Fake information can be disruptive. Think of blackmail and reputational damage in politics, business and legal practice. Anyone can say, “I didn’t say that, it comes from a computer.” People will be more likely to think: that information is probably fake. If you can no longer trust information, society can become indifferent. This can threaten the democratic constitutional state.
Suppose a real audio recording were to emerge from the private 2018 meeting in the Finnish capital Helsinki between US President Trump and his Russian counterpart Putin. This meeting took place behind closed doors, without assistants or note takers. If the recording were to show that the US President can be blackmailed, he could now, in 2019, dismiss it as deepfake technology. No harm done.
Some are already talking about an infocalypse: the destruction of the reliability of information. Photos, videos, sound recordings, human voices, written texts and reviews that we see everywhere: they can all be fake.
What problems can arise now that deepfake videos, texts and audio fragments can be created and distributed relatively easily and quickly? Below is a short, incomplete list.
Unrest and Polarization
Suppose a deepfake video shows up of an important Dutch politician who appears to be taking bribes. Or a video in which an FBI employee says that he would like to arrest someone from the Trump family for alleged ties with Russia and that he is fabricating the evidence himself. Or Russian fake videos featuring a staged altercation between American politicians, an anti-America demonstration in Saudi Arabia, or American soldiers burning a Quran. There are many examples in which staged videos or sound recordings would be guaranteed to lead to geopolitical unrest or social polarization.
It is not inconceivable that politicians, journalists, foreign military personnel, directors of large companies, whistleblowers and financial managers will face blackmail with deepfake videos in the future.
Even when a deepfake video is of mediocre quality, the person depicted obviously does not want it to be distributed. The mere suggestion of unethical, criminal or deviant sexual behavior can lead to considerable reputational damage and shame.
Clearing yourself of all blame then takes a lot of time and energy, because suggestions are persistent and can haunt someone for years. After all, outsiders think that where there is smoke, there must also be fire. With deepfake technology, malicious actors have a very powerful blackmail tool.
Reputation Damage
One of the most obvious effects of deepfake technology is damage to reputation. Activist environmental groups could put directors of biotechnology companies in a bad light. Commercial companies could topple a competitor with the technology.
On the night before an IPO, a video of a financial director may show up in which he allegedly admits that there is far less cash on the balance sheet than the official papers state. On the eve of an election, a video may emerge in which a politician uses sexist, racist or aggressive language. If a rectification follows at all, it comes too late to repair the electoral damage.
Certainly people for whom reputation is very important will have to confront the dangers of, and solutions to, this technology in the short term. Incidentally, at the very beginning of the development of deepfake technology, a particularly vicious form of reputation damage already manifested itself on the Reddit forum: deepfake porn. With that application, women’s faces were pasted onto the bodies of porn actresses.
Reddit intervened and removed this ‘non-consensual pornography’ section, but that does not mean the application no longer exists. As the use of such deepfake technology spreads, more and more women will become victims. There are already websites that place female celebrities in pornographic settings.