The editors of the Oxford Dictionaries chose “post-truth” as their word of the year for 2016. The term describes circumstances in which objective facts matter less for shaping public opinion than appeals to emotion and personal belief. The word appeared especially often in English-language publications after the US presidential election, amid a flood of fake news after which the truth itself hardly seemed to matter. According to a number of analysts, the spread of false, defamatory news became one of the reasons for Hillary Clinton’s defeat in the election. One of the main sources of false news was Facebook’s algorithmic news feed.

Against this background, news from developers in the field of artificial intelligence passed almost unnoticed. We have grown accustomed to neural networks that draw pictures, generate a person’s photo from a verbal description, and compose music. They do ever more, and each time they do it better. But the most interesting development is that machines have learned to create fakes.

Influence of fakes

During the pre-election race in the United States, fake news on social networks outnumbered posts spreading truthful information, because the fakes better matched readers’ expectations or were simply more exciting. After the election, Facebook hired independent fact-checkers, who began tagging unverified stories to warn users. The combination of growing political polarization and the habit of reading mostly headlines produces a cumulative effect. Counterfeit news is also often distributed through fake news sites, and from there it frequently finds its way into mainstream media chasing user traffic. And nothing attracts traffic like a catchy headline.

In reality, though, the impact of fakes is not limited to politics. There are many examples of false news influencing society.


An Australian video production company spent two years producing fake viral videos that racked up hundreds of millions of views.

People believed a carefully crafted fake about Stalin’s face allegedly appearing in the Moscow metro, even though it was published on April 1st.

In 1992, millionaire Ilya Medkov began paying major news agencies in Russia and the CIS (including RIA Novosti, Interfax, and ITAR-TASS). In January 1993, ITAR-TASS pushed a false report about an accident at the Leningrad nuclear power plant into the media. As a result, shares of leading Scandinavian companies fell in price, and before the report was retracted, Medkov’s agents bought up the most attractive shares of Swedish, Finnish, and Norwegian companies.

On June 26, 2017, the price of Ethereum, the second-largest cryptocurrency by capitalization, plunged sharply: the exchanges reacted to rumors of the tragic death of Ethereum’s creator, Vitalik Buterin. The first “urgent news” appeared on the anonymous imageboard 4chan, which, frankly, is not the most reliable of sources.

Wikipedia stated that Vitalik “was a Russian programmer,” and the tabloid press picked up the story. As a result, the price of Ether fell by 13%, from about $289 to $252. It began to recover as soon as Vitalik himself debunked the news.
There is no evidence that anyone deliberately created the fake to profit from the swing in the cryptocurrency’s price. What is indisputable is that an invented story without a single confirmed fact can have a powerful impact on people.

The simulation of reality


In a video, French singer Françoise Hardy repeats the words of Kellyanne Conway, adviser to US President Donald Trump, who became famous for her remark about “alternative facts.” What makes it interesting is that Hardy is in fact 73 years old, while in the video she looks about twenty.

The clip, Alternative Face v1.1, was created by the German artist Mario Klingemann. He took an audio interview with Conway and old music videos of Hardy, then applied a generative adversarial network (GAN), which assembled unique video content from a multitude of frames taken from the singer’s various clips, and overlaid the audio track of Conway’s comments on top.

In this case the forgery is easy to recognize, but one can go further and alter the audio as well. People trust images and sound recordings more readily than plain text. But how do you forge the sound of a human voice? GAN-based systems can learn the statistical characteristics of an audio recording and then reproduce them in a different context with millisecond accuracy. It is enough to type in the text that the neural network should speak, and you get a plausible rendition. The Canadian startup Lyrebird has published algorithms that can imitate any person’s voice from a one-minute audio sample. To demonstrate the possibilities, the company released a conversation between Obama, Trump, and Clinton; all of the voices, of course, were fake.
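To make the phrase “statistical characteristics of audio” a little more concrete, here is a minimal sketch of the kind of spectral profile a voice sample can be reduced to, using the librosa library. This is not Lyrebird’s pipeline; the file name and the choice of MFCC features are illustrative assumptions.

```python
# A minimal sketch: reduce ~1 minute of speech to a crude statistical profile.
# NOT Lyrebird's actual algorithm; it only illustrates the kind of
# "statistical characteristics of audio" a voice-cloning model learns from.
# The file name is a placeholder.
import librosa
import numpy as np

# Load roughly one minute of speech (the sample length Lyrebird cites).
y, sr = librosa.load("speaker_sample.wav", sr=16000, duration=60.0)

# Mel-frequency cepstral coefficients: a compact spectral "fingerprint"
# of the voice, computed frame by frame.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Per-coefficient mean and spread: a rough statistical profile of the speaker.
profile = np.stack([mfcc.mean(axis=1), mfcc.std(axis=1)], axis=1)
print("frames analysed:", mfcc.shape[1])
print("voice profile (mean, std per coefficient):")
print(profile.round(2))
```

A real voice-cloning system learns far richer representations than these summary statistics, but the principle is the same: capture the regularities of one voice and then impose them on new content.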

DeepMind, Baidu’s deep learning research institute, and the Montreal Institute for Learning Algorithms (MILA) are already working on highly realistic text-to-speech algorithms. The result is not perfect yet; you can still quickly distinguish the recreated voice from the original, but the resemblance is there. In addition, the network can change the emotion in a voice, adding anger or sadness depending on the situation.

Generating images

Developer Christopher Hesse created a service that uses machine learning to turn sketches of just a few lines into color photographs. Yes, this is the site that draws cats. The cats come out so-so; it is hard to mistake them for real ones.
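Services like this are typically trained on pairs of images: a real photo and an automatically extracted line drawing of it, so the network can learn the reverse mapping from sketch to photo. Below is a minimal sketch of that preprocessing step using OpenCV; it is an illustration of the general approach, not Hesse’s actual code, and the file names are placeholders.

```python
# A minimal sketch of preparing one "sketch -> photo" training pair:
# a real photograph is reduced to an edge map, and an image-to-image
# network then learns to go the other way. Illustrative only; not the
# actual preprocessing used by Hesse's service.
import cv2

photo = cv2.imread("cat_photo.jpg")              # real photograph
gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # suppress noise before edge detection
edges = cv2.Canny(blurred, 100, 200)             # thin white edges on black

# Invert so the result looks like a line drawing: dark strokes on white.
sketch = cv2.bitwise_not(edges)
cv2.imwrite("cat_sketch.png", sketch)
# The (sketch, photo) pair becomes one training example for the generator.
```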

Programmer Alexia Jolicoeur-Martineau managed to generate cats that look just like real ones, using DCGAN, a deep convolutional generative adversarial network. A DCGAN produces unique photorealistic images by pitting two deep neural networks against each other: one generates images, while the other tries to distinguish them from real photographs.
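The following is a minimal PyTorch sketch of that adversarial setup. The layer sizes, the 64x64 image resolution, and the hyperparameters are illustrative assumptions, not the configuration used for the cat experiments.

```python
# A minimal DCGAN-style sketch: a generator makes images from noise while a
# discriminator learns to separate them from real photos, and the two are
# trained against each other. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

Z_DIM = 100  # size of the random noise vector fed to the generator

generator = nn.Sequential(                       # noise -> 64x64 RGB image
    nn.ConvTranspose2d(Z_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
)

discriminator = nn.Sequential(                   # 64x64 RGB image -> realness score
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 8), nn.Flatten(),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(real_images):
    """One adversarial update; real_images is a (batch, 3, 64, 64) tensor."""
    batch = real_images.size(0)
    noise = torch.randn(batch, Z_DIM, 1, 1)
    fake_images = generator(noise)

    # Discriminator: push real images toward label 1, generated ones toward 0.
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its images as real.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Smoke test with random tensors standing in for a real cat dataset.
print(train_step(torch.randn(8, 3, 64, 64)))
```

Trained on tens of thousands of cat photos instead of random tensors, this two-player game is what gradually pushes the generated images toward photorealism.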

The University of Washington has developed an algorithm that can overlay audio onto video of a person speaking with precise lip synchronization. The algorithm was trained on 17 hours of Barack Obama’s video addresses. The neural network learned to synthesize movements of Obama’s lips that match the pronunciation of the desired words. So far, it can only generate video of words the person has actually said at some point.

Now look at the work of an algorithm that can transfer one person’s facial expressions onto another person’s face on the fly. In a video with Trump (there are also demonstrations with Bush, Putin, and Obama), an actor’s grimaces, captured in a studio, are mapped onto the footage, and the result is a grimacing Trump. The algorithm is used in Face2Face. The technology is similar in principle to the Smile Vector bot, which adds a smile to people in photos. So it is already possible to create a realistic video in which a well-known person states invented facts. The speech can be cut together from previous appearances to assemble any message. But soon even such tricks will be unnecessary: the network will put any text into the mouth of the fake character with perfect accuracy.

Consequences

From a practical point of view, these technologies can do a lot of good. For example, you can improve the quality of video conferencing by synthesizing frames that drop out of the video stream, or even synthesize entire missed words, keeping a conversation intelligible over a poor connection. You can fully “digitize” an actor and insert a realistic copy into films and games. But what will happen to the news? It is likely that in the coming years the generation of fakes will reach a new level, and new methods of fighting them will appear as well. For example, comparing a photograph against the known conditions at the location (wind speed, the angle of shadows, the level of illumination, and so on) can help expose a fake.
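As a toy illustration of such a physical cross-check: given the sun’s elevation at the claimed time and place, an object of known height casts a shadow of predictable length, and a large mismatch is a red flag. The numbers and the tolerance below are illustrative assumptions, not part of any real forensic tool.

```python
# Toy photo-forensics check: for a known sun elevation, a vertical object of
# height h should cast a shadow of length h / tan(elevation) on flat ground.
# A large mismatch between predicted and measured shadow length is suspicious.
# Sample values and the 20% tolerance are illustrative assumptions.
import math

def expected_shadow_length(object_height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast on flat ground by a vertical object."""
    return object_height_m / math.tan(math.radians(sun_elevation_deg))

def shadow_is_consistent(object_height_m: float, sun_elevation_deg: float,
                         measured_shadow_m: float, tolerance: float = 0.2) -> bool:
    expected = expected_shadow_length(object_height_m, sun_elevation_deg)
    return abs(measured_shadow_m - expected) <= tolerance * expected

# A 1.8 m person photographed with the sun 35 degrees above the horizon
# should cast a shadow of roughly 2.6 m; a 1.0 m shadow would be suspicious.
print(round(expected_shadow_length(1.8, 35.0), 2))   # ~2.57
print(shadow_is_consistent(1.8, 35.0, 2.5))          # True
print(shadow_is_consistent(1.8, 35.0, 1.0))          # False
```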

In 2014, NVIDIA engineers reconstructed the scene of the Moon landing as accurately as they could, based on the documentary evidence preserved from that time. All the physical and optical properties of the objects were taken into account to work out how light reflects off the various materials and behaves under those conditions. As a result, they were able to convincingly confirm the authenticity of NASA’s photographs. NVIDIA showed that, with proper preparation, it is possible to prove (or refute) the genuineness of even the most complex photographic work created decades ago. Last year, the US Defense Advanced Research Projects Agency (DARPA) launched a four-year project to build an open system, Media Forensics, capable of identifying photographs that have been processed or distorted in any way. Neural networks can not only alter original content, but also spot the highest-quality fakes. Time will tell who eventually wins this technological race.