Take a look at this version of Rick Astley’s “Never Gonna Give You Up” posted by Rick himself. With the help of an artist named Mario Klingemann, he rearranged the song in order of pitch. The video may look disjointed and weird, but notice how the overall pitch rises over the course of three and a half minutes.
What was the point? To show how sounds can be manipulated in interesting ways by AI software. Okay, so what does that have to do with fake news? The Economist takes it from here.
The video is a particularly obvious example of generated media, which uses quick and basic techniques. More sophisticated technology is on the verge of being able to generate credible video and audio of anyone saying anything. This is down to progress in an artificial intelligence (AI) technique called machine learning, which allows for the generation of imagery and audio. One particular set-up, known as a generative adversarial network (GAN), works by setting a piece of software (the generative network) to make repeated attempts to create images that look real, while a separate piece of software (the adversarial network) is set up in opposition. The adversary looks at the generated images and judges whether they are “real”, which is measured by similarity to those in the generative software’s training database. In trying to fool the adversary, the generative software learns from its errors. Generated images currently require vast computing power, and only work at low resolution. For now.
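The generator-versus-adversary loop described above can be sketched in miniature. The toy below, in plain NumPy, pits a one-parameter linear “generator” (learning to mimic samples from a Gaussian) against a logistic-regression “discriminator.” Every detail here — the target distribution, the model shapes, the learning rates — is an illustrative assumption; real GANs use deep networks, image data, and vastly more compute.

```python
import numpy as np

rng = np.random.default_rng(42)

TARGET_MEAN = 3.0        # "real" data: samples from N(3, 1)
b = 0.0                  # generator parameter: G(z) = z + b
w, c = 0.0, 0.0          # discriminator: D(x) = sigmoid(w*x + c)
lr_d, lr_g = 0.05, 0.05
batch = 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    real = rng.normal(TARGET_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + b       # generator's attempt

    # Discriminator step: push D(real) toward 1, D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = (-(1 - d_real) * real + d_fake * fake).mean()
    grad_c = (-(1 - d_real) + d_fake).mean()
    w -= lr_d * grad_w
    c -= lr_d * grad_c

    # Generator step: learn from errors by pushing D(fake) toward 1,
    # i.e. trying to fool the adversary.
    d_fake = sigmoid(w * fake + c)
    grad_b = (-(1 - d_fake) * w).mean()          # d/db of -log D(fake)
    b -= lr_g * grad_b

# After training, the generator's offset b should sit near TARGET_MEAN:
# the fakes have become statistically hard to tell from the real samples.
```

The design point is the alternation: the discriminator improves its test, and the generator improves precisely against that test, which is why the end product lacks easy give-aways.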
Images and audio have been manipulated almost since their invention. But older fakes were made by snipping bits out of photographic negatives, combining them with others, and then making a fresh print. New techniques are very different. Doctoring images and audio by fiddling with film or using photo-editing software requires skill. But generative software can churn out forgeries automatically, based on simple instructions. Images generated in this fashion can fool not just humans but also computers. Some of the pixels in an image doctored using editing software will not match up with what might be expected in the real world.
Generated images, because their creation requires convincing an adversary that checks for just such statistical anomalies, contain none of these tell-tale signs of forgery. A generated image is also internally consistent, betraying few signs of tampering.
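Those “tell-tale signs” can be made concrete with a toy forensic test. The sketch below, with made-up numbers throughout, fakes a crude splice by pasting a patch with a different noise strength into a synthetic image, then flags it by comparing noise statistics block by block — a stand-in for the statistical checks that real forgery detectors (and a GAN’s adversary) perform.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "photograph": flat grey with uniform camera noise (std = 2).
img = 128.0 + rng.normal(0.0, 2.0, size=(64, 64))

# A clumsy edit: paste in a patch whose noise is stronger (std = 8).
img[16:32, 16:32] = 128.0 + rng.normal(0.0, 8.0, size=(16, 16))

def block_noise(image, size=16):
    """Estimate noise strength (standard deviation) in each block."""
    h, w = image.shape
    out = np.zeros((h // size, w // size))
    for i in range(h // size):
        for j in range(w // size):
            block = image[i * size:(i + 1) * size, j * size:(j + 1) * size]
            out[i, j] = block.std()
    return out

noise_map = block_noise(img)
# The spliced block (row 1, col 1) stands out from its neighbours:
# its noise level is inconsistent with the rest of the image.
```

A generated image would pass this kind of check, because its training explicitly punished exactly such inconsistencies.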
Oh, dear. Better keep reading.