AI tools will be required to discern faked video where the human eye and ear cannot. And trusted sourcing, distribution, and accountability will be even more important or we will be inundated. https://t.co/eP4WxXY8OA
— Garry Kasparov (@Kasparov63) December 12, 2017
by Edward Lucas
Malicious rumors are a weapon of political warfare. The Kremlin adeptly uses them to erode trust and sow divisions. But what we have seen so far—fake news sites, the use of stolen, twisted information, swarms of pretend social-media accounts and so forth—is just the start. Next-generation tactics will be far worse. They will involve audio and video that has not just been edited in order to deceive, but outright invented.
Worries have been growing for months. This summer, the Economist published a story called “Fake news: you ain’t seen nothing yet,” highlighting a YouTube video in which French musician Françoise Hardy purportedly discusses President Donald Trump’s inauguration. In the video she appears to be only 20 years old; she is actually 73. And the words she “speaks” are in fact those of Trump’s adviser, Kellyanne Conway. The “recording” never happened: computer software had analyzed and reworked previously published material.
That video was monochrome and grainy. But the technology has already leapt ahead. Nvidia, a company that specializes in graphics processing, has just published a paper showing how its software can turn daytime scenes into night, and winter ones into summer (it can also turn pictures of cats into wild animals).