Deepfakes mean the end of shared reality, and nobody is ready

Richard W. DeVaul
12 min read · Jan 28, 2020
A video deepfake substituting the face of Nicolas Cage for that of Amy Adams (image: Wikimedia)

I was there, in the building, when the modern machine learning revolution got started at Google in 2011. It was called Project Brain, and it began inside Google [X]. I'd like to tell you that I immediately knew it would change everything, but that realization took several months. By the time a New York Times article ran in 2012 reporting that a cluster of 16,000 processors had discovered on its own that cats were essential to YouTube videos and made its own "cat neuron," I was convinced that we were in for a wild ride; something fundamental had changed in computing, and the world would never be the same.

Now, almost eight years later, the deep learning technology behind Brain (and its near cousin, reinforcement learning) powers almost every online service. It powers virtually every Google search. Every time you talk to Siri or Alexa, it's there. It translates languages on web pages and street signs, and it helps physicians make more accurate diagnoses. And it powers the face recognition technology that lets you unlock your phone with your face and helps authoritarian regimes oppress millions of people.

The structure of a Generative Adversarial Network, or GAN. Both the fake-image generator and the discriminator are deep neural nets, and both iteratively improve as one or the other wins the competition and generates a loss signal for the other. (image created by the author, licensed under Creative Commons CC BY-SA)
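To make that adversarial loop concrete, here is a minimal sketch in PyTorch. Everything in it (the two tiny fully-connected networks, LATENT_DIM, DATA_DIM, and the train_step helper) is an illustrative assumption, not the architecture behind any real deepfake system:

```python
import torch
import torch.nn as nn

# Illustrative sizes: 100-dimensional noise vectors, 784-dimensional "images".
LATENT_DIM, DATA_DIM = 100, 784

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to be real (1) rather than fake (0).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    """One round of the adversarial game on a single batch of real data."""
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1. Train the discriminator: reward it for telling real from fake.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()  # don't update the generator here
    d_loss = (bce(discriminator(real_batch), real_labels) +
              bce(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator: reward it for fooling the discriminator.
    noise = torch.randn(batch_size, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each call to train_step pits the two networks against each other on one batch: the discriminator's mistakes become the generator's training signal, which is the "loss signal" described in the caption above.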


Richard W. DeVaul

Founder, mad scientist, moonshot launcher. Writes on innovation, entrepreneurship, and social/queer issues. ex-CTO of Google X. @rdevaul on twitter