Part 1
What do deepfakes mean for our First Amendment protections?
You were asked to consider this question while reviewing the PowerPoint on the technicalities of deepfaking.
300 words
Part 2
Find a deepfake that someone else has created and describe it (including images of the deepfake, its creator, and that creator’s audience and purpose), explaining how we know it is a deepfake and why it was created. Do not use the Jim Carrey series that I showed in my slides.
Deepfakes:
Trick or Treat?
By: Kietzmann, J., Lee, L.W., McCarthy, I.P. and Kietzmann, T.C.
In the journal: Business Horizons.
https://doi.org/10.1016/j.bushor.2019.11.006
The authors
Jan H. Kietzmann, Linda W. Lee, Ian P. McCarthy, and Tim C. Kietzmann
The paper
Access the paper here: https://doi.org/10.1016/j.bushor.2019.11.006
What are deepfakes?
“Deepfakes leverage powerful techniques from machine learning and
artificial intelligence to manipulate or generate visual and audio content
with a high potential to deceive” (Kietzmann et al. 2020).
Example: Rowan Atkinson (Mr Bean)
unexpectedly stars in a perfume
commercial (original recorded with
Charlize Theron).
View the original advert here:
https://youtu.be/VqSl5mSJXJs
View the deepfake here:
What are deepfakes?
The phenomenon gained its name from a user of the
platform Reddit, who went by the name “deepfakes”
(deep learning + fakes).
This person shared the first deepfakes by placing the faces of
unwitting celebrities into adult video clips. This
triggered widespread interest in the Reddit
community and led to an explosion of fake content.
The first targets of deepfakes were famous people,
including actors (e.g., Emma Watson and Scarlett
Johansson), singers (e.g., Katy Perry), and politicians
(e.g., President Obama).
Deepfakes matter because:
Believability: If we see and hear something with our own eyes and
ears, we believe it to exist or to be true, even if it is unlikely.
The brain’s visual system can be targeted for misperception, in the
same way optical illusions and bistable figures trick our brains.
Deepfakes matter because:
Accessibility: The technology of today and tomorrow will allow all of
us to create fakes that appear real, without a significant investment in
training, data collection, hardware, or software.
Zao, a popular Chinese app for mobile
devices, lets users place their faces into
scenes from movies and TV shows for free.
How do deepfakes work?
Consider the deepfake below, featuring Jim Carrey and Alison Brie:
The original Alison Brie video: https://www.youtube.com/watch?v=QBmYDzLhWoY
The deepfake with Jim Carrey: https://www.youtube.com/watch?v=b5AWhh6MYCg
How do deepfakes work?
Many deepfakes are created by a three-step procedure:
How do deepfakes work?
Step 1: The image region showing Brie’s face is extracted from an
original frame of the video. This image is then used as input to a deep
neural network (DNN), a technique from the domain of machine
learning and artificial intelligence.
Step 2: The DNN automatically generates a matching image showing
Carrey instead of Brie.
Step 3: This generated face is inserted into the original reference
image to create the deepfake.
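A minimal sketch of this three-step pipeline in Python. The helper names and the fixed face location are illustrative assumptions, and the generator is just a placeholder where the trained deep neural network would go:

```python
import numpy as np

def extract_face(frame, box):
    """Step 1: crop the region of the frame that shows the source face."""
    top, left, height, width = box
    return frame[top:top + height, left:left + width].copy()

def generate_matching_face(face):
    """Step 2: stand-in for the trained DNN that would output the target
    person's face with the same pose and expression; here it simply
    returns a copy so the pipeline runs end to end."""
    return face.copy()

def insert_face(frame, face, box):
    """Step 3: paste the generated face back into the original frame."""
    top, left, height, width = box
    faked = frame.copy()
    faked[top:top + height, left:left + width] = face
    return faked

# Toy frame and a fixed face location (a real system detects the face per frame).
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
box = (100, 500, 256, 256)  # top, left, height, width

fake_frame = insert_face(frame, generate_matching_face(extract_face(frame, box)), box)
print(fake_frame.shape)  # (720, 1280, 3)
```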
How do deepfakes work?
The main technology for creating deepfakes is
deep learning, a machine learning method used to
train deep neural networks (DNNs).
DNNs consist of a large set of interconnected
artificial neurons, referred to as units.
Much like neurons in the brain, each unit by itself
performs a rather simple computation, but all units
together can perform complex nonlinear
operations.
In the case of deepfakes, this is a mapping from an
image of one person to an image of another.
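As a rough illustration of the simple computation a single unit performs (a weighted sum of its inputs passed through a nonlinearity; the numbers below are arbitrary, not taken from the paper):

```python
import numpy as np

def unit(inputs, weights, bias):
    """One artificial unit: a weighted sum of its inputs plus a bias,
    passed through a simple nonlinearity (here a ReLU)."""
    return max(0.0, float(np.dot(inputs, weights) + bias))

x = np.array([0.2, -1.0, 0.5])   # activities of three upstream units
w = np.array([0.8, 0.1, 0.3])    # learned connection weights
print(unit(x, w, bias=0.05))     # one simple computation; a DNN combines millions of these
```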
How do deepfakes work?
Deepfakes are commonly created using a specific deep network
architecture known as an autoencoder.
Autoencoders are trained to recognize key characteristics of an input
image to subsequently recreate it as their output. In this process, the
network performs heavy data compression.
Autoencoders consist of three subparts:
– an encoder (recognizing key features of an input face)
– a latent space (representing the face as a compressed version)
– a decoder (reconstructing the input image with all detail)
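A minimal sketch of these three subparts in PyTorch. The layer sizes, including the roughly 300-dimensional latent space, are illustrative assumptions rather than the architecture of any particular deepfake tool:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, image_dim=64 * 64 * 3, latent_dim=300):
        super().__init__()
        # Encoder: compresses an input face into a few hundred measurements.
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),           # the latent space (bottleneck)
        )
        # Decoder: reconstructs the full-detail face from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, image_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        latent = self.encoder(x)      # compressed representation
        return self.decoder(latent)   # reconstructed face

face = torch.rand(1, 64 * 64 * 3)     # one flattened 64x64 RGB face
print(AutoEncoder()(face).shape)       # torch.Size([1, 12288])
```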
How do deepfakes work?
Autoencoder: a DNN architecture commonly used for generating
deepfakes.
How do deepfakes work?
Encoder: Much like an artist
drawing an image, the encoder
compresses an image from
tens of thousands of
pixels into a few hundred (typically
around 300) measurements.
These measurements can relate to
particular facial characteristics, e.g.,
whether the eyes are open or closed,
the head pose, the emotional expression,
skin colour, etc.
How do deepfakes work?
Latent space: represents different facial
aspects of the person on which it is trained.
It acts as an information bottleneck,
forcing the network to learn more general
facial characteristics rather than memorizing all
input examples of specific people.
The latent representation of an input image can
require as little as 0.1% of the memory needed to
store the original input image.
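A quick back-of-the-envelope check of that figure, assuming a 256x256 RGB input face and a 300-dimensional latent code (these sizes are illustrative, not taken from the paper):

```python
pixels = 256 * 256 * 3        # values needed to store the original face (196,608)
latent = 300                  # values in the latent representation
print(latent / pixels * 100)  # roughly 0.15% of the original size
```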
How do deepfakes work?
Decoder: decompresses the information in the
latent space to reconstruct an image as
perfectly as possible.
The performance of the whole autoencoder
network is measured by how much the input
and generated (output) images resemble each
other. This task is made difficult because of the
heavy data compression performed by the
encoder.
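That resemblance is typically scored with a pixel-wise reconstruction loss. A minimal sketch, assuming the illustrative AutoEncoder class sketched earlier and mean squared error as one common choice of criterion:

```python
import torch
import torch.nn as nn

model = AutoEncoder()                   # the illustrative class sketched earlier
criterion = nn.MSELoss()                # pixel-wise reconstruction error

faces = torch.rand(8, 64 * 64 * 3)      # a batch of flattened input faces
loss = criterion(model(faces), faces)   # how far the outputs are from the inputs
loss.backward()                         # gradients used to update the network
print(loss.item())
```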
The deepfake trick
Two separate autoencoders, each trained on a
different person, will be very different and
cannot be combined.
The trick for creating deepfakes lies in sharing
the encoder across the two networks such that
they remain compatible.
This way, an image of one person can be
used to compute a compressed latent-space
representation, from which the decoder of
another person is used to create the fake.
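A minimal sketch of this sharing trick, with the same illustrative sizes as above: one shared encoder is trained together with two person-specific decoders, and swapping decoders at generation time produces the fake:

```python
import torch
import torch.nn as nn

image_dim, latent_dim = 64 * 64 * 3, 300

# One encoder shared by both people, so their latent spaces stay aligned.
shared_encoder = nn.Sequential(nn.Linear(image_dim, 1024), nn.ReLU(),
                               nn.Linear(1024, latent_dim))

def make_decoder():
    return nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                         nn.Linear(1024, image_dim), nn.Sigmoid())

decoder_a = make_decoder()   # trained only on person A's faces (e.g., Brie)
decoder_b = make_decoder()   # trained only on person B's faces (e.g., Carrey)

# Training would reconstruct A via decoder_a(shared_encoder(face_a)) and B via
# decoder_b(shared_encoder(face_b)). The deepfake swap: encode a frame of A,
# then decode it with B's decoder.
face_a = torch.rand(1, image_dim)
fake_b = decoder_b(shared_encoder(face_a))
print(fake_b.shape)   # once trained: B's face with A's pose and expression
```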
The deepfake trick
Using the same encoder, and hence the same latent-space representation, for
images of two separate people is key to understanding deepfakes.
If the two autoencoders were trained separately, the latent spaces would
not be aligned (the separate Brie and Carrey latent spaces in the paper’s
figure). Sharing the encoder results in an aligned latent space (the grey
dots). The autoencoders can then be used to map from one person to the other.
The deepfake trick
A shared encoder is key to creating novel facial images of a target
person that exhibit the same emotional expression, head posture,
etc. as the original facial characteristics. This new image can then be
doctored back into the original image to create a fake scene.
A typology of deepfakes and their business applications
Photo deepfakes
– Description: Face and body-swapping, i.e., making changes to a face, or replacing or blending the face (or body) with someone else’s face (or body).
– Current example: FaceApp’s aging filter alters your photo to show how you might look decades from now (Kaushal, 2019).
– Business application: Consumers can virtually try on cosmetics, eyeglasses, hairstyles, or clothes.

Audio deepfakes
– Description: Voice-swapping, i.e., changing a voice or imitating someone else’s voice. Text to speech, i.e., changing audio in a recording by typing in new text.
– Current examples: Fraudsters used AI to mimic a CEO’s voice and then tricked a manager into transferring $243,000 (Supasorn Suwajanakorn, 2017). Users could make the controversial Dr. Jordan B. Peterson, a famous professor of psychology and author, say anything they wanted, until his threat of legal action shut the site NotJordanPeterson down (Cole, 2019).
– Business applications: The voice of an audiobook narration can sound younger, older, male, or female, and can take on different dialects or accents to play different characters. Misspoken words or a script change in a voiceover can be replaced without making a new recording.
A typology of deepfakes and their business applications
Video deepfakes
– Description: Face-swapping, i.e., replacing the face of someone in a video with the face of someone else. Face-morphing, i.e., a face changes into another face through a seamless transition. Full body puppetry, i.e., transposing the movement from one person’s body to that of another.
– Current examples: Jim Carrey’s face replaces Alison Brie’s in a “Late Night with Seth Meyers” interview. Former “Saturday Night Live” star Bill Hader imperceptibly morphs in and out of Arnold Schwarzenegger on the talk show Conan. “Everybody Dance Now” shows how anyone can look like a professional dancer.
– Business applications: Face-swapped video can be used to put the leading actor’s face onto the body of a stunt double for more realistic-looking action shots in movies. Video game players can insert their faces onto those of their favorite characters. Business leaders and athletes can hide physical ailments during a video presentation.
The R.E.A.L. framework for dealing with the dark side of deepfakes
The R.E.A.L. framework
Record: Deepfakes often seek to falsely portray
somebody doing or saying something, or being
somewhere; exposing such fakes would
require evidence (or an “alibi”) to the contrary.
Technology already exists to track and “life-log” a
person’s life in terms of location, communications
and activities.
Such life-log data could then be encrypted, stored
and used to help identify and expose the posting of
dark deepfakes.
The R.E.A.L. framework
Expose: Technological innovations are being
developed to detect and classify deepfakes by
identifying issues with image resolution, scaling,
rotation and splicing.
The U.S.’s Defense Advanced Research Projects
Agency (DARPA) has a Media Forensics program.
Reuters developed a free online tutorial to help us
identify manipulated media such as deepfakes.
The Deepfake Detection Challenge invites
people around the world to build innovative new
technologies that can help detect deepfakes and
manipulated media.
The R.E.A.L. framework
Advocate: At the moment there is little legal
consequence for producing, hosting and sharing
deepfakes. This is changing.
In China, as of 1 January 2020, it is a criminal
offense to publish deepfakes or fake news without
disclosure.
Victims also have legal recourse in instances of:
● defamation, malice, breaches of privacy or
emotional distress from a deepfake, and
● cases of copyright infringements,
impersonation and fraud involving deepfakes.
The R.E.A.L. framework
Leverage: One way to counter deepfakes is for
individuals and organizations to strengthen trust
in their content and presence.
Individuals and organizations with strong and
respected brands will be better positioned to
weather deepfake assaults, as their stakeholders
will defend their brand.
When brands built on strong ethics are portrayed
in an unfavorable light in deepfakes, the hope is
that stakeholders will not simply believe their
eyes and ears, but be more critical and think for
themselves.
Takeaways
Deepfakes can be used in positive and
negative ways to manipulate content for
media, entertainment, marketing and
education.
Increasingly our lives are being captured
via social media and this content can be
used to train DNNs, with or without our
permission.
Deepfakes are not magic, but are
produced using techniques from AI that
can generate fake content that is highly
believable.
The DOI (Digital Object Identifier) for the paper
on which these slides are based:
https://doi.org/10.1016/j.bushor.2019.11.006