Ceci n’est pas un acteur

I’ve come to the conclusion that it’s very important to keep up to date with AI and ‘deep fake’ technologies, even if you’re not interested in the technology itself. This is because we regularly need to recalibrate something that, fallacious as it is, remains deeply embedded in the human psyche: the idea that the camera never lies.

Being aware of just how easy it is to fake an Amazon review, or an email from your bank, is, I hope, a standard part of every child’s education now. But the capabilities of computer-generated content are a constantly moving target, and we all need to keep abreast of the state of the art to avoid being caught unawares.

In case you haven’t seen it, here’s a 1-minute message from Morgan Freeman that has been getting a lot of attention in the last couple of weeks:

[Embedded video]

It’s an actor impersonating Freeman’s voice, combined with a computer-generated video of Freeman himself. I was surprised to discover that it was actually created 18 months ago; the technology will, of course, have improved considerably since then. One user commented on the video, “How can this tech NOT be deployed in the 2024 election?”

It occurs to me that, before long, you’ll only be able to trust the words of famous people if you actually see them in person (because it’ll be a long time before robotics is as good as computer-generated imagery!). Perhaps this will herald a return to popularity for the theatre…

(It is, however, also very enlightening to read the sections of Dr Steven Novella’s book that discuss the thoroughly unreliable nature of eye-witness testimony, and of memory itself. It’s not just the camera that’s fallible!)

And when it comes to video, you’ll soon be able to trust only people who are not famous, because there will be insufficient training data available online for anyone to produce a convincing fake of them.


1 Comment

I’ve been thinking about one aspect of this: deepfakes could help with the revenge porn problem. At the moment, if a compromising picture of someone gets circulated, people assume it’s really them. If it becomes easy to make fake pictures of anyone, genuine pictures have no real power. No one will know that they’re not fake.

Making deepfake porn without someone’s consent isn’t nice, but I think it’s a much lesser evil than leaking genuine pictures. For one thing, it doesn’t invite victim-blaming by people who are that way inclined. As a result, I think it could be a gain on balance, even though it does swap one problem for another.


© Copyright Quentin Stafford-Fraser