I’ve come to the conclusion that it’s very important to keep up to date with AI and ‘deep fake’ technologies, even if you’re not interested in the technology itself. This is because we regularly need to recalibrate something that, however fallacious, is deeply embedded in the human psyche: the idea that the camera never lies.
Being aware of just how easy it is to fake an Amazon review, or an email from your bank, is, I hope, a standard part of every child’s education now. But the capabilities of computer-generated content are a constantly moving target, and we all need to keep abreast of the state of the art to avoid being caught unawares.
In case you haven’t seen it, here’s a 1-minute message from Morgan Freeman that has been getting a lot of attention in the last couple of weeks:
Also available here.
This is an actor impersonating Freeman’s voice, paired with a computer-generated video of Freeman himself. I was surprised to discover that it was actually created 18 months ago; the technology will, of course, have improved considerably since then. One user commented on the video, “How can this tech NOT be deployed in the 2024 election?”
It occurs to me that, soon, you’ll only be able to trust the words of famous people if you actually see them in person (because it’ll be a long time before robotics is as good as computer-generated imagery!). Perhaps this will herald a return to popularity for the theatre…
(It is, however, also very enlightening to read the sections of Dr Steven Novella’s book talking about the thoroughly unreliable nature of eye-witness testimony, and of memory itself. It’s not just the camera that’s fallible!)
And when it comes to video, you’ll soon have to trust only people who are not famous, because there will be insufficient training data available online for anyone to produce a convincing fake of them.