Tired of hearing about “fake news” and “fake media?” Sorry to break it to you, but there’s a real problem and it could affect your work.
The ability to edit photos and audio has been around for years. I remember 15 years ago taking a headshot of someone, realizing that I hadn’t noticed a crooked part of the person’s collar, and using Adobe Photoshop to copy the opposite side, create a mirror image, and blend it over the problem area. The picture looked perfectly normal.
Software has become more powerful, with artificial intelligence aiding in editing and opening ways to drastically alter images. Photoshop’s Face-Aware Liquify feature allows someone to easily adjust facial expressions in images.
In 2016, Adobe and Princeton researchers demonstrated technology that would allow text-based voice editing.
It’s possible today to make someone in a video appear to act and say things they never did. Look at this NSFW example with Jordan Peele putting words into the synchronized mouth of Barack Obama. BuzzFeed worked with Peele to develop the example using a free tool.
I wrote about the growing potential problem at Inc.com in 2017. The potential goes far beyond putting highly recognizable people into embarrassing situations. Here are a few of the business-related implications I saw then:
- A company creates a damning video and recreates the face of a rival’s CEO to damage that executive’s reputation.
- A former romantic partner takes revenge after a bad breakup by making a fake video and anonymously ensuring that people in the victim’s workplace see it. The video goes viral and damages the victim’s professional reputation.
- Someone doctors a company’s videos to include information about products or services that would alienate an important subset of buyers.
- A business produces how-to videos as a form of marketing and a competitor replaces the faces and perhaps background to capitalize on the work.
There is no way that all of these scenarios, and other uses you could imagine, will never occur.
Business journalists must learn how to detect fakes and, to the degree possible, avoid being taken in.
Fakes can be difficult to detect. Look at this Adobe site that tests whether you can tell fake images from real ones. Or this MIT research project where you try to determine which of two video versions is the fake. There’s a YouTube channel called ctrl shift face that shows deep fakes.
It’s all a lot harder than you might think.
One step is to learn at least the rudiments of how to spot fakes. Such techniques as looking for suspicious repeating textures and patterns; discrepancies in lighting, shadows, and reflections; out-of-proportion body parts or objects; and overly perfect or inconsistent colors and tones can all provide giveaways of potential manipulation.
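One telltale sign mentioned above, suspicious repeating textures, often comes from copy-move edits, where one region of an image is cloned over another. The idea behind basic copy-move detection can be sketched in a few lines. This is a simplified illustration, not a production forensic tool: real detectors operate on actual image files (typically via a library such as Pillow or OpenCV) and tolerate compression noise, while here the “image” is just a 2D list of grayscale values so the example stays self-contained.

```python
# Copy-move detection sketch: flag identical pixel blocks that appear
# in more than one place in an image. Exact matching like this only
# catches naive clones; real tools compare blocks approximately.

def find_repeated_blocks(image, block=2):
    """Return {block_pixels: [(row, col), ...]} for blocks seen 2+ times."""
    seen = {}
    rows, cols = len(image), len(image[0])
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            # Flatten the block-by-block pixel window into a hashable key.
            key = tuple(
                image[r + dr][c + dc]
                for dr in range(block)
                for dc in range(block)
            )
            seen.setdefault(key, []).append((r, c))
    return {k: v for k, v in seen.items() if len(v) > 1}

# A tiny synthetic "image" where the top-left 2x2 block was pasted
# into the bottom-right corner.
img = [
    [10, 20, 0, 0],
    [30, 40, 0, 0],
    [ 0,  0, 10, 20],
    [ 0,  0, 30, 40],
]
suspects = find_repeated_blocks(img)
```

Here `suspects` maps the cloned block `(10, 20, 30, 40)` to both locations where it appears, which is exactly the kind of repetition a manual inspection would hope to spot.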
Even with modern forensic techniques and tricks, it can be difficult to detect fakes. Software can help, like free image fake detectors. Slowing a video down can show transitional glitches that are giveaways. And you can always take a video apart into frames and analyze single ones if you’re wondering about a section.
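The frame-by-frame idea above can also be sketched in code. A hedged illustration, not a real detector: in practice you would extract frames with a decoder such as ffmpeg, whereas here each “frame” is a flat list of synthetic grayscale values, and the threshold is an arbitrary assumption. The point is only that an abrupt jump between consecutive frames, the kind of transitional glitch slowing a video can reveal, shows up as a spike in a simple difference metric.

```python
# Frame-analysis sketch: compute the mean absolute pixel difference
# between consecutive frames and flag sudden spikes, which can hint
# at splices or face-swap transition glitches.

def frame_diffs(frames):
    """Mean absolute difference between each pair of consecutive frames."""
    return [
        sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)
        for f1, f2 in zip(frames, frames[1:])
    ]

def flag_glitches(frames, threshold=50):
    """Indices i where the jump from frame i to frame i+1 exceeds threshold."""
    return [i for i, d in enumerate(frame_diffs(frames)) if d > threshold]

# Four synthetic 4-pixel frames; the third changes abruptly.
frames = [
    [100, 100, 100, 100],
    [102, 101, 100,  99],
    [200, 210, 190, 205],  # suspicious discontinuity
    [201, 209, 191, 204],
]
glitches = flag_glitches(frames)
```

Running this flags the transition into the third frame, the single frame a journalist would then pull out and examine on its own.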
Still, detecting fakes, especially when done with skill, continues to increase in difficulty. Even experts can’t be certain they’re always correct.
So, it’s back to some basics. Who made it available? Does the context seem reasonable? Where does it appear? Expect these, and other questions, to demand answers more frequently, and expect your due diligence to keep getting harder.