Deepfake “Amazon workers” are sowing confusion on Twitter


In fact, there have already been several high-profile instances in which deepfake photos have been used in damaging disinformation campaigns. In December 2019, Facebook identified and took down a network of over 900 pages, groups, and accounts, including some with deepfaked profile pictures, associated with the far-right outlet The Epoch Times, which is known to engage in misinformation tactics. In October 2020, a fake “intelligence” document distributed among President Trump’s circles, which became the basis of numerous conspiracy theories surrounding Hunter Biden, was also authored by a fake security analyst with a deepfaked profile image.

Toler says deepfake faces have become a trend in his line of work as an open-source investigator into suspicious online activity, especially since the launch of a website that serves up a new AI-generated face with every refresh. “There’s always a mental checklist that you go through whenever you find anything,” he says. “The first question is, ‘Is this person real or not?’ which was a question we didn’t really have five years ago.”

How big of a threat is this? At the moment, Toler says the use of deepfake faces hasn’t had a big impact on his work. It’s still relatively easy for him to identify when a profile image is a deepfake, just as it is when the photo is a stock image. The most difficult scenario is when the image is of a real person pulled from a private social media account that isn’t indexed on image search engines.

A growing awareness of the existence of deepfakes has also primed people to scrutinize the media they see more carefully, says Toler, as evidenced by how quickly people caught on to the fakery of the Amazon accounts.

But Sam Gregory, the program director of the human rights nonprofit Witness, says this shouldn’t lull us into a false sense of security. Deepfakes are constantly “getting better,” he says. “I think people have a little bit too much confidence that it’s always going to be possible to detect them.”

A hyper-awareness of deepfakes could also lead people to stop believing in real media, which could have equally dire consequences, such as undermining the documentation of human rights abuses.

What should we do? Gregory encourages social media users to avoid fixating on whether an image is a deepfake or not. Oftentimes, that’s just “a tiny part of the puzzle,” he says. “The giveaway is not that you’ve somehow interrogated the image. It’s that you look at the account and it was created a week ago, or it’s someone who claims to be a journalist, but they’ve never written anything else that you could find in a Google search.”

These investigative tactics are much more robust to advances in deepfake technology. The advice rings true for the Amazon case as well: it was by checking the accounts’ emails and tweet details, not by scrutinizing the profile images, that Toler ultimately determined they were fake.