Counterfeits Unknown: Deepfakes of Anthony Bourdain

Deepfakes present greater risk to societal cohesion through our entertainment than they do through disinformation.

A Haunting New Documentary About Anthony Bourdain
“Roadrunner,” by the Oscar-winning filmmaker Morgan Neville, presents Bourdain as both the hero and the villain of his own story.

This week, Helen Rosner reviewed "Roadrunner," a recent documentary about Anthony Bourdain, for The New Yorker. The section I find noteworthy is a description of how filmmaker Morgan Neville generated audio of Anthony Bourdain reading an email he had written to a friend:

“But there were three quotes there I wanted [Bourdain's] voice for that there were no recordings of,” Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, “I created an A.I. model of his voice.” ... “We can have a documentary-ethics panel about it later.”
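Neville hasn't named the company or described its methods, but the workflow he outlines (reference recordings in, synthesized speech out) is roughly what open-source voice-cloning tools expose today. As a minimal sketch, assuming the open-source Coqui TTS library and illustrative file names (my choices, not anything confirmed by the film), cloning a voice looks something like this:

```python
# A minimal sketch of the voice-cloning workflow Neville describes,
# using the open-source Coqui TTS library (pip install TTS).
# The library choice, file names, and text are illustrative assumptions,
# not details from the documentary.
from TTS.api import TTS

# XTTS v2 is a multilingual model that clones a voice from short
# reference clips rather than requiring hours of custom training.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="...",                              # the written words to be spoken
    speaker_wav="reference_recording.wav",   # clean audio of the target speaker
    language="en",
    file_path="synthesized_quote.wav",
)
```

Neville's dozen hours of recordings suggests a more involved training process than this few-shot approach, but the end product is the same: written words rendered in a voice their author never lent to them.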

Neville told Variety that the fakes were generated "with the blessing of [Bourdain's] estate and literary agent," which adds a patina of consent to the concept. Two things are bad about this, though:

First, deepfakes commoditize people

While an estate and/or literary agent can authorize use of someone's works or other output, there is an important distinction between use of someone's products and use of the person themselves. In this case, Neville didn't want something produced by Bourdain (he already had the emails produced by Bourdain). He wanted Bourdain. And since he couldn't have him, he manufactured 'him.' Use of this technology in this way directly commoditizes people, and not just their products.

Second, undisclosed deepfakes are lies

Neville's use of deepfakes is a lie. It's just a very little lie and perhaps not very harmful, but it is a lie: the film intends that the fakes be taken as real; Neville says (emphasis mine): “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and *you’re not going to know*.” By presenting the fakes as real, this technology crosses the thin but very clear line into lying.

And of course, since the words were written by Bourdain, it's just a little lie, but let's explore that further: let's imagine that Anthony Bourdain really liked Hondas. And let's imagine Honda obtained consent from Bourdain's estate, generated audio of Bourdain saying "I love Hondas and think everyone should buy one," and portrayed it in ad campaigns as something Bourdain had really said. This is more viscerally offensive, but it's the same scenario: Bourdain expressed something, his estate provided consent, and then fakes were presented as reality in order to improve sales of a product.

'But Bourdain wrote the words'

There is a distinction between my scenario and the documentary, since in the scenario Bourdain never actually said "I love Hondas and think everyone should buy one." But it's a smaller distinction than it appears to be. Written phrases are given different meanings in pronunciation: "*He* said he did it," "He *said* he did it," "He said he *did* it," "He said he did *it*," etc. all have different meanings, and while how I have written those phrases helps you guess how I intended each to be spoken, you probably haven't read them just as I intended. A fake can spin—or completely change—the meaning of a written phrase depending on how the designers construct its pronunciation.

Glimpses of Tomorrow

This is an issue that is going to become pervasive. There have been crude glimpses in the Star Wars films Rogue One and The Rise of Skywalker, with Grand Moff Tarkin (Peter Cushing) and Leia Organa (Carrie Fisher) obviously digitally recreated in each film, respectively. For a more analog example in public life, in 2019 Oregon police used Photoshop, without disclosing it, to make a suspect's line-up photo look more like the description of the perpetrator. While the technology in these three examples is not directly comparable to Neville's use of fakes in his Bourdain documentary, these cases demonstrate the appetite for such technology in both entertainment and governance.

The broader adoption of this technology will create some weird new capabilities:

  • Want to watch all of the Bond movies with Daniel Craig as Bond? Can do!
  • Are you a film studio that discovered one of your actors has been accused of sexual harassment? Pay a licensing fee and swap them out in post-production with a different actor!
  • Do you hate a certain class of people? Turn all of the zombies in World War Z into that race/gender/etc!
  • Are you a politician who wants to show your base how you uphold specific values? Run an ad where you hunt with Reagan, or where you discuss suffrage with RBG.
  • Want to watch Lord of the Rings with Matthew McConaughey as Aragorn? Alright, alright, alright!

Sound far-fetched? Magic City Films has "posthumously cast" James Dean in an upcoming movie, Finding Jack. Disney is conducting research on "High-Resolution Neural Face Swapping for Visual Effects." Flawless has a product (described in more detail by Gizmodo) that changes the lip movements of actors to provide more convincing video when dubbed for foreign audiences.

Responses to Deepfakes

For the near future at least, cheapfakes have more disinformation utility than deepfakes, so I'm not going to suggest responses to deepfakes from the standpoint of fighting disinformation. Instead, I think deepfakes present significant risk to society by enhancing division through splintered experiences of reality (as I discussed briefly regarding the use of VR ads in a previous WYSK).

Regulation:

Foundational regulation is not going to be feasible: prohibitions on generating this kind of content will either violate the First Amendment, or require specific technical language that will be unenforceable and rapidly become outdated. There are, however, two regulatory actions that can be taken:

  1. For media marketed in the US, require clear disclaimers that content has been algorithmically generated wherever fakes are used for profit, and enforce transparency by requiring a publicly accessible, plain-language record of which sections are faked (a sketch of what such a record might contain follows this list).
    a). Producers can choose to self-host or host via industry organizations (like the MPAA or RIAA), but the records must be available at least for the length of the copyright covering the media.
  2. Flat prohibition on use in sponsored political messaging (campaigns, ads, etc.).
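
To make item 1 concrete, here is a minimal sketch of what such a transparency record could contain, written as Python dataclasses. The field names and structure are my own invention for illustration; no such disclosure standard currently exists:

```python
# A sketch of the transparency record proposed in item 1 above.
# All field names and structure are invented for illustration;
# no such disclosure standard exists today.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FakedSection:
    start: str           # timestamp where generated content begins, e.g. "01:12:04"
    end: str             # timestamp where it ends
    medium: str          # "audio", "video", or "both"
    description: str     # plain-language note on what was generated, and from what source
    consent_source: str  # who authorized the fake, e.g. "estate and literary agent"

@dataclass
class DisclosureRecord:
    title: str           # the media this record covers
    producer: str
    faked_sections: List[FakedSection] = field(default_factory=list)

# Example: a record for the generated Bourdain quotes. The timestamps
# are left elided because Neville has not disclosed them.
record = DisclosureRecord(
    title="Roadrunner",
    producer="Morgan Neville",
    faked_sections=[
        FakedSection(
            start="...", end="...",
            medium="audio",
            description="Voice generated from an email Bourdain wrote to a friend",
            consent_source="Bourdain's estate and literary agent",
        )
    ],
)
```

Whatever the format, the point is that a viewer (or their software) could check, section by section, what is real and what is manufactured.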

Choices:

Individual choice is always better than regulation. When I am aware that content is leveraging deepfakes, I will choose not to engage with it. Of course, the ability to make that choice is predicated on transparency, and the regulatory steps I recommend above are focused on providing transparency so that consumers can make that decision.

By choosing to disengage from deepfake-enabled content, I'll miss out on some cool stuff generated to cater to my whims. But I'll be sharing more experiences with my society as a whole, I'll not be training myself that people are commodities, and I'll be avoiding more lies.

I recommend this choice for you too. It's going to be easier to make now before the smorgasbord of deepfake-enabled entertainment is fully set.


Photo by Arturo Rey on Unsplash