By Catherine A. Fitzpatrick
This article/video by Claire Wardle in the New York Times struck me as something not to be praised.
She thinks that panic about deepfakes -- videos that distort real people or events through technology to misleadingly sway opinion -- is more of a threat to democracy than the deepfakes themselves because it spreads doubt and disbelief in general.
David Kaye, the UN Special Rapporteur on Freedom of Opinion and Expression, praised it on Twitter and linked to it, but now I can't find his tweet. It doesn't matter. Why do I have problems with the "anti-moral-panic" stance of this piece?
Three reasons:
- It's not clear there really is a moral panic. Even if the technology to create fakes is being "democratized," the technology to detect fakes is also being developed and will be used by platforms when content is uploaded.
- The claim just doesn't hold up that panic about highly-realistic fake videos is worse than the fake videos because it undermines democracy by instilling doubt in institutions. That doubt has long existed and grows without fake videos.
- The Pentagon is working on this -- which means it's not in the realm of Twitter vapours or mass media-generated panic but something more serious -- beyond moral panic.
The most notable "moral panic" lately seems to be about the slowed-down Nancy Pelosi video, and that is because of the controversy around Nancy Pelosi herself, where some Democrats desperately hang on to her because they think she is the only one who can manage the floor of Congress, keep Democrats together, and get deals. Others oppose her either because they're to the left of her program (the "Squad") or because they simply think she's been in the job too long. I'm not a supporter of AOC, but I don't see what's wrong with pointing out that Pelosi at 79 is too old for this job -- and I say that at age 63. She can't last forever, so it's time for a new leader. If the Democratic hold on things is so fragile that it depends on an old lady -- even a very tough and experienced old lady -- it's in more trouble than we thought.
I don't see people making the obvious point about this video -- that the satire succeeds because it's quite close to the truth. Nancy's speech is ALREADY slurred even if she is stone-cold sober because a) apparently she has dentures; b) she has one of those lock-jawed accents and rapid speech where she doesn't finish sentences and isn't always distinct. Look at videos from years ago (2012).
So when you go to watch the "doctored" video, especially the real one right after the fake one, you see how in fact her speech has grown more indistinct over the years, due to age, or dentures or lockjaw (people even joke about her dentures in the comments on the videos) and you wonder what the big deal is -- the satire is telling a truth of sorts. And how has this really changed the story of Pelosi and the controversy around her, i.e. that some want her to go and some think the world will end without her? It hasn't, it has only become another skirmish in the same battle.
But despite the obvious truths here -- that a slightly-slowed-down video could work as satire because it was very close to the real Nancy, and that it played on the misgivings that cause the divide in the Democratic Party -- no one would mention it. It could not be articulated by a pious Ian Bogost in his thumb-sucking piece about media and truth for the Atlantic. Nancy sounds slurred anyway, fake or no. Facebook can say it has no policy to ensure things are true before allowing them to be published because it's not a publisher -- but imagine if your description of your fabulous life on FB had to be subjected to the truth test?! Of course it can't have a truth test. What is truth?
Bogost writes:
When Facebook says it’s not a news company, it doesn’t just mean that it doesn’t want to fall under the legal and moral responsibilities of a news publisher. It also means that it doesn’t care about journalism in the way that newsmakers (and hopefully citizens) do, and that it doesn’t carry out its activities with the same goals in mind. And yet rather than understanding and responding to those truths, public discourse instead beats its head against a wall trying to persuade Facebook that it “can’t do” what it’s been doing with impunity. It would be much simpler and more productive to take the company at its word, since reforming it into a responsible actor concerned first with its responsibility to truth or citizenship is likely impossible.
Well, no. And journalism isn't that fabulous thing, either, given the way Covington was covered and many other subjects. The cure is to remove the safe haven of Section 230. Facebook doesn't have to like or do "real" journalism. But it is a publisher, the way tabloids are publishers and should have responsibility for the content on its platform. Bogost sidesteps this issue so I don't know what he believes about Section 230, but it is likely he wouldn't want to change it, given the Silicon Valley culture he has long resided in.
I disagree with David Kaye about a number of things -- like his UN report's treatment of Snowden as a "whistleblower" (he is a felon who fled to Russia), or his idea that Trump is the "worst" disinformer, when Putin is far worse -- but he posed the right question here (although notice he doesn't answer it). It may not be politically correct, flying in the face of Pelosi supporters' outrage, but the reality is, Facebook shouldn't remove this video, because it's satire of a public figure, and that is allowed under the First Amendment.
homework assignment: draft the rule that prohibits doctored pelosi video but protects satire, political speech, dissent, humor etc. not so easy is it? https://t.co/zaA7kQf83i
— David Kaye (@davidakaye) May 25, 2019
I don't know if his praise of the NYT video reflects the tendency of all such commentators to act as if the masses panic but the talking heads know better.
I personally don't think fake videos will undermine faith in democracy and its institutions as the author says -- for one, it already is undermined, but undermined notably on Twitter, which doesn't represent even the Democratic Party, let alone all of America, as multiple studies have shown. After all, we had democracy before the age of TV and radio and the Internet. To be sure, media has impacted democracy in good and bad ways, but I think we will find that if fake videos are used more extensively, to the point where they affect elections, reporters will get more savvy about getting real interviews face-to-face instead of looking up Twitter accounts and web sites and cutting and pasting. And that will be a good thing.
And even before the Internet gave legs to deepfakes, there was negative advertising on TV -- I'm thinking of those fake claims that Mitt Romney caused a fired worker's wife to die of cancer because Bain Capital, where he was an investor, had taken over the plant, then later laid off workers so that they lost their health insurance. Snopes didn't take this on, but Factcheck did, and has a very extensive entry explaining that the claim is misleading and a matter of debate. The plant Bain took over might have closed anyway. The man's wife died five years after the closing, and for one of those years she did have her own job's insurance. There are a lot of "what-ifs" in this story -- too many to pin the blame on Romney certainly, and even Bain. American steel companies can't compete in a worldwide market. That's the underlying truth to the whole story. Leftists have a visceral hate of this sort of leveraged buyout, but it doesn't always fail, and it's not clear what alternatives they can offer to save a plant.
This anti-Romney video isn't called a "deepfake," but it is misleading. You don't need neural-network learning algorithms remaking faces on a video to create a misleading video that can be argued as true, depending on the set of premises you start with -- hatred or appreciation of capitalism; belief that plants can or can't be saved by Bain-style buyouts; even belief that people will go to the doctor if they have insurance -- after all, their insurance might not have been enough to deal with cancer anyway.
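The Pelosi clip itself makes the same point: it was reportedly just slowed down, which is nothing more than stretching timestamps, no neural networks involved. Here is a minimal sketch of that arithmetic; the function name and the 75% figure are illustrative, not a claim about the actual tool used.

```python
# Minimal sketch: slowing a clip is just timestamp arithmetic, no neural
# networks required. Each frame's presentation time is stretched by 1/speed,
# so the clip plays back slower and runs longer. Names here are illustrative.

def slow_down(frame_times, speed=0.75):
    """Remap presentation timestamps so playback runs at `speed` (0 < speed <= 1)."""
    if not 0 < speed <= 1:
        raise ValueError("speed must be in (0, 1]")
    return [t / speed for t in frame_times]

# A short clip at 4 frames per second, slowed to 75% speed:
original = [i * 0.25 for i in range(8)]   # 0.0, 0.25, ..., 1.75
slowed = slow_down(original, speed=0.75)  # every timestamp stretched by 4/3
```

That is the entire "manipulation": a re-timing any video editor can do in seconds, which is why calling it a deepfake overstates the technology involved.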
You know what I'm worried more about? Neural networks being taught by geeks to do things automatically. There's a scene in one of the episodes on "Silicon Valley" where Richard, the CEO of the start-up Pied Piper, bursts into a focus group where ordinary people are criticizing his app to explain that in fact it's very cool and neural networks are being trained to do things with people's content. One woman named Bernice expresses concern about what is being done with her uploaded content by the company running the app. One man then says worriedly, "Terminator" -- the neural networks will become so smart they will take over and destroy humans (the Singularity cult idea). Richard dissuades them and continues to insist on its coolness. Of course, those neural networks can only learn what they are being taught - garbage in, garbage out. And they only "keep learning" on their own -- and doing -- if coders don't stop them. Shouldn't we be more worried about ethics-free coders manipulating neural networks and not conceiving of breaks on their "learning" -- without any input from ordinary people in a democracy?
Even so, while one set of geeks is doing something that undermines democracy, according to this Times essayist, another set is undoing the damage for mundane business reasons. Facebook doesn't want to be regulated. So some engineers are working on detecting such deepfakes at the stage they are uploaded to a platform.
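One common building block for that kind of upload-time screening is perceptual hashing: reduce a frame to a tiny fingerprint and compare it against fingerprints of clips already flagged as manipulated. This is a hedged, toy sketch in pure Python on a grid of grayscale values -- real systems hash whole videos with dedicated libraries -- and it does not describe any particular platform's pipeline.

```python
def dhash(pixels):
    """Difference hash of a grayscale pixel grid (a list of equal-length rows):
    one bit per horizontally adjacent pair -- is the left pixel brighter?"""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of positions where two equal-length bit lists differ."""
    return sum(x != y for x, y in zip(a, b))

def looks_like_known_fake(frame, known_hashes, max_distance=2):
    """True if the frame's hash is within max_distance of any flagged hash."""
    h = dhash(frame)
    return any(hamming(h, k) <= max_distance for k in known_hashes)
```

Because the hash keys on relative brightness rather than exact pixel values, a re-encoded or slightly brightened copy of a flagged clip still matches -- which is the property that makes this useful for catching re-uploads.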
Wardle suggests that platforms "think hard" about this (huh?) and "reduce the prominence" of videos like the Pelosi one or "give context." Facebook did that, but others didn't. But wait a minute. If Facebook does that much with content, aren't they editors? If they label things or start providing context on their own, which inevitably will be politicized, aren't they media?
I'm all for Section 230 being revised so that platforms are no longer exonerated from responsibility for user content. I want them to have this responsibility, whether it is for letting shooters' videos stay up or for the morgue photos taken by the despicable pro-Kremlin Graham Phillips of the children killed in the mortar attack on their soccer field in the Donbass (which I and others tried to get removed -- and failed -- although ultimately other things Graham did got his account removed).
There's a moral panic about Section 230 being removed from the libertarians and leftists, in the belief that this is the Internet's First Amendment. Baloney. Private companies do not have to -- and don't currently -- maintain First-Amendment level freedoms on their platforms. Therefore they cannot "lose" their First Amendment. They have terms of service and rules which they already enforce -- but badly. So let them have the responsibility that goes with that, and use some of their billions on hiring staff to deal with customer content and complaints.
Everyone remembers the panic spread by Orson Welles's War of the Worlds radio broadcast in 1938. The potential for someone to use YouTube, Twitter, or Facebook for some modern version of this -- say, a fake Trump declaring war on North Korea -- is there, and not all reporters these days seem to have the cold-call and shoe-leather skills to go out into reality and find out what is happening (as they ultimately did, but not before the Internet got there first, when nuclear war seemed to have been launched at Hawaii). So the moral is to start now doing more reality-based reporting. There's plenty out there waiting to be covered in person and on foot.