Dances With Wolves Part 2: Bonus Issues

There are more issues with Clegg's analysis of Facebook that need to be considered than just the three critical ones covered in the parent essay.

  1. Facebook as 'The little guy'
  2. Facebook as 'the democratist'
  3. The user as 'participant'
  4. The 'phantom editor'
  5. Facebook's ability to provide relevance
  6. Facebook's prohibition on harmful content
  7. Facebook as Absalom
  8. Facebook as under oversight
  9. Facebook as non-provocative
  10. Facebook's 'secondary role'
  11. Time and polarization
  12. Suspect measurements
  13. Facebook's openness
  14. Facebook's 'comfort'
  15. Facebook as the little guy democratist (again...)
  16. Facebook as conduit
  17. Facebook as one of many

As promised in the parent essay, I'm going to respond here to a bunch of issues with Clegg's article. I'll use a quote/call-out format. Unless specified otherwise, all quotes are from Clegg's article:

Facebook as 'The little guy'

"Turning the clock back to some false sepia-tinted yesteryear — before personalized advertising, before algorithmic content ranking, before the grassroots freedoms of the internet challenged the powers that be — would forfeit so many benefits to society"

As a $900B+ company, with White House meetings and 8-figure lobbying expenditures, Facebook is so hilariously not 'grassroots' and falls so clearly under "the powers that be" that it is ludicrous to me that some PR person permitted that blurb to be included in the final version of Clegg's article.

Jaron Lanier addressed this mindset very well in 2018:

We used to be kind of rebels, like, if you go back to the origins of Silicon Valley culture, there were these big traditional companies like IBM that seemed to be impenetrable fortresses. And we had to create our own world. To us, we were the underdogs and we had to struggle. And we’ve won. I mean, we have just totally won. We run everything...We have no sense of balance or modesty or graciousness having won. We’re still acting as if we’re in trouble and we have to defend ourselves, which is preposterous. And so in doing that we really kind of turn into assholes, you know?
- Jaron Lanier

Facebook as 'the democratist'

"The internet needs new rules — designed and agreed by democratically elected institutions — and technology companies need to make sure their products and practices are designed in a responsible way that takes into account their potential impact on society."

Clegg's nod to democratically elected institutions (note that while this sounds like governments, it does not necessarily mean them) carries little weight given Facebook's predilection for courting authoritarian governments.

To be fair to Facebook, it is difficult, and I believe increasingly impossible, for international companies to navigate the patchwork of conflicting laws. But Facebook's track record is sufficiently tarnished that Clegg's 'honor by association' here does not hold up.

The user as 'participant'

"You are an active participant in the experience"

I've already talked about Clegg/Facebook's lie of 'Control,' and this quote demonstrates that while Facebook's talking points assure consumers of user control, Facebook intends, and understands, that users don't truly control anything.

These are not high-agency words that convey control. An "experience" is generally something that is undergone rather than something that is done. A "participant" is not engaged in leadership. Clegg understands that, fundamentally, users are ingesting what Facebook has determined they shall have, and that user agency comes second.

The 'phantom editor'

"There is no editor dictating the frontpage headline millions will read on Facebook. Instead, there are billions of front pages, each personalized to our individual tastes and preferences, and each reflecting our unique network of friends, Pages, and Groups."

It is simply false to imply that there is no editor dictating what millions read on Facebook. Facebook is the single editor dictating the frontpage headline; that editor just provides unique headlines on a per-user basis.

Also, it's billions of users, not millions of users. Facebook is very keen to inflate that metric as much as possible, and it is impossible that their VP of Global Affairs would mistake those numbers by three orders of magnitude. While this is conjecture, I believe the deceptive choice of referencing "millions" instead of the more accurate "billions" was intended to make Facebook appear more innocuous.

Facebook's ability to provide relevance

"It means you get the most relevant information and therefore the most meaningful experience."

This is at least aspirational, and probably false. At best it actually means that we get the information Facebook believes is most relevant. How does Facebook measure the relevance invoked here? How is 'meaningful' defined?

Clegg gives a very weak description of relevance as something you might "like" or "find that viewing it was worth your time." Despite Clegg's extensive use of "relevance" and "meaningful" (a combined 25 times), he never states how Facebook defines them.

As long as Facebook treats relevance and meaning as virtues to be pursued but withholds explicit definitions from the public, any discussion of control or transparency is merely a public relations effort.

Facebook's prohibition on harmful content

"Facebook has detailed Community Standards, developed over many years, that prohibit harmful content — and invests heavily in developing ways of identifying it and acting on it quickly."

Generation of policies and effective enforcement are separate problems. I believe that Facebook has succeeded at neither, but it is demonstrable that Facebook can neither prevent harmful content from being posted nor keep it off the platform once it is there.

It is also not Facebook's place to define "harmful content."

Also, Facebook accelerated genocide in Burma/Myanmar, which is sort of the capstone of "harmful content."

Facebook is fairly good at picking out known harmful content, but it is very bad at detecting harmful patterns. Part of this is because an individual mote of content is inconsequential to Facebook, while major patterns are valuable to them (people organizing and being connected is of value to Facebook, even if it is for the genocide of the Rohingya people or an insurrection at the Capitol).

As a result, Facebook's responses to widespread harmful patterns are always going to be hamstrung due to the fundamental conflict with Facebook's business interests.

Facebook as Absalom

"It would clearly be better if these decisions [on what content is acceptable] were made according to frameworks agreed by democratically accountable lawmakers. But in the absence of such laws, there are decisions that need to be made in real time."

Two things here. First, this is very 'Absalom in the gate' of Clegg, with strong "If only I were appointed judge in the land" vibes. If you are unfamiliar with the story, I recommend you read it in context, or, even better, read a quick commentary on the passage.

Second, this is just not how things work. American law doesn't look kindly on an attitude of 'sure, some laws should be developed, but since there were no laws and action was needed, I just did it on my own.'

I've been very brief and restrained on this point, but it is a major issue. A longer response didn't fit into this article, but if there is enough interest on this point, I'll write more extensively on it.

Facebook as under oversight

"Last year, Facebook established an Oversight Board to make the final call on some of these difficult decisions. It is an independent body and its decisions are binding — they can’t be overruled by Mark Zuckerberg or anyone else at Facebook."

Zuckerberg supports Clegg's assertion that the Oversight Board can make binding decisions:

"The board’s decision will be binding, even if I or anyone at Facebook disagrees with it."

However, as Evelyn Douek notes (on page 53, sentence 3 of Article V), Facebook retains the ability to override the Oversight Board's decisions.

Sections 4.3 and 4.4 (page 3) of the Oversight Board's charter say that the Board may "Instruct" Facebook. As a non-lawyer, I can't say whether Facebook must follow that instruction. However, Clegg's statements are at best misleading.

First, the Oversight Board is on track to make perhaps 30 decisions per year. Those decisions relate to specific posts that have/have not been taken down by Facebook. We're not talking about systemic change here.

Second, the Oversight Board issues a decision (which is covered by Sections 4.3 and 4.4 of the charter and which may be binding upon Facebook) as well as policy recommendations. Those policy recommendations have no authority or force over Facebook.

So, Clegg may be correct in saying that the discrete decisions of the Board are binding, but those decisions are extremely limited in scope, each applying to an individual post. And the broader work of the Oversight Board has no binding force upon Facebook.

If you are interested, here are some quality resources on the Oversight Board.

Facebook as non-provocative

"But Facebook’s systems are not designed to reward provocative content. In fact, key parts of those systems are designed to do just the opposite."

First, we don't have to take anything Clegg says on faith. In fact, based on Facebook's past actions, it is unwise to do so.

Second, again, since Clegg has not defined "provocative," it is hard to evaluate his claim here.

Facebook's 'secondary role'

"A Harvard study ahead of the 2020 U.S. election found that election-related disinformation was primarily driven by elite and mass-media, not least cable news, and suggested that social media played only a secondary role."

This is highly misleading and, while it may not be technically false, it is deceptive.

You can access the PDF of the study, which has not been peer reviewed, here.

First, while Clegg says that the study "suggested that social media played only a secondary role," he is sharing only part of the study's phrasing. In most places the study says that social media played a "secondary and supportive role" (emphasis mine, on "and supportive"). Clegg chose to leave that second part out to further his narrative that Facebook's potential harms to society are minimal.

The actual quote Clegg is likely referring to is at the top of page 4 of the study. The authors, while addressing what the study dubs the "Mass Media Leads" model, say that "... but social media mostly serves to recirculate agendas and frames generated through mass media, and plays a secondary or supportive role."

Second, the study makes a distinction between:

  • content that originates on social media and is shared and circulated there, and
  • content that originates elsewhere and is shared and circulated on social media.

While I believe that this distinction is flawed, the study does not even apply it consistently. On page 12, President Trump's Twitter account is labeled as a mass media outlet, even though it was an individual's social media account.

The study bases its assertion that social media plays only a "secondary and supportive role" on the fact that most content is generated off of social media, even when social media ingests and amplifies that content. This is a little bit like Douglas Adams' line that "It's not the fall that kills you; it's the sudden stop at the end."

Time and polarization

"An earlier Stanford study showed that deactivating Facebook for four weeks before the 2018 US elections reduced polarization on political issues but also led to a reduction of people’s news knowledge and attention to politics. However, it did not significantly lessen so-called “affective polarization,” which is a measure of someone’s negative feelings about the opposite party"

I assume (reader beware!) that, since it leverages a lizard-brain 'us against them' mentality, affective polarization takes longer to change than someone's views on a specific issue. I also assume that affective polarization is correlated with polarization on political issues, and that a sustained reduction in polarization on political issues would result in a reduction of affective polarization.

I didn't see any explicit research on how long it takes for affective polarization to change, but the studies I skimmed were all measuring affective polarization in years or decades.

Suspect measurements

"One thing we do know is that political content is only a small fraction of the content people consume on Facebook — our own analysis suggests that in the U.S. it is as little as 6%."

Clegg doesn't specify how this is calculated, so it is not possible to quantitatively assess his claim. Two main notes about his tack here:

First, he is using a style of measurement here that he fights against elsewhere in his article: quantity consumed rather than 'relevance.' Throughout the rest of the article Clegg trumpets Facebook's ability to provide 'relevant' or 'meaningful' content, yet here he suddenly shifts to the raw-quantity measure of consumption that he otherwise eschews.

Second, he is using very fuzzy language. "Suggests that" and "as little as" are not high-confidence language.

We can run a smell test against this claim pretty easily: the 'Facebook's Top 10' Twitter account maintained by New York Times tech columnist Kevin Roose uses the Facebook-owned CrowdTangle analytics platform to track the top-performing link posts on Facebook. A cursory review shows that political content and commentators consistently dominate that list.
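
To make that smell test concrete, here is a minimal sketch in Python of the kind of tally I have in mind. Everything in it is a stand-in: the sample post titles are invented, and the keyword heuristic is far too crude for real use. In practice you would feed in the actual daily top-10 link posts (for example, from a CrowdTangle export) and classify them more carefully.

    # A crude tally of how much of a "top posts" list is political.
    # The sample posts and the keyword heuristic are illustrative stand-ins
    # I made up; they are not real CrowdTangle output or a real classifier.

    POLITICAL_KEYWORDS = {"election", "congress", "biden", "trump", "ballot"}

    def looks_political(post_title: str) -> bool:
        """Crude keyword check; a real analysis would need a proper classifier."""
        title = post_title.lower()
        return any(keyword in title for keyword in POLITICAL_KEYWORDS)

    def political_share(top_posts: list[str]) -> float:
        """Fraction of the given posts that the heuristic flags as political."""
        if not top_posts:
            return 0.0
        return sum(looks_political(post) for post in top_posts) / len(top_posts)

    # A hypothetical day's top link posts, invented for illustration:
    sample_top_10 = [
        "What the latest election ruling really means",
        "Golden retriever learns how to surf",
        "Senator grills tech CEOs in Congress hearing",
        "10 easy weeknight dinner recipes",
        "Why this ballot recount matters",
    ]

    print(f"Political share of sample list: {political_share(sample_top_10):.0%}")

The point of the exercise is not the number it prints but what is being measured: prevalence among the most-engaged posts, rather than the share-of-all-consumption framing Clegg leans on.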

Again, Clegg's exact claim here may well be literally correct (the fuzzy phrasing and lack of quantitative data make it impossible to know at this point), but the accuracy of his intended point is highly suspect.

Facebook's openness

"You should be able to better understand how the ranking algorithms work and why they make particular decisions, and you should have more control over the content that is shown to you."

While I agree with the specific statement here, Clegg's suggestion that Facebook is taking concrete steps toward permitting you to "train your algorithm" is incompatible with Facebook's actual business model: as long as Facebook uses algorithms and keeps them private, users will not have sufficient control.

Facebook's 'comfort'

"This turns off algorithmic ranking, something that should be of comfort to those who mistrust Facebook’s algorithms playing a role in what they see."

This one is a bit hard to put concisely, so under time constraints I'm going to resort to analogy. Go ahead and skip past (or, better yet, buy a copy) if you haven't yet had the fortune to read That Hideous Strength from C.S. Lewis' Space Trilogy.

In section 4 of Chapter 6 ("Fog"), Mark Studdock writes articles for two news outlets. An excerpt from the one for "the most respectable of our papers" follows:

The second moral to be drawn from last night's events is a more cheering one. The original proposal to provide the N.I.C.E. with what is misleadingly called its own 'police force' was viewed with distrust in many quarters. Our readers will remember that while not sharing that distrust, we extend to it a certain sympathy. Even the false fears of those who love liberty should be respected as we respect even the ill-grounded anxieties of a mother. At the same time we insisted that the complexity of modern society rendered it an anachronism to confine the actual execution of the will of society to [an outdated and incapable organization]
- from C.S. Lewis (emphasis mine)

Clegg's tone is redolent of Studdock's. That isn't a fully convincing indictment of Clegg, but it should arouse suspicion due to:

  1. The fact that Clegg's audience is similar/identical to the audience Lewis imagined for Studdock;
  2. The fact that Clegg and Lewis are two men from the same culture (UK 'elites');
  3. The striking similarity of tone despite the 75 years between Lewis' writing and Clegg's article.

I believe that Clegg is using a rhetorical device commonly employed by UK elites when trying to hoodwink a more 'highbrow' audience, and that Lewis had observed and marked out this practice 75+ years ago.

If you're not convinced, fine and good. I'm mainly addressing this point because the similarity between Clegg's tone and Studdock's tone is so marked that it tickled my interest.

Facebook as the little guy democratist (again...)

"Political and cultural elites are confronting a raucous online conversation that they can’t control, and many are understandably anxious about it."

Before addressing the concerns of elites more deeply, let's skip lightly over the blatant hypocrisy here, since Facebook's C-suite is packed with billionaires and centi-millionaires, Zuckerberg attended Harvard before becoming a centi-billionaire himself, and the guy who wrote that quote is the former Deputy Prime Minister of the UK. Let's even skip over the unbridled arrogance required to brand democratically elected leaders "political elites."

With that out of the way, 'elites' aren't the injured parties here. Sure, they are concerned, but the injured parties are vaccine-hesitant or anti-vax families who have been dragged down a path of conspiracy theories spread on Facebook. The victims are flat-earthers on YouTube. The victims are QAnon adherents. The victims are dead Rohingya in Burma, victims of a genocide Facebook admits its platform created an "enabling environment" for, even while pushing back against transparency and accountability. The victims are Officer Sicknick. The victims even include Ashli Babbitt.

Facebook as conduit

"Should a private company be intervening to shape the ideas that flow across its systems, above and beyond the prevention of serious harms like incitement to violence and harassment? If so, who should make that decision? Should it be determined by an independent group of experts? Should governments set out what kinds of conversation citizens are allowed to participate in? Is there a way in which a deeply polarized society like the U.S. could ever agree on what a healthy national conversation looks like?"

Facebook is not just a platform that ideas flow across. Facebook (like all other engagement-driven social media companies with ad-based revenue models) amplifies certain content. As Matt Stoller said:

For instance, Alex Jones’s monstrous claims about mass shootings reflect a deranged individual, but YouTube recommending his videos to users 15 billion times reflects a policy problem.
- Matt Stoller

That is a business-model problem, which means both that Facebook is not going to change that behavior unless forced to (ideally by consumers, but likely only by regulation) and that Facebook will not acknowledge the source of the problem unless forced.

Facebook as one of many

"Consider, for example, the presence of bad and polarizing content on private messaging apps — iMessage, Signal, Telegram, WhatsApp — used by billions of people around the world. None of those apps deploy content or ranking algorithms."

By the same token, those apps don't cause harm on the scale that Facebook does, which makes this an apples-to-oranges comparison. The app with the most harmful effects, Telegram, is perhaps the most Facebook-y of the group, with open channels rather than just groups and private messages.

