WYSK: 06/24/22
This Week: 0. One Good Thing; 1. Dead Voices; 2. Metaverse Standards; 3. Money and Speech; 4. Coercive Prediction
What you should know from the week of 06/24/22:
- One Good Thing: Maddie's Place!
- Dead Voices: Amazon unveils a deepfake capability for Alexa to mimic the voices of dead relatives;
- Metaverse Standards: Meta/Facebook forms a "standards" body for the metaverse, but why does it leave out key players?
- Money and Speech: A federal court rules that boycotts are not free speech, restricting the 1st Amendment;
- Coercive Prediction: Sun-Ha Hong publishes a paper addressing how prediction limits choices.
Alright, obviously it feels like there is really only one news story this week: "SCOTUS overturns Roe v. Wade."
I wrote on this when it was a draft ruling, and I'll leave it there for now. This week's articles don't address the various SCOTUS rulings; those are some of the most hotly-discussed topics today and are being covered from almost every angle. Instead, this issue focuses on significant stories that you might not have heard about.
One Good Thing:
I just learned about "Maddie's Place" this week: a "non-profit, free-standing recovery nursery for babies experiencing withdrawal due to prenatal substance exposure." This is a surprisingly large and growing issue that doesn't get the coverage and care it should.
The compassionate, comprehensive, and non-condemning care that Maddie's Place gives is excellent, and they are intentionally smoothing the path for other recovery nurseries by working through the legislation and other bureaucratic requirements needed to operate a nursery like this. Very cool.
Dead Voices:
CNBC's Annie Palmer wrote on a new feature for Amazon's Alexa voice assistant:
At Amazon’s Re:Mars conference in Las Vegas on Wednesday, the company demonstrated a feature that enables its Alexa voice assistant to emulate any voice...The feature, which is still in development, could be used to replicate a family member’s voice, even after they’ve died.
Deepfakes are already weird and ethically suspect, but this is a fairly bizarre step.
In The Verge's coverage, James Vincent notes that:
Prasad [Amazon’s head scientist for Alexa AI] introduced the clip by saying that adding “human attributes” to AI systems was increasingly important “in these times of the ongoing pandemic, when so many of us have lost someone we love...”
...“While AI can’t eliminate that pain of loss, it can definitely make their memories last"
Key thoughts here:
First, "letting go" is a necessary part of grieving, it is just one of the hard ones.
And so since people don't like it, companies (especially ones that with a mania about seeing people only as customers and not as human beings) naturally want to give customers an alternative. But it will not be good for people.
Second, commoditizing people is a gross act.
From The Verge's article: "In a demonstration video, a child said, “Alexa, can Grandma finish reading me the Wizard of Oz?”"
Three elements jump out to me:
- Alexa is the gatekeeper to Grandma, as the child asks Alexa to give them a hit of companionship. This creates a dependence on Alexa itself as the giver of companionship, rather than reinforcing an existing relationship with a real person.
- Grandma can be summoned at will. Again, this does not sustain a loved one's memory, but commoditizes them and reduces them to slavish compliance.
- This is a cute(ish) but cherrypicked scenario, and an actual rollout of this feature is certain to result in abuses. Amazon proudly touts its ability to train a model on merely one minute of audio, making it trivial for the tens of millions of Alexa users to have nearly anyone's voice spout out custom text. A few possibilities for abuse: a person's voice could be replicated to support "vishing" (voice phishing) scams over the telephone; someone's voice could be made to read existing or custom obscene text; or custom dialogues could be crafted with an Alexa-fabricated voice playing one role. All of these abuses exist in more primitive forms today, but this technology would enable them at greater scale and with lower barriers to entry.
Metaverse Standards:
From Euronews this week:
Meta, Microsoft and other tech giants racing to build the emerging metaverse concept have formed a group to foster development of industry standards that would make the companies' nascent digital worlds compatible with each other.
Compatibility is very important, but Apple, which is predicted to release an AR/VR headset as early as January, was left out of the body:
Conspicuously missing from the member list for now however is Apple, which analysts expect to become a dominant player in the metaverse race once it introduces a mixed reality headset this year or next.
Any body comprising a subset of 'metaverse' players is evidence of a focus on market capture, not standardization.
At this point, it is unclear whether it is Apple or Meta and the rest of the 'standards' body who are prioritizing market capture. But Apple's exclusion demonstrates a fundamental truth about metaverse technologies: while the image presented is of brave tech companies simultaneously creating and exploring a new frontier for human expression and interaction, they are really focused on staking claims rather than exploring.
Money and Speech:
Elizabeth Nolan Brown reported on a ruling this week from the U.S. Court of Appeals for the 8th Circuit (additional reporting in WaPo) holding that boycotts are not protected speech or expression under the First Amendment:
Boycotts aren't protected speech, says federal court. The U.S. Court of Appeals for the 8th Circuit has upheld an Arkansas law saying public contractors can't boycott Israel.
Text from the ruling provides additional clarity:
In 2017, Arkansas passed a law requiring public contracts to include a certification that the contractor will not “boycott” Israel. Arkansas Times sued, arguing that the law violates the First Amendment. The district court dismissed the action. Sitting en banc, we conclude that the certification requirement does not violate the First Amendment and affirm.
The judges concluded that:
[The law] only prohibits economic decisions [decisions that are "purely commercial, non-expressive conduct"] that discriminate against Israel. Because those commercial decisions are invisible to observers unless explained, they are not inherently expressive and do not implicate the First Amendment.
Two key takeaways:
The law in question contains some vague definitions (emphasis mine):
(i) “Boycott Israel” and “boycott of Israel” means engaging in refusals to deal, terminating business activities, or other actions that are intended to limit commercial relations with Israel, or persons or entities doing business in Israel or in Israeli-controlled territories, in a discriminatory manner.
As one of the dissenting judges noted, 'intent to limit commercial relations' is a broad and subjective standard that can cover a wide range of constitutionally-protected activities.
Citizens United already addressed commercial/economic decisions as speech:
In Citizens United v. FEC, the Supreme Court established that spending for political outcomes is protected First Amendment expression. Citizens United covered spending on ads even when those ads were never seen, which cuts directly to the heart of the judges' holding that boycotting is not inherently expressive because it is not inherently visible.
And finally, a minor note of interest that conveys the badness of this law: it contains language allowing companies to effectively pay for an exemption by offering their goods or services at a price at least 20% below their closest competitor's.
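To make that arithmetic concrete, here is a minimal sketch with made-up numbers (the $100,000 bid is hypothetical, not taken from the law or the ruling), assuming the 20% is measured against the closest competing bid:

    # Hypothetical illustration of the 20% exemption threshold
    competing_bid = 100_000                # closest competitor's bid; made-up figure
    max_exempt_bid = competing_bid * 0.80  # a boycotting company must bid at or below this
    print(max_exempt_bid)                  # 80000.0

In effect, a company that declines to certify has to leave at least a fifth of the contract price on the table.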
Coercive Prediction:
Excellent research this week from Sun-Ha Hong—Assistant Professor at Canada's Simon Fraser University—about how tech-driven prediction limits people's choices and therefore reduces their agency.
The PDF of the research can be found here.
The research is pretty academic and jargon-y, but is worth reading. Hong walks through several significant cases where predictive analytics were clearly bogus and harmful, examines our society's obsession with prediction, and addresses some of the harmful myths about it.
I've included his abstract with added emphasis and annotation:
I argue that data-driven predictions work primarily as instruments for systematic extraction [or reduction] of discretionary power – the practical capacity to make everyday decisions and define one’s situation. This extractive relation reprises a long historical pattern, in which new methods of producing knowledge generate a redistribution of epistemic power: who declares what kind of truth about me, to count for what kinds of decisions? I argue that prediction as extraction of discretion is normal and fundamental to the technology, rather than isolated cases of bias or error. Synthesising critical observations across anthropology, history of technology and critical data studies, the paper demonstrates this dynamic in two contemporary domains: (1) crime and policing demonstrates how predictive systems are extractive by design. Rather than neutral models led astray by garbage data, pre-existing interests thoroughly shape how prediction conceives of its object, its measures, and most importantly, what it does not measure and in doing so devalues. (2) I then examine the prediction of productivity in the long tradition of extracting discretion as a means to extract labour power. Making human behaviour more predictable for the client of prediction (the manager, the corporation, the police officer) often means making life and work more unpredictable for the target of prediction (the employee, the applicant, the citizen).
Interest piqued? Disagree? Reach out to me at TwelveTablesBlog [at] protonmail.com with your thoughts.
Photo by Tamara Gak on Unsplash