Section 230: Challenges and Enforcement

While classifying companies as Information Content Providers, as defined in Section 230 of the Communications Decency Act, comes with challenges, the enforcement benefits exceed the risks.

This is a follow-on to my previous essay, which discussed how to enforce CDA 230 to mitigate harms from algorithmically promoted content while minimizing negative impacts on free speech. You can read it here.

Classifying companies as Information Content Providers under CDA 230 presents enforcement challenges, but those challenges are not as great as they may first appear, and they are certainly less significant than the squeals of the industry's major players will make them seem.

It is critical to recall that without CDA 230 protections, companies do not suddenly become legally responsible for any harm caused through their technologies. Rather, they are merely open to potential liability, which must still be proven in a court of law.

Here are the key enforcement challenges:

Who is responsible?

If CDA 230 is enforced in this way, tech companies will respond by claiming that their algorithms function merely as dumb amplifiers that accelerate whatever is put into them, and that they are therefore not producing new content, just amplifying content produced by users. This is disingenuous. The tactic is especially common in Facebook's rhetoric, and I responded to a particularly egregious example of it from Nick Clegg in June.

While wrong, this argument has a convincing veneer of truthiness and needs to be dealt with at greater length; I'll publish an essay addressing it in the context of CDA 230 shortly.
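
In the meantime, a minimal, hypothetical sketch shows why the framing fails. An engagement-driven feed ranker, even a toy one like the version below, does not merely accelerate its inputs: it scores, selects, and orders posts according to weights the platform chose. The Post fields and the weights here are illustrative assumptions, not any company's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    comments: int

def rank_feed(posts, feed_size=10):
    """Toy engagement ranker: scores each post with hand-picked weights,
    then selects and orders the top posts. Which posts surface, and in
    what order, is decided by the platform's weights, not by users."""
    def score(post):
        # The weights are editorial choices: here, a comment is deemed
        # five times as valuable as a like.
        return 1.0 * post.likes + 3.0 * post.shares + 5.0 * post.comments
    return sorted(posts, key=score, reverse=True)[:feed_size]

# Two posts rank differently purely because of the platform-chosen weights.
feed = rank_feed([
    Post("alice", "post one", likes=100, shares=0, comments=0),
    Post("bob", "post two", likes=10, shares=0, comments=25),
])
print([p.author for p in feed])  # 'bob' outranks 'alice': score 135 > 100
```

Deciding that a comment is worth five likes is an editorial judgment made by the platform; the resulting feed is a new arrangement of content that no user asked for in that form.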

Search Engines

Search engines (like Google) are perhaps the most explicit example of ranking and recommending content, and they clearly provide value to society. How search engines are handled under CDA 230 is the thorniest enforcement issue.

Additional legislation could be written to provide specific protections for search engines, but such legislation will be exceedingly difficult to draft from a technical perspective and will likely age poorly. Alternatively, search engines could be exempted in exchange for making their algorithms public and peer-reviewed, but this would leave them exceptionally vulnerable to malign actors gaming their results. Finally, search engines could be redesigned to rely more on explicit user input, through a variety of filters, rather than on the perceived omniscience of a series of algorithms created by biased developers (see the sketch below); but this still doesn't entirely solve the problem of how search engines differ from other social media products.

Ultimately, some combination of these and other solutions would likely need to be adopted to achieve the best outcomes.
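
As a concrete, entirely hypothetical illustration of the user-input option above: the ranking function below scores results using weights the user supplies rather than weights hard-coded by the engine's developers. The field names and weights are assumptions made for the sake of the sketch.

```python
def rank_results(results, user_weights):
    """Rank search results with weights the *user* supplies (e.g.,
    recency vs. textual relevance), rather than with a fixed,
    opaque ranking chosen by the engine's developers."""
    fields = ("relevance", "recency", "reputation")
    def score(result):
        return sum(user_weights.get(f, 0.0) * result.get(f, 0.0) for f in fields)
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Older, highly relevant page", "relevance": 0.9, "recency": 0.2, "reputation": 0.8},
    {"title": "Fresh news story", "relevance": 0.5, "recency": 0.9, "reputation": 0.6},
]
# This user has chosen to prioritize recency over textual relevance.
for r in rank_results(results, {"relevance": 0.3, "recency": 0.6, "reputation": 0.1}):
    print(r["title"])  # the fresh story ranks first under this user's weights
```

The ranking criteria become explicit, user-controlled parameters rather than hidden editorial judgments, which weakens the argument that the engine itself is providing the content.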

'Shuffle' and simple processing

Spotify's 'shuffle songs' feature isn't a simple random shuffle of songs; instead, Spotify uses an algorithm that produces an ordering that feels random to users even though it technically isn't.
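
For illustration, here is a minimal sketch of the general technique (a toy under stated assumptions, not Spotify's proprietary implementation): spread each artist's songs roughly evenly across the playlist with a little random jitter, avoiding the same-artist clusters that a true uniform shuffle regularly produces and that listeners perceive as 'not random.'

```python
import random
from collections import defaultdict

def spread_shuffle(songs, artist_of):
    """Shuffle that spreads each artist's songs roughly evenly across
    the playlist, so the result *feels* random (no artist clusters)
    even though it is not a uniform random permutation."""
    by_artist = defaultdict(list)
    for song in songs:
        by_artist[artist_of(song)].append(song)

    positioned = []
    for tracks in by_artist.values():
        random.shuffle(tracks)        # randomize order within each artist
        n = len(tracks)
        offset = random.random() / n  # random starting point for this artist
        for i, song in enumerate(tracks):
            jitter = random.uniform(-0.1, 0.1) / n  # small random wobble
            positioned.append((offset + i / n + jitter, song))

    positioned.sort(key=lambda pair: pair[0])  # merge artists by position
    return [song for _, song in positioned]

# Usage: each song is a (title, artist) tuple; artist_of extracts the artist.
playlist = [("Song A1", "Artist A"), ("Song A2", "Artist A"),
            ("Song B1", "Artist B"), ("Song C1", "Artist C")]
print(spread_shuffle(playlist, artist_of=lambda s: s[1]))
```

Even this mild transformation is a presentation choice the platform makes, distinct from the underlying songs supplied by artists and labels, which is exactly the distinction the next paragraphs turn on.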

Considering Spotify an Information Content Provider under CDA 230 for providing a shuffled list of songs does not introduce material risk. As a provider of Interactive Computer Services, Spotify would generally remain protected under CDA 230 and would not be liable for artists' content; it could be liable only for the way it presents songs in a shuffled format.

For example, Spotify could not be held liable for obscene content provided by someone else on its platform. But if Spotify were found to be producing new content ('for you' playlists or other recommended content) that incorporated that obscene content, then it could be liable for harm caused by its own content.

Small Companies and Regulation

Regulation risks harming smaller companies, which have less funding and organizational capacity than larger companies and are therefore less equipped to absorb a new regulatory burden.

However, in this case, the elements that would lose liability protections (i.e., the elements that would fall under regulation) are generally advanced, proprietary algorithms. These are most commonly found at large tech companies, both because of the AI/ML expertise required to develop them and because of the extensive data sets they are trained on.

The likelihood of chilling innovation or harming small companies is slim.