The United States Supreme Court recently handed down decisions in two closely watched cases that had the potential to dramatically lessen the legal shield that Internet service providers have enjoyed since Congress enacted the Communications Decency Act (“CDA”) in 1996. Rather than send shockwaves through the Internet community, the Supreme Court largely punted, much to the collective relief of Google, Facebook and the other major Internet providers.
When the Internet was still in its infancy, but clearly burgeoning, Congress embraced it and enacted legislation expressly designed to promote the Internet as a free market of competition and ideas. Congress codified the relevant provision of the CDA at 47 U.S.C. § 230, which grants broad immunity to Internet providers for the content they host, on the theory that hosts should not be liable for content posted by others. A key provision of the law states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In principle, the Internet is a large public bulletin board, and the owners of the bulletin board should not be responsible for what others post on it. That is why, for example, restaurant owners typically have no legal recourse against Yelp when a disgruntled patron posts a scathing review that scares away customers – even if the review is entirely inaccurate.
But should that broad protection still exist? It depends on whom you ask, but one thing is undeniable — the Internet of today is not the Internet of 1996, when Congress granted that broad immunity. In 1996, approximately 40 million people in the world were using the Internet. Today, over 4 billion people are using the Internet, and the amount of data being uploaded has exploded exponentially.
Additionally, Internet providers are much more active in moderating the content they host. Rather than serving as the proverbial bulletin board where all content remains posted, providers are constantly reviewing and removing material as they see fit. More importantly, many providers are actively curating and steering content to particular viewers based on what the providers believe those viewers want to see. The large Internet providers are undoubtedly active on their so-called bulletin boards – the “content moderator” industry is reported to be a $12 billion industry. Does that increased curation of content suggest that Internet providers should no longer be able to hide behind the liability shield?
The Supreme Court had not addressed the liability shield set forth in the CDA since the law was enacted in 1996, and many had been eager for the Court to do so. Indeed, legal scholars thought the Supreme Court would address the issue head on in two recent cases – one against Twitter, and one against Google. The Court relied largely on the case against Twitter to make its point, namely that it was not going to use these cases to peel back the immunity shield. The case against Twitter was brought by the family of a man killed in a terrorist attack carried out by ISIS at a nightclub in Istanbul, Turkey in 2017. The family sued Twitter under a federal antiterrorism law that allows plaintiffs impacted by terrorism to seek civil damages from those who carried out the terrorist acts and those who aided and abetted the terrorists.
The plaintiffs argued that Twitter not only hosted propaganda from ISIS supporters, but also used algorithms that effectively steered terrorist content to terrorists, thereby contributing to the killing of their family member. Justice Clarence Thomas, writing the opinion for the Court, acknowledged that Twitter hosted content from ISIS, including videos that raised funds for weapons of terror and showed brutal executions of civilians. Yet Justice Thomas and the Court stopped short of finding that Twitter’s hosting of that content aided and abetted ISIS to such a degree that Twitter could be said to have participated in the attack. With that finding, the Court avoided having to wade into CDA jurisprudence, much to the disappointment of those who believe more control over Internet content is warranted.
Thus, for now, consumers and businesses still cannot look for legal recourse from Facebook, Twitter, Glassdoor, Yelp, YouTube or other Internet service providers when they feel they have been defamed or otherwise treated unfairly by content posted on those platforms. The Supreme Court’s deliberate decision not to tackle the broad immunity shield of the CDA, however, has prompted calls inside and outside of Congress for legislative action. Many argue that the federal law no longer fits an Internet far different from the one that existed when the law was enacted. They argue that in 1996, Congress could not fully appreciate what the Internet would become and certainly did not intend to shield large Internet companies from consequences for disseminating violent, discriminatory and knowingly false information.
But do these decisions end the issue, or will they spur Congress to act? As a society, do we want Congress to change the current law, or do we think Congress got it right when it protected Internet service providers with immunity? Large tech companies insist that without a legal safe harbor for content posted on their sites, the Internet will change dramatically: they will remove significant content, increasingly censor uploads, and the free marketplace of ideas will wither on the vine. On the flip side, given their already pervasive curation of posted materials, perhaps the free marketplace of ideas is not so free anymore, and perhaps tech companies should share responsibility for how they manipulate content uploaded to their sites.
The Supreme Court did not answer this question for us, but stay tuned, as Congress may take the matter into its own hands. And if it does not, there are plenty of cases in the Supreme Court’s pipeline that could lead to dramatic changes in this area.