The Coming Judicial Confrontation With Social Media.

Google has banned people from criticizing the vaccine, branding such criticism "misinformation." Yet it allows the Taliban to promote violence and Dr. Fauci to continue to lie to the public. The First Amendment is about free speech; no one ever thought that colleges, schools, and social media would replace government as the dictators of speech.

This fight is not about social media and the public. It is about a shadow government controlling our thoughts and information. Slowly, honest social media alternatives are being created; the public has to ignore the dishonest ones, like Facebook, Instagram, YouTube, and Twitter.

The Coming Judicial Confrontation With Social Media.

The double bind Big Tech exploits to avoid accountability.

By Robin Ridless, Human Events, 9/18/21

Commentators on the district court's recent order granting a preliminary injunction in Netchoice, LLC v. Ashley Brooke Moody et al. have focused on social media's victory against the State of Florida, celebrating the court's opinion that Google, YouTube, and Facebook are private companies beyond the reach of Gov. Ron DeSantis and the Florida legislature's newest rules restricting Silicon Valley's ability to censor, deplatform, and block users. What these writers have neglected, however, is the tone of irresolution in this and similar cases decided in favor of Big Tech.

Judge Hinkle, the presiding U.S. District Court judge in Netchoice, hinted at this, even as he blocked from going into effect those sections of the Florida law that he ruled violated the First Amendment. Some months earlier, Justice Thomas, in his concurrence in Biden v. Knight First Amendment Institute, expressed a similar uncertainty. In crafting a jurisprudential blueprint for how to regulate online platforms, Thomas warned: “We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”

Is social media space a public square, as some have argued, one that is properly subject to government regulation, or is it private property, buffered by the First Amendment from government constraint? The question confounds First Amendment jurisprudence and causes the uneasiness increasingly becoming evident in cases like the ones just cited.

To expand on Biden v. Knight and how it fits into this overall picture, some background is needed. In 2019, the Second Circuit Court of Appeals decided that then-President Trump’s Twitter account was a government-controlled public forum, precluding him from blocking users from accessing its comment threads. The Knight Institute jubilated over its triumph. Yet sometime later, Twitter bounced Trump from the platform entirely, barring not just some but all users from interacting with his messages. How then could the account have been government-controlled? This past April, the 2019 appeals court decision that was supposed to be a major defeat for Trump was vacated by the U.S. Supreme Court. Still, it took Justice Thomas in his concurrence in that decision to expose the double bind.

Not that everyone heard. A few months later, Judge Hinkle, confronted with the Florida case, toyed with but ultimately declined to adopt Thomas's perspective going forward. He blocked Florida's new laws prohibiting online providers from, among other things, removing political candidates from their platforms or shadow banning “journalistic enterprises.” He did so largely because he accepted the plaintiffs' claim that they were being forced to speak, among other ways by having to host speech that violates their policies and by being prohibited from adding warning language, which would alter their editorial practices. (It should be added that, in their court papers, the online platforms claimed the right to apply those policies without necessarily making the public aware of what they are.)

The invocation of the First Amendment for the time being saved the Big Tech plaintiffs, who were represented by a trade association. But reading Netchoice through the earlier Thomas concurrence reveals Knight-like contradictions in the district court’s ruling. Clearly, the internal contradictions adumbrated by Thomas weren’t going away. Indeed, the need to reconcile them will likely shape future court battles as more states pass legislation similar to Florida’s.

THE WAY OUT 

As Thomas conceived it, dominant online providers resemble two classes of highly regulated private property: traditional common carriers like railroads and telephone companies, and places of public accommodation, which is what the law calls public-facing businesses that provide goods and services to all who want to patronize them. Like common carriers, platforms dominate their markets by controlling communication networks. And like places of public accommodation, they offer services that implicate communal norms and the public interest.

Judicial doctrines have historically responded to the leverage exercised by both of these kinds of business by curtailing their “right to exclude.” Telephone companies, for example, can't stop serving subscribers because of the political content of their messages. Likewise, restaurants can't discriminate against individuals who choose to eat at their establishments. Concerns over interstate commerce as well as racial segregation occasioned the passage of these laws. In his concurrence, Thomas suggests there is no reason this longstanding jurisprudence shouldn't serve as a basis for restraining online platforms.

In framing the situation this way, Thomas clearly deems it highly significant that the online platforms exercise a monopoly over access to online information and content. Hinkle, on the other hand, in reaching his decisions in Netchoice and employing reasoning based on classical First Amendment law, skips over economic realities and looks instead to “a monopoly in the marketplace of ideas.” In other words, Thomas abhors the ability of online platforms to suppress a multiplicity of viewpoints by preventing some users from participating in the collective conversation. Hinkle, by contrast, believes that intellectual competition over content ultimately determines who gets heard.

This divergence in their basic frameworks is felt throughout the opinions of both judges. Justice Thomas, for example, zeroes in on such things as Google controlling the market in e-books through de-indexing. Hinkle, by contrast, singles out public statements by Governor DeSantis that, he claims, prove the sole purpose of the Florida Statutes was to neutralize the attacks on conservative speech, thereby unconstitutionally authorizing the state to engage in viewpoint discrimination. We could go on. Thomas, to take another example, examines barriers to entry, such as high start-up costs. Hinkle, on the other hand, focuses on precedents such as Miami Herald Publishing Co. v. Tornillo (1974), which, he maintains, stands for the proposition that it is not up to the government to level the playing field between the proponents of competing ideas.

But the point of Thomas's foray into judicial doctrine in the first place was to address the kinds of problems that come up repeatedly when courts look solely and reductively at the realm of ideas. Thomas's concurrence marks a turning point because it wrests the discussion from one about speech between factions to one about the distribution of speech to the broader public (which purpose, as Thomas points out, is the basis of the immunities awarded online platforms under Section 230 of the Communications Decency Act).

Seeing online platforms as conduits for speech to the public at large means that Thomas pays special attention to the empirical question of which platforms have dominant market share and, as a result, should have their “right to exclude” curtailed. Hinkle, staying within the penumbra of Tornillo and the marketplace of ideas, dismisses the relevance of market share and economic dominance, reiterating that the government should not act as a referee, no matter how power is skewed between the parties. What Hinkle is leaving out of the mix, if one listens to Thomas, is that today a complete exchange of ideas depends upon access to the communications infrastructure. That this access is controlled by a few powerful private entities puts these entities in a position analogous to transporters of physical goods, like railroads, which Congress has long seen fit to regulate in the cause of seamless interstate commerce and national integrity.

Thomas, put otherwise, offers a way out of stale rehashing about intellectual diversity by drawing parallels with how the law has over time dealt with economic monopolies in other industries. It is true that he doesn't confront the case of a regulated entity that accuses the government of compelling it to speak, as the Netchoice plaintiffs have done. He knows full well that state regulation of common carriers and places of public accommodation cannot infringe on an entity's First Amendment rights without counterbalancing that imposition with a “compelling state interest,” a test that is notoriously difficult for states to pass. He also acknowledges that First Amendment challenges will complicate any future regulatory schemes federal or local governments might implement, requiring a lot of working-through to achieve the regulatory objectives he favors. Partly in response, in fact, many commentators have recently begun to speculate on how regulation of social media might mesh with First Amendment jurisprudence, and a journal dedicated to just this topic has recently debuted.

All this activity signals that solutions are out there, notwithstanding the complications. What follows teases out what, in my view, one of those complications might look like.

ALGORITHMS HAVE MORE FIRST AMENDMENT RIGHTS THAN HUMANS?

One of the benefits of Thomas’s framework is that it prompts side-by-side comparisons with precedents involving parties seeking First Amendment protection from the enforcement of “right to exclude” regulations. An obvious example is Masterpiece Cakeshop v. Colorado Civil Rights Commission (2018). Jack Phillips was sanctioned by the Colorado Civil Rights Commission for violating local public accommodations law by refusing to make a custom-order cake for regular customers of his on the occasion of their same-sex wedding. Part of Phillips’s defense was based on the Free Speech Clause.

Phillips asserted that his hand-made, highly decorated cakes were acts of creativity, embodying protected speech and expressive conduct. The State of Colorado, he claimed, was forcing him to endorse a message that violated his beliefs. The left ridiculed this argument, insisting his cakes were ordinary commodities. Jennifer Boylan, in the New York Times, asked (presumably rhetorically), “Is a well-manicured lawn a form of art by this definition?” Ultimately, the case went to the Supreme Court, which, in the aforementioned 2018 decision, dodged the Free Speech and Free Exercise questions entirely. More recently, the Court denied certiorari in a similar case involving a florist.

I reference these cases simply to emphasize a single point. Whereas Phillips's individualized, value-laden, and consummately honed artisanal labors are still in First Amendment limbo, the automated, algorithm-manipulated decisions of online platforms (over which, in many cases, no cognizant agency presides) are not. The latter have been squarely covered by First Amendment protection in cases involving search engines. Although Hinkle acknowledges that “the overwhelming majority of the material [over which the State of Florida was to extend regulation] never gets reviewed except by algorithms,” he assumes that “ideologically sensitive” content does get reviewed. The editorial discretion involved in such review, in Judge Hinkle's reasoning, should have protected status. This assumption provides the basis for his application of the First Amendment and, thence, strict scrutiny, which the State of Florida, in his ruling, flagrantly fails.

But again, to fit this case into traditional First Amendment jurisprudence, Hinkle must turn mental somersaults. He has to treat nonhuman editing and curation processes as speech rather than conduct, the latter of which isn't covered by the First Amendment unless it is “expressive,” that is, unless it represents ideas through symbols and images in ways the courts have considered similar to verbal speech. In fact, many algorithmic processes create content blindly rather than via any personal agency. They are self-propelling programs that engage in what one scholar called “unguided adaptation toward a goal.”

It is not as if the Netchoice plaintiffs deny this, either. In their court papers, they give the absence of human review as a reason why it would be infeasible to try to monitor their allegedly inconsistent decisions rejecting select users. In a similar vein, online platforms have tried to dress up their attacks on their political enemies by linking their censorial practices to pretextual goals such as community welfare or election integrity. One gets a sense of their heroics in the second paragraph of Netchoice's Complaint, which lays out the myriad ways Silicon Valley is out to save the world:

These unprecedented restrictions [passed by the Florida legislature] are a blatant attack on a wide range of content moderation choices that these private companies have to make on a daily basis to protect their services, users, advertisers, and the public at large from a variety of harmful, offensive, or unlawful material: pornography, terrorist incitement, false propaganda created and spread by hostile foreign governments, calls for genocide or race-based violence, disinformation regarding COVID-19 vaccines, fraudulent schemes, egregious violations of personal privacy, counterfeit goods and other violations of intellectual property rights, bullying and harassment, conspiracy theories denying the Holocaust or 9/11, and dangerous computer viruses.

While courts have accepted this linkage, the extension of First Amendment coverage to algorithm-manipulated discriminations in the name of such pretextual goals is a fallback that a number of prominent scholars have questioned. Judge Hinkle, in other words, relied on his own assumption that a substantive message is being transmitted by online platforms when they kick off a user. In reality, such actions may be completely uninformed by human intention.

Why should algorithmic outputs not have the last word on deplatforming? According to commentators, a transmitted message must at least be designed to reach a recipient in order to be speech. Yet a major part of the complaint being addressed by the Florida Statutes is precisely that the basis for targeting specific users is neither obvious nor communicated. Lest we forget, moreover, Thomas points out that Twitter's terms of service permit the company to remove any user “at any time for any or no reason.” So Big Tech companies are basically contending that although they can act 100% mindlessly and randomly against their political enemies, or otherwise unilaterally determine what is in everyone's self-interest, any attempt to hold them to account for their conduct is an imposition on their free speech.

FAKE DUE PROCESS MAY BE WORSE THAN NO DUE PROCESS

This counterintuitive result is what comes of squeezing social-media cases into ill-fitting categories of traditional jurisprudence. What emerges is a kind of fake due process in which those with concentrated control over our networks succumb to the megalomanic lure of their own self-righteousness. Indeed, Jonathan Turley's dystopian discussion of the dangers of evading free speech protections through corporate proxies shows the ramifying cost of elevating machine learning, with all its defects, to the level of actual discourse. As Turley put it:

The government cannot implement a censorship system under the Constitution–but it can outsource censorship functions to companies like Facebook and Twitter. Just this week, the White House has admitted it has been flagging ‘misinformation’ for Facebook to censor. [Link omitted.] At the same time, Democrats like Sen. Richard Blumenthal (D-Conn.) have demanded that Big Tech companies commit to even more ‘robust content moderation’—an Orwellian term for censorship.

For Hinkle, these dangers are easily set aside by virtue of the fact that the internet affords speakers and political candidates more communicative opportunities than ever before. This, according to him, disproves the myth that online platforms dominate the media. Again, to resort to this reassurance requires a very different starting definition of dominance than the one developed by Justice Thomas. Thomas already understood this in April. He anticipated the common refrain to which Hinkle would have recourse and answered it before the fact this way:

It changes nothing that these platforms are not the sole means for distributing speech or information. A person always could choose to avoid the toll bridge or train and instead swim the Charles River or hike the Oregon Trail. But in assessing whether a company exercises substantial market power, what matters is whether the alternatives are comparable. For many … nothing is.

Again, by premising his standards primarily on common carrier doctrine and, to a lesser extent, on public accommodations doctrine, Thomas takes the conversation out of boxes like “hate speech” and “misinformation” that too often skew these discussions, miring them in ideological arguments that rehearse themselves endlessly without reaching resolution. Meanwhile, the sheer number of publishing outlets in digital society becomes itself a reason for leftists to debunk the value of speech. Independent avenues of expression, however modest, become part of the excuse for Big Tech's campaign of suppression. Parler is the perfect example. It is the Oregon Trail of today's internet.

So where do things now stand? So long as courts construe the conflicts at issue as being about content rather than access, about ideas rather than the means of their distribution, they will likely wind up with results that, like Knight, will in time stand revealed as facially incorrect, if not an ironic mockery of First Amendment values. Whatever else may be flawed about the Florida Statutes, one thing seems increasingly clear: some sort of reckoning awaits. With Florida appealing the Netchoice order and former President Trump suing Facebook, YouTube, and Twitter for banning his online communications, the time to evolve the new applications of existing legal doctrines forecast by Justice Thomas may well already be upon us.