What Policymakers Need to Know About the First Amendment and Section 230
The Supreme Court just heard two cases - Twitter v. Taamneh and Gonzalez v. Google
- that could dramatically affect users’ speech rights online. Last
week, EFF hosted a panel in Washington D.C. to discuss what legislators
need to know about these cases, the history of Section 230, and the First Amendment’s protections for online speech.
Alongside EFF Senior Staff Attorney Aaron Mackey, the panel included Billy Easley, Senior Public Policy Lead at Reddit, and Emma Llanso,
Director of the Free Expression Project at the Center for Democracy and
Technology (CDT). Senator Ron Wyden (D-OR), one of the co-authors of
Section 230, gave opening remarks.
Senator Wyden and the Supreme Court
Senator Wyden opened the panel with background on the law: it
simply establishes the principle that the person who creates and posts
content is responsible for that content. Thanks to Section 230 and the
First Amendment, websites can take down what they want. Section 230 is
essential to smaller companies and startups: “The big guys can take care
of themselves,” but the small guys should be able to compete with the
big guys, Wyden explained. The law democratizes speech, and elevates the
choices of users. Thanks to Section 230, people are able to speak out.
Wyden then discussed the latest Supreme Court cases. In Gonzalez v. Google,
the petitioning plaintiffs made a radical argument about Section 230.
They asked the Supreme Court to rule that Section 230 doesn’t protect
recommendations we get online, or how certain content gets arranged and
displayed. In Twitter v. Taamneh, the
U.S. Court of Appeals for the Ninth Circuit ruled that online services
can be civilly liable under the Anti-Terrorism Act (ATA) based on claims
that the platform had generalized awareness that members of a terrorist
organization used its service.
In our view, the decision in Gonzalez should be clear: online
recommendations and editorial arrangements are the digital version of
what print newspapers have done for centuries, directing readers’
attention to whatever might be most interesting to them. Deciding where
to direct readers is part of editorial discretion, which has long been
protected under the First Amendment. Regarding Taamneh, the Court should
interpret the ATA to create liability only when platforms have “actual
knowledge that a specific piece of user-generated content substantially
assists an act of terrorism.” In other words, online services should not
be liable under the ATA based only on claims that they had some
generalized awareness of terrorist content on their platforms.
In Wyden’s view of the cases, the Justices seemed to recognize
that removing Section 230 protections for algorithms is the same as
taking away Section 230 protections generally. Ultimately, what we need
are strong consumer privacy laws that remove the incentive to hoover up personal data and monetize it, along with better antitrust enforcement.
Lastly, Wyden closed with a warning. Members of Congress who want to scrap Section 230 need to be careful what they wish for. FOSTA,
the only law that has amended Section 230, was supposed to eliminate
sex trafficking. All it did was “drive the bad guys into the dark web,”
creating even less accountability, more harassment, and more violence
against sex workers. Without Section 230, it’ll be a lot harder for
marginalized voices to call out wrongdoings by powerful people, and
it’ll be easier for the government to set the terms of public debate.
But the last few years have shown more than ever that we need places
where smaller voices can be heard.
Speaking up for Users, So Users Can Speak Up
Reddit’s Billy Easley opened the panel by describing the goals of the brief the company filed in the Gonzalez case. First, Reddit wanted to reorient the discussion back to users, whom Section 230 empowers and protects, especially those who take on a moderation role. Reddit relies on community moderators, and Section 230 empowers them to take down hateful content. It also helps them protect their users, for example, from defamation claims.
Second, they also wanted to educate folks about how Reddit uses algorithms, lest people think of “the algorithm” only in terms of what YouTube and Facebook do. For Reddit, the algorithm is simple: you upvote something and more people see it; you downvote it and nobody sees it anymore. There is also an automod that flags content from newer users, and posts that users have flagged, so that moderators can review it before it goes up. That’s not what a lot of social media entities do, and that kind of community moderation could be on the chopping block without Section 230.
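To make that concrete, here is a minimal, hypothetical sketch of the kind of vote-driven ranking and automod hold Easley described. It is an illustration only, not Reddit’s actual code; the Post fields, the seven-day account-age threshold, and the function names are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical illustration only -- not Reddit's actual implementation.

@dataclass
class Post:
    title: str
    author_account_age_days: int
    upvotes: int = 0
    downvotes: int = 0
    flagged: bool = False          # a user reported the post
    held_for_review: bool = False  # automod sent it to the mod queue

    @property
    def score(self) -> int:
        # Upvotes push a post up the page; downvotes push it down.
        return self.upvotes - self.downvotes


def automod(post: Post, min_account_age_days: int = 7) -> Post:
    # Hold posts from newer accounts, and posts users have flagged,
    # so human moderators can review them before they go up.
    if post.author_account_age_days < min_account_age_days or post.flagged:
        post.held_for_review = True
    return post


def front_page(posts: list[Post]) -> list[Post]:
    # Only posts that cleared (or never needed) review are ranked,
    # highest score first.
    visible = [p for p in posts if not p.held_for_review]
    return sorted(visible, key=lambda p: p.score, reverse=True)
```

The point of the sketch is how little machinery is involved: visibility follows user votes, and “the algorithm” largely reflects the community’s own moderation choices.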
Generally, Easley said, people should remember three points:
- 230 protects people and platforms online. It allows platforms and people to be innovative and experiment online.
- Algorithms are just tools. They can be used for good and bad.
- It’s not just about Facebook, TikTok, YouTube, and Reddit.
This should be a conversation about what kind of internet we want to
have and whether we want platforms and users to be innovative.
EFF’s Aaron Mackey spoke next, explaining that EFF’s core concern in these Supreme Court cases is how an interpretation of Section 230 will impact the ability of users to express themselves online. In Taamneh, the question is how far liability extends when the platform—Twitter in this case—has only the most attenuated link with a
terrorist attack. If you make Twitter liable for merely providing a
service that some bad actors used for speech that ultimately supported
an organization that perpetrated terrorist attacks, what does that mean
for speech? We know from the case law that when you put liability on an
intermediary, they will begin to overcensor and only distribute speech
that they are confident is inoffensive. That blunts people’s ability to
distribute and share their expression online.
In Gonzalez, the distinction drawn by the petitioners and the Solicitor General, if adopted, would create a less useful internet for users and audiences. Without recommendations, speech online becomes much less organized and more difficult to find. Recommendations are good; you want to be shown the stuff you want to see, not the stuff you don’t.
Lastly, Emma Llanso of CDT, which also filed briefs in the cases, pointed out that the First Amendment should be a guide when considering how the Taamneh case should be litigated. As we saw with FOSTA, over-censorship occurred simply because of potential liability. If the courts don’t allow protections for recommendations, we will likely see similar harm.
Questions and Answers
The first question for the panelists was whether Congress, not
the Supreme Court, should amend Section 230. Easley explained that
Congress should identify what they are most concerned about online and
take a step back to assess the best way to protect the population they
want to protect. Llanso agreed; the question is always “what is the
problem you’re trying to solve.” There may be Section 230 angles, but
there are other issues to be addressed.
The second question was about a type of law we’ve seen recently that’s frequently framed as “Do X
or you lose your Section 230 protections,” e.g., your company only gets
protections if researchers are allowed access to the data on the
platform. Llanso explained that one of the challenges with that kind of
structure is that it’s essentially coercing an outcome that the
government can’t compel through law, creating First Amendment problems.
Mackey pointed out that these bills are often imprecise: it’s unclear when you would lose protections, and what the scope of that lost immunity would be. Easley
summed it up: using Section 230 as the sword of Damocles is the wrong
idea. It hurts users.
Third: Should platforms have an agreed-upon accountability standard that they are liable for upholding? EFF has concerns about the
government setting terms of accountability. But, Mackey explained, EFF
co-created the Santa Clara Principles—a voluntary effort that calls for
platform transparency, based on human rights principles. We want companies to adopt a good regime that works for speech, and to do so of their own accord. Llanso pointed out that companies are happy to be with us on First Amendment and Section 230 views, but their support often disappears once we talk about privacy. Companies should absolutely be accountable to their terms of service, but holding them liable for everything that violates those terms is inherently error-prone.
The final question was about a new batch of laws offering
protections of various sorts for young people online: should there be
special laws for content specifically related to children?
Easley said we should interrogate the specific ‘targeting children’ part. State bills aimed at platforms that target children are extraordinarily broad; many websites are general purpose and used by both teens and adults. We can all agree that kids’ data shouldn’t be collected. But when laws require parents to have access to all direct messages—like S.B. 152, currently on the governor’s desk in Utah—they make dangerous assumptions about parent-child relationships. A lot of “kids online safety” bills paint with too broad a brush and need more thought.
These bills mix two goals. First, they respond to concerns about the targeting of children and the collection of their personal and private information online. Second, they seek to protect kids from ‘harmful content.’ But what is harmful content? The bills require age gating and age verification, which allows more targeted data collection about children in the name of protecting them. These restrictions stop teens and young adults from finding communities online. When these bills combine children’s privacy with protecting children online, they fail to do either.
A lot of the legislation doesn’t think enough about kids’ independent rights, including rights separate from their parents, Llanso said. It’s a
murky constitutional sliding scale, but older minors do have their own
rights. Understanding how to protect and empower children is better than
wrapping kids in bubble wrap. Easley pointed out that if age
verification is required, collecting documents to verify age will also
be required. That not only violates privacy but creates a data breach
concern. And some of these bills also create specific duties for any platform whose practices cause physical, emotional, or developmental harm to those under eighteen and, honestly, no one knows what that means.
Easley closed the panel out with a simple plea: “Remember the
users. Remember the impact any change in Section 230 can have on
users.”
When congressional offices are thinking about Section 230, both
EFF and CDT are happy to help. We’re in the unique position of having
policy expertise as well as litigation expertise.