The Washington Post
Democracy Dies in Darkness

Free to State: The New Free Speech

MS. SELLERS: Good morning and welcome to Washington Post Live. I’m Frances Stead Sellers, a senior writer at The Washington Post.

It gives me great pleasure today to welcome the president of Columbia University, Lee Bollinger. President Bollinger, welcome to Washington Post Live.

MR. BOLLINGER: Thank you very much.

MS. SELLERS: We're delighted to have you.

So, President Bollinger, you are an expert on the First Amendment, one of the nation's foremost scholars, and also, you've written extensively on freedom of the press. I'd like to start by stepping back and looking at the unique nature of America's speech laws. The country's possibly the most permissive of hate speech of any Western democracy. How did we get here?

MR. BOLLINGER: So that's a fascinating question. I think one has to start with the fact that even though the Bill of Rights, and the First Amendment, as the first one of those, was really part of the Constitution from the 18th century, there was no Supreme Court case interpreting those words in the First Amendment until 1919. So, from 1919 until today, which is obviously a century, the Supreme Court and lower courts have built up a body of jurisprudence that is remarkable. I mean, it's the most elaborate exploration of what freedom of speech and freedom of the press mean in any country.

And the degree, as you point out, the protection that has been afforded to speech in the United States, really since the last half century, is the strongest, most protective system that has ever been set up by a society.

MS. SELLERS: Do you believe this expansive reading of the First Amendment is appropriate today or are your views evolving in the Internet age?

MR. BOLLINGER: So, I think one always has to reflect on that. I think one does have to be prepared to evolve as facts change and our understandings of the needs of society change. But I'm largely of the view that what has been developed by the Supreme Court since that 1950s, 1960s period is right. I think that was a profound achievement. The body of doctrine that we have from that period has served the country really well, has been a model for the rest of the world, and on the whole, I think is really stunningly important.

MS. SELLERS: So just briefly tell me what precipitated previous controversies or debates around the First Amendment, and how are those tensions similar to the ones we see arising today, on college campuses and newspapers and, of course, online?

MR. BOLLINGER: So, you know, there's not a year that goes by that there are not new free speech controversies, and that has been true, of course, for centuries. If you go back to the origins of the Supreme Court jurisprudence, which I said began in 1919, they came out of controversies about America's involvement in World War I, out of controversies around ideologies of communism and socialism, about issues of labor. And you have, in that period, many, many people arrested, prosecuted, imprisoned, including a candidate for president of the United States, Eugene Debs, because of opposition that they expressed towards these various national policies.

The other period that is really notable in this regard is the 1950s, and, of course, the McCarthy era. And there the arrest, prosecution, and treatment of people who were deemed to be communists or communist sympathizers was another point of enormous controversy, and a low point in the development and history of the First Amendment.

The 1960s brought this incredible flourishing of thinking and imagination applied to how to deal with these sorts of issues of extremist speech, of libel and defamation, and so on, and that, of course, was the civil rights era, which had a profound effect on the development of the First Amendment, antiwar movements, and so on.

So that continues today, and, of course, all the issues that we have surrounding the policies, the election, voting, issues of racism and so on in the society, generate really profound First Amendment, free speech, free press questions.

MS. SELLERS: I'm interested in these periods of time you talk about, when these issues came up--some, of course, going back to the 1930s or a little bit later, with communism and the arrival of fascism--which also coincided with the arrival of radio and television, and the extent to which a new medium allowed these debates to arise. What's your thought on that?

MR. BOLLINGER: So, this is again a really profound and highly interesting area. It has been the case, I think, that any new major technology of communication unsettles people. People fear that it will lead to manipulation of public opinion, manipulation of behavior, disinformation, the spread of seditious ideas. People are wary of new technologies of communication. It is also the case that they may well make life much more difficult, and so there are good reasons to be concerned about them.

In the 1920s, radio was introduced into America, and it was legislation in 1927, followed by, of course, the major '34 Communications Act that took that new technology of communication, broadcasting, which eventually incorporated both television and cable, and put them in a different kind of regime of regulation and a different doctrinal structure of the First Amendment.

By the late 1960s, early '70s, we had a quite--a differential treatment is what I've called it, of these two, of print media and the broadcast media. Print media have been totally free of government regulation, protected in that. Radio and TV and cable have been subject to regulation--a licensing system, some degree of content regulation like the fairness doctrine or the equal time provision--and that has remained, more or less, in place up to today. The Internet is, of course, the major new development in communications technology, and we're still trying to figure out how it fits into this jurisprudence and our public discussion of ideas.

MS. SELLERS: I'm going to ask you more about that in a minute, but first I'd like to take you back to a 2019 talk you gave in which you referred to President Trump's comments following Charlottesville, that there were very fine people on both sides. And I'm going to read your words. You used that as an example of departing from norms and said, "In order for us to protect extremist hate speech we have to agree basically that's horrible."

How do we set the moral compass today in such a partisan world? Where is that center that you seem to be reaching for in the speech?

MR. BOLLINGER: Well, I was making a point there, which I'll get to in a second. But I think you reach that moral compass by people speaking out and articulating a moral compass. And we do it through enactment of laws and we do it through constitutional adjudication. I mean, there are countless ways in which we are constantly setting the moral-ethical standards of the society.

The point I was making then is this. As you said, and I repeated at the beginning of the discussion, the United States has, since the 1960s, for the last half century, given more protection to speech--extremist speech, hate speech--than any other society. It is a very complicated issue. But the result of Supreme Court adjudication has been clear. Neo-Nazi speech, Klan speech, et cetera, all these have resulted in cases, and the Supreme Court has developed the doctrine that these ideas, as odious as they are, are nevertheless protected until the point where they incite imminent lawless action. That's the kind of test that has been devised. So, we have to live with these ideas. We have to counter them. We must speak about them. We must use our speech to counteract the evil effects of these ideas.

The entire system of extreme protection for speech in the society depends upon both the courts and public leaders, especially, condemning the ideas as, at the same time, they are being protected. So, if you look at all the Supreme Court cases that deal with extremist speech, they include very explicit rejection of the ideas. Once it happens that there is no condemnation of these ideas, the critical condition for that level of protection begins to change, because the worst thing that can happen is that we end up looking and feeling and being a society in which people think we are neutral towards really bad and potentially evil ideas.

So in one sense it was, of course, problematic and terrible in the context in which this happened, just to have that seeming approval, but it also plays into a deeper structural sort of understanding that we've arrived at in the United States about how to think about protection of speech and at the same time how to maintain a moral compass, as you said.

MS. SELLERS: So, I have a question. We're struggling with this online, where the notion of countering bad speech with good speech can be very hard, because online one can become caught up in a self-reinforcing bubble of speech and you don't see that counterbalance. How do we deal with that today? How do you counter what I'm talking about?

MR. BOLLINGER: Right. So, I think, you know, that is one of the great questions of our time. If one agrees with the approach that we've taken over the past 50 years, as I do--and I've written about this and the various rationales for why this makes good sense, constitutionally and legally and just as a matter of social life--if you agree with that, you then have to face the question: have the circumstances changed in material ways with respect to that speech and the dangers that that speech now poses, because of the introduction of these new technologies--the social media platforms and the Internet generally?

I think the jury is still out on that question. I mean, that will become, over the next decade, something that we will all have to face, including the Supreme Court of the United States.

If you have a society in which those really, really dangerous ideas have spread really broadly within the society and are aided in that by the type of means of communication that are available to them, you do have a different context than you did in the 1960s, when the Supreme Court decided that a small group of the Klan, that met and was put on television, really did not justify the government coming in and censoring the speech. Well, if the scope of these really bad ideas is much greater, and the density of these ideas in the society much deeper, you may perhaps have a need for a different result. But that is an open question and it remains to be seen.

MS. SELLERS: So, the jurisprudence isn't settled, but you've written about the danger of unregulated social media platforms. Earlier on you talked about the different regulatory standards for newspapers than for broadcast media. Where do you see social media fitting in? An entirely new regime or somewhere in between the others? How does it work, practically?

MR. BOLLINGER: So, I'm wrestling with this, as I think every First Amendment scholar and every citizen must. My good friend and colleague, whom I've written with, Geoffrey Stone at the University of Chicago Law School, and I are just beginning to think about a volume that might try to address this in deep and practical ways.

I mean, I do think, I start with the idea that we've seen this before, as I indicated. It's not an entirely new problem. The new technology of broadcasting was feared for the same reasons--that is, the fear that the spread of dangerous and really pernicious ideas and beliefs required some kind of public intervention, and the fear that the monopolization of public discourse by these new companies in the broadcast arena required some degree of protection for people receiving all different forms of ideas. That's another issue that has come up, of course, with respect to social media platforms: not only is there a risk of the spread of dangerous ideas and bad ideas but also the risk of really great censorship that these private companies can exercise because of their incredible control over the public discussion, public ideas.

So, we've seen this before. We've set up a system to deal with this. It has worked, I think, reasonably well. It was certainly upheld by the Supreme Court. I've written a lot about this. I think it was justified and right to do what the court said, to have a system for the print media and a different one for the broadcast media.

My sense and my inclination is that we're going to have to figure out something along these lines for this new state of affairs, but I'm not certain yet exactly what that will look like.

MS. SELLERS: So how do you respond directly to those who argue that Facebook or another company should have total control over their content? What's your response at this point? Is it wait and see, or --

MR. BOLLINGER: Well, there are a lot of people doing extensive work on this question. That is, what are the consequences for public discussion, of public issues especially, as a result of the ways in which we communicate and the ways in which these technologies are organized, and what is the upshot? I think scholarship is our first requirement, our first condition, that is, what do we actually know? It's very easy to take single anecdotes and examples and to extrapolate from that what should be a result. We really want to be extremely careful about this.

I mean, one of the things the First Amendment has been conditioned on is a belief that government intervention into the arena of speech is highly dangerous and should only be permitted under very careful circumstances.

So, we need scholarship, we need to know what we're facing, and then we need to figure out how to counteract it. I mean, we do have a tension. We have private companies that are designed to make money, and that's what, of course, private enterprise is all about. Public discussion of public issues is not solely a matter of profit-making institutions. And, I mean, the print media, you know, are, of course, profit-making, but on the other hand there's an ethos, a culture, about how to discuss public issues that is longstanding and extremely important. Will we develop that in the context of the new media, the social media platforms? It remains to be seen.

MS. SELLERS: I can't resist asking the journalist question, but does scholarship move quickly enough in this era of instant communication?

MR. BOLLINGER: Yeah, it's a good question. It could be that we suffer enormous consequences too late. But there again, one has to make a judgment, and I think one of the things that the First Amendment and the jurisprudence and the case law and the writings about it teach us is that government regulation of speech really should be the last resort, and only when we're really, really clear that this is required in order to save us from the worst consequences should we allow intervention.

And I'm happy to say that there is an extensive body of scholarship now looking at these questions and publishing about them.

MS. SELLERS: So, we've got a vastly increased number of information gatekeepers but also groups like Wikileaks, which publish information that could be deemed dangerous to the government. How do you think we should manage them in this era?

MR. BOLLINGER: So again, Geoffrey Stone and I have just completed a book on this subject, on the Pentagon Papers. It will come out in the spring. And here again, we developed, in the 1970s, this extraordinary approach to how to balance the government's interest--completely reasonable--in being able to operate with some degree of secrecy, and the interest of the public in knowing what the government is doing. We all know that the government overclassifies, is overly secret, and we do need to have some countervailing interest of the public in knowing what's going on served.

Well, the system was that the government can operate in secret, that people can leak information to the press, and that they can be punished for that. But the press has total freedom, basically, to publish the information and to make the judgments about what should be published and what should be kept secret. That Pentagon Papers system, I think most people would say, and I certainly would say, has served us very well.

Now, again, we have the new technologies of communication and we have different actors, much greater material; classified information can be released on a computer. I mean, Daniel Ellsberg was 7,000 pages, but Edward Snowden was hundreds of thousands, millions of pages. And we have players like WikiLeaks, as you say, that are not The Washington Post, not The New York Times, that do not have the interests of the United States at heart, that have an underlying belief in disclosure way beyond, I think, what is reasonable. And now the question is, should the Pentagon Papers regime be revised in light of these new circumstances? Is the government's interest in secrecy now at real risk because of the introduction of these new players? That's a profound First Amendment question as well.

MS. SELLERS: So, I want to circle back to our discussion about social media platforms and Trump's desire to repeal Section 230. For those viewers who don't know what that is, it's a 1990s law that gives liability protection to companies that host third-party content on their sites.

Talk to us about Trump's request to repeal this, what it means.

MR. BOLLINGER: Well, as you know, Don Graham and I wrote a week or so ago on this issue. President Trump, in apparent anger at Twitter for having fact-checked a tweet and labeled it as potentially misleading, announced that, in sort of punishment for this, Twitter would be subject to attack, as other social media platforms would be as well, by changing the law to take away the protection they have against being sued for speech that they publish, which is extremely important to them, and some other things, which I won't go into.

The main point here is that whatever one thinks about the underlying laws of Section 230 and other policies, it is simply inconsistent with the First Amendment and the constitutional development of the past century and beyond, for the government to try to punish speakers--and that includes the press or social media platforms at the moment--because of the content of what it is that these speakers have communicated. That is a deeply, deeply troubling motivation. I mean, we should all be concerned that the law would be turned and twisted and changed, not because of the balance of interests but because of a desire to punish speakers for what they say and the content. And so that really needed to be highlighted for the seriousness of what was involved there.

MS. SELLERS: So, you told me you teach a large 101 class on the First Amendment. I'm curious about how student views have changed. You obviously didn't grow up in the era of the Internet; they did. How have your classes changed as you've taught this class over the years? How have the views changed?

MR. BOLLINGER: Well, I think that students today are--I think they are aware, just like we are, and what you said earlier is an indication of the breadth of this, that the ways in which public discussion of public issues is being conducted today are deeply concerning, maybe even alarming, and maybe even require some degree of public intervention. So, I think that they are receptive. They certainly see the problems, and I think, as any reasonable person should be, they are receptive to thinking through what should be the system for this new world.

I think there's also a--you know, the problems of hate speech and what we talked about earlier, extremist speech: if you look back over the past 100 years, the Supreme Court has taken diametrically opposed views on this. So, there's a case from the early 1950s involving group libel, as it was called--racist speech, in that case against African Americans--where the majority of the Supreme Court said regulating that kind of speech is just fine. But in the late 1960s that decision was, not explicitly overruled, but implicitly overruled, and a different approach was taken.

How to think about the limits of protection for speech at these edges, where this particularly bad speech takes place, is something that is, you know, extremely difficult. Lots of different concerns and something to struggle with over and over again. So, I am very attuned, I think, to students feeling that that is something that they want to struggle with too.

MS. SELLERS: I have probably only a minute left but I do have one question I'd like to ask you briefly, and I'm afraid it's a big-ish question. But you're the president of a huge university, a private university. How do you decide who appears on your campus? What are the issues and how are those decisions made? And I apologize for a big question in a short moment, but just give me an idea of the issues at play there?

MR. BOLLINGER: So, I think, you know, you're not a partisan institution. I mean, the universities take no position on trade policy, but you do want to be a center for discussing things. I mean, people do serious work on this and there are practical consequences, and universities should be a place in which all ideas are discussed.

So, we really try, I think, across the institution, to get an array of different views and debates and so on, on public issues. But I'd have to say, I mean, scholarly work is driven by other kinds of concerns--how to add new knowledge, the disciplines, and how to think through things, discover new ideas. So, there are two different parts of a university--the scholarship, the research, the teaching, and then the center, the forum for public debate, and on that we try to be as balanced and comprehensive as we possibly can.

MS. SELLERS: President Bollinger, many thanks for joining me this morning. That was fascinating.

MR. BOLLINGER: Thank you very much, Frances.

MS. SELLERS: I'll be back in a few minutes with two legal experts on online discourse, Mary Anne Franks from the University of Miami Law School and Daphne Keller from Stanford. Join us again soon. Thank you.

[Video plays]

MR. GILL: Good morning, everyone. I'm delighted to include in today's conversation a short discussion with Suzanne Nossel, the CEO of PEN America. PEN America is a global leader in the fight for human rights and free expression, and Suzanne recently authored the book "Dare to Speak: Defending Free Speech for All."

Suzanne, I want to ask you a question that really picks up where the last conversation left off, which is some of the generational change that we're seeing in views about free expression. And one of the things we've seen at Knight Foundation in surveys that we've conducted, as you know, that's perhaps not disconcerting but certainly feels new to us, is when we've asked college students, in particular, whether they think the First Amendment protects people like them, on the surface level you see unanimity.

But when you start to look at the intensity of that belief, students who strongly agree with that sentiment, you see differences by race and also by gender. So White students are twice as likely as students of color to say that they feel the First Amendment protects people like them, and a majority of men, 55 percent, are likely to strongly agree with that sentiment, but only 39 percent of women are likely to strongly agree with that sentiment. So, I just wanted to start by asking kind of what you make of this finding.

MS. NOSSEL: Yeah. Look, I think it's great that Knight is doing this research and homing in on these disparities. I think you're seeing something real. Overall, young people support the idea of free speech and the First Amendment but opinions do differ. And I think what we're seeing there is the differential impact of speech on particular groups, which is something I address in some detail in "Dare to Speak: Defending Free Speech for All."

In a diverse society that is grappling with this lingering, pervasive legacy of inequities, the fact is that students from vulnerable groups, whether it's women or students of color, are just more exposed when it comes to noxious speech, and they are hit harder when they are on the receiving end of that. If you spend your whole life hearing slurs, stereotypes being directed at you or people who look like you, it's not surprising you'd be more alert to the downsides of free speech and more open to the idea that some people need and deserve protection from those most noxious sentiments that free speech does indeed protect.

So, in the book I talk about the concept of conscientiousness and a duty of care--elements of voluntary restraint in the use of speech--so that we can create a marketplace for speech that doesn't lead people to sort of pull the circuit breaker and call for government intervention to suppress speech in the name of inclusion or equality. I think if we can be more effective in addressing the underlying feelings of marginalization that are behind those numbers, then the willingness to tolerate offense is going to increase.

MR. GILL: What do you think, though--I mean, Lee Bollinger referred to that too, right, that you have to have strong protection for speech paired with an ethos. And I think some of those communities that now have access to universities, have access to public fora, say, yeah, easy for you to say, you know, what we need is an ethos, what we need is restraint. I'm the one on the receiving end.

What are some of the specific techniques or practices that you document in the book that you think could help make that real for people?

MS. NOSSEL: Sure. I mean, counter-speech, so that when there is an incident that happens people feel like they have the support of leadership, whether it's at a university or from political leadership. You know, this has been a real problem, because we've seen this kind of subordinating, hateful speech from the highest levels of government. And when that happens there's a sense that, you know, hateful speech has been uncorked in our society. It's kind of coursing through our streets. We need to do something about it, and that intensifies the calls to ban and punish speech.

So, it's leadership. I think it's also education. A lot of these young people don't know a lot about how free speech works, how it interplays with their concerns for equity and inclusion. So, we do a lot of work, and one of the major purposes of writing my book was to try to expand this idea of how these core principles fit together.

I thought one of the most striking things in your survey was that among the rising generation they are equally committed, in equal numbers, with seemingly comparable intensity, to the goals of equality and inclusion and to the protection of free speech. So, the question really becomes, how can these ideals fit together? And I sort of set up all these principles, 20 principles, in the book for how to use free speech in ways that don't trample over concerns of equity and inclusion.

MR. GILL: So just a last question, maybe as a segue to the next segment. We will hear from two great scholars thinking a lot about technology. What role do you see for technology in this conversation or what role do you see for the dominant platforms in which a lot of digital speech is occurring?

MS. NOSSEL: Look, I think they have a central role, and we're seeing them become more aggressive in policing speech and they are doing it because of pressure from their users, from advertisers increasingly, from legislators. I think we're going to see some form of regulatory action very likely over the next year or so. And I think the key is that we include, in whatever we do, fail-safes to ensure that those measures aren't overbroad when it comes to free speech, that they don't target particular marginalized groups, which is something that we've seen happen in other jurisdictions when they get more assertive in policing online speech.

So there's a lot to be careful and watch out for, but I think it's inevitable that this weaponization of free speech online, whether it's through online harassment, the spread of disinformation which is a major focus for us at PEN America, or just pure vitriol has gotten out of hand and the platforms have to get better control over that but in ways that are transparent and respectful of free speech precepts.

MR. GILL: Well, I'd encourage folks to check out the book. It's "Dare to Speak: Defending Free Speech for All." Suzanne, great to talk to you as always, and we'll hand it back to The Washington Post.

MS. NOSSEL: Thanks so much, Sam.

[Video plays]

MS. SELLERS: Welcome back to Washington Post Live. I'm Frances Stead Sellers. I'm glad to welcome now two legal experts in online content regulation. Mary Anne Franks is from the University of Miami, and she works on cybersecurity and cyber civil rights issues. Daphne Keller is from Stanford's Cyber Policy Center, and she was formerly associate general counsel at Google.

Mary Anne and Daphne, welcome to you both.

MS. FRANKS: Thank you.

MS. KELLER: Thanks.

MS. SELLERS: Delighted to have you. I'm looking forward to an interesting conversation with you both.

So, I'd like to return to one of the topics from close to the end of my conversation with President Bollinger: Section 230. Mary Anne, perhaps you can start by taking us back, explaining that 1990s law, and why the president objects to it so strongly.

MS. FRANKS: Well, the first part is a little bit more complicated than the second. So, the first part is in 1996 the Internet is a fairly new medium, and the story, at least the perceived story, is that the concern was that if you regulated this industry too much that you would end up choking this really wonderful new opportunity for people to communicate and to truly enforce the principles of free speech. And so, this law was, at least, again, according to the conventional narrative, passed as a way of saying let's kind of have a hands-off approach to these platforms. Let them have their own sort of abilities to assess for themselves what kind of content should be promoted on their platforms, and see what happens.

And you could say, of course, that really what was going on in 1996 was one of two things: either a kind of moment where government officials were really very prescient, knowing that the Internet was going to be so important, and took the right steps to ensure that government regulation wouldn't interfere with it; or you could say that this was another example of the government recognizing there were powerful commercial and other interests at stake in the Internet, and that providing this extremely broad shield was going to do what those kinds of protections tend to always do, which is to protect the most powerful and to really excuse people from the negative consequences of their actions.

But regardless of how we think it was originally intended, what we've got today is a law that has been really interpreted to make sure that these companies have no incentive to deal with the negative consequences of their patterns of behavior, other than perhaps public pressure, which can be powerful if we have a functioning society, but otherwise not really facing any kind of negative pressures, at least in the legal sense, for the kinds of conduct and information and communication that happens on these platforms.

And then you get to President Trump, who is opposed to Section 230, as you might expect not because he's principled or because he cares about either free speech or about communications generally, or principles generally, but is upset because he thinks that the social media companies aren't properly deferential to him. So he is taking the position that Section 230 is not making it as good for him as he would like it to be, in terms of being able to promote whatever kinds of information or disinformation he wants to promote, no matter who he wants to harass, or how much he wants to lie, or how much he wants to spread really deadly, deadly misinformation and disinformation. He wants the ability, essentially, to commandeer these social media platforms and have them become basically propaganda outlets and no more. And so, it's--

MS. SELLERS: I'd like to turn to Daphne on that one and ask about the implications of the president's executive order for free speech, and whether you believe that conservative voices are being tamped down unfairly.

MS. KELLER: Sure. Well, that second question is pretty hard to answer. You know, there is no meaningful data out there suggesting that the conservative bias narrative is true and that conservatives are being disproportionately silenced. On the other hand, there's just not very good data at all.

And so, it's not particularly surprising that lots and lots of groups across the political spectrum think that they are being uniquely penalized. You hear complaints from Black Lives Matter, for example, suggesting that maybe African American speakers are being disproportionately penalized. So, this concern that sort of the gatekeepers of our most important discussions might be putting the thumb on the scales in a way we don't like isn't really unique to conservatives.

But what is unique to conservatives right now is political power. And so, we see things like President Trump's executive order in June, and following up on that the draft legislation proposed by the Justice Department just a couple of weeks ago, and similar legislation proposed by legislators, including Lindsey Graham. And, you know, what that legislation would do--and here I very much agree with Mary Anne's description--is effectively sort of try to dictate new speech policies to platforms, telling them what content, what user speech they can take down without worrying about liability, and what speech they might want to leave up for fear of risking liability.

The complicated thing, though, is that a lot of this speech is what you might call "lawful but awful." It's speech that many, many people disapprove of on moral grounds or on policy grounds--disinformation about medical issues or about elections, hate speech, things that Congress can't regulate because of the First Amendment, or at least because of the First Amendment as interpreted by courts now--but that many people want taken down by platforms. And so, in this "lawful but awful" category, right now platforms have extremely broad discretion to take down whatever they decide violates their policies and not face lawsuits from the people whose content they took down. That's part of CDA 230.

What would change under the proposals from the Justice Department and Senator Graham is that there is sort of an enumerated list of government-approved reasons for taking down lawful but awful speech. Platforms can safely take down pornography. They can safely take down advocacy of terrorism or, you know, barely legal harassment. But what's conspicuously missing from that list is things like white nationalism, hate speech, organizing the Charlottesville rally, electoral disinformation. Those are things that platforms can safely take down now, but if these proposals pass, they would not be safe taking those things down and they would face new lawsuits.

MS. SELLERS: So just to go back to Mary Anne on that point, I think you've written that in fact-checking Trump's tweets that Twitter was exercising its own First Amendment rights as a sort of counter-speech. Can you elaborate on that a little bit for me?

MS. FRANKS: Certainly. One of the things that tends to get lost sometimes in these conversations about social media platforms is that they are private companies that have their own First Amendment rights to speak. And particularly when a social media platform decides to add its own warnings, or when it wants to promote statements that say this is disinformation or provide its own resources to say here is better speech, that is sort of a classic example of counter-speech. If you want to try to speak back to bad speech, one of the classic ways that's been recognized by the First Amendment doctrine that we have is to speak back. And as a private actor, Twitter has that power, as its own First Amendment protected liberty, to speak back.

And so, the particular irony of the complaints being made about Twitter finally, very belatedly, taking some very modest steps against rampant disinformation or other harmful content, is that the criticism of them for doing that is basically criticism of free speech itself.

MS. SELLERS: Daphne, you referred a few minutes ago to some of the legislative proposals out there. Just briefly give me a sense of the breadth of those proposals, and I'm not asking you to have a crystal ball but what do you foresee happening? What would you predict at this point?

MS. KELLER: Ask me again in a month, or maybe two months.

MS. SELLERS: Good answer.

MS. KELLER: You know, the outcome of this election is really determinative of so many things. It's certainly determinative of directions in this space. You know, both President Trump and former Vice President Biden have said that they want to repeal CDA 230. I think that when Biden says that it is a proxy for something much more nuanced, and when President Trump says that, maybe it isn't.

But, you know, as of now there are, I think, 17 bills that have been introduced over the past year--I tweeted a list of them yesterday so you can find the list if you're interested--from both sides of the aisle, often seeking conflicting outcomes. You know, a Democratic proposal is often seeking to make platforms take down more content. Republican proposals are often seeking to make them take down less content. Many of them are sort of political theater and don't have much future, but some of them have some traction, including a law called the EARN IT Act, which sounds really good. It is targeting very serious problems with child sexual abuse material, but is doing so by introducing a set of rules that are very, very poorly thought through. So that, unfortunately, is one that has relatively more traction.

There is one bipartisan bill from Senators Schatz and Thune that's probably the most nuanced attempt to actually get into the operational questions of how do platforms take down content and what rules do we want to set so that they take down the right content rather than just taking down any content that somebody alleges is illegal. Because a big problem that we know crops up in notice-and-takedown systems, which is what we would have absent CDA 230, is that they are abused. People send in false allegations to try to silence the speech of people they disagree with or try to cut down traffic to commercial competitors.

There are really outrageous examples, like the government of Ecuador using bogus copyright complaints to silence critical journalism and take down videos of police brutality. We know that there's a big problem with platforms erring on the side of taking down important and lawful speech if there aren't sort of procedural rules in the law to try to correct for that.

And I think we should be looking for, to the extent that there is CDA 230 change, we should be looking for proposals that do pay attention to those operational details, and don't just say, okay, we're eliminating CDA 230 entirely. It's a free-for-all. Or everybody be reasonable, or everybody don't be negligent, and impose sort of fuzzy standards that don't tell platforms what to do.

MS. SELLERS: Mary Anne, in terms of self-regulation, Facebook now has its "Supreme Court" of content. It's an international group. Tell me whether you think that's a good thing, how you think it's working. Is this the way ahead?

MS. FRANKS: I think it's a very telling thing, that what you have on the part of Facebook is, first of all, the adoption of this kind of quasi-legal body, this quasi-legal language, which is part of why, arguably, we've gotten into this mess to begin with, the fact that Facebook thinks of itself as a kind of legislative body or thinks of itself as a quasi-legal institution when, in fact, it is not. So, I think that that's not helpful in the sense that it kind of contributes to this idea that Facebook is that kind of entity.

But it's also ironic that, of course, what Facebook is doing here is appointing its own boards for oversight, and that tells you a lot. It doesn't mean they can't do good work. It doesn't mean that they're not going to get some very valuable information from some very deeply sensitive and nuanced thinkers, but it's not at all a response to what is actually happening when it comes to the deep problems with this industry. And that is to say what you really need is some kind of objective measure of what is going on in these platforms and what's going wrong. So, while it's, in some ways, a good sign that you see that Facebook and other companies are acknowledging that they have problems and that they need to have experts in the room, this is kind of a backwards approach. That's the kind of thinking they should have been doing before these platforms were rolled out, before they add features like live streaming, before they allow people to communicate instantaneously without any way of dealing with the aftermath.

So, all of this, I'd say, is really too little, too late. It may make for some less bad practices in the future but it's not going to stem the tide of--it's really just not going to get us out of the information dystopia that we're in right now.

MS. SELLERS: That very phrase, "too little, too late" takes me back to Daphne. I had a question for you about news yesterday that Facebook is stepping up efforts to clamp down on QAnon. Is that too little, too late, or do you see it as a promising step?

MS. KELLER: A little bit of each. You know, I think Facebook has made a lot of missteps. At the same time, I have some sympathy for them. They are in a situation where no matter what they do with very politically contentious speech, about 50 percent of very powerful people in Washington will get angry with them.

And so that puts them in a situation where there's one question about what's the right thing to do, and, you know, many of us have strong opinions about that, and another question about what will be the real-world regulatory consequences of what they do? And we know from Trump's executive order that sometimes the real-world consequences of not doing what powerful people in government want you to do can be very real. You know, it can be a directive for the Justice Department to try to promote new legislation against you. It can be a directive, as in that order for the federal government to look into maybe not running any ads with you anymore because it disapproves of your editorial policy.

And so, I think a larger problem that we should be thinking about here is not just what should platforms be doing as an ethical matter, but how should government be using its power? Like is it appropriate for government actors to effectively try to strong-arm platforms into adopting particular editorial policies that are the government's preference but that override, as Mary Anne was saying, the First Amendment rights of the platforms themselves to set their own editorial policies?

MS. SELLERS: Mary Anne, you're an expert on cyber civil liberties. You've done work on cyber bullying, harassment, revenge porn. So, should Section 230 be used as a way of reining in this sort of content?

MS. FRANKS: Well, at the moment what Section 230 is doing, you know, in the worst instance, is it's actually encouraging this kind of content, and that is something that I think we're not grappling with. So, it isn't just a question of should we repeal it, how should we repeal it. I think we have yet, really, in a broad sense, to understand and confront the fact that the reason why these things exist in the form that they do is because for 20 years this industry has essentially had a blank check. So, there's no problem that we could really point to today, whether it's revenge porn or harassment or medical misinformation, that can't be attributed in some ways to these tech platforms themselves and their irresponsibility when they were rolling out their services and their platforms.

So, to the extent that Section 230 is continuing that status quo, it's important here, I think, to keep in mind that this is a status quo that is devastating for free speech. We can't separate these things. When people have to worry that they are going to be attacked online, that their private information is going to be posted publicly, or that they're going to get death threats, or that they're going to become the target of an online mob, they can't speak freely.

So, there's no way to separate out these issues from the question of preserving free speech. We always have to think about free speech in terms of who is getting to speak, and if what you have, whether it's through government channels or through private channels, if all that you're really getting is domination by the same forces that have always dominated the channels of communication, we haven't achieved anything good in terms of our free speech principles.

So, can Section 230 do something about that? Yes, because right now it's serving as the excuse that a lot of these companies have to not do anything. If there isn't any kind of legal responsibility--and I'm overstating it slightly. There's some, but if there's not much legal liability, if there's very little chance that any of these companies ever have to take responsibility for the kinds of harm that are being facilitated on their platforms and services, we have to ask ourselves what possible incentive they have to do anything about it, other than bad P.R.

So, I think that the question here has to be how can we modify Section 230 to make it what it was allegedly supposed to be in 1996. There's a part of the title of Section 230 that calls it the "good Samaritan section," which, if you think about what that means in any other context, a good Samaritan is someone who doesn't have a duty of care, who decides that they're going to try to help, regardless, and if they do try to help then they're not going to be sued for their efforts.

We should make Section 230 into a proper good Samaritan law, which would mean, first of all, that it can't apply to platforms that do have a duty of care, at least not to allow for people to be threatened and harassed and have their lives ruined on these platforms and services, and make it so that that immunity attaches to that idea, that if you don't have a duty of care and you are gratuitously doing good, or attempting to do good for someone, to try to prevent or address injury, then you shouldn't be on the hook for any kind of litigation that comes out of that. And apart from that they shouldn't be protected.

MS. SELLERS: Thank you. Daphne--

MS. KELLER: I'd like to jump in on that.

MS. SELLERS: I was going to ask you to address that from a company point of view, since you've been assistant general counsel for Google. So, if you could address that issue. We haven't got many minutes left, but please do take it up, Daphne.

MS. KELLER: Well, I certainly can't represent the company point of view. I left Google in 2015. But I think many people would characterize the way CDA 230 works quite differently. It was explicitly designed as a law to encourage and enable platforms to go beyond what the law requires and to have moderation policies for all of this lawful-but-awful speech that we're concerned with now.

And so, the freedom that platforms have to enforce policies against hate speech and against disinformation is directly rooted in CDA 230. This was Congress' intention. They wanted to ensure that platforms would be able to do this.

And so, I think as we think about is it possible to change CDA 230 in a way that improves incentives, the question is whether we risk taking away the incentives the law gives them now and the freedom that the law gives them now to go out and do the moderation that we are seeing. I have yet to see a concrete proposal that would enable and encourage platform content moderation without risking exposing them to liability for undertaking that very moderation, and essentially running into what's called the moderator's dilemma, where because the platform is carrying out due diligence, because it is acting more like an editor, because it is engaging more with user content, courts decide that it faces liability, for whatever unlawful content gets accidentally left up.

And so, while I think at a high level it may sound appealing to say, well, you should have to do good things to maintain the immunity, in practice it's very hard to define a system that would actually achieve that more than CDA 230 already does.

MS. SELLERS: I'm going to finish with a very different question that you both need to answer very quickly. That is, if you could magically create a regulatory system, what would it look like? And again, a couple of sentences from each of you. That's all we've got time for.

[Overlapping speakers]

MS. FRANKS: I would just say--

MS. KELLER: Go ahead.

MS. FRANKS: I would just say that it's not that hard to do. What we need to do is treat the tech industry essentially like other industries are treated, which is to say you have to absorb the consequences of your negative actions. So, if you are acting with deliberate indifference towards injurious conduct or content then you can be sued. It's not a bad thing for a company to worry about being sued. That's how we get big, multibillion-dollar companies to care about the harm that they are causing to people. That harm is going to happen. That injury is happening right now. We are seeing the consequences of this every single day. And until we actually have a regulatory system that tells these powerful companies, you have to take some responsibility for that, we're not going to achieve anything close to a functioning democracy or free speech principles.

MS. SELLERS: Mary Anne, thank you. And Daphne?

MS. KELLER: I would want to see a lot more transparency with companies disclosing more about what they are doing and when they are making mistakes, taking down the wrong content, taking down content in ways that have disparate impact, potentially, based on race, gender, considerations like that. And I would want to see a lot bigger role for courts. I think it is a problem if we open up new liability and then just pitch it to companies and say, "Here, you decide what to do, and you don't even have to tell us what it is, and you're going to do it in the face of fear of liability."

So, if we did want to make any changes it would be essential for there to be a role for courts in saying this is what's illegal and this is not what's illegal, and for companies' obligations to stem from real judicial determinations, not from whatever they decide in the back room.

MS. SELLERS: Mary Anne Franks, Daphne Keller, thank you very much for joining Washington Post Live today.

MS. FRANKS: Thank you.

MS. KELLER: Thank you.

MS. SELLERS: We have a great lineup for you in the coming days. Tomorrow at 9 a.m. Eastern my colleague, Jonathan Capehart, will be here to give you the lowdown on what happens in tonight's debate. Then later in the week join us to hear from Pete Buttigieg and Amy Klobuchar, former Democratic presidential candidates.

Thank you for joining Washington Post Live. I'm Frances Stead Sellers.

[End recorded session.]