With Elon Musk purchasing controlling interest in Twitter for $44 billion, the big question now remains … what is he going to do with it? Musk already said he wants to take the company private and promote “free speech.” But what does that mean?
There is considerable legitimate concern over having one man – and especially one as controversial as Musk — determine what is “free speech” to the 300 million daily users of the massive social media outlet that influences news and politics worldwide.
But Musk’s purchase raises bigger questions: Who polices speech online? Who ensures the protection of individual rights on the internet? Who makes the rules that govern online expression? Who enforces them?
Joining the UCI Podcast to discuss these issues is David Kaye. Kaye is a professor of law at UCI and author of “Speech Police: The Global Struggle to Govern the Internet.” He has also worked with the United Nations Human Rights Council on freedom of expression, monitoring free speech issues around the world.
To get the latest episodes of the UCI Podcast delivered automatically, subscribe at:
The United States has a long history of wealthy men coming in and taking control of our media. I don’t recall this kind of uproar when Jeff Bezos bought the Washington Post. What’s different here?
Yeah, it’s a good comparison. I mean, I think when Jeff Bezos bought the Post, he basically said, “I’m going to own it. I’m going to support it, but I’m going to give the editors of the Post all of the leeway and independence that they need to make their decisions about editorial content, about the content in the paper.” And that’s a very different approach. And I think that’s held, by the way. I think largely Bezos has been hands off, [with] huge support and great growth of investigative reporting. But no, there’s no sense that he’s getting his finger in the pie. Totally different situation with Elon. I mean, Elon Musk is suggesting that he doesn’t like the rules that Twitter has adopted. And he’s saying, “I want to own this in part because this platform is broken in some way [and] it’s overregulating speech. And I want to get rid of the rules.” At least that’s part of what he’s suggesting. So his approach is sort of the anti-Bezos, in a way. It’s like, “I want to own it, and I want to shape [it].”
After Musk announced plans to buy Twitter, the journalist Max Boot tweeted, “I am frightened by the impact on society and politics if Elon Musk acquires Twitter. He seems to believe that on social media, anything goes. For democracy to survive, we need more content moderation, not less.” Is Max right?
He is right. Although, I mean, one thing I would say is that as much as Elon Musk is – I think from my perspective and from Max’s perspective – kind of moving in this direction of getting rid of rules, I think that over time – and this is true for any platform – Elon Musk will realize, if he’s successful in taking ownership, that he needs rules, that the platform will simply be unworkable without rules. And you could think of it like this. I mean, the platform is a place for all sorts of people to engage in a kind of public conversation. That’s essentially the rhetoric that Twitter has adopted for many years. It’s about the public conversation, but that public conversation, that public debate and our access to tweets, won’t really work if it’s all about noise, if it’s all about harassment of marginalized individuals and marginalized communities. If that happens, you’ll end up having a space that is really just a lot of people yelling at each other. I mean, people already think that Twitter is where people go and yell at each other. It used to be much worse than it is right now, and it’ll be much worse again if there are no rules.
You have told me that these rules are largely responsible for Twitter, Facebook and YouTube being so very popular.
Well, I think that’s true, although these companies started without rules, in most senses. So, if we’re going back, you know, 15 or 17 years to the ancient archeology of these companies, they basically started by saying, “Look, we want to Hoover up as much content as we can, so we don’t want to have rules. We want people to be engaged on this platform, to post whatever they like, for it to be kind of a free speech [space].” And Twitter, in fact, early on became known – through a statement by one of its founders – as the free speech wing of the free speech party. And that worked for a while. I mean, that was popular, but they were relatively small platforms. As they grew, their impact on society became even bigger; they became places not just for real impact on society but also for debate and communication and so forth.
They realized that in order for the platforms to actually function as designed – to actually allow that kind of communication – they had to have rules. They had to have rules about hate speech, for example, because a lot of people were using the platforms as a place to harass women journalists, minority groups and others, and that kind of harassment was designed to push people off the platform, to minimize their voice. And so if you don’t have any rules, you’re going to have a lot of that. You’ll also have disinformation, and you’ll have hate speech of all sorts. And that’s just not the kind of platform that people will, over time, enjoy going to.
Because social media became a large platform for extremist groups to spread their word around the world, where before they were limited to, say, pamphlets.
Right, exactly. In my book, I make a little analogy, which may resonate in Orange County. Take the John Birch Society – you know, a famous racist organization that had a fairly big foothold in certain parts of Orange County – they had pamphlets, right? And those pamphlets would be disconcerting to people, but they didn’t have the reach that somebody like Steve Bannon [has]. We could even talk about terrorist groups, which did have a presence on the platforms for many years. Those kinds of actors can suddenly get access not just to a few people through a pamphlet on the streets or on the beach down in Huntington Beach; they can get access to millions of people to recruit, to get their ideas out there, to harass people, to promote hateful ideas. Now, the platforms are not governments; they have their own kind of First Amendment rights and free expression rights to design their platforms as they wish. And if they don’t have the ability to control that kind of content, I just think they’re going to be cesspools, which is what a lot of people think social media is already.
This anything-goes approach that Elon Musk is talking about is going to really face stern opposition outside of the United States, where governments are much more involved with moderating social media content.
That’s exactly right. The platforms offer people around the world space that they might not otherwise have. So, if you go to a place like Cambodia, just as an example, it’s mostly state media. I mean, there’s very little independent media that people can get access to, but they can get it on Facebook. They can get it on Twitter. And that kind of availability is really important to people. At the same time, these communities outside the United States don’t have a First Amendment tradition. Now, Cambodia is different. But if you think about Europe or Japan or South Korea or Australia – places with vibrant democracies – their way of thinking about freedom of expression – which they protect and people really enjoy – isn’t the First Amendment. It’s more related to human rights norms around free speech, which are to promote free expression but also to promote the ability of all voices to participate.
That’s Article 19 of the UN Universal Declaration of Human Rights.
Exactly. Article 19 protects everyone’s right. And there’s a treaty, the International Covenant on Civil and Political Rights, which is binding on states. Its Article 19 says everyone has the right to seek, receive and impart information and ideas of all kinds, regardless of frontiers and through any media. It’s really an amazing right when you think about it, because the First Amendment just says, “Congress shall make no law abridging the freedom of speech, or of the press.” And there’s nothing else. You know, we have a lot of jurisprudence, but it doesn’t really articulate, in the way Article 19 does, that the right to free expression isn’t just about the speaker; it’s also about the audience. You want to protect the audience’s right to get information. And when you have a cacophony of voices trying to take some voices offline or out of the public space, that’s an interference with freedom of expression.
So the platforms need to be thinking in those terms. And the other part of it is that states, under human rights law, may restrict expression as long as the restriction meets some pretty narrow tests. Those tests involve thinking about [whether] it is necessary to restrict freedom of expression in order to, for example, protect the rights of others. You limit certain kinds of expression in order to protect everyone’s rights. So we don’t like doxing, where somebody publishes another person’s private information; that may be one person’s free expression, but it interferes with somebody else’s privacy rights. So when we think about the framework that human rights law brings, it’s an understanding that there are multiple rights at play and that we need to balance them and understand that those rights need to exist with one another. It allows us to see free expression and online speech in a fuller way than you get from thinking in the kind of Elon Musk sense – or at least the way he has suggested he thinks about it – that anything goes, free speech in a free speech environment, and the marketplace of ideas will make sure that the truthful and good ideas win. That doesn’t work in a market that is broken by hate speech, disinformation and other things.
You pointed out that the German government is very progressive in its approach to regulating social media outlets. What can we learn from them?
There are positives and negatives that we can learn from them. Former President Obama was speaking about disinformation today up at Stanford, and he talked about transparency. Well, the German government has this law known as the Network Enforcement Act, or NetzDG, which requires the platforms to be transparent about their rules and how they enforce them. So that’s a good thing. That’s something we could learn from. We could have regulation in the US that says the platforms need to be more transparent about what they’re doing, and then we’d know more about the potential harms that they cause. But at the same time, the problem with Germany’s approach is that it hands the platforms all of these criminal laws related to speech: hate speech, speech related to Nazi paraphernalia, criminal insult and a whole range of other rules.
And they’re saying to the platforms [that they] need to adjudicate claims. And that’s a problem because it pushes the companies to act as if they’re public authorities, when they’re not. They have business interests in mind. And so we shouldn’t be pushing the companies to basically act as if they are governments or courts. That also increases their power, because only a certain number of companies can do that work; it’s very expensive. So we need to be really careful about what we ask of the companies and what we require of them by law. I think the better model focuses on transparency and oversight: when [the companies] enter a market or create a new product, they figure out in advance the likely human rights impact that it’ll have and make sure they deal with that appropriately. But we don’t do any of those things.
It seems to me an even more dangerous subject when it comes to social media is disinformation and propaganda. We’ve seen it here in the United States, probably most famously with the 2016 election and Russian Facebook accounts flooding people’s streams. It was seen as an influence. And you point out in your book that governments around the world use Facebook in particular for propaganda purposes. So how can disinformation and propaganda be controlled by the social media companies and by governments interested in regulating them?
It’s a hard question because there are multiple ways of thinking about disinformation and propaganda. If we start with state disinformation and state propaganda, those are things the companies can address through almost intelligence-like operations. They can see which accounts seem inauthentic, which accounts are trying to coordinate behavior to manipulate the debate in a particular way. You could think of that almost as spam. It’s not about the content as much as the inauthentic nature of it and the manipulation that it involves. Facebook, in particular, has done a lot of work around trying to address what they call “coordinated inauthentic behavior,” and the other platforms can do the same. I think it’s a lot harder when the disinformation and the propaganda is more organic.
[And then] you have things like state media, right? State media like Russia Today – you can label [the content]. You can give people the tools to know, “Oh, that’s coming from Russia, so I need to take that with a grain of salt,” or to understand that it’s a propaganda outlet. So that’s one form, and there are ways you can deal with it that don’t involve plain old censorship. But the harder problem is when people start to share, say, COVID disinformation, or information about the election that they heard somebody say and then posted on Facebook. What do you do about that? You know, it’s not necessarily Donald Trump or the Republican Party or some other party pushing disinformation. It’s people sharing what they think is legitimate, but it’s a lie, or it’s wrong, or it’s part of something else.
How do the companies deal with that? That’s a much harder question, because it gets the companies into deciding who is truthful and who’s not, and I just think that becomes a much harder issue. It’s a little bit easier when the question is very specific. For example, did the Democrats steal the election in 2020? We know the answer to that. I mean, hopefully the audience knows the answer to that, which is that it’s a lie constructed by politically interested individuals and parties. You can do the same for certain COVID disinformation, about vaccine harms or chlorine or ultraviolet light or whatever. So you can deal with those kinds of things. But what do you do when people are just getting things wrong or lying?
There’s no law that says you can’t lie. I think it just puts the companies into a very difficult position, where ultimately we have to ask: Do we want the companies to be going through all of our content and deciding what is accurate and what is not? That’s ultimately where this heads. I’m not saying where we are now is a good situation, but we need to be mindful that all of our solutions, or potential solutions, have pretty significant tradeoffs that involve the companies getting more and more into content regulation.
Your book “Speech Police” ends with some ideas about the kinds of changes that would help companies and governments meet the challenges of policing content, such as establishing human rights standards, better transparency and decentralized decision-making, just to name a few. How realistic is it to implement these ideas?
I mean, some are more realistic than others. Transparency, I think, is very realistic. And in fact, the European Union is right now considering a new regulation of digital space in Europe that will require, if it’s adopted, the companies to be more transparent about their rules and how they enforce them. And because the companies operate at scale, it’s quite possible that if they do that, they will be more transparent across all of the jurisdictions, all the countries where they operate. So, I think transparency in particular is possible, because transparency is not content-specific or viewpoint-specific. It doesn’t put the companies in a position of saying this is good content and this is bad. The other, I think, is human rights standards. The companies are already moving in the direction of assessing the impact that they have on people’s rights.
So they’re already doing this in practice, but they don’t always use the language of human rights. Facebook has created a human rights division, Apple adopted a human rights policy, Google adopted a human rights policy. It’s really a question now of taking those policies and holding the companies to them – saying, okay, you adopted this policy; how are you implementing it? What’s your oversight mechanism? How are we going to know that you’re actually doing what you promise you’re going to do? The further along you get toward oversight, the harder it gets, but actually moving the companies incrementally into this space – on both transparency and human rights standards – is, I think, possible.
So, it comes down to the big question of social media: Who is to be in charge?
I mean, ultimately it should be us. That goes to the decentralization part: decentralize as much as possible. And not all social media is the same. Think of something like Wikipedia, where you have communities basically editing and moderating their own pages under certain company standards. Wikipedia has evolved over the years into a pretty great global encyclopedia, and it’s worked in part because it’s decentralized and yet they have standards. You could say the same for Reddit. You know, Reddit used to be a terrible place for hate speech and disinformation – and there are still parts of that – but they’ve also decentralized while keeping standards, so that each of their groups is managed by people who commit to ensuring that standards against hate speech and so forth are maintained. That is a direction you could imagine.
Some of the problem is that the biggest platforms, like Facebook and Twitter, don’t really see that as conducive to their business model, because it starts to fracture their user base, and it gets harder and harder to sell users to advertisers. So, at a certain point, we need to be thinking about the business model itself and how that works. And all of this is to say, in answering the question of who decides: we do. We don’t really want governments to make those decisions, because that very quickly – and you see this around the world – gets to state censorship. But we also don’t want wealthy people – to bring it back to Elon Musk – people with so much money and power that they can make the rules and decide what rules make sense only for them and their kind. That’s also a problem. So, we need to at least use these standards of transparency and human rights approaches to give individuals as much power as possible to see how a platform operates: these are the rules it uses; this is what I can do and what I can’t do on this platform. Then they can essentially vote with their feet, deciding whether to stay or walk away. Over the long term, this is the approach we’re going to need to take.
Well, thank you very much.
Thanks, Tom. Enjoyed the conversation.
The UCI Podcast is a production of the Office of Strategic Communications and Public Affairs. Thank you for listening.