Exposing digital extremism
Understanding the ‘weaponization’ of social media is the first step
Accusations of interference by sophisticated digital disinformation campaigns have made data manipulation the topic of national news as well as public conversation and personal concern – as have allegations that Facebook improperly shared individual information to influence consumer purchases and voter choices. The power of online experiences to affect real-world decisions requires serious study into how digital disinformation is created and spread, how to recognize and avoid it, and how to develop more responsible digital civics.
To that end, “Digital Extremism: Understanding & Confronting the Alt-Right’s Digital Toolkit,” was launched this spring as part of UCI’s new Provost Initiative on Understanding & Engaging With Extremism. The effort – which featured public dialogues, workshops, and student and faculty seminars – was led by Bill Maurer, dean of social sciences and professor of anthropology and law, and Paul Dourish, Chancellor’s Professor of informatics. We talked with Maurer about the current state of digital disinformation.
Q: What are some types of media manipulation, and how do they work?
A: Various features of popular social media channels are being exploited, notably through hashtag squatting and fake Facebook groups. Both rest on a relatively simple concept: claim a hashtag or start a group that sounds like a legitimate nonprofit or social movement. For example, “BlackMatters” is close enough to “BlackLivesMatter” to make people think it supports – and encourages the discussion of – issues important to the African American community. But it was eventually revealed that white nationalists had established and claimed that hashtag to escalate racial tensions in the U.S.
The point of the manipulation is to generate lots of followers, likes and shares in order to build credibility. If the post or page looks good and appears authentic, people will start joining, following and sharing to spread the word to their friends and to others looking online for affinity groups. This online interaction can translate into offline activity. For instance, there was a situation in Texas in May of 2016 orchestrated by an infamous Russian troll farm – the Internet Research Agency, itself a legitimate-sounding company name – in which two fake online groups were able to instigate a real-world protest and counterprotest on the same day, at the same time, in the same Houston location.
The Heart of Texas, a group set up by the IRA that attracted 250,000 Facebook followers, issued a call for a protest against the “Islamization of Texas.” Another online group set up by the IRA, United Muslims of America, with 300,000 Facebook followers, called for a simultaneous counterdemonstration. And people showed up! These apparently dueling protests initiated by the same organization through fake Facebook groups attracted only a “handful” of people, according to CNN. But this is a stark example of unknown, illegitimate actors using social media to influence people’s behavior.
Q: Can you talk a little bit about how white nationalists and other extremists in particular have leveraged the power of digital media?
A: The motivation for white supremacists and other extremists is to spread their ideology, to disrupt our political processes and to get what they call “trophies.” A “trophy” is achieved when they start a fake news story and it gets picked up by a reputable news source, which reports it as being true. Acquiring one involves creating the fake news story or doctored image, then sharing it with friends online to get feedback on how to make it look more authentic. The final product is usually released via Twitter, in hopes that it will go viral and then be validated by mainstream media. The actual “trophy” is a screenshot of the news outlet that picked up the fake story.
The tools and techniques I mentioned earlier are also expertly exploited by these groups. One of the most ingenious accomplishments of the so-called white nationalists is their successful rebranding campaign to be called the “alt-right” by mainstream media rather than “white nationalists” or “white supremacists.” “Alt-right” is a much softer, sort of pseudo-intellectual label. Even “white nationalists” is successful rebranding away from “white supremacists.”
Q: What’s the relationship among big data, predictive analytics and the kinds of media manipulation we’re seeing today?
A: The problem is that all our ways of getting information on the internet are powered by marketing-driven algorithms that are self-reinforcing. Google will keep returning similar content based on your previous history. You should always evaluate the results being displayed for a query as if an advertising firm is behind it. When you think you’re looking at authentic information, you may just be getting marketing material. Remember that Google and other search engines are not providing neutral “information” or “facts” but results driven by algorithms intended to market to you in ways targeted to the sum of all your online activity.
Now that we have a better understanding of what search algorithms actually do, it’s a terrific advertisement for using the library’s online search tools instead! Hooray for libraries! They curate information with the goal of providing a general overview of a subject, not of selling you something. The “alt-right” has done a tremendous job of exploiting how search engines work – the narrow view they provide by presenting similar content that they believe will confirm what you already think about a particular topic, subject or person.
People, companies and other organized groups can learn a great deal about you from the information you share online or that can be inferred from your online activity – everything from basic demographics, including age, sex, income and marital status, to lifestyle details, such as whether you own a bloodhound or live in a blue house. All of that data is bundled together, analyzed and then crunched by an algorithm to present the information most appealing to that personalized profile.
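The mechanics described above – a bundled profile driving which content you see – can be illustrated with a toy sketch. Everything here is hypothetical: the attribute names, the items and the simple overlap score are stand-ins, not any platform’s actual algorithm.

```python
# Toy sketch of profile-driven targeting: each item carries "targeting
# tags," and items are ranked by how many tags match the inferred profile.

def score(item_tags, profile):
    """Count the overlap between an item's targeting tags and a profile."""
    return sum(profile.get(tag, 0) for tag in item_tags)

# A hypothetical inferred profile (demographics plus lifestyle details)
profile = {"age_25_34": 1, "dog_owner": 1, "homeowner": 1, "outdoor_sports": 1}

items = [
    ("hiking-gear ad",     ["outdoor_sports", "age_25_34"]),
    ("pet-insurance ad",   ["dog_owner", "homeowner"]),
    ("retirement-plan ad", ["age_55_plus"]),
]

# Rank items by profile match, most targeted first
ranked = sorted(items, key=lambda item: score(item[1], profile), reverse=True)
print([name for name, _ in ranked])
# → ['hiking-gear ad', 'pet-insurance ad', 'retirement-plan ad']
```

Even this crude version shows the self-reinforcing effect Maurer describes: content that matches the profile rises to the top, and content outside it never appears.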
Q: How can I recognize when I’m being manipulated, and what can I do to avoid it?
A: People need to be aware that these tools are not innocent and that the personal data they share can be used to manipulate the information they’re consuming in subtle yet effective ways. So much of our lives is now spent online that it’s important to provide tools to help the public understand and inoculate themselves against manipulation.
The search results you see are also influenced by the type of device and browser you’re using. If you’re reading something on a site and wonder whether it’s true, searching from that same context will tend to return results that reinforce the information on that site. Conduct the same search on a different device and browser and compare the results you get. And if you’re about to retweet something that contains a link, check that link first. Where does it go? Sometimes these links lead to bogus sites, so safeguard yourself against spreading disinformation.
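One concrete version of the “check that link” habit is comparing the address a post displays with the address its link actually points to. The sketch below uses Python’s standard `urllib.parse` module; the URLs are made-up examples, and a real lookalike domain can be far subtler than this.

```python
# Minimal link check before resharing: does the domain a post *shows*
# match the domain its link actually points to?

from urllib.parse import urlparse

def domains_match(displayed_url, actual_href):
    """True if the displayed link and the real href share the same host."""
    return urlparse(displayed_url).netloc == urlparse(actual_href).netloc

# The post shows a reputable-looking address, but the href goes elsewhere.
print(domains_match("https://www.cnn.com/story",
                    "https://cnn-breaking.example.net/story"))
# → False
```

A mismatch does not prove a link is malicious, but it is exactly the kind of discrepancy worth noticing before hitting retweet.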
Q: What do you see as the future of digital civics?
A: Formalizing digital civics or “disinformation studies” as an academic field is crucial. It’s an asset here at UCI that we have such a strong interdisciplinary connection among humanities, social sciences and information & computer sciences, so that we can study the intersection of human interaction, society and technology. Our chancellor is very well-known for being a champion of free speech, and it’s great that we have such an incredibly diverse student body who can help shape what digital civics looks like. They can raise awareness of it in existing student clubs – or form a new club dedicated to fostering sound digital civics practices. Those would be effective ways to get the conversation started and to help develop a program for fighting this problem. Short of accountability on the part of platform providers like Google and Facebook, the problem is not going to go away – especially if the political process in this country forecloses the possibility of regulation.
Online is the new public square, and we’ve seen the effects of infiltration of spaces people think are safe and that space being manipulated to change opinions and influence social behavior and political outcomes. Academia needs to be at the forefront of building a better digital public square so that we can protect the democratic institutions on which our society rests.
– Pat Harriman, UCI