What Is Astroturfing?
As a vocal woman on the Internet, I get trolled quite a lot (you'd be surprised how many anti-feminists have Twitter accounts). But, as any person who's paid vague attention to the Internet over the years knows, there are many varieties of trolling: leaving anonymous messages, ranting on forums, using targeted attacks to bring down websites, and so the list goes on. (Several of the world's most-trafficked sites experienced troll-like hacking over the weekend, as distributed denial-of-service, or DDoS, attacks hit everywhere.) A new study released this week may, however, make life a bit easier for people targeted by a specific kind of troll: astroturfers, who create multiple accounts to pose as several people at once. And it may spell trouble for them.
The new study, from the University of Texas at San Antonio (UTSA), is based on detecting people's identities across multiple posts, Tweets, comments, and review sections — and it's bad news for one person masquerading as many. The researchers have developed an algorithm that can detect people's individual writing styles, even when they do their best to conceal them. "Word choice, punctuation and context," according to UTSA's press release, are the things that give trolls away. They even put it to use, and found pretty clear evidence that posts from "different" authors across several sites were actually authored by the same person. It reveals something very interesting: we're much worse at concealing our own traits online than we might think.
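To get a feel for what this kind of analysis looks at, here's a toy stylometry sketch in Python. This is my own illustration, not the UTSA team's code: it boils each writing sample down to character-trigram counts, a crude stand-in for "word choice, punctuation and context," and compares two samples with cosine similarity.

# Toy stylometry sketch (illustrative only, not the UTSA algorithm):
# represent each text as character-trigram counts, which pick up habits
# of word choice and punctuation, then compare with cosine similarity.
import math
from collections import Counter

def trigram_profile(text):
    """Count overlapping character trigrams in a lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a, b):
    """Cosine similarity between two trigram-count vectors, 0.0 to 1.0."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two "different" commenters with suspiciously similar habits (made-up samples).
martha = "Well, I never!! You should be ashamed of yourself, dearie..."
buster = "Well, I never!! You should be ashamed of yourself, pal..."
print(cosine_similarity(trigram_profile(martha), trigram_profile(buster)))

A score close to 1.0 suggests the same stylistic fingerprints. A real system would use far richer features and much longer writing samples, but the principle is the same.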
Our language, no matter how much we might try to conceal it, seems to betray a troll's multiple personas as one and the same, provided people are looking in the right place. (And you thought handwriting analyses in Sherlock Holmes were pushing the edge of detective fantasy.) The new detection method may prove to be very interesting for people who are frequent targets of mysteriously identical trolls; but it's not necessarily a silver bullet for online bullying and falsehoods.
What Exactly Is Astroturfing?
"Troll" is now a verb, meaning to annoy, abuse, or fool a target, particularly by online means. The (relative) anonymity of the Internet has also given trolls the opportunity to adopt multiple identities; if you can claim to be a grandmother called Martha from Indiana as well as a young hipster bartender named Buster from Chicago, both of whom are yelling at your Internet enemy and calling them names, won't the impact be greater? The practice is called astroturfing, and it's this fake multiplicity of identities that's the target of the new study.
The Guardian defines astroturfing as "the attempt to create an impression of widespread grassroots support for a policy, individual, or product, where little such support exists. Multiple online identities and fake pressure groups are used to mislead the public into believing that the position of the astroturfer is the commonly held view." In other words, fake accounts, reviews, Tweets, and other communications are meant to demonstrate that a particular view is held by many people, when in reality it's being orchestrated by just a few (or even one) behind the scenes.
Astroturfing has a more prominent history than you might believe. The term was coined by Texas Senator Lloyd Bentsen in 1985, before Internet trolling was really a twinkle in anybody's eye. Bentsen meant it in terms of "grassroots campaigns"; astroturfing, particularly in political contexts, meant that an organization's own policies or ideas were being disguised as coming from the "grassroots," the ordinary population. It's particularly popular in modern China, where the government strives to maintain intense Internet censorship and direct conversations on social media; The Economist reported earlier this year that, hilariously enough, a Harvard study had concluded some 448 million "astroturfing" posts appear across the Chinese web every year, most probably posted by government officials.
But it happens quite a lot in America, too. Business Insider has collected a group of astonishing astroturfing incidents, from a lobbying firm pretending to be the NAACP (of all things) to former Toronto mayor Rob Ford's team creating a false Twitter account full of praise for him. And those are just the ones we know about.
Why An Algorithm Isn't Going To Solve Trolling
The problem with astroturfing in political contexts, as the Washington Post pointed out in September, is that it actually takes several different forms, and the algorithm developed by the Texas scientists might not help with some of them. Twitter bots might be detectable, but there are other astroturfing practices that aren't exactly kosher: Twitter users who've been coordinated to post about a certain issue at particular times, for instance, or anonymous people who post a host of memes with funding and backing from political parties. (Yes, memes are now part of the political landscape. Welcome to 2016.)
And just because we can detect it doesn't necessarily mean we can actually sort it out or stop it from happening. This isn't the first troll-detection scheme that's popped up: in 2011, scientists from Indiana University published a study in which their "machine learning framework" was able to detect astroturfing posts on Twitter with about 96 percent accuracy. Guess what? They're still happening.
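For the curious, a bare-bones version of that kind of supervised detector can be sketched in a few lines of Python. To be clear, this is my own illustration with made-up example tweets, not the Indiana framework, which leaned on much richer signals such as how posts spread through the network:

# Minimal supervised astroturf detector: learn word patterns from
# labeled tweets, then score a new one. Illustrative sketch only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [  # hypothetical training data: 1 = astroturf, 0 = organic
    "Candidate X is amazing, everyone I know supports X! #VoteX",
    "Candidate X is amazing, we all support X! #VoteX",
    "Stuck in traffic again, somebody send coffee",
    "Anyone else watching the game tonight?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)
print(model.predict(["Everyone supports Candidate X, so amazing! #VoteX"]))

The hard part, of course, isn't the model; it's getting trustworthy labels and keeping up as astroturfers change tactics, which is a big reason detection alone hasn't stopped the practice.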
The algorithm might actually be more useful in another context: figuring out who's posting fake positive or negative reviews. In the world of online commenting, reviews matter, for everything from books to restaurants; and astroturfing happens there too. Amazon recently went through what was called a "sock-puppet" scandal, in which authors pretending to be other people reviewed their own books (glowingly, of course); the company then updated its guidelines so that anybody who might have a financial interest in a book's publication isn't allowed to review it. If we can sic an algorithm on multiple, suspiciously similar reviews claiming that the bisque at a new restaurant is the "best ever," we might have a fairer result, and fewer substandard bisque experiences.
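To make the bisque scenario concrete, the toy trigram comparison sketched earlier could be pointed at a batch of reviews to flag pairs that read suspiciously alike. The reviews and the 0.8 cutoff below are arbitrary assumptions for illustration:

# Flag review pairs that read suspiciously alike, reusing the
# trigram_profile and cosine_similarity helpers defined above.
from itertools import combinations

reviews = {
    "user_a": "Best bisque ever!! The service was flawless, truly the best.",
    "user_b": "Best bisque ever!! Flawless service, truly the best.",
    "user_c": "Decent soup, though the wait was long and the room was noisy.",
}

THRESHOLD = 0.8  # arbitrary cutoff, chosen purely for illustration
for (u1, t1), (u2, t2) in combinations(reviews.items(), 2):
    score = cosine_similarity(trigram_profile(t1), trigram_profile(t2))
    if score > THRESHOLD:
        print(f"{u1} and {u2} look suspiciously similar ({score:.2f})")

A flagged pair isn't proof of sock-puppetry, just a reason for a human (or a platform) to take a closer look.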
Either way, just uncovering the identity of a troll, or discovering that they're behind multiple posts, often doesn't do much to stem the flow of abuse, although it can shame and embarrass them in public. The real fight for troll-hunters and victims is about dealing with the abuse, pushing social media platforms to create proper protections, and creating a safe environment without resorting to censorship.
Image: John Bauer/Wikimedia Commons