Your love life is a little dreary lately, so you turn to the Internet. And after a few false starts, you find someone promising on an online dating site. But even after you exchange message after message, the flirtation shows no sign of transitioning into a real-world romance. Maybe your digital paramour is just shy.
Or maybe she's just digital.
You wouldn't be the first person to fall for a romance bot. Robert Epstein, a Harvard psychologist who co-edited a book on artificial intelligence, fell for bots imitating live women on dating sites not once, but twice. But the rise of social bots isn't just bad for love lives -- it could have broader implications for our ability to trust the authenticity of nearly every interaction we have online.
Bots in general aren't new. Your spam folder is filled to the brim with e-mails from them. But what is new is how difficult it is to identify some social bots and how they are being deployed to influence things outside of our commercial interactions, like political dialogues.
For example, a recent New York Times piece on social bots reported that thousands of Twitter bots started flooding the digital conversation during a dispute over a Russian parliamentary election in 2011, aiming to drown out anti-Kremlin activists. And similar tactics were deployed by the beleaguered Syrian government.
It's easy to see how tyrants could find such a campaign an attractive way to blunt the radically democratizing power of free speech online. Sure, the general public may be outraged by some government scandal and take to the Internet to voice its complaints. But deploy an even greater number of bots to argue with those citizens online, and they may lose hope in their cause: If everyone else discussing an issue seems to support the government's policy, what chance is there of reform? Since some researchers estimate that only 35 percent of the average Twitter user's followers are real people right now, and that within two years about 10 percent of the activity on online social networks will be generated by bots, it's a very real possibility that bots could have a major influence on these kinds of debates.
Of course, this is a familiar concept. For years, governments and companies have been paying actual people to comment online, creating digital astroturf movements that obscure and influence real public sentiment. But bots are becoming better at imitating real people: they are fed by news databases and given realistic sleep cycles. There are also persona management systems (like the one ordered by the U.S. Air Force) that make it easier to keep track of these automatic sock puppets while giving them digital footprints spread across social networks -- and making it harder to tell the people from the machines.
Some fiction has already caught on to this idea. In Cory Doctorow's recent novel Homeland, a nefarious military contractor developed a persona management system called "Hearts and Minds" and used it to flood online conversations about a leaked surveillance program with dismissive comments.
There's no evidence that anything similar has been happening in the comment sections of articles about Edward Snowden's National Security Agency leaks. But there are some signs that bots are joining in U.S. political debates. The same Times report notes that researchers at Indiana University discovered two Twitter accounts that sent out some 20,000 similar tweets, most of them linking to or promoting the Web site of then-House Minority Leader John Boehner in the run-up to the last midterm elections.