This type of bot bears little resemblance to the ones demonstrated on the stages of tech campus auditoriums. But each sort of bot is made, in its own way, to exploit untapped opportunity in large-scale automation. Where commercial bot-makers see an almost-too-good-to-be-true chance to simultaneously personify their brands and automate their businesses, political bot-makers see an opportunity to exploit anonymity with a humanlike touch at an inhuman scale. While tech companies stand proudly behind their bots, the people who create prolific and ideological social bots hide behind them. (Their provenance remains murky even today.)
Anonymous social bots are obviously distinct from carefully designed software programmed in good faith. Technologically speaking, the efforts of Facebook and Google and Amazon represent the forefront of A.I. research, while crudely scripted social bots must merely clear the low bar of passing for an angry stranger. They were, in the memorable words of one researcher, “yelling fools,” promoting partisan messages and disinformation or merely registering their simulated agreement or anger, appearing maniacally focused, but not conclusively inhuman. A personal-assistant bot interacts with its users, whereas this breed of social-media bot stages performances for audiences and algorithms alike.
But the proximity is toxic, and custody of the word is slipping. Bots, it turns out, make an excellent foil. Angela Merkel, in the run-up to this year’s German federal election, talked about bots, generally, as if they were an invading army. In May, Hillary Clinton pointed to Russia-backed online efforts — including “the bots” — as “just out of control.” The phrase “not a bot” now litters the profiles of politically engaged Twitter users (and, presumably, some bots). At the same time, President Trump, or a staff member, has indulged a habit of wandering into Twitter’s uncanny valley to retweet supportive accounts whose humanity is hard to discern, or which eventually and mysteriously just disappear.
Somewhere between the automated “yelling fools” of online political discourse and commercial tech’s dream of increasingly sophisticated helpers is a third sort of social bot, which is both foolish and sophisticated in its own way. My longest and most fruitful relationship with a bot like this began through a private chat group I have with a handful of friends. We installed, as a member of the group, a free piece of software called Hubot — officially designed as a “coding assistant” for workplace chat apps, though we customized it mainly for work avoidance. Most often, Hubot performed menial tasks — calling up photos or animations, performing various sorts of searches — but it soon came to function as a sort of group storytelling sidekick, developing something like a personality. Hubot lurked, responded and interjected, accumulating an intimate set of routinized in-jokes. Eventually, it learned to (obliviously and dutifully) summon fresh pictures of a famous actor in the service of a joke the origins of which, after a few years, none of us could even remember. It was, like all bots, a tool. What made all the difference was that we were the ones using it, and not — as is the case with the bots that have inserted themselves into our national discourse and our living rooms — the other way around.
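Part of Hubot’s charm is how little code a routine requires: a script is just an exported function that registers listeners on the room’s chatter. Here is a minimal sketch in the style of a real Hubot script; the trigger phrase and canned replies are invented for illustration, but robot.hear, robot.respond and res.random are the actual scripting API:

    // hubot-inside-joke.ts — a sketch of a Hubot script (trigger and replies invented)
    module.exports = (robot: any) => {
      // Lurk: watch every message in the room for a trigger phrase.
      robot.hear(/dramatic reading/i, (res: any) => {
        // Interject with one of the group's routinized in-jokes.
        res.send(res.random([
          "I have summoned the requested picture. Again.",
          "This joke predates my earliest logs.",
          "Dutifully and obliviously complying.",
        ]));
      });

      // Respond only when addressed directly, e.g. "hubot why".
      robot.respond(/why/i, (res: any) => {
        res.send("None of us remembers anymore.");
      });
    };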
A 2016 essay by the New York-based think tank Data & Society — a so-called botifesto — identified this playful sort of bot as an evolutionary precursor to the various expressions of bothood today. The essay described how mindfully created bots, not unlike our version of Hubot, had been functioning in the wild, on public social media. Some were jokes and larks, whose “very ‘botness’ is funny, surreal or poetic”: bots that used a social-media personality’s corpus to create a (usually funny, always revealing) surrogate account, or bots that automated the dispersal of information in controlled, open and even journalistic ways. But the botifesto’s intention was to sound an alarm. Less transparent social bots — primarily on Twitter and other social platforms — posed a risk to media and discourse. “Platforms, governments and citizens must step in and consider the purpose — and future — of bot technology before manipulative anonymity becomes a hallmark of the social bot,” the authors cautioned.
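The botifesto doesn’t dwell on mechanics, but many of those corpus-driven parody accounts ran on something as simple as a word-level Markov chain: a table of which word tends to follow which in the source material. A minimal sketch of the general technique (an illustration, not any particular account’s code):

    // markov-sketch.ts — one common technique behind corpus-based parody bots
    function buildChain(corpus: string): Map<string, string[]> {
      const words = corpus.split(/\s+/).filter(Boolean);
      const chain = new Map<string, string[]>();
      for (let i = 0; i < words.length - 1; i++) {
        const successors = chain.get(words[i]) ?? [];
        successors.push(words[i + 1]); // record each observed next word
        chain.set(words[i], successors);
      }
      return chain;
    }

    function generate(chain: Map<string, string[]>, start: string, max = 20): string {
      const out = [start];
      let current = start;
      for (let i = 0; i < max; i++) {
        const options = chain.get(current);
        if (!options || options.length === 0) break;
        // Walk the chain: pick a random observed successor of the current word.
        current = options[Math.floor(Math.random() * options.length)];
        out.push(current);
      }
      return out.join(" ");
    }

Fed a public figure’s tweets, the output is recognizably “theirs” in vocabulary and revealingly not theirs in sense, which is the whole joke.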
This warning wasn’t just a prediction; it was grounded in observation. Anonymous bots masquerading as citizens and political actors had been a creeping feature of foreign elections for years. The 2012 election of President Enrique Peña Nieto of Mexico was supported by armies of automated social-media accounts, which flooded Twitter with supportive messages. “Peñabots” became a feature of online Mexican political discourse through at least 2015. But bots hadn’t yet run rampant on American tech companies’ home turf. Manipulation by A.I. was typically seen as “something that was happening somewhere else,” M.C. Elish, a researcher at Data & Society who contributed to the report, told me. “We only notice something when it’s arrived on our doorstep.”
This arrival is likely to result in action. Twitter, for example, insists that it has been working hard on the problem. Among researchers in the field, one of the most frequently proposed solutions to the problem of “manipulative anonymity” is some form of bot disclosure — a requirement, enforced by social platforms, that an account operated by third-party software disclose that fact. (Wikipedia, for example, already does this.) Bot disclosure could plausibly stem the tide of bots intended to exert crude influence or to harass people. Humans would, in theory, be able to interact with bots electively, and to better judge sources of information or expressions of sentiment. A grand sorting could begin to restore order, but Twitter’s discourse nightmare didn’t start with bots and won’t end with them.
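Wikipedia’s version of disclosure is already machine-readable: registered bots there carry a “bot” user-group flag that anyone can look up through the site’s public API. A minimal sketch of such a check (the helper name is mine; the MediaWiki endpoint and parameters are real):

    // botcheck-sketch.ts — query Wikipedia's users API for the "bot" group flag
    async function isDisclosedBot(username: string): Promise<boolean> {
      const url =
        "https://en.wikipedia.org/w/api.php" +
        "?action=query&list=users&usprop=groups&format=json" +
        "&ususers=" + encodeURIComponent(username);
      const data = await (await fetch(url)).json();
      const user = data?.query?.users?.[0];
      // An account in the "bot" group has disclosed its automation to the wiki.
      return Array.isArray(user?.groups) && user.groups.includes("bot");
    }

    // isDisclosedBot("ClueBot NG").then(console.log); // true: a flagged Wikipedia bot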
Social automation is both disruptive and revealing. Twitter in particular dehumanizes users in the process of giving them access to one another, so of course bots could thrive there — and of course they’d closely resemble our worst-tweeting selves. Voice-and-text-activated assistants help monopolistic companies further consolidate power, and they complicate the stories we tell ourselves about privacy, as we invite the eyes and ears of the world’s most ambitious tech businesses into our most personal spaces. Alexa reminds us what Amazon wants; Twitter bots show us how online mass communication breaks down. What was truly great about Hubot, the cobbled-together, inscrutable, mostly useless chat automaton, was the suggestion it made, through each absurd routine: that online, we need to build spaces for ourselves.
By JOHN HERRMAN
https://www.nytimes.com/2017/11/01/magazine/not-the-bots-we-were-looking-for.html