To get a sense of how prevalent and influential Twitterbots have become, start with this statistic: in a presidential election where the winning candidate’s use of Twitter was one of the touchstones of the campaign, a University of Southern California study found that nearly 20% of all election-related tweets came from Twitterbots. During the final presidential debate alone, one pro-Trump bot, @amrightnow, generated 1,200 tweets; at the height of the campaign, a pro-Clinton bot, @loserDonaldTrump, produced more than 2,000 tweets a day. That’s an incredible amount of automated commentary, and it doesn’t stop at the election. Individuals, special interest groups, and companies are increasingly using Twitterbots to promote themselves, respond to other people and companies, and defend their actions and products.
But let’s back up. A Twitterbot is a program used to produce automated posts on Twitter, which can take a variety of forms, including periodic or pre-timed promotional tweets with links; automatic retweets of tweets containing specific words; and automatic responses to tweets containing specific phrases. The election statistics just begin to hint at the prevalence of Twitterbots. One study concluded that nearly a quarter of all tweets are generated by Twitterbots.
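To make the mechanics concrete, here is a minimal sketch of the first variety, a pre-timed promotional bot. It assumes the Tweepy library (v4) and valid Twitter API credentials; the credentials, messages, and links are purely illustrative placeholders, not a working campaign.

import time
import tweepy  # assumes Tweepy v4 and a Twitter developer account

# Hypothetical credentials issued through Twitter's developer portal.
client = tweepy.Client(
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# Illustrative promotional messages; a real bot would load these from a file or CMS.
PROMOTIONAL_TWEETS = [
    "Five office perks your team will actually use: https://example.com/perks",
    "How flexible space keeps employees happy: https://example.com/flex",
]

# Post one promotional tweet every six hours, then stop.
for text in PROMOTIONAL_TWEETS:
    client.create_tweet(text=text)
    time.sleep(6 * 60 * 60)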
Twitterbots aren’t necessarily bad. They can produce amusing tweets (like @MagicRealismBot, which takes random practices or tasks and exaggerates them with some sort of magic), useful responses (like @DearAssistant, which provides detailed answers to questions tweeted at it), and content that promotes your brand (like the series of @CBRE promotional tweets focusing on office perks that targeted office managers). However, before relying on Twitterbots, it’s a good idea to understand the potential PR problems and legal issues involved.
PR Nightmares
An automated Twitterbot will, by design, send tweets autonomously, without waiting for someone to review and approve them. That allows for numerous, rapid tweets that promote your brand and respond to customers in real time, but it also opens the possibility that inappropriate or unwanted tweets will be published before a real person can intervene. This is particularly problematic for Twitterbots that automatically retweet or respond to tweets containing key words or phrases: with some careful wording, a third party can game the programmed responses and manipulate the Twitterbot into generating an embarrassing or offensive tweet.
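The risk is easiest to see in code. The sketch below shows the keyword-triggered reply pattern in simplified form, with a crude blocklist check before anything is posted; the blocklist, template, and helper function are illustrative assumptions, not a complete safeguard, since determined users can still work around any fixed filter.

# Placeholder terms; a real deployment would use a maintained moderation list or service.
BLOCKLIST = {"offensive term 1", "offensive term 2"}

def build_reply(incoming_text, author_handle):
    """Return an automatic reply, or None if the tweet should be held for human review."""
    lowered = incoming_text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return None  # route to a person instead of replying automatically
    # Quoting or echoing the incoming tweet is exactly how bots get manipulated;
    # a fixed template is far safer than repeating user-supplied text.
    return f"Thanks for the mention, @{author_handle}! Learn more at https://example.com"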
Within the last two years, both Microsoft (with its Tay chatbot) and Coca-Cola (with its #MakeItHappy campaign) have run Twitterbots that generated tweets containing racist language after interacting with third parties, which was clearly not the intention of the companies or their programmers.
Twitterbots and Hate Speech
Although there is some question as to whether the First Amendment actually extends to Twitterbots, its text suggests that autonomous tweets receive as much protection as any other form of speech. The First Amendment merely states that the government “shall make no law… abridging the freedom of speech, or of the press.” Nothing in that language excludes Twitterbots.
The Supreme Court has ruled (in Matal v. Tam) that there is no “hate speech” exception to the First Amendment. This means that if your Twitterbot accidentally creates offensive tweets, that speech is likely protected. Any ordinance, state law, or court action that prohibits offensive automated tweets, or that imposes liability for Twitterbot speech based on race, color, creed, religion, or gender, would likely violate the First Amendment.
Libel Issues
Similarly, a Twitterbot autonomously tweeting about an individual could accidentally produce a defamatory tweet that results in a libel lawsuit. Typically, the plaintiff in libel litigation must prove that the defendant made a false and defamatory statement concerning the plaintiff, that the defendant made an unprivileged publication to a third party, and that the publisher acted negligently in publishing the offending communication.
The standard of due care regarding autonomous tweets has not been tested. It is possible that, given their widespread use, the owner of a Twitterbot cannot be held to have acted negligently merely because the Twitterbot tweeted a statement that was false and defamatory, so long as the owner promptly withdraws the tweet upon notice from the aggrieved party. However, no court has yet considered the issue.
Like any other marketing and communication tool, Twitterbots are only as useful as the people behind them. A thoughtless generator of autonomous tweets is far more likely to land in legal or public relations trouble than a well-conceived, well-programmed bot that reflects the interests of its owner. However, as Microsoft and Coca-Cola can attest, even a carefully conceived and executed Twitterbot is no guarantee against problems. If you want to deploy autonomous tweets, carefully consider what your goals are and how a Twitterbot actually serves them.