
Social bots – the good, the bad and the ugly – are gaining relevance in our real-world conversations as legislators grapple with investigations, healthcare organizations combat the spread of misinformation, and consumers curate social platform feeds with presumed autonomy. This special issue article examines the variations among bots and their impact, as well as methods for detecting them. Additional points are illustrated using Botometer, a supervised machine learning tool developed at Indiana University, to discuss accuracy in predictive data and analytics.

“What we create and consume on the Internet impacts all aspects of our daily lives, including our political, health, financial, and entertainment decisions. This increased influence of social media has been accompanied by an increase in attempts to alter the organic nature of our online discussions and exchanges of ideas,” write K.-C. Yang, O. Varol, C.A. Davis, E. Ferrara, A. Flammini, and F. Menczer.

  • Controlled completely or partially by computer algorithms, social bots create content and distribute it to human users.
  • Social bots can often trick human users by imitating other humans or news sources — the sophistication of the bot influences the subtlety and advancement of these interactions.
  • Commonly encountered social bots are capable of a wide range of activity, from simple reposts, to inflating follower counts with fake accounts, to “emulating temporal content posting and consumption” by human users.
  • Bots are especially pervasive in domains such as health and politics, using “fake news” or unreliably sourced content to attempt to sway human users’ behaviors, opinions, and choices. One need look no further than the 2010 and 2016 U.S. elections for concrete evidence of bots’ influence.
  • Given documented cases of public manipulation and the risk of more, governing bodies and organizations are working on proactive measures to identify and block bots on social platforms.
  • Effective bot detection methods rely on machine learning algorithms and comprehensive datasets that include both human and bot data.
  • Public engagement is a critical component of combating bots; however, there is the double challenge of (1) limited public awareness and (2) an unwillingness to use tools to identify and combat bots.
  • Indiana University developed a case study using “Botometer” to show how such countermeasures are perceived and used by the public.
  • Botometer draws on a large set of account features to identify many distinctive types of bots, and the key takeaway is that countermeasures must be able to rapidly change and adapt as human and bot interactions evolve.
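The supervised approach described above can be sketched in miniature. This is not Botometer's actual pipeline; it is an illustrative example using synthetic account features (posts per day, follower ratio, inter-post interval, repost fraction — all hypothetical choices) and a random forest classifier, one common supervised learning method for this kind of task.

```python
# Illustrative sketch only (not Botometer's real code): train a
# supervised classifier on synthetic "human" vs. "bot" account features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400  # synthetic accounts per class

# Hypothetical per-account features:
# posts/day, followers-to-following ratio,
# mean seconds between posts, fraction of posts that are reposts.
humans = np.column_stack([
    rng.normal(5, 2, n),
    rng.normal(1.0, 0.3, n),
    rng.normal(3600, 900, n),
    rng.normal(0.2, 0.1, n),
])
bots = np.column_stack([
    rng.normal(50, 10, n),     # bots post far more often
    rng.normal(0.1, 0.05, n),  # few followers relative to follows
    rng.normal(60, 20, n),     # near-constant, rapid posting
    rng.normal(0.9, 0.05, n),  # mostly reposts
])

X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

On cleanly separated synthetic data like this, accuracy is trivially high; the article's point is that real bots blur these feature distributions by mimicking human timing and content, which is why detectors must keep retraining on fresh labeled data.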

“The fight against online manipulation must be carried out by our entire society and not just AI researchers. Media literacy efforts may be critical in that respect,” the report concludes. I read the article mentioned above and thought it was interesting. While I am not offering an endorsement of a strategy, tactics, thoughts, service, or a company or author, the information was intellectually stimulating, thoughtful, and worth a review.