
The impact of social bots on elections


Whether in the Brexit referendum, the conflict between Russia and Ukraine, or the recently concluded presidential campaign in the United States, there are increasing instances of social bots influencing the outcome of elections and political debates. This paper presents findings on what social bots are, how they work, and the threats they pose. In addition, we discuss examples of social bots being used for political gain. We end with an outlook on the developments that can be expected in this area in the near future.


Defining social bots


The term "cure" as an abbreviation of the bot was developed as a description of the programs carried out independently on the internet. But this account is not completely clear, and a more thorough meaning is required. The term is often applied to the letterings used by search engines such as Google to scrub the internet and workstations that have been septic by malicious software and then live a life of their own (Richard, et al. 2015).

But when we talk about social bots today, we usually mean automated accounts on social networks, along with the algorithms that perform their routine tasks, including so-called "chatbots," a simple form of artificial intelligence. These "social bots" operate by hiding their actual identity and passing themselves off as genuine social media user accounts. Over two decades ago, the computer scientist Roger Clarke already highlighted the threats that may arise from an "active digital persona."



How social bots work


Platforms such as Facebook, Google, and IBM currently promote chatbots as a new trend that could even make dedicated apps and websites superfluous, since users can engage directly with a bot assistant. At the same time, social bots are increasingly being used in a political context. The aim is to influence the public, and specifically selected groups, using automated content and interaction. Technically, social bots are very easy to develop these days (Sandra et al., 2015).


Three elements are required: user accounts registered on the social network, access to the network's programming interface (API), and software that controls the bot accounts automatically. The registered accounts are generally purchased on the internet.


Suppliers of fake social media accounts create them either manually or automatically; some vendors also trade in hacked accounts. Depending on the quality, 1,000 fake accounts can currently be purchased for between $45 (simple Twitter accounts) and $150 ("older" Facebook accounts) (Daniel, 2015). Sellers generally operate from abroad (often Russia).

As a rule, social networking platforms offer free APIs to attract developers. However, there are major differences in the registration process and in API access, with the result that networks like Twitter and Instagram are infested with considerably more social bots than, say, Facebook, simply because it is easier to get access to the API.


Software to control social bots is also readily available. A high-quality package that can control 10,000 Twitter accounts costs about $500. In addition, social bots can easily be programmed on the basis of existing software libraries; a very basic bot requires only about 15 lines of code.
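
To illustrate how low that barrier is, here is a minimal sketch of such a bot in Python using the tweepy library against Twitter's classic v1.1 API. The credential strings are placeholders for the keys of a registered (or purchased) account, and the message text is invented:

    # Minimal Twitter bot sketch (tweepy, Twitter API v1.1).
    import tweepy

    # Placeholder credentials; in practice these belong to a purchased account.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    # Post a prefabricated message decorated with a currently popular hashtag.
    api.update_status("Just an ordinary voter sharing my honest opinion. #election")

Under those assumptions, this is already a complete bot; everything beyond this point is refinement.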


Social bots come in different types. In the simplest case, the bot's autonomous activity is limited to sending prefabricated messages (Edwards, 2016). But there are also (much rarer) social bots that can interact with users and generate new texts independently.

Because everyday communication on social networks is generally unremarkable, even primitive social bots often do not raise suspicion. A typical Twitter bot, for example, can generate messages from pre-selected websites, automatically follow other users, and send prefabricated propaganda messages, decorated with whatever keywords and hashtags happen to be popular at the time, either at the push of a button or on a schedule.
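
A sketch of that posting logic, with the network call stubbed out (send_tweet is a hypothetical placeholder for a real API call such as api.update_status above, and the messages and hashtags are invented):

    import random
    import time

    MESSAGES = ["Candidate X is the only honest choice!",
                "The media won't tell you this..."]
    TRENDING = ["#election", "#debate"]  # in practice fetched from the platform

    def send_tweet(text):
        print("posting:", text)  # stub for the actual API call

    while True:
        # Decorate a canned message with a hashtag that is popular right now.
        send_tweet(random.choice(MESSAGES) + " " + random.choice(TRENDING))
        time.sleep(random.randint(600, 3600))  # wait 10-60 minutes between posts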

As far as the technology is concerned, it must be remembered that these social bots are essentially freely scalable (Vorderer et al., 2014). Someone who has a program suitable for controlling one bot can use it to control an entire army of social bots, as the sketch below suggests.
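
Scaling up is just a loop over credential sets; nothing else changes. In this hedged sketch, make_client is a hypothetical helper wrapping the authentication steps shown earlier:

    # Controlling an 'army' of bots is the same logic run once per account.
    ACCOUNTS = [("KEY_1", "SECRET_1"),
                ("KEY_2", "SECRET_2")]  # ...thousands more purchased accounts

    for key, secret in ACCOUNTS:
        client = make_client(key, secret)  # hypothetical auth helper
        client.update_status("Same propaganda message, different sender.")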


Threats posed by social bots


The most significant threat at present stems from the sheer number of messages that can be sent through a botnet. By manipulating trends on social networks, botnets can affect political and economic decision-making processes.


Under the now-familiar label of "big data," companies in many sectors increasingly attach great value to analyzing user behavior in social networks, whether to gain insight into how well their own brands are doing, to study the behavior of potential customers, or to reveal social trends (Richard et al., 2015). Such analyses are already used in the political sphere as well.

Although there is still evident caution in this area in Germany, political social media analysis has already developed into a significant market internationally, with players such as Social Analytics (engaged on behalf of Hillary Clinton) and Cambridge Analytica (in active service on behalf of Donald Trump).


But if trends are manipulated by social bots on a large scale and bots muscle in on all important debates (see next section), these analyses become inaccurate at best (Sandra et al., 2015). In the worst case, they can persuade politicians to address these trends in their statements or even in their policies, with the result that a position pushed by social bots gains a level of genuine support it could never have achieved on its own. Second, there is a risk that social bots affect the opinions of specific groups.


One can probably assume that bot posts do not achieve direct manipulation; studies consistently show that people do not change their political opinions simply because they see messages on social media. But a more subtle form of manipulation is very likely at stake (Daniel, 2015). If social bots are used, for example, to spread extreme content on a large scale within a discussion (e.g., in a Facebook group or under a subject-specific hashtag), individuals with moderate opinions tend to withdraw from that discussion.

What remains is a debate climate in which people holding radical positions feel encouraged. Third, social bots could also be used for specific purposes in a cyber-war scenario. The strategies range from infiltrating social networks to spy on users, to the targeted dissemination of misinformation (e.g., in crisis situations), to cyber-attacks that spread malicious software or organize so-called DDoS attacks (Edwards, 2016).


The effects of uncontrolled social bots on election outcomes


The following examples show that the strategies described above are already being used on a large scale; a closer analysis of them also illustrates the associated risks and the likely countermeasures. In the recently concluded presidential election in the United States, social bots can be assumed to have formed an important part of the candidates' followings. The online magazine Vocativ reported that the share of genuine Twitter followers was only around 60 percent both for the Democratic candidate Hillary Clinton and for the Republican candidate Donald Trump.

Donald Trump's share of fake followers had apparently increased significantly compared with figures from the summer of 2015. During the 2012 US presidential campaign, there had already been a sudden, conspicuous increase in the number of followers of the then-challenger Mitt Romney, which was found to be due to fake followers. A large number of fake users have also been identified in connection with political parties in Switzerland.

In the run-up to the Italian parliamentary elections in 2013, the Twitter followers of a candidate were analyzed using a "bot detection algorithm," and more than half turned out to be fake follower accounts.
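
That study's algorithm is not reproduced here, but a minimal sketch of the kind of feature-based heuristic such detection typically relies on might look as follows. The features and thresholds are illustrative assumptions, not those of the study:

    def bot_score(account):
        # Crude heuristic: the higher the score, the more bot-like the account.
        score = 0
        if account["followers"] < 10:          # almost nobody follows back
            score += 1
        if account["friends"] > 20 * max(account["followers"], 1):
            score += 1                         # follows far more than followed
        if account["tweets_per_day"] > 50:     # inhuman posting volume
            score += 1
        if not account["has_profile_image"]:   # default profile picture
            score += 1
        return score

    followers = [{"followers": 2, "friends": 900,
                  "tweets_per_day": 120, "has_profile_image": False}]
    fake = sum(1 for a in followers if bot_score(a) >= 3)
    print(f"{fake}/{len(followers)} followers look fake")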


The easiest way to manipulate social networks is for social bots to produce pure volume without generating new content. At first, this may seem a relatively harmless form of manipulation, but its consequences are not insignificant, because social networks are governed by algorithms that give preference to popular content. Accounts with a large following receive more favorable treatment from the network and thus reach more genuine users.

A common method for manipulating trends on social networks is therefore to target purely quantitative key figures, such as "likes" and "shares" on Facebook or the frequency of hashtags on Twitter. During the Brexit debate, researchers found that a very large number of tweets with the hashtag "#Brexit" originated from social bots, whereas hashtags related to the Remain campaign (e.g., "#StrongerIn") were used much less frequently by bots (Richard et al., 2015). This example also shows, however, that it is easy to overestimate the risk associated with social bots. On paper, it could appear that the Leave campaign was clearly leading in popular support.
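
The underlying measurement is straightforward: given tweets whose authors have been labeled bot or human (for instance by a heuristic like the one sketched above), count the bot share per hashtag. The data below is invented for illustration:

    from collections import Counter

    # (hashtag, author_is_bot) pairs; invented sample data.
    tweets = [("#Brexit", True), ("#Brexit", True), ("#Brexit", False),
              ("#StrongerIn", False), ("#StrongerIn", True)]

    total, bots = Counter(), Counter()
    for tag, is_bot in tweets:
        total[tag] += 1
        bots[tag] += is_bot   # True counts as 1, False as 0

    for tag in total:
        print(f"{tag}: {bots[tag] / total[tag]:.0%} of tweets from bots")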


This could have led Remain supporters to stay at home because the result seemed predetermined. But as we know, the general feeling before the vote was that there would be a (probably close) victory for Remain. Nor do social bots appear to have had a noticeable effect on voting behavior: in the UK, Twitter is used predominantly by the young and well-educated, and it was precisely this demographic group that voted against Brexit (Sandra et al., 2015).


A closer look at the figures also reveals that a large proportion of the bots using Brexit-related hashtags were most likely not genuine political bots at all, but mere advertising spam bots exploiting whatever hashtags happened to be trending at the time. This example shows that even great efforts to influence a debate with the help of social bots do not necessarily translate into effective influence on its outcome.


A much more complex botnet has been described in connection with the conflict in Ukraine. It comprises some 15,000 Twitter accounts publishing on average 60,000 messages a day. The content of the tweets is selected to match the likely interests of young Ukrainian men: the bots talk a lot about football, make sexist statements, and spread links to illegal downloads of recent movies (Richard et al., 2015).


But interspersed among these tweets, propaganda messages for "Right Sector," an ultranationalist Ukrainian organization with a paramilitary wing, are spread systematically. Several manipulation strategies are in evidence here. The first is distorting trends by popularizing certain hashtags.

Beyond that, the bots also deliberately combine slogans like "Maidan" and "Euromaidan" with the "Right Sector" hashtag, prompting Twitter's algorithms to show "Right Sector" content to users searching for "Maidan" as well. Another strategy observed in this context is the spreading of misinformation (Sandra et al., 2015).


Moreover, the social bots systematically follow Ukrainian politicians in order to expand their own reach. This is effective because even if the politicians are not fooled into passing on the bots' messages, intentionally or unintentionally, Twitter becomes more likely to present the bots' tweets to other users who follow the same politicians. The Ukrainian social bots also have a whole arsenal of tricks at their disposal to circumvent traditional bot-detection algorithms (Daniel, 2015).

They follow each other and consequently show a plausible balance between friends and followers; they post tweets on a schedule that simulates breaks and rest periods yet does not look mechanically regular; and they make small changes to tweets so that the message stays the same while automated detection programs no longer recognize the texts as identical copies.
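
Two of these tricks are easy to sketch: jittered schedules with a nightly rest period, and small text mutations that defeat exact-duplicate detection. The substitution rules below are illustrative assumptions:

    import random

    def mutate(text):
        # Vary a message slightly so copies no longer compare as identical
        # (a real bot would retry until the text actually changes).
        swaps = {"photos": "pics", "great": "gr8", "!": "!!"}  # illustrative rules
        old, new = random.choice(list(swaps.items()))
        return text.replace(old, new, 1)

    def next_delay(hour):
        # Sleep through the 'night' to mimic human rest periods.
        if 0 <= hour < 7:
            return (7 - hour) * 3600       # rest until morning
        return random.randint(300, 2700)   # otherwise 5-45 minute jitter

    msg = "Check out these photos from the match!"
    print(mutate(msg))  # e.g. "Check out these pics from the match!"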



Long- and short-term effects of social bots on the electoral system


Social bots are here to stay. Methods for detecting them are improving all the time, but the same applies to the bots themselves: each side can analyze the other's new methods relatively quickly and develop appropriate countermeasures. Overall, the share of social bots among social network users will probably level off at a relatively high value (Richard et al., 2015).

That said, major distortions may occur in the interim, especially if bot activity suddenly increases in volume or if there is a new leap in the quality of bot technology. The former is of particular concern around specific events, such as elections or crisis situations; in these scenarios, social bots have an easy time of it, because the manipulation is often not revealed until the event is in the past. And there are already clear signs of such a leap in bot technology (Sandra et al., 2015).

An increasing number of excellent development tools for artificial intelligence, particularly for the understanding and generation of text (natural language processing and natural language generation), are currently being made available for free by companies like Google, Facebook, and IBM, which hope that this will significantly advance their technology. Equipped with these tools, bot developers are now working on a new generation of social bots that ordinary users will find nearly impossible to detect.
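
Even without such industrial toolkits, generating "new" text is not hard. A minimal sketch of the classic approach, a first-order Markov chain trained on scraped posts (the training snippet here is invented):

    import random
    from collections import defaultdict

    corpus = ("the election is rigged and the media is lying and "
              "the people deserve the truth about the election").split()

    # Build the chain: each word maps to the words observed after it.
    chain = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        chain[prev].append(nxt)

    # Generate a 'new' message by a random walk through the chain.
    word, out = "the", ["the"]
    for _ in range(12):
        word = random.choice(chain[word])
        out.append(word)
    print(" ".join(out))

Each run produces a slightly different sentence, which is exactly what the duplicate-detection evasion described above requires.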


 

Meanwhile, economic interest in the use of social bots is bringing about a radical change in the rules: bots are becoming all but legitimized. They are no longer seen only as a manipulation threat but also as helpful assistants in daily life, and on some platforms bots are openly identified as such (e.g., on Slack and Telegram).

The legitimate proliferation of bots is also likely to change awareness among users. Someone who interacts with social bots in everyday life will no longer be surprised by them and will be more inclined to ask whether a message was sent by a human or a machine. Overall, as the influence and reach of social media continue to grow, we will all have to learn to deal with this tool. The social bot is an example of how digitization invalidates a basic truth that has applied almost universally to date: that audience size is ultimately an indication of quality. This is no longer true, because a message spread across millions of posts can still be false.



References

1. Daniel, S. (2015). State hostility causes and enjoyment in bot versus bot setting social media campaigns. Journal of Communication, 62(4), 45-719.

2. Edwards, D. (2016). Competition and social bots. Emory University Publication.

3. Richard, M., Hammer, M., Linda, K., & Joel, G. (2015). The hypercompetitive attitude scales construction. Journal of Personality Assessment, 55(3), 630-639.

4. Sandra, M., Kinnie, J., & Terry, C. (2015). Factorial based analysis in scaled measurement competitiveness. Educational and Psychological Measurement, 62(1), 34-176.

5. Vorderer, K., Hartmann, T., Klimmit, C., & Schramm, H. (2014). Explanation of the enjoyment of social media campaigns. Journal of Computer-Mediated Communication, 11(4), 45-176.

6. Williams, C., & Clippinger, C. (2012). Aggressive competition and social bots. Computers in Human Behavior, 18(5), 15-67.

