Technology and democracy

According to the OECD, participation[1] comprises three elements: access to information, consultation, and active participation. It is often said that information and communication technologies (ICTs) have increased participation in global governance arrangements by providing global and affordable access to information, offering new possibilities to consult vast numbers of individuals and stakeholders simultaneously, and enabling active participation in some decisional and non-decisional processes. E-democracy, e-participation and e-government are some of the expressions that convey the positive influence of ICTs on international relations and global governance. ICTs open a new horizon for multi-stakeholder processes and inclusive governance initiatives. What Rosenau calls a skill revolution is indeed the empowerment of many individuals on the planet to better understand the world and relate to it.

However, the latest US elections and the Brexit vote revealed a darker side of ICTs: bots, trolls and fake accounts, among others. These technologies have been used for several years in many countries, by governments, political parties and governmental agencies. The infamous Russian Internet Research Agency (IRA), a secretive technology firm according to US officials, was dedicated to spreading fake news and sowing social division in the West. According to CNN[2], the IRA’s monthly budget was around US$1 million in 2013, split between departments that included Russian-language operations and the use of social media in English. Its “Department of Provocations” was tasked with creating news items to achieve specific goals. This troll factory officially ceased operations in December 2016, although a new company named Glavset is registered at the same address in Saint Petersburg, and its director general bears the same name as the boss of the IRA. Investigative journalist Andrei Zakharov, who works for the business media group RBC, told CNN its work continues.

In May this year, the US Advisory Commission on Public Diplomacy published a report entitled “Can Public Diplomacy Survive the Internet: Bots, Echo-Chambers and Disinformation”[3] in response to the growing threats stemming from new technologies and not-so-new actors. The report questions the role of ICTs in Western democracies and identifies three trends, one of which this article focuses on: recent technological advances transforming the nature of communication.

Many actors, state and non-state alike, compete in cyberspace for access to personal and corporate data, for attention, and for control of online narratives. Instead of safeguarding individuals’ personal information, the GAFA actors (Google, Amazon, Facebook and Apple) tend to use this data for commercial purposes. Personal data is at the heart of the Big Data phenomenon, in which the huge amounts of personal data we disclose, willingly or unwillingly, are collected, stored, processed, and traded among a small group of actors. For instance, LinkedIn sells data to some governments.

Collecting information is one aspect of the informational society, to quote Manuel Castells, where data is the raw material used to create wealth and develop innovations. By 2030, an estimated 20 billion things will be connected to the Internet. Sharing data and information can lead to new levels of innovation and economic development: the Internet of Things (IoT) is expected to add, on average, one percentage point of GDP growth per year. However, these innovations come at a cost to privacy and democracy. In a world where our actions, online and offline, are constantly monitored, and our data stored, collected and processed, we are at a crossroads: how do we ensure that individual and corporate data is protected, and how do we ensure that the basics of votes and elections are safe? In other words, how do we ensure that voters are not unduly influenced by foreign countries or private corporations through bots? Where is the limit of technology’s influence on democracy? Since ICTs are extremely pervasive and part of every moment of our lives, influence can quickly become problematic. Today more than ever, information is power.

In this setting, the centralization of information raises many questions. Information is centralized in two respects. First, a few major technology players have the capacity to collect, store and process large amounts of data on a global scale. Often grouped under the acronym GAFA, these private actors have replaced the state as the main information collectors. These multinational corporations centralize most information and the technological capacity to exploit it: Google has both the information and the almost exclusive technological capacity to identify and follow an individual on a global scale, 24/7. Second, information is centralized because users only see one part of the information available, selected for them, and on their behalf, by the same GAFA players. With the concept of the Filter Bubble, Eli Pariser convincingly showed how algorithms increasingly restrict our access to alternative information and to points of view that differ from our own. The foundations of democracy, open debate and open information, are threatened. The traditional gatekeepers (elite media, for instance) have lost not only some of their readers, but also some of their power, since the choice of what is newsworthy is no longer made by a newspaper editor (who respects a journalistic code of conduct), but by an obscure algorithm (obscure in the sense that it is hidden) developed to maximize advertising and profit objectives.
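The filter-bubble mechanism can be made concrete with a toy sketch. The snippet below is purely illustrative: the `rank_feed` function, the topic vectors and the ad weights are invented for this example and do not reflect any real platform’s algorithm. It shows how a feed that ranks items by similarity to a user’s click history, weighted by advertising revenue, systematically pushes dissimilar viewpoints to the bottom.

```python
# Illustrative sketch of a personalization filter (not any real
# platform's algorithm). Articles are scored by how well their topic
# vector matches the user's click history, plus an ad-revenue bonus.

def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def rank_feed(articles, user_profile, ad_weight=0.5):
    """Return article ids, best-scoring first.

    articles: list of (article_id, topic_vector, ad_revenue)
    user_profile: averaged topic vector of the user's past clicks
    """
    def score(article):
        _, topics, revenue = article
        return dot(topics, user_profile) + ad_weight * revenue
    return [a[0] for a in sorted(articles, key=score, reverse=True)]

# A user whose history leans heavily toward topic 0:
profile = [0.9, 0.1]
articles = [
    ("familiar_view", [1.0, 0.0], 0.3),  # echoes the user's leaning
    ("opposing_view", [0.0, 1.0], 0.2),  # alternative viewpoint
    ("sponsored",     [0.5, 0.5], 1.0),  # high ad revenue
]
print(rank_feed(articles, profile))
# → ['familiar_view', 'sponsored', 'opposing_view']
```

Even in this two-topic toy world, the opposing viewpoint is ranked last every time, and a high-revenue sponsored item outranks it regardless of relevance: the “obscure algorithm” optimizes engagement and profit, not exposure to alternative views.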

As Jason Stanley notes, “without truth, there is just power.” It is therefore crucial to double down on fact-checking and evidence-based news and information programming. The emergence of social bots, artificial intelligence, and computational propaganda are among the controversial advances the report identifies. As Francis Fukuyama argues in the same report, “The speed and scale of today’s ‘weaponization of information’ is unprecedented. (…) The traditional answer to the spread of bad information has been to inject good information into the mix, on the assumption that the truth would rise to the top. But in a world of trolls and bots, where simple facts are instantly countered by automated agents, this strategy may not be adequate.”

The scientific literature on cognitive bias indicates that human decisions are often biased. As shown by Kahneman (2003), decision-making processes are influenced by a series of cognitive biases. Making a decision online, for instance, implies a different context and a form of distancing, both in time and in space. The psychological concept of distancing predicts that greater distance implies less familiarity, reduced similarity to the self, and a smaller allocation of resources (Stephan, Liberman, & Trope, 2011). In other words, making a decision “online” triggers distinct cognitive biases, which might lead to different outcomes.

One of these biases is the truth bias. As Hancock notes in the US report, this bias is in fact quite rational: most of the messages a person encounters in a day are honest, so being biased toward the truth is almost always the correct response. At the same time, people continuously evaluate the validity of their understanding of the world. This process, called “epistemic vigilance,” is a continuous check that the information a person believes to be true is accurate. With this objective in mind, people can detect lies when they have the time, resources, and motivation. Lies are often discovered through contradicting information from a third source, or evidence that challenges a deceptive account (see Hancock, p. 49).

In this context, fact-checking is not only necessary to support democracy (Bennett, p. 61; Stanley, p. 71), but also effective, even in hyper-partisan settings (see Porter, p. 55). Thanks to ICTs, disinformation campaigns can be detected and combated far more easily, and in real time (Henick and Walsh, p. 65). In parallel to regular leaks (the Panama Papers, the Paradise Papers, etc.), some actors and leaders challenge the notion of truth with opinion and point of view. Truth is increasingly perceived as a point of view, and education becomes crucial to distinguish opinion from fact, the “alternative fact” from the real one.

Citizens increasingly need to make sense of a more uncertain and changing world. Framing becomes crucial to explain (when possible) terrorism, offshoring, climate change, and natural disasters. The general assumption that a new generation will live better than the previous one is long gone. The rules of the game have changed due to globalization, the information revolutions, and climate change. These new rules, combined with the emergence of new actors (civil society, the BRICS, terrorist groups), impact the “local” and “day-to-day” reality of billions of citizens. Facts are being replaced with narratives, and in this context, whoever chooses the frame of an event acquires power. The danger is that facts will lose some of their meaning. Numbers and percentages already carry little weight, since they are used at political rallies to dress opinions up “with facts,” and are in some cases misused or purely invented.

In this respect, ICTs represent both an opportunity and a danger. They feed disinformation through the generalization of bots, trolls and fake accounts on social media, while at the same time offering the possibility to check facts and combat disinformation. Which of these two aspects of technology will win the battle? The balance will lean towards where there is a “demand”: a demand for true information or a demand for demagogy. In this respect, education is key. Citizens need to keep a critical approach to ICTs and to information, checking sources and facts. The future of our democracies depends on it.

[1] OECD (2001). Handbook on Information, Consultation and Public Participation in Policy-Making. Retrieved from

[2] CNN (2017). Exclusive: Putin’s ‘chef,’ the man behind the troll factory. Retrieved from

[3] Powers, S., & Kounalakis, M. (Eds.) (2017). Can Public Diplomacy Survive the Internet? Bots, Echo Chambers, and Disinformation. Retrieved from