
When U.S. National Security Adviser John Bolton met with Russian President Vladimir Putin in late October 2018, one of the clear messages he delivered to Putin was to stop meddling in U.S. elections. Bolton underlined that Russian interference, particularly in the 2016 U.S. presidential election, had damaged Russia’s own interests and also been “particularly harmful for Russian-American relations without providing anything for them in return.”
The list of Russian actions — all of which Putin has vigorously denied — is long. In 2015, the U.S. Federal Bureau of Investigation (FBI) informed the Democratic National Committee (DNC) that one of its computers had been compromised by Russian hackers. John Podesta, the chairman of Hillary Clinton’s presidential campaign, fell victim to a phishing attack that tricked him into changing his password, giving the attackers access to his emails. In early June 2016, WikiLeaks founder Julian Assange announced that WikiLeaks had received hundreds of Clinton’s emails and would publish them on its website.

Later that month, Wikileaks published thousands of emails it had secretly obtained from an unidentified source — likely Russian — that had been stolen from a DNC server. Some of those emails were so politically compromising that they forced the resignation of DNC chairwoman Debbie Wasserman Schultz. In August, hackers were able to get the personal telephone numbers and email addresses of leading Democratic congressional campaign members. There were more leaks and more compromising postings online, which again seemed to have Russian fingerprints all over them.
Then, exactly a month before the U.S. election, the Department of Homeland Security and the Office of the Director of National Intelligence confirmed everyone’s worst fears. In a joint statement, the two agencies declared: “The U.S. Intelligence Community is confident that the Russian government directed the recent compromises of emails from U.S. persons and institutions, including from U.S. political organizations. The recent disclosures of alleged hacked emails on sites such as DCLeaks.com and WikiLeaks and by the Guccifer 2.0 online persona are consistent with the methods and motivations of Russian-directed efforts. These thefts and disclosures are intended to interfere with the U.S. election process. Such activity is not new to Moscow — the Russians have used similar tactics and techniques across Europe and Eurasia, for example, to influence public opinion there. We believe, based on the scope and sensitivity of these efforts, that only Russia’s senior-most officials could have authorized these activities.”
Regardless of whether Putin has hurt his own interests, he’s sown deep dissension in the American body politic. Accusations not just of electoral interference, but of direct political collusion, are the subject of a special inquiry led by former FBI director Robert Mueller. At press time, following a string of court filings, jail sentences and sentencing memos, his final report on possible collusion by U.S. President Donald Trump and his close associates was expected soon.
There is little doubt that there is a growing threat to democracy from the digital world, which increasingly seems to resemble a lawless Wild West. And it is not just American democratic institutions that are at risk, but democratic institutions everywhere, including Canada.
In the past two years, major cases of foreign election interference have been detected in the U.S. and French presidential elections, the German (parliamentary), Italian, and Mexican elections, Macedonia’s name-change referendum and Catalonia’s (illegal) independence vote.
The threat comes in many forms. Authoritarian regimes such as Russia and China tamper with electoral systems, hack into political party computers, steal emails and other information, sow discord by spreading rumours and fake news through online disinformation campaigns and social media, and reveal embarrassing personal details of politicians and celebrities. These activities are a growing threat, especially when they find willing accomplices who see partisan gain in supporting them. But there are other threats. Among them are corporations that harvest personal data from social media platforms such as Facebook, and then sell that data without user consent.
In late 2009, Facebook changed its privacy settings to make more information public by default. In 2010, Facebook’s founder and CEO, Mark Zuckerberg, blithely stated that privacy is no longer a social norm: “People have really gotten comfortable sharing more information and different kinds, but more openly and with more people.” His words would later come back to haunt him.
In 2018, the Cambridge Analytica scandal erupted when a company that had developed an app for Facebook, ostensibly for research purposes with the informed consent of users, used its access to Facebook accounts to collect all kinds of personal data that it subsequently harvested to develop psychological profiles of each individual. The information was then sold and used to direct highly targeted advertising in political campaigns. The scandal, which was revealed by young Canadian whistleblower Christopher Wylie, who had worked for Cambridge Analytica, was deeply embarrassing for Facebook and Zuckerberg, who found himself dragged before Congress and a British Parliamentary committee to apologize for his company’s actions and the egregious breach of privacy of Facebook users.
The Cambridge Analytica scandal, however, may just be the tip of the iceberg when it comes to the different risks users confront when they go online and how their personal data are collected and manipulated by the unscrupulous or the unwitting. When nude photos of actress Jennifer Lawrence were leaked online without her consent, Lawrence complained to Vanity Fair that the event was no less than a sex crime. “Just because I’m a public figure, just because I’m an actress,” she said, “does not mean I asked for this.”
Although the digital world permits enormous freedom of expression and allows all kinds of content to be posted and communicated online, in most liberal democracies, at least until recently, there have been few controls on or regulation of online content. Stories that are patently untrue can go viral in nanoseconds. For example, in January 2017, YourNewsWire, a Los Angeles-based website, reported “that Justin Bieber told a Bible study group that the music industry is run by pedophiles” and that, according to a study by National Public Radio, “25 million fraudulent votes had been cast for Hillary Clinton.” Both stories were untrue.
Hate speech and harassment are also major problems. As New York Times writer Frank Bruni wrote in the aftermath of a murderous attack on a Jewish synagogue in Pittsburgh and Cesar Sayoc’s alleged mailing of pipe bombs to prominent Democrats, the internet “creates terrorists. But well shy of that, it sows enmity by jumbling together information and misinformation to a point where there’s no discerning the real from the Russian.”
At the same time, maintaining diversity of content and allowing different voices to be heard are also growing challenges because of the overwhelming market dominance of a small number of online platforms such as Facebook, Twitter, Google, Amazon, and, in the entertainment world, Netflix. These tech giants enjoy oligopolistic, if not monopolistic, control in cyberspace. Further, the algorithms that manage and curate online content on these platforms tend to be written by young white males who lack the world experience or the kind of educational background that would expose them to different cultural viewpoints and processes of moral reasoning.

The lack of transparency in the way major internet platforms harvest and curate “big data” also means that the general public and regulators have little knowledge of corporate business models and how data are manipulated and marketed. Algorithms are closely kept trade secrets, much like the formula for Coca-Cola.
Some knowledgeable observers now question whether democracy can actually survive in a world of “big data” and artificial intelligence. As the magazine Scientific American explained in 2017, “Today, algorithms know pretty well what we do, what we think and how we feel—possibly even better than our friends and family or even ourselves.…The more is known about us, the less likely our choices are to be free and not predetermined by others. But it won’t stop there. Some software platforms are moving towards ‘persuasive computing.’ In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people.”
In George Orwell’s novel, Nineteen Eighty-Four, a big face gazed down from a wall with a caption that ran “Big Brother Is Watching You.” In today’s world, Orwell’s Big Brother seems archaic and clumsy. Governments and private entities have far more sophisticated tools at their disposal. Scientific American calls this the politics of “big nudging,” in which “on [a] massive scale, governments…[will be able to] steer citizens towards [preferred kinds of behavior]… The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right…. To many, this appears to be a sort of digital scepter that allows one to govern the masses efficiently, without having to involve citizens in democratic processes. Could this overcome vested interests and optimize the course of the world? If so, then citizens could be governed by a data-empowered ‘wise king,’ who would be able to produce desired economic and social outcomes almost as if with a digital magic wand.”
We may think we are still far off from this kind of world. However, authoritarian regimes such as China are already moving quickly to harness these technologies for nefarious purposes. Using facial recognition technology, China is deploying surveillance cameras and control over the country’s digital space to monitor the physical movements and online behaviour of all 1.4 billion of its citizens. As Business Insider reported last year, “China’s facial recognition surveillance has already proven to be eerily effective: Police in Nanchang, southeastern China, managed to locate and arrest a wanted suspect out of a 60,000-person pop concert earlier this month.”
It comes as no surprise that citizens everywhere are becoming increasingly mistrustful of governments and internet companies, including social media, when it comes to their own data and privacy.
A CIGI-IPSOS global survey of public attitudes in 25 countries conducted in 2017-2018 found that cyber criminals (82 per cent) and internet companies (74 per cent) are the largest sources of distrust online, more so than even governments — and this survey was conducted before news of the Cambridge Analytica scandal erupted. Distrust runs highest in North America (73 per cent) followed by the Middle East and Africa (71 per cent), with significantly lower levels in the BRICS (66 per cent) and Asia-Pacific countries (64 per cent). Interestingly, despite identifying internet companies and social media platforms as a common source of distrust, less than a quarter (23 per cent) of respondents pointed to control by corporate elites as a reason for their distrust.
An overwhelming majority (68 per cent) of those surveyed also exhibited high levels of distrust towards social media, with North Americans experiencing the highest level of distrust and countries in the Asia-Pacific region the lowest.
Ironically, however, social media companies exercise a disproportionate influence over what people see and do online, including the news they see (60 per cent) and their political point of view (42 per cent). For many of the world’s global citizens, social media have become the prism through which they see the world and the online universe.
At the same time, though, people are clearly worried that social media, in particular, have too much power (63 per cent) though there are significant regional variations in this perception.
Middle Easterners and Africans are the most concerned, followed by Latin Americans. North Americans and Europeans are somewhat more sanguine, but again not overly so, with the majority in both regions feeling social media have too much power.
On the one hand, the internet and the digital world have created unparalleled opportunities for freedom of expression, communication, commerce and the dissemination of knowledge. Such opportunities are vital to a vigorous, prosperous open society and democracy itself. On the other, the rapid evolution and growth of the digital ecosystem have led to major abuses and mounting public concern that the digital space needs gatekeepers to prevent such abuses and a further erosion of our democratic institutions.
Regulation of the digital space is “inevitable,” as a recent report jointly produced by the Centre for International Governance Innovation and Stanford University’s Global Digital Policy Incubator concludes. The report, titled Governance Innovation for a Connected World: Protecting Free Expression, Diversity and Civic Engagement in the Digital Ecosystem, argues that the challenge is essentially one of learning how best to apply the norms, standards and rules of the non-digital world — also sometimes referred to as the “analog world” — to the digital. It points out, for example, that “the non-digital world widely accepts that governments legitimately set ground rules in many sectors, for example, telecommunications common carrier regulation, transportation safety rules, broadcasting regulation and radio frequency allocation and spectrum management rules, among others.”
The report argues that the digital world increasingly needs similar kinds of regulation and legislation. However, in this new environment, it notes that “civil society will need to get over its long-standing aversion to having government intervene to control the behaviour of internet platforms and users [and] acceptance may be difficult to achieve because of fears that regulators may not take sufficient care to understand the fast-moving environment. The concern is that government may regulate to solve today’s (or yesterday’s) problems without considering that today’s dominant players can be replaced, which would rapidly make those regulations obsolete and could even work to impede innovation. The question is really how to avoid undesirable or unintended outcomes.”
When it comes to regulating hate speech, democratic security and privacy online, Europeans are clearly leading the way, though not without controversy. Germany recently introduced legislation banning hate speech online. The law, known as NetzDG (act to improve enforcement of the law in social networks), was introduced to deal with growing online hate speech directed at the major influx of refugees into Germany. Under the law, social media platforms that have more than two million subscribers are required to remove illegal hate speech content within 24 hours of receiving a user complaint. If they don’t, they face fines that could be as high as 50 million euros. A variety of concerns have been raised about the law. The legislation appears to be directed at American-operated social media platforms because of the high subscriber threshold. Because the fines are so hefty, freedom-of-speech advocates worry that media platforms will err on the side of caution and take down posts that fall into the grey zone — for example, those that are controversial, but not necessarily threatening to any group or person. Critics also argue that asking media companies to police hate speech online through their algorithms confers upon them too much power in the absence of proper oversight, accountability and transparency mechanisms.
The European Union’s new general data protection regulation, which went into effect in May 2018, has global ramifications. Under the law, which imposes uniform data privacy and data protection regulations right across Europe, companies are held accountable for the way they handle personal data associated with EU residents, regardless of whether they are incorporated in the European Union or not, with penalties for non-compliance running as high as four per cent of global revenue or 20 million euros, whichever is greater. European residents also have a legal right to access, correct and erase their data, and to move it to another service provider if they so choose. Companies must report breaches involving EU resident data to data protection authorities within 72 hours of the breach and to notify those individuals directly about it. The law also encourages corporations to review the way they handle and manage data on a regular basis and take remedial measures to strengthen security and privacy in their administrative and technical operations, as necessary.
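The GDPR’s penalty ceiling described above is a simple rule: the greater of four per cent of global annual revenue or a flat 20 million euros. A minimal sketch of that calculation, with the function name and figures chosen for illustration only:

```python
def gdpr_max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on a GDPR administrative fine for the most serious
    infringements: whichever is greater of 4% of global annual revenue
    or a flat 20 million euros.

    Hypothetical helper for illustration; the actual fine imposed by a
    data protection authority would depend on many case-specific factors.
    """
    return max(0.04 * global_annual_revenue_eur, 20_000_000.0)

# A firm with 1 billion euros in revenue: 4% (40 million) exceeds the floor.
print(gdpr_max_fine_eur(1_000_000_000))   # 40000000.0

# A firm with 100 million euros in revenue: the 20 million euro floor applies.
print(gdpr_max_fine_eur(100_000_000))     # 20000000.0
```

The “whichever is greater” clause matters: for small firms the flat floor dominates, while for the tech giants discussed in this article the revenue-based cap can run into the billions.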
Many countries, including Canada, are studying these requirements to determine whether they should introduce similar rules for the way data are handled, managed and distributed. An excellent series of recommendations to help policy-makers steer their way through the digital labyrinth appears in a report titled Democracy Divided: Countering Disinformation and Hate in the Digital Sphere, developed by Canada’s Public Policy Forum and the University of British Columbia’s Taylor Owen. The report urges publishers of online content to identify themselves; internet companies (or online intermediaries) to be held legally responsible for content they publish on their websites; all forms of advertising (including political) to be transparent in terms of source and funding; algorithms to be subject to regular audits by external “independent authorities” and the results made publicly available; non-criminal remedies to investigate and respond to hate speech online; and independent panels to investigate disinformation and hate speech online. The report also encourages educational programs to promote digital literacy and greater public awareness.
Some of these recommendations are sensible, but others won’t wash. Internet and social media companies will fight tooth and nail against having algorithms — their trade secrets — scrutinized or validated by outsiders. Depending on how it’s done, policing “fake news” could curb freedom of expression and thought. Some would argue that the best tonic for the “truth” is more “sunlight” in the form of public debate and airing of contrary viewpoints — not censorship or truth monitors.
Some historical perspective is in order. “Fake news” and the manipulation of the “truth,” especially in the political arena, are not new phenomena. Orson Welles’ broadcast, War of the Worlds, which terrified radio listeners, was one of the first examples of “fake news” in the modern era. So, too, were Nazi propaganda campaigns and censorship, which were far less benign and aided Adolf Hitler’s rise to power. As we debate the need to police the internet and social media, we should remember former U.S. Supreme Court associate justice Louis Brandeis’s observation that “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.”
Fen Osler Hampson is Chancellor’s Professor at Carleton University and a distinguished fellow and director of Global Security & Politics at the Centre for International Governance Innovation. He was co-director of the Global Commission on Internet Governance and is the co-author with Eric Jardine of Look Who’s Watching: Surveillance, Treachery and Trust Online.