Among the world’s most influential people in 2018, positioned among heads of state and billionaire entrepreneurs, is an introverted 29-year-old with bright pink hair. Cambridge Analytica whistleblower Christopher Wylie exposed the deep links between alternative right-wing networks and Kremlin agents roughly a year ago, yet he still faces constant security threats and intimidation.
“There has not been a single day where I have regretted it,” he states with pride in an exclusive interview with Kathimerini on the sidelines of this year’s Delphi Economic Forum. “Before my revelations, Russian propagandists operated in plain sight without regulators even realizing it,” he adds, while explaining in detail how Cambridge Analytica’s algorithms hijacked democracy and impacted the Brexit referendum and the US presidential election.
A year has passed since you blew the whistle on Cambridge Analytica and conflicting reports keep surfacing about what actually went down. Can you give us your account of how the company got involved in democratic elections?
What many don’t know is that Cambridge Analytica has its origins in another company, SCL Group. It is a British military contractor, and when I first started working there CA did not exist as a structure at all. Our task when I joined was to map out and predict the spread of radical Islamic narratives online and to analyze how recruiters radicalize young men and coerce them into doing terrible things. We would then build systems and tools to act as early warning signals, so that a military or civil agency could interfere with recruiting and radicalization operations in different parts of the world. It all changed as soon as Steve Bannon took the helm of the company. All the technologies that were designed to interfere with the effectiveness and cohesiveness of terrorist organizations were fully inverted. With a few tweaks, they were now used against voters in the American elections – we started looking at ordinary Americans the same way the military was looking at radicals, and the dirty game of disinformation began to unfold.
Can you walk us through how exactly these digital disinformation campaigns grew so widespread and ended up having tangible consequences in the physical world?
Initially our algorithms easily identified parts of the American population that were more narcissistic, neurotic and conspiratorial. They were then targeted with messaging that encouraged more conspiratorial and neurotic thinking and lured them into forums, chat rooms and Facebook groups with people who shared the same thinking – or oftentimes bots that were parroting the same narratives. Once these groups grew to include a couple of thousand members, local events would be set up. At that point, even if only 5 percent of users actually showed up to those events, that would be enough to form a tangible community where conspiratorial thinking was completely normalized. What started as a digital fantasy had become their reality. The exact same techniques that the military would use to undermine a narcotics or terrorist operation were being used in reverse, to essentially create an American insurgency that then became what we now know as the alt-right.
If anybody can estimate the impact of these disinformation campaigns, it’s you. Can we safely assume that the US presidential election of 2016 and the Brexit referendum would have had different results were it not for the Cambridge Analytica involvement?
It is absolutely reasonable to make that assertion, especially if we look at how narrow the margins were for Donald Trump and Brexit – both results were within a couple of percentage points. In Brexit we are talking about less than 2 percent of the vote. You see, because elections are a zero-sum game, you essentially only need one more vote to be the absolute winner. Even if Cambridge Analytica only added a margin to the effectiveness of the campaigns – and we know that it actually targeted millions of people – the Trump and Brexit campaigns wouldn’t have needed much more. I have little doubt that the outcome would have been different had these shady techniques not been used.
After your revelations, Facebook began a widespread damage control campaign with numerous public apologies, statements and testimonies. It has since promised that it is evolving toward a future where user privacy is secure. Do you personally trust that evolution?
Absolutely not, and I’ll tell you why. When I decided to blow the whistle in cooperation with a team of investigative journalists from the Guardian, we actually gave Facebook 10 days to coordinate their response. This is a grace period that is very rarely given when someone decides to break a scandal. And yet, instead of using that time to face up to their responsibility, Facebook denied everything and sent me a barrage of aggressive legal threats. They insinuated that they would report me to the FBI for cybercrime and that they would sue the Guardian for defamation. They didn’t realize that, of course, the public would soon find out that they were the true cybercriminals. When the story finally broke, Facebook admitted to it all and publicly apologized – but their initial response shows their true colors. Do not be fooled by Mark Zuckerberg’s belated pleas of apology. Facebook couldn’t care less about global societies or democratic values. They only care about their brand image and the value of their shares.
I know you face daily security threats for your actions, yet you have said repeatedly that there hasn’t been a single moment when you regretted it. What reaction or consequence pleased you the most after the CA revelations?
I am most pleased by the fact that a serious and mainstream political conversation about data and artificial intelligence is now taking place. Politicians realized that talking about AI is not part of some sci-fi futuristic scenario, but that it concerns structures that are being built right now and have massive consequences worldwide. Before my revelations, Russian propagandists would operate almost in plain sight and nobody had realized it. Today, Republican senators are asking for a federal privacy law – something that would have been unheard of a year ago. At the same time, I’m also disappointed by how unprepared national civil protection agencies, police forces and militaries still are when it comes to handling cyber threats. We can talk about military spending all we want and keep buying tanks, but you can’t shoot a foe through the internet. Just look at how digital spaces are being used by Russia, or by terrorist organizations like ISIS. The real war is being fought online – so we need agencies that fully understand it.
You are a huge advocate of big data and AI, and a firm believer that they can be used for extraordinary good and impact. What is the regulatory mentality required to filter out the negatives when it comes to big data and AI?
It is urgent to understand that the internet and social media are an architecture, not a service. When we start looking at architectures, we start from a completely different regulatory mindset. You never encounter buildings that have a big banner with terms and conditions on the outside, or with messages that say “because of our user experience and brand concept we do not have fire exits.” That would be ridiculous. So why is the digital equivalent allowed online – conditions that endanger safety and privacy on the pretext that users have opted in?
Until that regulatory mentality takes over, what can users do to protect their privacy online?
It is extraordinarily difficult to limit the information accumulated by today’s algorithms – unless of course somebody goes completely offline. A smart strategy, however, is to try to create noise and confusion for the algorithm with tactics such as visiting irrelevant and random webpages. I recently realized that, thanks to this tactic, the algorithm had classified me as an underage girl from Korea.
Christopher Wylie met with Kathimerini in the context of the Digital Disruption Sessions, a side event organized by VALUECOM during this year’s Delphi Economic Forum.