MANOLIS ANDRIOTAKIS

Thinking outside the black box

47-year-old journalist and writer discusses how social media algorithms subtly manipulate the human brain


“I think we got Huxley.” Since 2004, when he launched one of the first blogs in Greece, Manolis Andriotakis’ view of new media has only grown bleaker. Inquisitive, versatile and independent, he has systematically studied developments in the sphere of the internet and social media, and their impact on human evolution. His thoughts on the subject have informed books, articles (he is a regular contributor to Kathimerini), documentaries and seminars.

Andriotakis recently spoke to Kathimerini via Skype, following the release of his latest work, “Homo Automaton: Artificial Intelligence and Us” (in Greek by Garage Books). The book represents his most comprehensive, but also most despondent, view of the phenomenon. Here, the 47-year-old writer analyzes machine learning algorithms’ subtle manipulation of the human mind. Monitoring our every move on the web, these programs filter and individualize the content that appears on our screens with the aim of making us more receptive to marketing content. The result, Andriotakis argues in the book, is dystopian in the extreme: a surveillance society that seeks to predict – and eventually mold – beliefs, preferences and behaviors; citizens with limited intellectual autonomy and willpower. It’s a world where the so-called “black box” elbows out the analogue man to make room for a brave new species: Homo automaton.

“Social media are not a tool of dialogue,” says Andriotakis, who deleted his Instagram account, left Twitter and drastically reduced the time he spent on Facebook. “If we assume that democracy, its institutions and its founding principles benefit from dialogue and democratic discourse, then all these media do not aid dialogue,” he says. “They are tools of persuasion.”

You have been studying new media since they first appeared. I have noticed a shift in your perceptions and your latest book takes a much more pessimistic stance.

Yes, there has been a shift. I don’t think I’m alone in that. The fears and concerns were always there, but 2016 came as a jolt. The election of [Donald] Trump, Brexit, but also the election here in 2015, brought to the surface not just the toxicity of social media but also their structural shortcomings. We are talking about specific platforms, specific business models. There’s a train of thought that starts in my book “The Fifth Power” (Nefeli, 2005), which talks about how newspapers based on an ad revenue business model end up having their content dictated to them. This is also happening now: Businesses, mostly involved in publishing, rely on the same business model. We’re experiencing a kind of disenchantment of social media.

The risks and hazards of social media engagement are now well documented and confirmed, even by people inside the tech industry itself. So why are more people still joining them than quitting?

Because they respond to a human need: the need for communication. They also offer an environment that is very attractive and very carefully designed. Their targeting is very precise, because their analysis is very precise. They are also – ostensibly – free and everyone is on there. This demonstrates their pervasiveness and structural relationship to reality. You want these tools, because this is the world today.

Do you think that the algorithms pervading the operation of social media undermine the institutions that ensure the function of democracy?

To a degree, yes, they do undermine them. Social media are not tools of dialogue. If we assume that democracy, its institutions and its founding principles benefit from dialogue and democratic discourse, then all these media do not aid dialogue. They are tools of persuasion. They claim to promote dialogue, interaction, communication, connectivity and interconnectedness. But this is a mere smoke screen, because the purpose of the AI machine and machine learning is advertising, it is commercial exploitation. From the moment that they can be used by a company to sell shoes, they will also be used by a politician, an activist or a religious leader, each to promote their own message. These media do not obey the rules that govern traditional media. Algorithms use machine learning to predict human behavior. Therefore, they undermine democracy because they do not aid dialogue, but, rather, emotional manipulation and reaction, thus building a wall between the citizen and critical thinking, transforming him or her into an automaton.

If, though, they were deliberately transformed into a tool of control, wouldn’t that be a major political issue?

Yes, it is. It’s like letting an industry control everything – and it is not just any industry, but an industry of knowledge. A free society does not center its philosophy on the manipulation of people. If you want autonomous, independent citizens, you don’t look for ways to control them but to give them the tools that will allow them to make better decisions for themselves. If you want to ensure that you have free and well-informed citizens, you will make use of these knowledge, information and communication structures. Their characteristics are so structural you can’t leave them completely uncontrolled. This is why it makes sense to have good journalism and a good education system. Otherwise, all you have is a carrot and a stick; not a mature society.

I think the issue is also philosophical, though. If we accept that there is even a degree of free will, these media make it possible, on a technical and mass scale, to take it out of the equation, so that everything, from the smallest to the biggest decisions, is dictated. This creates the illusion of choice, a virtual sense of control, when the message is, in actual fact, dictated. What we’re doing is technical intervention, pure and simple.

Who had a clearer view of the future after all? George Orwell or Aldous Huxley?

I think we got Huxley.

Is there a path to emancipation?

I believe in the power of education and intellectual cultivation. As long as we continue to question established ideas, keep seeking better-quality knowledge and keep up the political pressure, I am confident that we will not end up with Orwell’s scenario, or with Huxley’s. I certainly see elements that trouble me and energize me. I do not want my book to be viewed as a “call to arms,” however, but as a contribution to a necessary dialogue that is not happening. Because this dialogue cannot take place on Facebook, can it?

Can social media exist without the manipulation machine?

I am convinced that there can be non-commercial motives behind the networks and that in the future things may not be as they are now. It seems impossible, but a lot of things seemed impossible before they happened. Some kind of correction will happen; new technologies may even come along to change the landscape completely.

Regulation

The authorities seem fated to play catch-up with the tech industry, always a step behind developments. Can we expect Silicon Valley to self-regulate, or is that asking the wolf to guard the sheep?

Companies are ahead by virtue of what they are, but they are not working in secret; there is no conspiracy. They are obliged by law to submit their patents to the regulatory authorities. The results are made public and after that it becomes a power game based on the degree of pressure exerted on the regulatory authorities, the political forces, to ensure a type of immunity. There are also rules that have been bypassed by the tech industry because it is obviously an industry with enormous promise of profits and enormous investments. They are also favored by a techno-utopian dimension, by the investment of an enormous amount of hope in the digital world.

Are you at all worried that the surveillance mechanisms developed and implemented to contain the spread of Covid-19 will stay with us after the pandemic?

I see no reason why the contact tracing apps developed during the pandemic should stay. The real tug-of-war is with facial recognition software, predictive policing and biometric data. I think that if anything good came from the pandemic, it was raising a little more awareness about the positive dimensions of these technologies. Without these incredible tools, databases, processing capabilities, speeds, I do not think that vaccine development or the pandemic management would have succeeded to the degree that they have.

It is interesting that while private companies – in the West at least – know more about us than our own government, personal freedom activists continue to protest against governments. Why is that?

Because it is the governments that should be controlling these companies. They should be setting some kind of limits on them, but they’re having a hard time with that. That’s how democracy works, though; it’s a power competition.