WENDELL WALLACH

‘Who’s to pay for the societal costs?’

Ahead of SNF 2021 Nostos Conference, US technology ethicist calls for measures to mitigate harms of artificial intelligence technologies

Wendell Wallach is uncomfortably pragmatic about the potentially negative effects of artificial intelligence. “It’s not that we lack ways of ameliorating those effects, it’s just that the simple will to do so is not there,” he says during a Skype video call from his home in Connecticut.

As a lecturer at Yale University’s Interdisciplinary Center for Bioethics and a senior adviser to the Hastings Center, Wallach has grappled extensively with the ethical and governance challenges posed by AI and other emerging technologies.

Speaking ahead of the Stavros Niarchos Foundation’s 2021 Nostos Conference: Humanity and Artificial Intelligence this Thursday and Friday, Wallach warns that tech and political elites have failed to take effective measures to contain the looming dangers of AI, including an accentuation of biases and injustices.

Far from monitoring the innovation coming from the tech labs in Silicon Valley, he says, “we are allowing those who invest in certain technologies to reap the rewards without any responsibility for the negative consequences or the undesirable societal impacts of those technologies.”

However, true to his self-description as a “cup-is-half-full” academic, Wallach is optimistic that the unsustainable conundrum we are in can trigger an existential rethink about our trajectory as humans.

“We have built a world that, if we don’t act in a precipitous manner, will rob our grandchildren of a future comparable to ours regardless of how much money we can will them,” he says.

My understanding is that the good AI versus bad AI debate is now obsolete. Most would agree that AI is both good and bad. What are your main concerns regarding AI and where do you see the most promise?

Well, my main concern is that we neither have effective governance to ameliorate potentially negative consequences of AI nor do we really have an effective engineering agenda focused on that. There is of course talk about AI for good and human-compatible AI, but I think these are all relatively weak instruments in comparison to some of the negative effects of the revolution which we’re in the midst of. It’s not that we lack ways of ameliorating those effects, it’s just that the simple will to do so is not there. Meanwhile, the will to speed up the development of emerging technologies enriches many people financially, particularly those who have stocks or significant ownership in tech businesses. AI is being weaponized and becoming central to the new forms of defense, whether that’s cybersecurity or more kinetic warfare in the form of lethal autonomous weapons. Now that isn’t to say that there aren’t hundreds of ways in which AI can improve life for some people. AI is certainly a driver, accelerator and amplifier of research in biotechnologies toward addressing health concerns, and for scientific discovery more broadly. It helps us, in some ways, to think through the ramifications of climate change and has contributed toward the development of vaccines. Nevertheless, I think the overall trajectory at the moment is out of whack and we aren’t taking effective measures to right that trajectory.

AI tends to concentrate more power and more control where it already exists, such as state authorities, the military, police or tech giants. Is that correct?

Yes, it is correct. And it’s not just AI. AI is central to the digital economy, and it’s the future of the digital economy. The digital economy has made many of us wealthier during the pandemic while hundreds of millions have lost their livelihoods – if not their lives. So I do think we are actually exacerbating structural inequalities through the digital economy, and this will continue because AI and other emerging tech enriches some of us, sometimes at the expense of others. My take is that there are of course many ways in which artificial intelligence improves people’s livelihoods and ameliorates some forms of inequality, but the overall effect is not positive. It’s not just in these specific areas of reinforcing biases or injustice. It’s also in the way that it’s altering the human condition, in several respects. One is that it is providing more and more powerful tools to manipulate human behavior, playing on unconscious cognitive capabilities of humans. All the tech companies are studying that in great depth. This gives additional power beyond those that have traditionally been utilized for advertising and propaganda purposes. It is also part of a narrative that I think is weakening human agency – not only in the manipulation of behavior, but in the suggestion that artificial intelligence has, and will quickly evolve to have, better decision-making capabilities than humans. This empowers a narrative that we should be giving artificial entities – such as lethal autonomous weapons – the agency to make decisions, which becomes a way of abrogating or alleviating the actual responsibility of those who deploy the artificial intelligence systems. It also weakens human agency by suggesting that humans will not be as good decision makers as AI. Built upon all of this, whether it’s true or not, is a narrative that AI can be more intelligent than humans in all respects. We’re not there yet, and I’m among the skeptics as to whether we will actually realize that.

It has been suggested that machine intelligence will be the last invention that humans will ever have to make because once we reach that point, machines will, at least in theory, be better than we are at coming up with new inventions.

If that is the case, which I would like to dispute, I think intelligence is something much more than the property of an individual or a machine. It also raises a profound philosophical question that we hear phrased in different ways: What is the function of humans? What are humans good for? What will be our role in the future? Are we actually writing our own death warrant?

You say AI reinforces biases. Can more data solve this problem?

Better data potentially could solve the problem. Bias is largely a result of the fact that we are building upon existing data that has traditional forms of bias built into it. And, therefore, as the old saying goes, “garbage in, garbage out.” So if an AI is deriving output based on imbalanced or distorted data, then it’s going to give a distorted output. Human experts working with an AI algorithm’s output can learn to be relatively sensitive to the information involved and the biases that may be inherent within the data. That sensitivity might be improved through data analytics looking at the input data, but there are all kinds of other problems: Data can be distorted by adversarial attacks from bad actors who perhaps have some intentional reason not to want the data as it exists to be utilized, or who want to make sure that the data is biased. Bias is the most obvious example of how AI enters into traditional forms of injustice and inequality.
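
To make the “garbage in, garbage out” dynamic concrete, here is a minimal sketch (an editorial illustration, not part of the interview; all names and numbers are synthetic) in which a simple classifier trained on historically skewed hiring decisions reproduces the skew:

```python
# Illustration of "garbage in, garbage out": a model trained on
# historically biased decisions learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying skill distributions.
group = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Historical labels: group 1 was held to a stricter bar, so the
# bias is baked into the training data itself.
hired = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Two equally skilled applicants, one from each group:
probs = model.predict_proba([[0, 0.4], [1, 0.4]])[:, 1]
print(f"P(hire | group 0, skill 0.4): {probs[0]:.2f}")  # markedly higher
print(f"P(hire | group 1, skill 0.4): {probs[1]:.2f}")  # the learned bias
```

More data of the same biased kind would not fix this; only data that corrects the historical imbalance, or deliberate debiasing of the model, would – which is the distinction Wallach draws between more data and better data.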

Do you think that part of the bias issue and, more generally, the problematic data issue has to do with the lack of diversity within AI and the tech industry at large?

I think that’s a part of it. There are inherent characteristics of those who are attracted to jobs in the industry, but I don’t think it’s just that. A deep learning algorithm may be implemented by a researcher who knows the field very well, while the data set has been assembled not by people within the tech industry, but by the history of research within that field. So I don’t think we should overplay the imbalances in the male-dominated, techno-enthusiastic orientation of the engineers in the field. But obviously, when you’re talking about issues of race or gender, having other eyes on both the data input and the data output, and on the way in which the algorithm is designed, would certainly improve the state of ethics.
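
One concrete form that “other eyes on the data output” can take is a routine disparity audit. The sketch below (again an editorial illustration, not something Wallach prescribes; the four-fifths threshold is borrowed from US employment practice) compares selection rates across groups and flags large gaps:

```python
# Hypothetical output audit: compare per-group selection rates and flag
# disparate impact using the four-fifths (80%) rule of thumb.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, decision) pairs, decision in {0, 1}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        selected[group] += decision
    return {g: selected[g] / totals[g] for g in totals}

def audit(records, threshold=0.8):
    """Flag any group selected at under `threshold` times the top rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return rates, {g: r for g, r in rates.items() if r < threshold * best}

# Toy decision log: group "B" is selected half as often as group "A".
decisions = ([("A", 1)] * 60 + [("A", 0)] * 40 +
             [("B", 1)] * 30 + [("B", 0)] * 70)
rates, flagged = audit(decisions)
print(rates)    # {'A': 0.6, 'B': 0.3}
print(flagged)  # {'B': 0.3} -- below 80% of the top group's rate
```

An audit like this does not fix a biased pipeline, but it makes the disparity visible to reviewers who would otherwise never see the aggregate picture.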

Do you think that the proliferation of ethics panels in tech companies is just a smokescreen to ward off what they’re really concerned with – i.e. more regulation?

That’s a difficult question. I would say I have not seen much from those ethics boards to make me feel that they are anything other than that. Not that there aren’t companies who would have liked to at least address some of the ways in which they are being ethically challenged. So I don’t question that Google or Facebook might want to eliminate, let’s say, lies or mistaken information. But will they do so at the expense of their growth? And can they do so, or does a fiduciary responsibility override even what might be good intentions? This isn’t to say everybody is bad out there, or that the tech companies are just engaged in ethics washing. But, yes, they are fearful of regulation that will interfere with their ability to innovate in the ways they want to innovate. And they’re very active in the cult of innovation that suggests that anything that interferes with innovation is bad. We are allowing those who invest in certain technologies to reap the rewards without any responsibility for the negative consequences or the undesirable societal impacts of those technologies. So who’s to pay for the societal costs created by all the damage coming out of misinformation on social media? It’s certainly not Facebook that’s paying for it. Democracies are suffering; citizens are not getting vaccinated because they believe a lot of dishonest information online. There are intense societal costs, and they’re not getting addressed. Governments don’t have the money to pay for it either, but they certainly aren’t making the companies responsible for those societal costs in the way they have tried to do with, let’s say, chemical companies and other industries whose products are socially destructive or potentially harmful.

Is the existing legal framework enough, or do we need more laws and more regulations?

I think even more than laws and regulations we need effective governance instruments that can set good policy standards. Right now, laws and regulations have problems of their own. They become static, and as the technology changes they don’t change very easily. There’s a dramatic lag between the implementation of a technology and our ability to put ethical and legal oversight in place. We also lack effective cooperative frameworks to think through what kind of ethical and legal oversight is necessary. Is it laws and regulations, or does it need to be something a little softer? There’s such a thing as soft law: standards, laboratory practices and procedures. The strength of soft law is that it’s a bit more flexible; you can throw it out if circumstances change. The weakness is that it’s often unenforceable. We perhaps need different kinds of governance regimes where, for example, if those deploying a technology violate existing soft law standards, then they can be prosecuted for violating the public trust. We aren’t going to be able to keep up with laws and regulations on every consideration. But we do need some way of ameliorating the harms.

‘I hope that the pandemic represents a little bit of a time-out and a recognition by the public, and maybe even those with resources and capital, that this is a really dangerous moment in human history,’ says Wallach.

What can the average person do in the face of all this?

The average person needs to get more educated and needs sufficient digital literacy. For their own self-protection, people need to know when they’re being manipulated or scammed and what measures they need to take to protect their privacy or their rights. I believe they should also take a little time to identify the good-faith brokers out there – the people they generally trust who are trying to move society and the deployment of emerging technologies in positive directions. If they could find ways to support those who are acting in good faith, that would be a great help. One problem at the moment is that we have a lot of good ideas out there and well-intentioned people, but most of them can’t find the resources and time to do their work. When, for example, they try to raise capital, they often have to compromise their integrity in order to get that capital. So, yes, maybe they can get some money from the tech industry to work on certain problems, but probably not to work on other problems, which are likely to be the ones the tech industry is most fearful of.

Are you optimistic that the harms can be contained?

I came out of the womb as a cup-is-half-full person. I outline so many things that can go wrong, and some people get a bit depressed when they listen to me. I admit that I’m not always giving the full story; I’m giving a particular take on what’s going on, emphasizing what can go wrong. But my optimism is not about the present trajectory. My optimism lies in the sense that perhaps we are starting to get it: fires in Greece, fires in Australia, fires in Brazil, fires in California. Vicious once-in-a-century hurricanes are alerting people to the fact that global warming can no longer be debated; climate change is happening. I hope that the pandemic represents a little bit of a time-out and a recognition by the public, and maybe even those with resources and capital, that this is a really dangerous moment in human history. We have entered a precarious time, and the world order can unravel in a lot of different ways, whether that’s the collapse of a leading democracy or the burning down of a major city or a pandemic that is ultimately not controllable. My hope is that it’s telling all of us that it’s time to act, even if some of our actions are insufficient or a little naive. Even those who have become wealthy, sometimes at others’ expense, are starting to get the message when their grandchildren call themselves “the doomers.” Their grandchildren are wondering whether they have a future. So hopefully the message is getting through to those who are in a position to do something: that we have built a world in which, if we don’t act in a precipitous manner, our grandchildren will be robbed of a future comparable to ours regardless of how much money we can will them.

Can such an awakening take place by relying on the same tools, like social media, for example, which are being manipulated by algorithms and so on?

I believe that there is a moral compass in us, a capacity – whether it is a soul or something like a soul – that says, “This is working, this is not working,” and can figure out appropriate pathways, presuming that’s our intention. I’m hopeful that the mass of humanity understands that this has to be our intention, including the people of good intention who may have contributed to the problem.

Tesla Bot

Tesla CEO Elon Musk last week said his company is working on a humanoid robot and that it will build a prototype “sometime next year.” What is your take on that?

Manufacturers have been using robotic devices for decades to perform dangerous and repetitive tasks in the assembly of automobiles. With his typical flair for drama and hype, Elon Musk announced that Tesla would be building humanoid robots for similar tasks. Why? Why should they look human? One can only surmise that this is another example of Musk’s mastery at gaining attention and publicity as he positions himself and his companies as the most advanced in the world. However, he will realize, as have many other companies, that truly useful humanoids that function as promised are hard to build and seldom fulfill expectations beyond the ability to perform a few tricks and tasks for which they were specifically designed.


SNF 2021 Nostos Conference: Humanity and Artificial Intelligence, August 26-28, at the Stavros Niarchos Foundation Cultural Center (SNFCC). Wendell Wallach will take part in Thursday’s panel discussion, “AI Future(s) Worth Wanting.” More details about the conference and admission protocols are available on the SNF website.
