What if AI sentience is a question of degree?

The refrain from experts is resounding: Artificial intelligence is not sentient. It is a corrective of sorts to the hype that AI chatbots have spawned, especially in recent months. At least two news events in particular have introduced the notion of self-aware chatbots into our collective imagination.

Last year, a former Google employee raised concerns about what he said was evidence of AI sentience. And then, this February, a conversation between Microsoft’s chatbot and my colleague Kevin Roose about love and wanting to be a human went viral, freaking out the internet.

In response, experts and journalists have repeatedly reminded the public that AI chatbots are not conscious. If they can seem eerily human, that’s only because they have learned how to sound like us from huge amounts of text on the internet – everything from food blogs to old Facebook posts to Wikipedia entries. They’re really good mimics, experts say, but ones without feelings.

Industry leaders agree with that assessment, at least for now. But many insist that artificial intelligence will one day be capable of anything the human brain can do.

Nick Bostrom has spent decades preparing for that day. Bostrom is a philosopher and director of the Future of Humanity Institute at Oxford University. He is also the author of the book “Superintelligence.” It’s his job to imagine possible futures, determine risks and lay the conceptual groundwork for how to navigate them. And one of his longest-standing interests is how we govern a world full of superintelligent digital minds.

I spoke with Bostrom about the prospect of AI sentience and how it could reshape our fundamental assumptions about ourselves and our societies.

This conversation has been edited for clarity and length.

Q: Many experts insist that chatbots are not sentient or conscious – two words that describe an awareness of the surrounding world. Do you agree with the assessment that chatbots are just regurgitating inputs?

A: Consciousness is a multidimensional, vague and confusing thing. And it’s hard to define or determine. There are various theories of consciousness that neuroscientists and philosophers have developed over the years. And there’s no consensus as to which one is correct. Researchers can try to apply these different theories to try to test AI systems for sentience.

But I have the view that sentience is a matter of degree. I would be quite willing to ascribe very small degrees of sentience to a wide range of systems, including animals. If you admit that it’s not an all-or-nothing thing, then it’s not so dramatic to say that some of these assistants might plausibly be candidates for having some degrees of sentience.

First, with these large language models, I don’t think it does them justice to say they’re simply regurgitating text. They exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning. Variations of these AIs may soon develop a conception of self as persisting through time, reflect on desires, and socially interact and form relationships with humans.

Q: What would it mean if AI was determined to be, even in a small way, sentient?

A: If an AI showed signs of sentience, it plausibly would have some degree of moral status. This means there would be certain ways of treating it that would be wrong, just as it would be wrong to kick a dog or for medical researchers to perform surgery on a mouse without anesthetizing it.

The moral implications depend on what kind and degree of moral status we are talking about. At the lowest levels, it might mean that we ought to not needlessly cause it pain or suffering. At higher levels, it might mean, among other things, that we ought to take its preferences into account and that we ought to seek its informed consent before doing certain things to it.

I’ve been working on this issue of the ethics of digital minds and trying to imagine a world at some point in the future in which there are both digital minds and human minds of all different kinds and levels of sophistication. I’ve been asking: How do they coexist in a harmonious way? It’s quite challenging because there are so many basic assumptions about the human condition that would need to be rethought.

Q: What are some of those fundamental assumptions that would need to be re-imagined or extended to accommodate artificial intelligence?

A: Here are three. First, death: Humans tend to be either dead or alive. Borderline cases exist but are relatively rare. But digital minds could easily be paused, and later restarted.

Second, individuality. While even identical twins are quite distinct, digital minds could be exact copies.

And third, our need for work. Lots of work must be done by humans today. With full automation, this may no longer be necessary.

Q: Can you give me an example of how these upended assumptions could test us socially?

A: An obvious example is democracy. In democratic countries, we pride ourselves on a form of government that gives all people a say. And usually that’s by one person, one vote.

Think of a future in which there are minds that are exactly like human minds, except they are implemented on computers. How do you extend democratic governance to include them? You might think, well, we give one vote to each AI and then one vote to each human. But then you find it isn’t that simple. What if the software can be copied?

The day before the election, you could make 10,000 copies of a particular AI and get 10,000 more votes. Or, what if the people who build the AI can select the values and political preferences of the AIs? Or, if you’re very rich, you could build a lot of AIs. Your influence could be proportional to your wealth.

Q: More than 1,000 technology leaders and researchers, including Elon Musk, recently came out with a letter warning that unchecked AI development poses “profound risks to society and humanity.” How credible is the existential threat of AI?

A: I’ve long held the view that the transition to machine superintelligence will be associated with significant risks, including existential risks. That hasn’t changed. I think the timelines are now shorter than they used to be.

And we better get ourselves into some kind of shape for this challenge. I think we should have been doing metaphorical CrossFit for the last three decades. But we’ve just been lying on the couch eating popcorn when we needed to be thinking through alignment, ethics and governance of potential superintelligence. That is lost time that we will never get back.

Q: Can you say more about those challenges? What are the most pressing issues that researchers, the tech industry and policymakers need to be thinking through?

A: First is the problem of alignment. How do you ensure that these increasingly capable AI systems we build are aligned with what the people building them are seeking to achieve? That’s a technical problem.

Then there is the problem of governance. Maybe the most important thing to me is that we try to approach this in a broadly cooperative way. This whole thing is ultimately bigger than any one of us, or any one company, or any one country even.

We should also avoid deliberately designing AIs in ways that make it harder for researchers to determine whether they have moral status, such as by training them to deny that they are conscious or to deny that they have moral status. While we definitely can’t take the verbal output of current AI systems at face value, we should be actively looking for – and not attempting to suppress or conceal – possible signs that they might have attained some degree of sentience or moral status.

This article originally appeared in The New York Times.
