Can an AI System be Sentient?


04-07-2022

There has been a lot of coverage in the media lately regarding whether an AI system can be considered sentient, or indeed whether it can be self-aware. This is partly driven by comments from a Google engineer, Blake Lemoine, about a project he was working on where the AI system was, some claimed, sentient. In this blog we take a look at this question and whether it is even possible to answer it.

The Turing Test

Let’s start off by considering what it means to appear to be sentient. This actually echoes a very old philosophical question in Computer Science, usually explored through what is referred to as the Turing Test.

The Turing Test (originally called the Imitation Game) was proposed by Alan Turing back in 1950 in his seminal paper “Computing Machinery and Intelligence”, written during his time at the University of Manchester. The original paper posed the question “Can Machines Think?” and used the Turing Test to evaluate that question. It is a test of a system’s ability to ‘exhibit’ intelligent behaviour that is equivalent to, or indistinguishable from, that of a human being.

In the Turing Test a human observer watches the interactions between two actors. One actor is human, and the other actor is a computer system (referred to as a machine in the original paper).

[Figure: basic configuration of the Turing Test]
The observer knows that one actor is human and the other artificial, but they do not know which is which. All participants are separated from one another and the conversation between the two actors is by text only – any input is via a keyboard and output is via a text-only terminal.

The aim is for the observer to try to determine which actor is the human and which is the computer system (which we might refer to now as an Artificial Intelligence or AI system).

If the observer cannot distinguish between the human user and the AI system, or if they incorrectly identify the human as the AI system, then the AI System has passed the test.

Of course, this raises the question – “exactly what is the test?”

In fact, the test aims to see if an AI System can be perceived as intelligent and capable of thought (and possibly of being sentient) – not that it actually is any of these things!
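
To make that pass criterion concrete, here is a minimal Python sketch of a single run of the test. Everything in it is illustrative rather than taken from Turing’s paper: the two actors are simply functions mapping a question to a text reply, and the observer is a function that guesses which anonymous label belongs to the human.

```python
import random

def turing_test_round(questions, human_reply, machine_reply, observer_guess):
    """One illustrative run of the Turing Test.

    human_reply and machine_reply map a question string to a text reply;
    observer_guess maps the transcript to 'A' or 'B', the label the
    observer believes is the human.
    """
    # Hide the actors behind the anonymous labels A and B.
    labels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        labels = {"A": machine_reply, "B": human_reply}

    # Text-only exchange: the observer sees both replies to each question.
    transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]

    guess = observer_guess(transcript)
    truth = "A" if labels["A"] is human_reply else "B"
    # The machine passes this round if the observer fails to pick
    # out the human correctly.
    return guess != truth
```

Note that nothing in this loop checks whether any reply is correct; the machine passes purely by being indistinguishable from the human.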

Chat Bots

Fast forward 7 decades, and the premise of the Turing Test is now exploited by a common feature on many websites: intelligent chat bots that can answer users’ questions instead of an expensive human giving live support. To be effective, these chat bots comprise several modules, including Natural Language Processing (NLP) to parse a user’s question, a Knowledge Base of past cases and responses and, increasingly, a Natural Language Generation (NLG) module to generate responses.

These chat bots can prove successful in very constrained domains, where there is already extensive data on past questions / problems and their solutions.

However, as anyone who has pushed one beyond its basic competence knows, these chat bots can quickly reach their limits and need to escalate to a human adviser.
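
To illustrate the shape of such a system, here is a minimal Python sketch of a retrieval-style chat bot. The tiny knowledge base, the crude word-overlap ‘matcher’ and the confidence threshold are all toy stand-ins (real systems use proper NLP and far larger case bases), but the pattern is the same: match the question against past cases, answer if confident, otherwise escalate to a human adviser.

```python
# Minimal retrieval-style chat bot: match a question against a small
# knowledge base of past cases and escalate when confidence is low.
# The knowledge base and threshold here are illustrative only.
KNOWLEDGE_BASE = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where is my order": "Check the tracking link in your dispatch email.",
}

def word_overlap(a: str, b: str) -> float:
    """Crude stand-in for NLP: fraction of shared words (Jaccard)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(question: str, threshold: float = 0.5) -> str:
    best_q = max(KNOWLEDGE_BASE, key=lambda q: word_overlap(question, q))
    if word_overlap(question, best_q) >= threshold:
        return KNOWLEDGE_BASE[best_q]
    # Outside the constrained domain: hand off to a person.
    return "Let me transfer you to a human adviser."

print(answer("How do I reset my password?"))   # answered from the knowledge base
print(answer("Explain the Turing Test"))       # escalates
```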

Perceived Intelligence

You will probably agree that today’s crop of chat bots does not represent an intelligent or sentient system - but are they in the running to pass the Turing Test? Interestingly, the test never attempted to check the accuracy of the responses (human or computer); rather, it tested the perception of the observer. Of course, humans can be very confident in their questions and answers yet be completely wrong (at least from the observer’s point of view). So there’s a good chance a chat bot will make the grade.

The rub lies in going beyond a limited exchange between the two actors – the wider the range of conversational topics, the greater the knowledge base or data source required, for both actors.

This means that the AI System would require knowledge of the real world: the ability to apply common sense, both quantitative and qualitative reasoning, general-purpose background knowledge and (potentially) ‘expertise’ in a specialist subject. It may also need a sense of humour – which is a moving feast at the best of times.

Over the years a great deal of research has gone into these AI topics, such as Qualitative Reasoning, Deep Learning, Computational Perception and Cognition, as well as the intersection of Psychology and Reasoning.

Notably, we have not mentioned a system’s ability to learn; this is not necessarily a pre-requisite for the perception of AI sentience, although many people consider it a feature of intelligence. There are many learning systems and technologies now available, from Neural Networks to Learning Classifier Systems to Reinforcement Learning. Such systems can certainly enhance our view of a system’s intelligence and may well be the lynchpin of artificial sentience.
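
As a flavour of what ‘learning’ means here, the sketch below implements tabular Q-learning, one of the simplest Reinforcement Learning techniques. The two-state environment, the rewards and the parameters are purely illustrative: the point is that the agent is not given the right answers, it discovers them by trial and error.

```python
import random

# Minimal tabular Q-learning sketch: an agent learns, by trial and
# error, which of two actions pays off in each of two states.
states, actions = [0, 1], [0, 1]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Toy environment: choosing action == state earns a reward of 1."""
    reward = 1.0 if action == state else 0.0
    return random.choice(states), reward  # next state is random

state = 0
for _ in range(5000):
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print(Q)  # Q[(s, s)] should dominate Q[(s, 1 - s)] for each state s
```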

Sentience

It’s worth attempting to define sentience at this point. Sentience is usually taken to mean the capacity to feel: to have subjective experiences. That immediately raises the question ‘How do we know if something is sentient, whether it is a human, an animal or a software system?’.

The primary way in which we humans determine that something is sentient is by observation; that is, if something appears to be sentient then we accept that it is sentient – which is not a million miles away from the Turing Test.

Thus, if an AI System appears to be sentient we might choose to treat it as such. However, the question remains, ‘is it sentient?’. That is a much harder question to answer.

You might argue that something based on 1s and 0s and electrical impulses can’t be sentient. However, what are our own brains but a wetware version of this, with both chemical and electrical aspects to how they function? The end result appears to be more than the sum of its parts, though: we are self-aware, can reason and learn, and can be considered to be sentient. So, could something that is purely electrical achieve the same thing one day?

Unconventional Computing

One argument about the difference between the brain and an electronic computer is that the brain is alive: it is a living organism. However, a subject area known as Unconventional Computing, as exemplified by the Unconventional Computing Laboratory at the University of the West of England (UWE), Bristol, includes the exploration of biological computers. The work carried out in that lab is blurring the lines between living organisms and computers. For example, over a decade ago the lab had a ‘living computer’, based on a slime mould, that could generate an optimal layout of transport links between urban areas. More recent work on ‘Fungal Grey Matter’ explores how the electrical activity of fungi resembles that of neurons, and the lab is using this technology for sensing and computing tasks.

Summary

At the moment, this human believes we’ve a while before we reach the technological singularity. However, can AI be perceived as sentient? Arguably, yes.

Will an AI System one day become sentient? Maybe. Will future computer systems have a biological element to them? Probably.

At which point, the question of true AI sentience may become increasingly difficult to answer…

