
AI and Consciousness

- reviewed by Esther Weidauer - in: science tech

When talking about artificial intelligence, one might ask: “Can computers think?”

We (= humanity) don’t really know how thinking works. Lots of scientists are working on it, but it’s complicated.

To answer that question, Alan Turing devised the Turing test in 1950(!) - but he reframed the question as “Are there imaginable digital computers which would do well in the imitation game?”

So he isn’t asking whether computers can think but whether humans can distinguish between an answer given by a computer and one given by another human being. If humans can’t tell which answer comes from a human and which from a computer, the computer passes the Turing test.

In 1966 Joseph Weizenbaum created ELIZA, a chat program that mainly asked questions back. Many people saw it as “intelligent”. But it didn’t know or think anything - it simply ran a pattern matching algorithm, and that was enough to make it seem like it was genuinely interacting with human beings.
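To get a feeling for how little is going on under the hood, here is a minimal Python sketch of ELIZA-style pattern matching. These are not Weizenbaum’s original rules - the patterns and replies are made up for illustration:

```python
import re

# A few hand-written rules: a regular expression plus a reply template.
# The program never "understands" the sentence; it only matches text
# patterns and echoes fragments of the input back as a question.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you think you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(reply("I am worried about my exam"))  # Why do you think you are worried about my exam?
print(reply("It rained all day"))           # Please, go on.
```

Each rule just echoes part of the input back as a question - there is no model of what the words mean, yet the conversation can feel surprisingly human.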

That’s when the distinction between weak and strong AI became important.

John Searle created the Chinese room as a thought experiment to illustrate the problem.

There is a person in a room that’s completely locked from the outside, except for a letter box. Outside, Chinese speakers write questions on letters and put them in the letter box. The person in the room doesn’t speak a single word of Chinese - but there are books in that room that tell them exactly how to answer the letters in Chinese. Written down in these books is a complex algorithm that specifies exactly how to answer which question - without any explanation of what the question or the answer means.

So with the help of these books, the person inside is able to answer the questions perfectly and hands out a flawless Chinese reply.

The Chinese speakers outside the room believe the person inside can speak, read, and write Chinese. But actually that person just followed strict instructions.

John Searle uses this thought experiment as a metaphor for computers. That’s exactly what computers do: they follow algorithms precisely and give answers to whatever they are asked. The computer may produce a perfect answer, but it doesn’t “understand” the question or the answer, just as the person in the room didn’t “understand” what they were doing either.
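The same point can be made in code. Here is a deliberately crude sketch (my illustration, not Searle’s formulation): the “rule book” is just a lookup table from one symbol sequence to another, and the program operates on the strings alone, with no representation of what they mean.

```python
# The "rule book": a mapping from one sequence of symbols to another.
# To the program these are opaque strings - whether they happen to be
# Chinese, English, or random noise makes no difference to how it works.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather today?" -> "The weather is nice today."
}

def person_in_the_room(letter: str) -> str:
    # Follow the book's instructions to the letter - no understanding involved.
    return RULE_BOOK.get(letter, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(person_in_the_room("你好吗？"))  # prints a perfectly sensible Chinese reply
```

Swap the Chinese sentences for random byte strings and the program behaves exactly the same - which is the point: syntax alone is enough to produce the right answers.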

The main question here is: how does semantics - meaning - arise from syntax? How do we come to know something?

That question is still largely unanswered.

Thomas Nagel’s essay “What Is It Like to Be a Bat?” is another piece worth thinking about.

He asks what it is like to hang upside down all day in a dark place and to go hunting by crying out loud, locating your prey by the echoes of your own voice.

His point is: there are living things around us so different from humans that we can hardly imagine living like that. And even if we could, we would still be thinking from our human point of view. We can imagine flying around and catching insects by the echo of our voice - but we can only imagine it from a human point of view, not that of a bat. We didn’t grow up as bats but as humans - so all our ideas are ideas within a human mindset.

Thomas Nagel is arguing for qualia - the idea that every sensation, every thought is to some degree fundamentally subjective and private. It’s similar to feeling pain - pain is a fundamentally subjective experience. We can guess how pain might feel for someone who is in pain - but the exact feeling is accessible only to the subject.

It’s an approach similar to René Descartes’ “cogito ergo sum” - I think, therefore I am. If I doubt my own doubts, I thereby prove that I am doubting. And doubting needs some entity that does the doubting - so I have to exist, too. Even if I’m hallucinating and can’t trust my senses anymore, I still know that at least I exist. (The more radical idea that only one’s own mind can be known to exist is called solipsism.)

The qualia idea - that no one really knows what it is like to be anyone other than themselves - opens up the argument that maybe computers/robots/AI systems have consciousness and a unique inner perspective (a quale, the singular of qualia) too, and that we humans simply don’t understand it. If that’s the case, we can also never fully understand it, because a quale is fundamentally subjective and private.

So the debate ends here because we simply can’t know.

Thinking needs consciousness - some kind of understanding of symbols, not just manipulating them. Will that ever be possible for computers? We don’t know yet.