
2018


AI and Consciousness

- reviewed by Esther Weidauer - in: science tech

Talking about artificial intelligence, one may ask “Can computers think?”

We (that is, humanity) don’t really know how thinking works. Plenty of scientists are working on it, but it is complicated.

To answer that question, Alan Turing created the Turing test in 1950(!) - but he changed the question to “Are there imaginable digital computers which would do well in the imitation game?”

So he does not ask whether a computer can think but whether humans can distinguish between an answer given by a computer and one given by another human being. If humans can’t tell which answer comes from a human and which from a computer, the computer passes the Turing test.

In 1966 Joseph Weizenbaum created ELIZA, a chat program that mostly asked questions back. Many saw it as “intelligent”. But it didn’t know or think anything; it ran on a simple pattern-matching algorithm and therefore only appeared to genuinely interact with human beings.
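
A minimal sketch of that kind of pattern matching - not Weizenbaum’s original script; the rules below are invented purely for illustration:

```python
import re

# Hypothetical ELIZA-style rules: a regex pattern and a reply template.
# The real ELIZA script was much larger; these three rules are made up.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def reply(sentence: str) -> str:
    """Return a canned question - matching, not understanding."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(reply("I am sad."))  # -> "Why do you say you are sad?"
```

A handful of such rules is enough to keep a conversation going, which is exactly why ELIZA felt so convincing.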

That’s when the distinction between weak and strong AI became important.

John Searle created the Chinese room as a thought experiment to illustrate the problem.

There is a person in a room that is completely locked from the outside, except for a letter box. Outside are some Chinese speakers who write questions on letters and put them in the letter box. The person in the room doesn’t speak a single word of Chinese - but there are books in that room that tell them exactly how to answer the letters in Chinese. In these books a complex algorithm is written down that specifies exactly how to answer which question - without any explanation of what the question or the answer means.

So with the help of these books the person inside is able to answer the questions and hands out a perfect reply in Chinese.

The Chinese speakers outside the room believe the person in there can speak, read and write Chinese. But actually that person just followed strict instructions.

John Searle uses this thought experiment as a metaphor for computers. That is what computers do: they follow algorithms exactly and give answers to what they were asked. The computer may give out the perfect answer, but it doesn’t “understand” the question or the answer, just as the person in the room didn’t “understand” what they were doing either.
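
Reduced to code, the room is little more than a lookup. This is only a hypothetical sketch - the “rule book” here is shrunk to a tiny dictionary with placeholder phrases:

```python
# A toy "Chinese room": the rule book is a plain lookup table.
# The program maps input symbols to output symbols without any notion
# of what either side means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天阳光明媚。",    # "How is the weather?" -> "Sunny today."
}

def answer(question: str) -> str:
    """Follow the rule book exactly; no understanding involved."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(answer("你好吗？"))  # a fluent-looking reply, produced by pure symbol matching
```

The program produces correct Chinese output for the questions it covers, yet nothing in it refers to what the sentences mean - which is Searle’s point.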

The main question here is: how does semantics - meaning - arise from syntax? How do we come to know something?

That question is still largely unanswered.

Thomas Nagel’s “What is it like to be a bat?” is another think piece.

He asks what it is like to hang upside down in a dark place all day long and to go hunting by crying out and locating prey through the echoes of your own voice.

His point is: there are living things around us that are so different from humans that we can hardly imagine living like that. And even if we could, we would still be thinking from our human point of view. We can imagine flying around and catching insects by the echo of our own voice - but we can only imagine it from a human point of view, not that of a bat. We didn’t grow up as bats but as humans - so all our ideas are ideas within a human mindset.

Thomas Nagel is arguing for qualia - the idea that every feeling, every thought is to some degree fundamentally subjective and private. Similar to feeling pain: it is a fundamentally subjective experience. We can guess how pain might feel for someone in pain - but the exact feeling is only accessible to the subject.

It’s an approach similar to René Descartes’ “cogito ergo sum” - I think, therefore I am. If I doubt my own doubts, I thereby prove that I am doubting. And doubting needs some entity that doubts - so I have to exist, too. Even if I hallucinate and can’t trust my senses anymore, I still know that at least I am. (Taken to the extreme - that only my own mind is certain to exist - this idea is called solipsism.)

The qualia idea that no one really knows what it is like to be someone other than oneself opens up the argument that maybe computers/robots/AI systems have such a consciousness and a unique mindset (a quale - the singular of qualia), too - and we humans simply don’t understand it. If that’s the case, we will also never fully understand it, because a quale is fundamentally subjective and private.

So the debate ends here because we simply can’t know.

Thinking needs consciousness, some kind of understanding of symbols, not just manipulating them. Whether that will ever be possible for computers? We don’t know yet.


Weak and Strong Artificial Intelligence

- in: tech science

Review: Vinodh Ilangovan

I see the differentiation between weak and strong artificial intelligence missing in much of the public debate about AI nowadays.

Strong AI means that some computer/robot/AI system has some level of consciousness. This idea is portrayed in a lot of pop-cultural sci-fi such as The Matrix, 2001: A Space Odyssey, Her or Westworld. In sci-fi that makes sense, as it is “science fiction”. On the non-fictional, scientific side of it, however: nobody knows what consciousness actually is or how it works, let alone how it might be rebuilt in some computer system. So strong AI with its own free will is, as of now, an impressive form of science fiction.

There are people who believe that consciousness may somehow evolve out of computer systems. They have no explanation of how that is supposed to happen; often they simply claim it will evolve over time. The idea is quite old, though: already in the 1950s and 60s some computer scientists believed such an evolution of consciousness would happen within the next ten years. To this day, we have not witnessed it happen.

In reality, consciousness did emerge over the past four and a half billion years - through evolution. So maybe it can happen again on a silicon-based system; but that’s speculation.

Weak AI means a computer program solves one problem on its own. Computers have become pretty good at solving problems in an impressive fashion. This has been done with algorithms coded in software ever since software-based computers have existed. (An algorithm is like a cooking recipe: an instruction manual on how to arrive at a solution.) In the past decades there have been tremendous success stories in that field. Calculators, Excel, the Internet, Google, Amazon recommendations, Boston Dynamics robots - all of them belong in the category of weak AI. All of them solve problems they were programmed to solve. Nowadays they often solve those problems better than human beings (without access to a computer) do.
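
To make the recipe metaphor concrete, here is a classic example of an algorithm - Euclid’s method for the greatest common divisor - written out as a few fixed steps (the sample numbers are arbitrary):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat one simple step until the answer appears."""
    while b != 0:
        a, b = b, a % b  # step: replace (a, b) with (b, remainder of a divided by b)
    return a

print(gcd(48, 36))  # -> 12
```

The program follows its recipe perfectly and finds the answer faster than any human - which says nothing about whether it “understands” divisibility.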

In the 1980s a new technology emerged: the artificial neural network. It is meant to work similarly to how neurons in a brain work together.

Neuroscientists today speak rather of the neuron-glia network, since it is not only the neurons that take part in decision-making processes in the brain but the surrounding glial cells as well. The technological answer to that is the multi-layer neural network.

That’s the technology behind autonomous cars, facial recognition and the face-changing filters in Snapchat (and many other apps). Given how little we actually know about how the brain works, these systems deliver impressive results - but they are still not a brain.
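
For a sense of what “multi-layer” means technically, here is a minimal sketch of a forward pass through a small feed-forward network. The layer sizes and random weights are arbitrary and purely illustrative - a real system would learn the weights from training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers of weights: 4 inputs -> 8 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def relu(x):
    """Non-linearity applied between the layers."""
    return np.maximum(0.0, x)

def forward(x):
    """One pass through the network: weighted sum plus non-linearity, layer by layer."""
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

print(forward(np.array([0.5, -1.0, 2.0, 0.1])))  # two output numbers
```

Stacking more such layers, and training the weights on large amounts of data, is what powers the applications above - impressive, but still just arithmetic on arrays.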

People who believe in the self-evolution of some kind of consciousness in computers often argue that neural networks are able to train themselves towards some kind of consciousness. From the outside, it can look a little like that: a trained neural network doesn’t need much input to generate impressive output (e.g. bunny ears on your head, even if you never showed your face to that app before).

These (multi-layer) neural networks are often used in combination with each other for different tasks, so it seems they really do all kinds of things on their own (e.g. the Boston Dynamics robots getting up on their own after slipping, opening doors, carrying large items over rough terrain).

However, for each of these tasks a human being decided which neural networks to put into which robot. So far no robot has decided on its own which neural network it wants to install next - because so far none of the robots wants anything. They only do what they are programmed to do - even if that happens to be a very complex task.

So to all of you who are afraid of some AI taking over the earth: that seems rather unlikely for now. We are still missing any real understanding of how consciousness and will are created or generated.