Review: Vinodh Ilangovan
In the public debate about AI, I see the distinction between weak and strong artificial intelligence getting lost a lot.
Strong AI means that a computer/robot/AI system has some level of consciousness. The idea is portrayed in a lot of pop-cultural sci-fi such as The Matrix, 2001: A Space Odyssey, Her or Westworld. In sci-fi that makes sense, as it is “science fiction”. On the non-fictional, scientific side, however: nobody knows what consciousness actually is or how it works, let alone how it might be rebuilt in a computer system. So strong AI with its own free will is an impressive form of science fiction, as of now.
There are people who believe that consciousness may somehow evolve out of computer systems. They have no explanation of how that would happen; often they simply claim it will emerge over time. The idea is not new, though: already in the 1950s and 60s, some computer scientists believed such an evolution of consciousness would happen within the next ten years. To this day, it has not.
In reality, consciousness did evolve over the past four and a half billion years - via biological evolution. So maybe it can happen again on a silicon-based system; but that's speculation.
Weak AI means a computer program solving one problem on its own. Computers have become pretty good at solving problems in an impressive fashion. This has been done with algorithms coded in software for as long as software-based computers have existed. (An algorithm is like a cooking recipe: an instruction manual on how to arrive at a solution.) And the past decades have seen tremendous success stories in that field. Calculators, Excel, the Internet, Google, Amazon recommendations, Boston Dynamics robots - all of them belong in the category of weak AI. All of them solve problems they were programmed to solve. Nowadays they often solve such problems better than human beings (without access to a computer).
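To make the recipe analogy concrete, here is a minimal sketch in Python (the task and function name are my own illustration, not from the text): a fixed list of steps that always leads to the solution of one narrow problem.

```python
def find_largest(numbers):
    """A tiny 'recipe': step-by-step instructions for one narrow problem."""
    largest = numbers[0]        # step 1: start with the first number
    for n in numbers[1:]:       # step 2: look at every remaining number
        if n > largest:         # step 3: keep whichever is bigger
            largest = n
    return largest              # step 4: serve the result

print(find_largest([3, 7, 2, 9, 4]))  # → 9
```

The program solves exactly the problem it was written for and nothing else - which is the defining trait of weak AI.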
In the 1980s a new technology emerged: the artificial neural network. It is meant to work similarly to how neurons in a brain work together.
Neuroscientists today rather speak of the neuron-glia network, as it is not only the neurons that take part in decision-making processes in the brain but the surrounding glial cells as well. The technological answer to that is the multi-layer neural network.
That's the technology used for autonomous cars, facial recognition and the face-changing filters in Snapchat (and many other apps). Given how little we actually know about how the brain works, these systems achieve impressive results - but they are still not a brain.
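At its core, a multi-layer neural network is just stacked layers of weighted sums, each passed through a simple nonlinear function. A minimal sketch in plain Python (all weights and layer sizes here are made up purely for illustration):

```python
def relu(xs):
    # a common nonlinearity: negative values become zero
    return [max(0.0, x) for x in xs]

def layer(inputs, weights, biases):
    # one layer: weighted sum of the inputs plus a bias, per output unit
    return relu([sum(w * x for w, x in zip(row, inputs)) + b
                 for row, b in zip(weights, biases)])

def forward(inputs, layers):
    """Pass the inputs through a stack of (weights, biases) layers."""
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

# a tiny two-layer network with invented weights (illustrative only)
net = [
    ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
    ([[1.0, -1.0]],             [0.0]),        # layer 2: 2 units -> 1 output
]
print(forward([1.0, 2.0], net))  # → [0.0]
```

"Training" such a network means adjusting the weight numbers until the outputs match examples - the structure of weighted sums itself never changes, and the network never decides what problem to work on.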
People who believe in the self-evolution of some kind of consciousness in computers often argue that neural networks are able to train themselves towards consciousness. From the outside it does look a little like that: a trained neural network does not need much input to generate impressive results (e.g. bunny ears on your head, even if you never showed your face to that app before).
These (multi-layer) neural networks are often used in combination with each other for different tasks, so it can seem they really do all kinds of things on their own (e.g. the Boston Dynamics robots getting up on their own after slipping, opening doors, carrying large items over rough terrain).
However, for each of these tasks a human being decided which neural networks to put into which robot. So far no robot has decided on its own which neural network it wants to install next - because so far none of the robots wants anything. They only do what they are programmed to do, even if that happens to be a very complex task.
So to all of you who are afraid of some AI taking over the earth: for now, that seems rather unlikely. We still lack any real understanding of how consciousness and willpower are created.