Grabbing the dragon’s tail

At last, someone is talking sensibly about Artificial Intelligence (AI). Despite claims for the past 50 years that AI is just around the corner, nothing resembling it has yet appeared. Computers and software, however powerful they have become, can still only carry out operations that have been well defined beforehand.

Even the ‘trial and error’ software devised for AlphaGo, the computer that recently defeated top human Go players, was far from having any sort of self-learning ability in entirely novel situations. The software was only sampling potentially successful routines that had already been gleaned from millions of games.

Proponents of AI make a great to-do about imitating the neurons of the human brain: the more they are used, the stronger the connections between them become, the more easily digital messages can flow along them, and the more the different competing neuronal networks wax and wane. This is supposed to be ‘self-learning’.
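To make that idea concrete, here is a minimal sketch in Python of a Hebbian-style update, the classic ‘neurons that fire together wire together’ rule. It is purely illustrative: the neuron count, learning rate and decay term are invented for the demo, and real systems such as AlphaGo are in fact trained with backpropagation and reinforcement learning, not this rule.

```python
import numpy as np

# Toy Hebbian-style update: connections between co-active "neurons" are
# strengthened, and unused connections slowly fade. All parameters here
# are made up for illustration.

rng = np.random.default_rng(0)

n_neurons = 4
weights = np.zeros((n_neurons, n_neurons))  # connection strengths
learning_rate = 0.1
decay = 0.01                                # "waning" of unused connections

for step in range(100):
    # A random binary activity pattern stands in for incoming signals.
    activity = (rng.random(n_neurons) > 0.5).astype(float)

    # Hebb's rule: strengthen the link between neurons active together...
    weights += learning_rate * np.outer(activity, activity)
    # ...and let every connection decay slightly, so unused ones weaken.
    weights *= (1.0 - decay)

np.fill_diagonal(weights, 0.0)  # ignore self-connections
print(weights.round(2))
```

Run repeatedly, the frequently co-active pairs end up with the strongest weights, which is all that the ‘use strengthens connections’ story amounts to.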

So it is, but imitating the strengthening and weakening of neuronal connections is only part of it. What about the transmitter molecules in the human brain that carry the messages across the synapses, the gaps between the neurons? There’s nothing remotely like this in any AI software.

If AI program writers were to try to extend the digital world of neurons into the analogue world of transmitter molecules, they would then have to try to simulate hormones. And these, in turn, depend on genetic instructions, which are themselves only partially programmed but can also adapt to new situations.

Here, then, is the basic difference between the human brain and the Deep Blue computer that can defeat master chess players, or AlphaGo likewise for Go players. A human player reacts to the last move of his opponent and, on occasion, comes up with a novel response.

A computer can only react, not to the last move itself, but to the latest total board situation as a whole. It can only make a reply when it has matched the situation with another in its memory, built up from millions of past games. No novel response is possible, no matter how much self-learning is supposed to be taking place.
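As a toy illustration of the matching process described here (and only of that description; AlphaGo actually generalises with neural networks and tree search rather than a lookup table), the hypothetical sketch below stores a handful of invented board situations and can only reply when a position matches one it has already seen.

```python
# Hypothetical memory: board state (as a string) -> best known reply.
# Both the positions and the replies are invented for this example.
memory = {
    "X..|.O.|...": "take the corner",
    "XO.|.X.|...": "block the row",
}

def reply(board: str) -> str:
    """Answer only if the whole position matches a stored situation."""
    if board in memory:
        return memory[board]
    return "no reply available"  # nothing novel can be produced

print(reply("X..|.O.|..."))  # matched: "take the corner"
print(reply("..X|...|O.."))  # unseen position: "no reply available"
```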

It takes a brave person to challenge the fashionable view of AI in the presence of aficionados, not to say strong proponents, but such was Danko Nikolic at a recent conference in Berlin. A neuroscientist at the Max Planck Institute for Brain Research in Frankfurt, he said that AI software and computers can approach the abilities of the human brain as closely as you like but will still not be able to take that final jump into becoming its equivalent in learning ability.

Learning, Nikolic says, is not just about the neurons of the brain. There are deeper genetic elements, too. The only way that a truly intelligent computer may be devised is by repeating human evolution.

The brief report I have before me in this week’s New Scientist doesn’t say why human evolution would have to be repeated. But Nikolic will know, of course, that some of our genes involved in brain development in the embryo go back to the very beginning of all life forms, billions of years ago, and that it is genes and hormones that actually do the new learning long before neurons are involved; the latter are just the final ‘hardware’, as it were.

As far as computers are concerned, the human variety is what Nikolic calls a ‘singularity’: like a black hole in astronomy, it cannot be entered. So it seems to me.
