1994_09_september_artintel

It was a dream come true. The other day a computer beat the world champion, Garry Kasparov, at chess. But hang on a moment. Computers don’t dream. Can they be made to dream? To be poetical? To think? The chess feat was not of enormous moment. It was of the same quality as getting a computer always to win, or at least force a draw, at draughts; it was just of different quantity. In both cases the computer simply crunches through the millions of possible combinations, each of which must ultimately end in a win, draw or loss, and only moves in a way that will result in a win or a draw.
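The brute-force search described here can be caricatured in a few lines. The sketch below is not how any real chess program works; it applies the same exhaustive "crunch every line to a forced win, draw or loss" idea to a trivial game of Nim (take one or two sticks, whoever takes the last stick wins), which is small enough to search completely:

```python
def minimax(sticks, maximizing):
    """Score a Nim position by exhaustive search: the value of a position
    is the best outcome reachable, assuming both sides play perfectly."""
    if sticks == 0:
        # The player to move has no sticks left: the opponent took the
        # last one, so the player to move has lost.
        return -1 if maximizing else +1
    takes = [t for t in (1, 2) if t <= sticks]
    scores = [minimax(sticks - t, not maximizing) for t in takes]
    return max(scores) if maximizing else min(scores)

print(minimax(4, True))   # +1: first player can force a win
print(minimax(3, True))   # -1: first player loses with perfect play
```

A chess engine does the same thing in principle, only over a game tree so vast it must prune and approximate rather than search to the end.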

To date, most research on artificial intelligence has centred on this high-end, or high-focus, logical thought. But we know that most human creativity comes not from high-focus logical thought but from quirky links between unassociated things, links that come when our attention is distracted.

To date, most attempts to replicate human thought on computers have worked from the analogy that the brain is the hardware and the mind is the software: if the software and data are comprehensive enough, bingo, you have a brain, or at least something that imitates one pretty well. The brain is a giant computer, the theory goes.

A new book by David Gelernter, The Muse in the Machine, looks at artificial intelligence in a different way. And dreaming is an important part of it. The intriguing part of Gelernter’s book is the novel way he looks at human thought.

There have been many previous questionings of the Western logical way of thinking. We have had A. J. Ayer’s Language, Truth and Logic, which exposed the circular nature of standard Greek logic. It comes in the form: “All trees have leaves and branches; this is a tree; so it must have leaves and branches.” That actually proves nothing. It just tells you how you have defined a tree. It says a tree is a tree.

We have had Edward de Bono’s Parallel Thinking and earlier books, which rejected Yes-No logic and the similar logic of Good-Bad and I’m-right-you’re-wrong. Iodine is essential in small doses but will kill you in large doses, so it is neither good nor bad.

We have had the left-and-right-side-of-the-brain theorists. They say the left side is for language and logic and the right side is the artistic side.

All of these have been very useful in cutting away some of the arrogance of the scientific-logical approach.

Gelernter puts up another description of human thought, which arguably has similar force to the three mentioned above.

He likens human experiences to a series of photographic slides, each with lots of detail. He then takes the example of a child first learning what “blue” is. The child is told a pen is blue.

“Ah, ha,” says the child, “blue means a long pointy thing.”

Then the child is told a book is blue, and then the sky and so on.

So the child’s concept of blue is built up by boring through all the photographic slides for the instances of blueness.
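This boring-through-the-slides picture of concept formation can be sketched as taking the intersection of the features shared by every example. The slides and their feature labels below are invented for illustration, not taken from Gelernter:

```python
def learn_concept(slides):
    """Infer what a word means by intersecting the features of every
    experience ('slide') in which the word was applied."""
    features = [set(s) for s in slides]
    common = features[0]
    for f in features[1:]:
        common &= f          # keep only what every slide has in common
    return common

# The child's first 'blue' slide is a pen, so 'blue' seems to mean a
# long, pointy, blue thing; each new slide whittles the concept down.
pen  = {"blue", "long", "pointy"}
book = {"blue", "flat", "paper"}
sky  = {"blue", "vast", "overhead"}
print(learn_concept([pen]))             # still the whole pen slide
print(learn_concept([pen, book, sky]))  # whittled down to {'blue'}
```

One slide leaves the concept tangled up with pens; three slides leave only blueness itself.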

He then takes the example of a man with a jammed briefcase. If he is thinking intently, or in high-focus, he bores through all the photographic slides sifting out the precise circumstances of previous jammed briefcases.

If he is in lower-focus thought he might think of a particular instance of a jammed briefcase at Los Angeles airport when the yellow flowers smelled sweet near a stack of copies of the LA Times. And that may trigger another low-focus but very precisely detailed memory of a log cabin in the Rocky Mountains and four cross-country skiers walking towards it.

Gelernter pictures a dial that goes from very low focus to very high focus. At the high focus end is abstract, logical, concentrated thought. At the low-focus end is dreaming. With dreaming the images pop up uncontrollably, linked in some inexplicable way.

A little higher up the dial is daydreaming or musing, or reading and letting the mind wander a little.

Children and ancient cultures do more low-focus thought, with bizarre metaphors and a belief that the imaginary is real. With experience, thought becomes more concentrated, abstract and logical. The mind penetrates all the photographic slides in search of scientific proofs or abstract generalisations.

Low-focus thought, though, is more creative because we make bizarre links which prove very illuminating and useful.

What, then, causes the links between the pictures during low-focus thought, if there is nothing logical about it? Gelernter looks at poetry, in which such associations are made. He cites Keats and Wordsworth. More mundanely, Tom Jones (the Welsh singer) expresses it as “those funny, familiar forgotten feelings that go walking all over my mind”.

Don’t go “pooh, pooh” just yet.

Gelernter is on to something here. Even those of us who did well in logic and law know about the uncontrollable and inexplicable links the mind makes from time to time. What binds these associations, if anything? Certainly logic does not. And if logic does not, how can a computer be constructed to even get near an imitation of that process?

Gelernter suggests emotion is the sticky stuff that binds these associations. This emotion has an extremely wide range and is infinitely subtle, but it is remembered and linked to other pictures that have similar emotional content but are otherwise dissimilar.

A tinge of nostalgic sadness in one picture triggers another picture (or memory) that is completely dissimilar in physical content but identical or very similar in emotional content.

Thus the emotional content of a thought or image of a ginger cat in my front garden triggers a very detailed recollection from childhood of a hot peppermint milkshake at the Dolphin Cafe in a country town in Victoria.
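This kind of recall, retrieval by emotional rather than physical resemblance, could be caricatured as nearest-neighbour search over emotion vectors. The memories and their emotion scores below are made up to echo the column's examples; nothing here is Gelernter's actual method:

```python
import math

def emotional_recall(cue, memories):
    """Return the stored memory whose emotional profile is closest to the
    cue's (by cosine similarity), regardless of its physical content."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    return max(memories, key=lambda m: cosine(cue, memories[m]))

# Emotion vectors: (nostalgia, sadness). The numbers are invented.
memories = {
    "hot peppermint milkshake at the Dolphin Cafe": (0.9, 0.3),
    "log cabin in the Rocky Mountains":             (0.1, 0.6),
    "jammed briefcase at Los Angeles airport":      (0.1, 0.8),
}
ginger_cat_in_the_garden = (0.8, 0.4)   # a tinge of nostalgic sadness
print(emotional_recall(ginger_cat_in_the_garden, memories))
# -> hot peppermint milkshake at the Dolphin Cafe
```

The cat and the milkshake share no physical content at all; they are linked only because their emotion vectors point the same way.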

Now that is a tricky one for artificial intelligence. If creatively worthwhile things come from such bizarre thought patterns, what hope have we of developing artificial intelligence?

Gelernter argues that it is precisely because artificial intelligence research has not been directed this way that it has been so limited. Artificial intelligence has been pursued on the logical plane.

This is a bit more subtle than saying machines cannot have emotions therefore artificial intelligence is impossible.

To the contrary, Gelernter argues that if you put enough pictures in a computer it might be able to replicate the associations between them and respond with creative associations and fuzzy answers: not just “possibly” and “probably”, but things like “I don’t like the feel of that” and “that reminds me of X”.

This brings us to the profound question of what “thinking” is, and whether the computer is “thinking” when it makes these responses.

Gelernter puts it another way: not “is this computer (or other person) thinking?”, but “does this computer (or other person) understand me?”

And the key to that is not merely that the computer or other person has some internal representation of my words, but that they share the emotional content.

Now a computer will never have the same emotional response as a human, and will never be the entity that understands us best. But if there are degrees of understanding, it might have emotional responses that are at least similar to human ones and “sort of” understand us.

Whether that means the computer has a sense of self is another matter.

Gelernter did some programming research on computers along these emotional lines, rather than the usual logical ones. He had to hide the fact from his political masters lest they think him mad and his research a waste of money.

In short, though, even if a computer beats the world champion at chess, we are more likely to get insights into the human mind if we look beyond getting computers to perform mere feats of logic, linking boxes and categories in high focus, and start experimenting on the sticky emotional bits that link thoughts at low focus.

While computer research concentrates on the logical, abstract and high-focus, artificial intelligence will be elusive. Humans are not like that. Gelernter argues that if we acknowledge a graduated spectrum of thought, from very high focus to very low focus, from intense abstract thought to dreaming (images bound by emotional content), and then try to get computers to replicate that, we will get something more like artificial intelligence.

Emotions are part of human intelligence. Much research to date, in concentrating on the logical, has been barking up the wrong tree.

The Muse in the Machine: Computers and Creative Thought, by David Gelernter.
