I have a keen interest in Artificial Intelligence, mainly because, if it can be achieved, it poses interesting philosophical questions. Unfortunately I have been trained in neither AI nor neurology, so it’s pretty much like trying to read a language you’ve only ever heard.
That said, I do believe the following:
- AI is possible in my lifetime
- It is only possible through a bottom-up approach
- Neural Networks will have to be understood and mastered in order to achieve this
- Current Artificial Neural Networks fall short in a number of ways
Defining AI
I suggest you read Wikipedia’s definition of Artificial Intelligence before reading mine.
Weak AI is concerned with building a system which acts as though it were intelligent within a certain domain. For example, if you were talking to ALICE you might not realise you are talking to a computer. ALICE is then said to have passed the Turing Test for Intelligence. What this means is that the creators of ALICE were clever enough to get her to behave like a human being within a limited domain. I refer to this as “top-down” AI. By this I mean, you start with the behaviour exhibited by an intelligent system (in this case, participating in a text-based synchronous discussion) and try to mimic it, working down from behaviour to the theorised internal mechanisms and structures that make it possible.
I prefer the strong AI approach, which is interested in creating actually intelligent, sentient machines. I refer to this as “bottom-up” AI: one first tries to find out what the internal mechanisms and structures of intelligent systems are, then puts them together to simulate real intelligence. My definition of AI is thus: modelling the inner workings of the human brain with sufficient accuracy and detail that the thing we create could be said to be as intelligent as the brain it was modelled on.
This approach obviously assumes a reductionist philosophy, which does not sit well with everyone. If cognition can truly be broken down into the interactions between the molecules in your brain, does that mean your own intellect is no more special than the workings of a very complex machine? If we can understand and model the workings of the brain, could we predict its behaviour? And if we can predict it, what does that say about free will? Reductionism quickly turns into determinism in this case. I am ultimately interested in these philosophical questions. However, they are only interesting if we can, indeed, create artificial intelligence. So that is what we need to do.
How the brain works
If we are going to model the brain, we need to know how it works. To know how the brain works, let’s look at its smallest useful component: the neuron. There is no need to try to model the brain at the molecular or atomic level. That would, perhaps, be a little too much reductionism.
Cognition is currently believed to be an emergent property of the interactions between neurons in the brain. There may be more components that we don’t know about, but that will do for now. Because this phenomenon is emergent, it matters not only how the neurons themselves work, but also how they interact. Simply studying a neuron in isolation will not be sufficient.
The Centre for Synaptic Plasticity has a simple, accessible description of how neurons work.
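To make this a little more concrete, here is a crude sketch (in Python, with made-up constants) of the classic leaky integrate-and-fire caricature of a neuron: incoming current charges the membrane potential, the potential leaks away over time, and the neuron fires a spike when a threshold is crossed. Real neurons are far richer than this, but it gives a feel for the kind of component we are talking about.

```python
# A minimal leaky integrate-and-fire neuron, stepped in discrete time.
# The leak, threshold and input values are illustrative, not biological data.

def simulate_lif(input_currents, leak=0.9, threshold=1.0, reset=0.0):
    """Return a list of 0/1 spikes, one per time step of input current."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # membrane charges, then leaks
        if potential >= threshold:              # firing threshold reached
            spikes.append(1)
            potential = reset                   # potential resets after a spike
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    # A steady weak input accumulates until the neuron fires, then the cycle repeats.
    print(simulate_lif([0.3] * 20))
```

On its own this model does nothing interesting; the interesting behaviour only appears when many of these components are wired together, which is exactly the point about emergence above.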
Current Artificial Neural Networks
Read this useful report on neural networks, which is well written and simple enough to be understood by the likes of me.
Personally, I don’t like directed learning. Certainly we, as intelligent systems, can and do learn through directed learning: we call it rote learning. But how do we learn to walk and talk? To the best of my knowledge there is no mechanism in our brain which propagates errors back through the neurons when we make a mistake. Besides, how would we recognise the mistake? How do we know what the goal is in the first place?
This approach says: neural networks learn by adjusting the weights of the inputs to each neuron so as to reduce error. That statement may be true, but the adjustment of weights is emergent from the natural behaviour of the neurons themselves, not a function of some other process which continually runs back over the network, artificially adjusting weights to reduce error.
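To be clear about what I am objecting to, here is a sketch of that kind of error-driven weight adjustment applied to a single artificial neuron (the delta rule, essentially the one-neuron case of back-propagation). The training data and learning rate are invented purely for illustration.

```python
# Error-driven ("directed") learning on one artificial neuron: the delta rule.
# The training samples and learning rate below are invented for illustration.

def train_neuron(samples, targets, learning_rate=0.1, epochs=50):
    """Learn weights and a bias so the neuron's output approaches each target."""
    n_inputs = len(samples[0])
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in zip(samples, targets):
            output = sum(w * x for w, x in zip(weights, inputs)) + bias
            error = target - output                        # needs a known "correct" answer
            for i, x in enumerate(inputs):
                weights[i] += learning_rate * error * x    # adjust weights to reduce error
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    # Teach the neuron a simple target function: output = 2*x1 - x2.
    samples = [[1, 0], [0, 1], [1, 1], [2, 1]]
    targets = [2, -1, 1, 3]
    print(train_neuron(samples, targets))
```

The crucial line is `error = target - output`: the update only works because something outside the neuron already knows the correct answer.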
The same problem arises with the recent news article about mouse neurons flying a plane. That’s rubbish! How does the mouse know that crashing is undesirable? If we tell it through force feedback then we are missing the point of undirected learning entirely.
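For contrast, this is the kind of thing I mean by weight changes emerging from the neurons themselves: a purely local, Hebbian-style update, where a weight changes using only the activity of the two neurons it connects, with no externally supplied error or goal. Again, the constants are arbitrary and this is only an illustrative sketch.

```python
# A purely local, Hebbian-style weight update: connections between neurons
# that are active together are strengthened, and a mild decay lets unused
# connections fade. No target output or external error signal is involved.
# The learning rate and decay constant are arbitrary illustrative values.

def hebbian_step(weights, pre_activity, post_activity, learning_rate=0.05, decay=0.01):
    """Strengthen each weight in proportion to correlated pre/post activity."""
    return [
        w + learning_rate * pre * post_activity - decay * w
        for w, pre in zip(weights, pre_activity)
    ]

if __name__ == "__main__":
    weights = [0.1, 0.1, 0.1]
    for _ in range(50):
        # The first input is repeatedly active together with the output
        # neuron, so its weight grows; the silent input's weight decays.
        pre = [1.0, 0.0, 0.2]
        post = sum(w * x for w, x in zip(weights, pre))
        weights = hebbian_step(weights, pre, post)
    print(weights)
```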