Brains basically consist of neurons connected by axons, dendrites and synapses in a rhizomatic formation - many parts connected to many other parts, as opposed to a linear sequence. Neurons can be seen as switches or simple processors that receive inputs and produce a corresponding output. Simple rules govern each part, but when the whole is considered the result is highly complex.
Within artificial intelligence, computer research and psychology, models of this system are called Neural Networks.
Information within neural networks is processed by the activation and inhibition of neurons. Each neuron sums the weights of all the signals input to it and compares the total with a threshold. If the summed weight exceeds the threshold the neuron is activated; if it is less, the neuron is inhibited. When a neuron is activated, or fired, it sends a further signal to all the output neurons it is connected to. In a basic two-layer system all the input neurons are connected to all the output neurons, so, depending upon the weights of the signals from the first layer, different combinations of second-layer neurons are activated in response to different stimuli.
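The threshold mechanism described above can be sketched in a few lines of code. This is a generic illustration, not the project's own implementation; the function names, weights and threshold value are all invented for the example.

```python
def fires(inputs, weights, threshold):
    """A neuron activates (returns 1) when the weighted sum of its
    inputs meets or exceeds its threshold; otherwise it is inhibited."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def two_layer(inputs, weight_matrix, threshold):
    """Basic two-layer system: every input neuron is connected to
    every output neuron. weight_matrix[j][i] is the weight of the
    connection from input i to output neuron j."""
    return [fires(inputs, row, threshold) for row in weight_matrix]

# A stimulus activates different combinations of second-layer neurons
# depending on the connection weights.
stimulus = [1, 0, 1]
weights = [[0.6, 0.2, 0.5],   # connections into output neuron 0
           [0.1, 0.9, 0.1]]   # connections into output neuron 1
print(two_layer(stimulus, weights, threshold=1.0))  # → [1, 0]
```

Here output neuron 0 fires because its weighted sum (0.6 + 0.5 = 1.1) exceeds the threshold, while neuron 1 stays inhibited (0.2 < 1.0).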
With repeated stimulation and a corresponding adjustment of the signal weights, the neural network can learn. In an actual brain this is achieved chemically, but within a neural network it is done by controlling the electrical potential between components, so that the network is essentially trained to achieve a certain desired outcome.
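One common way of adjusting the weights under repeated stimulation is a perceptron-style learning rule: after each stimulus, nudge every weight in proportion to the error between the neuron's output and the desired outcome. This is a standard textbook rule offered only as an illustration, not the training method this project will use; the learning rate, threshold and example task are assumptions.

```python
def fires(inputs, weights, threshold):
    """Threshold neuron: activated when the weighted input sum
    meets or exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def train(examples, n_inputs, threshold=0.5, rate=0.1, epochs=20):
    """Repeatedly present stimuli and adjust weights towards the
    desired outcomes (perceptron learning rule)."""
    weights = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - fires(inputs, weights, threshold)
            # Strengthen or weaken each connection according to the error.
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
    return weights

# Learn a simple pattern (logical AND) from repeated stimulation.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train(examples, n_inputs=2)
print([fires(x, w, 0.5) for x, _ in examples])  # → [0, 0, 0, 1]
```

After a few passes the weights settle at values that reproduce the desired pattern, which is the sense in which the network has "learned".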
Basically this means that neural nets can learn to recognise patterns.
Our initial aim is to create a network of 500 neurons (this is very small: a sea slug, for example, has approximately 17,000, whereas a human has upwards of 10,000,000,000), but this may increase as we develop. Rather than merely trying to train the network to learn an assigned task (e.g. recognising letters), we intend to find out what it wants to do. This will still require us to produce and control stimuli, but as we have no particular theoretical motive (e.g. a model for mammalian learning, facial recognition systems etc.) we are free to let it find its nature.
This involves finding a way of controlling the weighting manipulation other than tweaking the weights to increase the possibility of a desired outcome.
This is the first major project under the Department of Technology, and as such will be run in collaboration by Bernhard Frankel and Alex Baker.