this is just an experiment with more or less nondeterministic AI learning; the author does not guarantee anything just yet.
in Supervised Learning, we are given a set of example pairs (x, y), where x belongs to a set X, y belongs to a set Y, and the aim is to find a function f : X -> Y in the allowed class of functions that matches the examples.
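a minimal sketch of this setup; the example pairs and the candidate functions below are illustrative assumptions, not part of the text:

```python
# example pairs (x, y), with x in X and y in Y
examples = [(0, 0), (1, 2), (2, 4), (3, 6)]

# a small allowed class of candidate functions f : X -> Y
candidates = [
    lambda x: x,        # identity
    lambda x: x + 1,    # shift by one
    lambda x: 2 * x,    # doubling
]

# the aim: find an f in the allowed class that matches the examples
def matches(f, pairs):
    return all(f(x) == y for (x, y) in pairs)

chosen = next(f for f in candidates if matches(f, examples))
print(chosen(5))  # only the doubling function matches, so this prints 10
```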
i think that (x,y) are states of a Neuron.
* x in X is the initial state of a Neuron,
* y in Y is the target state of a Neuron, achieved after receiving a message from outside,
* token is the data in a message received from outside. it might include information about the message's source, and / or data from the message's source,
* f(x, token) is a transition function which transforms x into y, depending on x and the token.
f can have side effects, such as sending messages to other Neurons.
f might be a more or less nondeterministic transition, with random data included in x, in the token, or in both.
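the Neuron model above can be sketched as follows; the class and function names here are hypothetical assumptions, and the side effect is modeled as an outbox of messages:

```python
import random

class Neuron:
    def __init__(self, state, f):
        self.state = state   # x, the current state of the Neuron
        self.f = f           # transition function f(x, token) -> y
        self.outbox = []     # messages to other Neurons (side effects of f)

    def receive(self, token):
        # transform x into y, depending on x and the token
        self.state = self.f(self.state, token, self.outbox)
        return self.state

# a more or less nondeterministic transition:
# random data is mixed into the result, and a message is sent as a side effect
def noisy_add(x, token, outbox):
    y = x + token + random.choice([0, 1])
    outbox.append(("notify", y))  # side effect: message to another Neuron
    return y

n = Neuron(0, noisy_add)
y = n.receive(5)  # y is 5 or 6, depending on the random choice
```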
accuracy in reaching desired y-states, measured with a statistical apparatus (for now we use only a random events space and the simplest tools for it), can be expressed in % (for example: using function f, desired y-states were reached from x-states in about 84% of 108 tries, that is in 90-91 cases out of 108).
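such a measurement can be sketched with the simplest tools; the transition below is an illustrative assumption, tuned to succeed roughly 84% of the time:

```python
import random

random.seed(0)  # fix the random events space for reproducibility

# run the transition many times and count how often the desired
# y-state is reached; accuracy is the ratio of successes to tries, in %
def measure_accuracy(f, x, desired_y, tries=108):
    successes = sum(1 for _ in range(tries) if f(x) == desired_y)
    return successes, successes / tries * 100.0

# an illustrative nondeterministic transition, succeeding ~84% of the time
def transition(x):
    return x + 1 if random.random() < 0.84 else x

successes, percent = measure_accuracy(transition, 0, 1)
print(f"{successes} out of 108 tries, {percent:.1f}%")
```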
then we can risk an attempt to extrapolate, extending the solved problem domain beyond the examples.
i think that f can be chosen more or less randomly from the possible functions (an interpreter's instruction tree can be generated more or less randomly, then analyzed. simple or complex constructs can be used in the generation of more complex constructs that way, under the supervision of a Real Scientist).
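the idea of generating an instruction tree more or less randomly and then interpreting it can be sketched like this; the set of constructs, the tree shape, and all names are illustrative assumptions:

```python
import random

random.seed(1)  # reproducible random generation

# simple constructs that can be composed into more complex ones
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def random_tree(depth=2):
    # leaves are the inputs x and token; inner nodes are simple constructs
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", "token"])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def interpret(tree, x, token):
    if tree == "x":
        return x
    if tree == "token":
        return token
    op, left, right = tree
    return OPS[op](interpret(left, x, token), interpret(right, x, token))

# generate a candidate f as an instruction tree, then run it on an example;
# such candidates would then be analyzed under supervision of a Real Scientist
f_tree = random_tree()
y = interpret(f_tree, 3, 4)
```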
for now with one Neuron, later with a few and more, perhaps.
See also: Neural Networks, Stitie Space.