**Neural Networks.**
In neural networks we consider the error between the result we consider 'correct & desired' and the result we actually get as the output vector of the neural net.

We wish to minimize this error, teaching the neural net to produce accurate results that generalize to vectors other than the training vectors.

We provide training data - a set of training vectors - to the neural net, and it adjusts the weights of neuron signals accordingly.

After enough iterations we have a 'calibrated' neural net - with appropriate weight values at each of the neurons - that provides fairly accurate results at the output neurons for non-training data vectors.

The output neurons answer abstract questions about the data we wish to classify, categorizing the input fairly accurately.
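As a concrete sketch of the training loop described above - here a single sigmoid neuron trained with the delta rule on the OR function. The function names, learning rate and epoch count are illustrative assumptions, not something fixed by the text:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=1000, lr=0.5):
    """Adjust weights to reduce the error between the 'correct & desired'
    result and the neuron's actual output, over many iterations."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = target - out               # error vs. the desired result
            grad = err * out * (1.0 - out)   # scaled by the sigmoid's derivative
            w[0] += lr * grad * x[0]
            w[1] += lr * grad * x[1]
            b += lr * grad
    return w, b

# Tiny training set: the OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
```

After enough iterations the 'calibrated' weights reproduce OR when the output is rounded to 0 or 1.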

**Token Game's Abstracted Neural Networks.**
Here we do something similar, only in a simpler way.

We provide input data, adjust weights at neurons, and pass tokens carrying payload information.
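One way such a token could look - a minimal sketch, assuming the payload is a list of marker points plus a combined error (both fields are my naming, not a defined Token Game API):

```python
from dataclasses import dataclass
from typing import List, Tuple

# A 2-D point; purely an illustrative type alias.
Point = Tuple[float, float]

@dataclass
class Token:
    """A token passed between neurons, carrying payload information."""
    marker_points: List[Point]  # the payload: points describing a shape
    error: float                # combined error, usable as a weight downstream

t = Token(marker_points=[(0.0, 1.0), (1.0, 0.0)], error=0.25)
```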

Example 1: Oval recognition, for an image without crossing lines & without 'noise' data.

First we need to 'describe' the image by a series of 8 marker points placed on a circle bordering the image's edge.

We place one marker point at the top, one at the bottom, one at the left, one at the right, and the four remaining ones in between - but not on top of each other.

We assume an initial weight of 3 for each of the neurons.

For each marker point we create an input neuron. For each neuron we calculate an error - the distance between that marker point and the closest point on the image.

Depending on the calculated error value we modify the weight accordingly - the more error, the larger the adjustment.

First we move the marker point left, by the value of the sigmoid function for the calculated weight.

Then we calculate the error again, modify the weight and move the marker point right.

Then top & bottom in a similar way - repeating the process until we are close enough to the point on the image.

We repeat the process once for each input neuron - that is, once for each marker point.
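The adjustment loop above could be sketched like this. The exact weight-update rule isn't fully specified, so as one plausible reading I derive the step directly from the sigmoid of the error ("the more error, the more of adjustment") and probe all four directions, keeping the best; treat it as an assumption, not the definitive rule:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nearest(point, image_points):
    """The point of the image closest to the given marker point."""
    return min(image_points, key=lambda p: math.dist(point, p))

def fit_marker(marker, image_points, tol=0.1, max_steps=100):
    """Move one marker point left/right/up/down toward the image,
    with a step size passed through the sigmoid, until close enough."""
    x, y = marker
    for _ in range(max_steps):
        err = math.dist((x, y), nearest((x, y), image_points))
        if err <= tol:
            break  # close enough to the point on the image
        step = sigmoid(err) * err  # more error -> larger adjustment (assumed rule)
        # Probe left, right, up & down; keep the move that reduces the error most.
        candidates = [(x - step, y), (x + step, y), (x, y - step), (x, y + step)]
        x, y = min(candidates,
                   key=lambda p: math.dist(p, nearest(p, image_points)))
    return (x, y)
```

Running `fit_marker` once per marker point - once per input neuron - 'describes' the image as in the text.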

We have now described the image, with a possibly small error.

Then we compare the image with the marker points, computing the distance from each marker point to the closest point on the image, and sum these distances to calculate the error.

If the error is beyond a certain threshold, the image is not recognized.

We can recognize any simple image that way.
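The recognition test itself is a short sum-and-threshold check; a minimal sketch, assuming the threshold value is something we tune per image class:

```python
import math

def nearest_distance(point, image_points):
    """Distance from a marker point to the closest point on the image."""
    return min(math.dist(point, q) for q in image_points)

def recognize(markers, image_points, threshold=1.0):
    """Sum each marker point's distance to the image; the image is
    recognized only if the total error stays within the threshold."""
    total_error = sum(nearest_distance(m, image_points) for m in markers)
    return total_error <= threshold
```

With 8 marker points fitted to an oval, a well-described oval yields a small total error and passes; a shape far from the markers fails the threshold.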

Example 2: Face recognition, for an image without crossing lines & without 'noise' data.

We recognize the face's oval, the eye shapes, the nose shape & the lips shape - passing tokens with marker points to the input neurons of a second neural net, along with combined errors that determine the initial weights of those input neurons. From now on we'll call these shapes marker shapes, and compare them with a face template on an image.

If the first neural net's error was too large, we pass nothing and the input neuron isn't activated.

We move the marker points as in example 1 until we get close enough to the image's templates to 'describe' the template shapes that way.

We move the centers of the marker shapes to align with the centers of the template shapes.

For each input neuron, we compute the combined error between the marker shape's marker points and the template shape's closest points. If that error is not beyond a certain threshold, we have recognized the face. This error is also a weight we might pass on to the next layer of the neural net if needed.
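The second net's decision could be sketched as follows. This is one reading under stated assumptions: each token carries a shape's marker points plus the first net's combined error, a token over the first-net threshold deactivates its input neuron (here: recognition fails), and center alignment is assumed to have already happened. The thresholds are illustrative:

```python
import math

def shape_error(marker_shape, template_shape):
    """Combined error: sum of distances from each marker point
    to the closest point of the template shape."""
    return sum(min(math.dist(p, q) for q in template_shape)
               for p in marker_shape)

def recognize_face(tokens, template_shapes,
                   first_net_threshold=2.0, face_threshold=5.0):
    """tokens: one (marker_points, first_net_error) pair per facial
    feature (oval, eyes, nose, lips), matched against the templates."""
    total = 0.0
    for (markers, first_error), template in zip(tokens, template_shapes):
        if first_error > first_net_threshold:
            return False  # input neuron not activated: nothing was passed
        total += shape_error(markers, template)
    return total <= face_threshold
```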

Example 3: Recognizing flat data.

It's similar to 'describing the image' in example 1, except that for numbers we move only up & down, and we apply the sigmoid function after we compute the error & weights.

Enumerations can be represented as numbers as well.
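A 1-D sketch of this variant, under the same assumption as in example 1 that the step is the error passed through the sigmoid (tolerance and step rule are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_value(marker, target, tol=0.01, max_steps=200):
    """Flat-data variant: move the marker only up or down,
    with the step scaled through the sigmoid of the error."""
    for _ in range(max_steps):
        err = target - marker
        if abs(err) <= tol:
            break
        step = sigmoid(abs(err)) * abs(err)
        marker += step if err > 0 else -step  # up or down only
    return marker

# Enumerations represented as numbers (an illustrative mapping).
COLOR = {"red": 0, "green": 1, "blue": 2}
```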