7/22/17

Abstracted Neural Networks & Token Game.

Neural Networks.

In neural networks we consider the error between the result we consider 'correct & desired' and the result we actually get as the output vector of the neural net.

We wish to minimize this error, to teach our neural net to provide accurate results that we can apply to vectors other than the training vectors.

We provide training data - a set of training vectors - to the neural net, and it adjusts the weights of neuron signals accordingly.

After enough iterations we have a 'calibrated' neural net - with appropriate weight values at each of the neurons - so it provides fairly accurate results at the output neurons for non-training data vectors.

The output neurons answer abstract questions about the data we wish to classify, categorizing the input fairly accurately.
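
As a minimal sketch of the idea - not a method taken from this text - here is a single neuron with a sigmoidal activation whose weights are adjusted in proportion to the error; the learning rate & the update rule are illustrative assumptions:

    // A minimal sketch of error-driven weight adjustment for a single neuron.
    // The learning rate & the update rule are illustrative assumptions.
    public class NeuronSketch {
        double[] weights;
        double learningRate = 0.1; // assumed learning rate

        NeuronSketch(int inputs) {
            weights = new double[inputs];
        }

        static double sigmoid(double x) {
            return 1.0 / (1.0 + Math.exp(-x));
        }

        double output(double[] input) {
            double sum = 0.0;
            for (int i = 0; i < weights.length; i++) sum += weights[i] * input[i];
            return sigmoid(sum);
        }

        // One training step: the larger the error, the larger the adjustment.
        void train(double[] input, double desired) {
            double error = desired - output(input);
            for (int i = 0; i < weights.length; i++) {
                weights[i] += learningRate * error * input[i];
            }
        }
    }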


Token Game's Abstracted Neural Networks.

In this case we do something similar, though in a simpler way.

We provide input data, we adjust weights at neurons, we pass tokens with payload information.
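
As a minimal sketch, a token carrying a payload & a weight between neurons might look as below; the class & field names are illustrative assumptions:

    // Illustrative token with payload information, passed between neurons.
    public class Token<T> {
        final T payload;     // e.g. marker points
        final double weight; // e.g. a combined error carried along

        public Token(T payload, double weight) {
            this.payload = payload;
            this.weight = weight;
        }
    }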


Example 1: Oval Recognition, with an image without crossing lines & without 'noise' data.

First we need to 'describe' the image by a series of 8 marker points, placed on a circle bordering the image's edge.

We place one marker point at the top, one at the bottom, one at the left, one at the right, and the four remaining ones in between - but not on top of each other.

We assume an initial weight of 3 for each of the neurons.

For each of the marker points we create an input neuron. For each of the neurons we calculate an error - the distance between that marker point and the closest point on the image.

Depending on the calculated error value we modify the weight accordingly - the larger the error, the larger the adjustment.

First we move the marker point left, by the value of the sigmoidal function for the calculated weight.

Then we calculate the error again, modify the weight and move the marker point right.

Then top & bottom in a similar way - repeating the process until we are close enough to the point on the image.

We repeat the process once for each of the input neurons - that is, once for each of the marker points.

We have thus described the image, with a possibly small error.
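
A minimal sketch of fitting one marker point, under one plausible reading of the steps above; the image is assumed to be given as a list of edge points, and closestImagePoint(), the 'keep the best of four moves' logic & the stopping threshold are illustrative assumptions:

    import java.awt.geom.Point2D;
    import java.util.List;

    // Illustrative fitting of one marker point, as in Example 1.
    public class MarkerFit {

        static double sigmoid(double x) {
            return 1.0 / (1.0 + Math.exp(-x));
        }

        // Hypothetical helper: the point of the image closest to the marker.
        static Point2D.Double closestImagePoint(Point2D.Double marker,
                                                List<Point2D.Double> edgePoints) {
            Point2D.Double best = edgePoints.get(0);
            for (Point2D.Double p : edgePoints)
                if (marker.distance(p) < marker.distance(best)) best = p;
            return best;
        }

        // Fit a single marker point; returns the remaining error.
        static double fitMarkerPoint(Point2D.Double marker,
                                     List<Point2D.Double> edgePoints) {
            double weight = 3.0;  // initial weight, as above
            double epsilon = 0.5; // assumed 'close enough' threshold
            double error = marker.distance(closestImagePoint(marker, edgePoints));
            while (error > epsilon) {
                weight += error;               // the larger the error, the larger the adjustment
                double step = sigmoid(weight); // move by the sigmoidal function of the weight
                // Try left, right, top & bottom; keep the move that reduces the error.
                double[][] moves = {{-step, 0}, {step, 0}, {0, -step}, {0, step}};
                double bestError = error;
                double[] bestMove = null;
                for (double[] m : moves) {
                    Point2D.Double moved = new Point2D.Double(marker.x + m[0], marker.y + m[1]);
                    double e = moved.distance(closestImagePoint(moved, edgePoints));
                    if (e < bestError) { bestError = e; bestMove = m; }
                }
                if (bestMove == null) break; // no move improves - stop
                marker.setLocation(marker.x + bestMove[0], marker.y + bestMove[1]);
                error = bestError;
            }
            return error;
        }
    }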


Then we compare our image with the marker points, measuring the distance from each of the marker points to the closest point on the image. Then we sum the distances to calculate the error.

If the error is beyond a certain threshold, the image is not recognized.
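
A minimal sketch of this recognition test, reusing closestImagePoint() from the sketch above; the threshold value is an assumption:

    import java.awt.geom.Point2D;
    import java.util.List;

    // Sum the marker-to-image distances & compare with a threshold.
    class OvalRecognizer {
        static boolean recognized(List<Point2D.Double> markers,
                                  List<Point2D.Double> edgePoints) {
            double threshold = 4.0; // assumed threshold for 8 marker points
            double error = 0.0;
            for (Point2D.Double m : markers)
                error += m.distance(MarkerFit.closestImagePoint(m, edgePoints));
            return error <= threshold; // beyond the threshold: not recognized
        }
    }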


We can recognize any simple image that way.


Example 2: Face Recognition, without crossing lines & without 'noise' data.

We recognize the face's oval, the eye shapes, the nose shape & the lips shape - passing tokens with marker points to the input neurons of a second neural net, along with the combined errors that determine the initial weights of those input neurons. We'll call these shapes 'marker shapes' from now on, and compare them with the face's template on an image.

If the first neural net's error was too large, we pass nothing and the input neuron isn't activated.

We move the marker points as in Example 1 until we get close enough to the image's templates, to 'describe' the template shapes that way.

We move the centers of the marker shapes to align with the centers of the template shapes.

For each of the input neurons, we compute the combined error between the marker shape's marker points and the template shape's closest points. If the error is not beyond a certain threshold, we have recognized the face. This error is the weight we might pass on to the next layer of the neural net, if we need to.
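
A minimal sketch of the hand-off between the two nets, reusing the Token class from the sketch above; the MarkerShape type & the threshold are illustrative assumptions:

    import java.awt.geom.Point2D;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative hand-off from the first net (shape description) to the
    // second net (face recognition).
    class MarkerShape {
        List<Point2D.Double> markers = new ArrayList<>(); // fitted marker points
    }

    class FaceNet {
        // One token per shape: face oval, eyes, nose, lips. A null token
        // means the first net's error was too large & the input neuron
        // is not activated.
        static boolean recognize(List<Token<MarkerShape>> tokens, double threshold) {
            double combinedError = 0.0;
            for (Token<MarkerShape> t : tokens) {
                if (t == null) return false; // input neuron not activated
                combinedError += t.weight;   // error carried as the initial weight
            }
            return combinedError <= threshold; // face recognized if below threshold
        }
    }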


Example 3: Recognizing Flat Data.

It's similar to 'describing the image' in Example 1, except that in the case of numbers we move only up & down, and we use the sigmoidal function after we calculate the error & the weights.

Enumerations can be represented as numbers as well.
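
A minimal sketch of the one-dimensional variant, reusing sigmoid() from the MarkerFit sketch above; the threshold & the step cap are illustrative assumptions:

    // One-dimensional fit: the marker moves only up & down toward a
    // target number.
    static double fitNumber(double marker, double target) {
        double weight = 3.0;   // initial weight, as in Example 1
        double epsilon = 0.01; // assumed 'close enough' threshold
        double error = Math.abs(marker - target);
        while (error > epsilon) {
            weight += error; // adjust the weight by the error
            // sigmoidal function after the error & weight, capped so we don't overshoot
            double step = Math.min(MarkerFit.sigmoid(weight), error);
            marker += (marker < target) ? step : -step; // move up or down
            error = Math.abs(marker - target);
        }
        return marker;
    }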

Mindful Imaging & Editor.

The Mindful Imaging Module can read from Stitie Space to visualize it, with machines, states, strategies & links.

As a complement, I am planning to add an Editor that will allow inserting Light Point Objects into machines, moving machines and their links, or transforming Stitie Space in any way.

A Light Point Object's code & state can be inserted into a Stitie Space 'machine' during runtime, to be interpreted there - either via the network or from the Mindful Imaging-based 4GL interface. A live system can be modelled as with the Smalltalk programming language, considering security & permissions of course.
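
As a purely hypothetical sketch of what such runtime insertion might look like - none of these interfaces are Stitie Space's actual API:

    // Hypothetical interfaces only - not Stitie Space's actual API.
    interface MachineContext {
        Object getState(String key);
        void setState(String key, Object value);
    }

    // A Light Point Object carries state & interpretable behaviour.
    interface LightPointObject {
        void interpret(MachineContext context); // executed inside the machine
    }

    interface Machine {
        // Insert a Light Point Object at runtime - e.g. received via the
        // network or from the Editor - subject to security & permission checks.
        void insert(LightPointObject object);
    }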

See also: Agile Transformation of Information State.

7/21/17

AI Exercise.

(article to be edited once I understand AI & Machine Learning theories).


I've played with Java's Weka library for machine learning, for artificial intelligence.

While I didn't understand the classification algorithms used, I was able to produce code that learned how to classify & somehow classified 'plants' using the MultilayerPerceptron classifier. I read it's basically a neural network, but considering my current knowledge - I can't confirm.
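
A minimal sketch of how such a Weka experiment might look; the dataset filename is a placeholder, not the file actually used:

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.functions.MultilayerPerceptron;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class WekaTest {
        public static void main(String[] args) throws Exception {
            // Load a dataset (placeholder filename).
            Instances data = new DataSource("plants.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1); // last attribute is the class

            // Train the MultilayerPerceptron classifier.
            MultilayerPerceptron mlp = new MultilayerPerceptron();
            mlp.buildClassifier(data);

            // Assess errors with 10-fold cross-validation.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(mlp, data, 10, new Random(1));
            System.out.println(eval.toSummaryString());
        }
    }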

I don't know if the classification was successful, as I don't know much about plants, but judging from data-nearness it looked good.

I believe that experiments with code are a very important part of learning computer science, so I did this exercise despite my gaps in knowledge. Hopefully it'll be useful for others as well.

I swallowed my shame of not knowing the theories, and posted this article for the benefit of others.

There were other tools I found, such as assessing errors or adding weights to attributes depending on the algorithms used, that I didn't understand. I think a professional AI programmer should be fluent with all of these ideas & their uses.

Files: WekaTest.java, Datasets.

Library used: Weka.