Ockham's Razor of the Scientific Focus.


Occam's razor, also written as Ockham's razor, or law of parsimony, is a problem-solving principle attributed to William of Ockham (c. 1287–1347), who was an English Franciscan friar, scholastic philosopher and theologian.

The principle can be stated as: 'among competing hypotheses, the one with the fewest assumptions should be selected'.

According to Ockham, 'simpler theories are preferable to more complex ones'.

Discriminating Wisdom.

Discriminating Wisdom in Buddhism is the wisdom that allows one to see things (for example: scientific assumptions) clearly, 'as they are' - both in separation and as a part of a larger whole, nondually.

Uses in Science.

Assumptions can be complex or simple. Complex assumptions consist of multiple assumptions, either complex or simple, or of a mix of these.

Complex assumptions can be reduced to a set of simpler partial assumptions, which can then be examined with discriminating wisdom, modelled and analyzed, to see whether any of the partial assumptions can be removed for a simpler model.

Simpler models often do not restrict us as much, and allow for more options based on fewer assumptions.

An Example.

To have a square precisely defined, we need to provide either:

1. A vector/turn definition.
- a vector with a direction,
- a turn at the vector's end.

2. A two-lines-with-a-point definition.
- a straight line,
- a second straight line, parallel to but not overlapping the first line,
- a point.

This can be reduced, using 'Ockham's razor' and 'Discriminating Wisdom', to basic sets of simple partial assumptions:

[Discriminating wisdom is used to reduce complex assumptions into a set of simpler partial assumptions, then to analyze them].

[Ockham's razor is used to remove redundant information, such as the requirement for the point between the two lines to be placed exactly in the middle of the square - a perpendicular straight line can be placed through this point, then the point can be moved along the perpendicular straight line to the middle of the square].

Let's not prune too much or too little of the necessary premises, however. We need all of the required premises to reach a conclusion in a sane way.

1. A vector/turn definition.
- we know the starting coordinates of the vector,
- we know the end coordinates of the vector,
- the starting and end points of the vector do not overlap,
- we know the turn at the end of the vector: either to the left or to the right.

2. A two-lines-with-a-point definition.
- we know the equation of the first straight line,
- we know the equation of the second straight line,
- the lines defined by the equations are parallel,
- the lines defined by the equations do not overlap,
- we know the coordinates of a point.

By looking at both definitions, we can see that the vector/turn definition requires fewer assumptions about our knowledge of a square than the two-lines-with-a-point definition.

Our first definition is simpler, and therefore superior by the principle of Ockham's razor.

See also, if You wish or need, ... : Buddhism, Arts & Sciences.


'Ola AH': a Concurrent or the Real-Time Language?


Recently i had a lot of insights concerning the 'Realtimeness' of the 'Ola AH' Programming Language i am to create - or more precisely, the lack of 'Real Time Language Features' in this language, by its design.

Current State.

The 'Ola AH' Programming Language was designed without strict real time features in mind, but concurrency is still an important part of it. It would take a lot of effort to change this now, and it would make the 'Ola AH' Programming Language a much more difficult tool to learn & use.

'Concurrency' is about running software parts simultaneously & coordinating them, without bothering about 'pessimistic cases of time deadlines' for process runs. Concurrent systems can still work in real time.

'Real Time Systems' have an additional property - when a task's execution time exceeds its deadline, we can say that the system failed. It's concurrency taken to the extreme.

Architecture & Operating Systems Dependency.

The exact time of an instruction's run depends both on the hardware architecture (parts used) and on the operating system & its version.

Currently, only RT Linux supports real-time demands on personal computers, as far as i know.

Other considerations & design.

So far i don't plan to make the 'Ola AH' Programming Language acquire 'Real Time Properties', because of:
- the language creator's lack of education & experience in Real Time Systems,
- 'vendor lock' & dependency on RT Linux,
- the amount of effort required to change the programming language's semantics (Stitie Space, Stitie Machine, State, Strategy, Router, Events, MATEN, Prism, Mindful Imaging - perhaps more),
- the amount of effort required of developers using this language - both to learn it and to use it - they would need to count how much every instruction 'costs', processor-cycles-wise and other-devices-wise, taking into account the operating system used, its version, as well as the hardware architecture. The number of processors & concurrent runs adds to the task's difficulty,
- with the current design of 'Ola AH' it's still possible to write 'Real Time Solutions', with slight syntactical support (the 'asm insert' instruction & the 'AH' Anti-hack mode) - it's just still very hard to do,
- this would severely reduce this Programming Language's niche - the language would lose its general purpose & simplicity, as more competent programmers are harder to find and more costly to hire. Personnel rotations would cost even more than that - it would take a lot of time & cost to take over a project from a former team member.


Peer-2-Peer Network with a Distributed Stitie Space.

-=- A Spherical Double-Spiral. -=-

Can be used in forming Distributed Objects in 3D,
including Distributed Stitie Space in 3D, or Distributed Hash Table in 3D,

... it has a communication cycle & fairly efficiently uses 3D Space,
... without forcing a client object to reach inside, past the outer layer.
A client object can, for example, scan
(using directional/cone multicast wireless communication for example)
for the 108 closest data structure objects, with load reports,
then choose an object to communicate with, either at random or by workload.
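This selection step can be sketched in Java - a minimal illustration only; the LoadReport record & both selection policies are my assumptions, not a part of Stitie Space:

```java
import java.util.List;
import java.util.Random;

// a sketch of choosing a peer from scan results, either by workload or at random.
public class PeerSelection {

  // a scanned peer's identifier together with its reported workload.
  public record LoadReport(String peerId, int workload) {}

  // choose the least-loaded peer from the scan results.
  public static LoadReport byWorkload(List<LoadReport> scanned) {
    LoadReport best = scanned.get(0);
    for (LoadReport r : scanned) {
      if (r.workload() < best.workload()) {
        best = r;
      }
    }
    return best;
  }

  // or choose a peer at random, to spread the load between peers.
  public static LoadReport atRandom(List<LoadReport> scanned, Random rng) {
    return scanned.get(rng.nextInt(scanned.size()));
  }
}
```

byWorkload picks the least-loaded of the scanned peers, while atRandom spreads the choices so that not every client contacts the same peer.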

... see also, if You wish: Stitie Grid.

A POV-Ray source code is available for: download.
Credits: Andrea Lohmüller + Friedrich A. Lohmüller.


One can look at Stitie Space as a data structure that manages objects' storage & communication in a 3D-addressed space.

Most of the ideas in this article require version 1.2 of Stitie Space or later, however.

In-space objects communication.

Objects within can communicate with each other without accessing Stitie Space, if they have coordinates & can find a path - with or without using the 'long-range links'.

This fits well with peer-2-peer communication model.

Distributed Stitie Space.

When a target object's coordinates are not available, Stitie Space can be contacted for them. A client object can request other information & services as well, but these are more expensive than independent communication.

Many requests to Stitie Space in a short time-frame can cause a system overload, delays in communication, or crashes. But Stitie Space can be replicated & distributed in a similar way as a Distributed Hash Table.

In this case a response or a service might not be immediate, because of data synchronization with other Stitie Space instances, and because resources & services are not locked away from other client objects - a client object sends a request to one of the Stitie Spaces, then waits for the response - perhaps on a concurrent thread as well. When a Distributed Stitie Space is synchronized (with the other Stitie Space instances) & has available time for that, it sends the client object the information it needs, or performs the service. Security Certificates & Privileges are considered as well.

Stitie Space Services might include:
- Communication between objects - for example when objects move often & their coordinates change with time - when there are no proper long-range communication links,
- Handling requests to reform the Space with the MATEN or Prism functionalities,
- Establishing long-range communication links between client objects,
- Reserving destination Coordinates & Empty Movement Paths before a client object's movement in space - for a given amount of time, at least,
- Updating the coordinates registry in space after a client object reaches its destination.
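The asynchronous request/response pattern described above can be sketched in Java - a minimal illustration; the SpaceInstance class & its coordinates registry are my assumptions, not the actual Stitie Space API:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// a sketch of a client asking one replicated space instance for coordinates,
// without blocking its own work while waiting for the response.
public class DistributedSpaceClient {

  // one replicated Stitie Space instance, holding a coordinates registry.
  public static class SpaceInstance {
    private final Map<String, int[]> coordinates = new ConcurrentHashMap<>();

    // service: update the coordinates registry for an object.
    public void updateCoordinates(String objectId, int[] xyz) {
      coordinates.put(objectId, xyz);
    }

    // service: the client's request runs asynchronously - the caller may wait
    // on a concurrent thread instead of blocking its main work.
    public CompletableFuture<int[]> requestCoordinates(String objectId) {
      return CompletableFuture.supplyAsync(() -> coordinates.get(objectId));
    }
  }
}
```

A client calls requestCoordinates & may continue its own work, joining the returned future on a concurrent thread when the response is actually needed.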

See also, if You wish or need, ... : Agile Transformation of Information State.


Reasons behind 'Ola AH' Programming Language.

High-level Programming Languages allow for thinking abstractly - on a high level of abstraction - about solutions for problems to be solved by computers.

A mere library with extra semantics (meaning, use cases) for a language is often not enough, because an elegant syntax (grammar, keywords) allows for simpler thinking, for acquiring paradigms (ways of perceiving & thinking), for acquiring idioms (good practices), for easier understanding of complex tricks & their uses, for easier & faster coding.

With fewer lines of code there are fewer errors, and programs are written, understood & maintained faster.

When reality changes, companies have to adapt & rewrite software quickly - preferably before the competition releases their own competing products or services - high-level programming languages are tools that give companies this advantage.

The 'Ola AH' Programming Language is a fairly general-purpose language, but it has found its niches as well.


Stitie Space & Robot Swarm Concurrency.


One of the use cases for the 'Ola AH' Programming Language is coordinating robot swarms.

While not a language for the Real-Time Systems, 'Ola AH' can handle concurrency for such swarms well enough.

Communication & Leadership Hierarchy Tree.

Assuming that physical objects can fly in space, their coordinates can be transformed into the coordinates of a coordinating system that contains Stitie Space.

Physical objects form task groups under a leader machine, leader machines also form task groups with their own leaders ... up to the coordinating system with the Stitie Space.

Physical objects can communicate wirelessly using directional rays, to avoid wireless communication collisions with so many objects. Messages reach their leaders, and travel up to the coordinating system with the Stitie Space.

Coordinator - individual synchronization.

Messages from individual machines can reach the coordinator machine by following the communication hierarchy tree. Messages are received by the CommunicationSynchronizer object, then Stitie Space is updated.

Messages can be sent to groups of individual machines as well, using the CommunicationSynchronizer object & the wireless communication (either a broadcast, a directional multicast, or a series of unicasts, for example) - target machines know which parts of the code are addressed to them by their identifier.

Decision Precondition Events.

When a task group reaches its objectives, an event can be fired by the CommunicationSynchronizer object; a SpaceAware machine can also check its own coordinates in Stitie Space & fire events.

A precondition token is created in the ConcurrencyFlowGraph object as in the 'Token Game' & 'Decision Filters' articles.

With enough tokens at the proper places (for example, when all of the three task groups reach their targets), the next decision can proceed.
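The precondition-token idea can be sketched in Java - a minimal, deterministic illustration; the PreconditionGate class & its method names are my assumptions, not the actual ConcurrencyFlowGraph API:

```java
import java.util.HashMap;
import java.util.Map;

// a sketch of precondition tokens gating a decision, Petri-net style.
public class PreconditionGate {

  // tokens required at each 'place' before the decision can proceed.
  private final Map<String, Integer> required = new HashMap<>();
  // tokens accumulated so far.
  private final Map<String, Integer> tokens = new HashMap<>();

  public void require(String place, int count) {
    required.put(place, count);
  }

  // fired, for example, when a task group reaches its objective.
  public void addToken(String place) {
    tokens.merge(place, 1, Integer::sum);
  }

  // the next decision can proceed only with enough tokens at the proper places.
  public boolean canProceed() {
    for (Map.Entry<String, Integer> e : required.entrySet()) {
      if (tokens.getOrDefault(e.getKey(), 0) < e.getValue()) {
        return false;
      }
    }
    return true;
  }
}
```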

Clock Deadline Events.

We can have a clock object which produces clock events, and use these to control the behavior of machines - by sending messages wirelessly via the CommunicationSynchronizer object - to enforce the deadlines for individual physical objects.

A clock object can affect the existence of the precondition tokens in the ConcurrencyFlowGraph object as well, to affect decisions made that way.
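A clock object's deadline events can be sketched in Java as well - a minimal illustration; the Task record & the onTick method are my assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// a sketch of a clock that, on each tick, detects which tasks missed their deadlines.
public class DeadlineClock {

  // a task with its start time and allowed duration, both in milliseconds.
  public record Task(String id, long startMillis, long deadlineMillis) {}

  private final List<Task> tasks = new ArrayList<>();

  public void register(Task t) {
    tasks.add(t);
  }

  // on each clock tick, collect the tasks whose deadline has passed -
  // these would be sent as deadline events to the relevant machines.
  public List<String> onTick(long nowMillis) {
    List<String> missed = new ArrayList<>();
    for (Task t : tasks) {
      if (nowMillis - t.startMillis() > t.deadlineMillis()) {
        missed.add(t.id());
      }
    }
    return missed;
  }
}
```

The identifiers returned by onTick would then be sent as deadline events via the CommunicationSynchronizer object.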

See also if You wish, or need, ... (including comments) : 'Ola AH': a Concurrent or the Real-Time Language?


Stitie Space, Form & Emptiness.

'Stitie Space' is a 3D Objects Matrix data structure with certain unique 'convenience methods' that allow for 'fine manipulation of objects' in 'computer memory' - a space to be filled with numbers in itself, perhaps ...

Stitie Space is a non-infinite, bounded Space that has convenience methods to invoke Forms within.

Using the MATEN functionality, a Certain Form Invocation, we can invoke Empty or Non-Empty Forms within Stitie Space.

Using the MATEN or Prism functionalities, Forms in Space can change with time - we can have an 'animation of objects' as well.

Stitie Space can use recursion - each of its inner coordinates can contain an 'Inner Space' as well - we can use GRID Computing for that, for example.

While Stitie Space 1.1 'Sunsail' has stationary machines at given coordinates, with Stitie Space 1.2 'Satellite' it's possible to move machines within and to use gaps in space (coordinates without machines); there's an option of long-range communication between machines as well.

Simple Examples & Professional Practices.

When one tries to learn something, it's worth starting from something simple - from a little theory & from the simplest of examples.

Complex examples make things harder to understand.

There's a difference between learning something simple by example and producing professional & optimal code.

Both in technical books, as well as at Universities, teachers teach one problem solution at a time, building on knowledge taught & practiced earlier by students.

Professionals still strive for simplicity, for creating 'objects' that handle their responsibilities - their tasks - as elegantly, as briefly & as concisely as possible.

But professionals know much, use advanced tools & known theories to handle the programming eloquently & optimally.

Objects' interrelations make code more complex; it's often better to make things work together well from the perspective of whole groups of collaborating objects - instead of simplifying each object's behavior in separation from the other objects.

Walking one dog is far different from handling a sleigh with twelve husky hounds, after all.


A Simple Checksum Calculation & File Encryption Code.


If we have a file, we can calculate its checksum according to a given algorithm.

A checksum calculation algorithm can use a password or a key, to enhance security.

A file can be a message, data, or even software that can be run. Checksumming is not encryption that hides a file's contents - it just decreases the chance that a file was tampered with during transmission, either by an infrastructure fault or by an attack.

Unless cracked, even a simple checksum can help to protect data integrity - protect the information to increase certainty of:
- Message's Sender,
- Message's Recipient,
- Message's Timestamp & Timezone,
- Message's Title - if any,
- Message's Content & Attached Data - if any,
- Message's Additional Information & Metadata.

There's more to this as well.

It's related with Secure Communication over the Internet.


i am planning to write checksum code & encryption code as an exercise, for use by me and by those who care.

It's an unfinished post - i will complete the source code later, as soon as i can.

For now it's incomplete: a top-down algorithm, a plan for code in comments, to be filled with details later. i find that simple planning method useful in Programming anyway - even code consisting only of comments is useful in the development process - especially if the comments are strict, with invariants, etc ... - but not only.

i've used the word 'checksum' in an abstract way - it's not only summation, it also covers other functions. Probably i should use the word 'signature' instead ;)

Optimistically, the chance of two checksums colliding equals (1/Long.MAX_VALUE) = 1/(2^63 − 1) = 1/9,223,372,036,854,775,807. In practice it's probably less favourable for us, but still very good.

The exact worst-case probability is very hard for me to calculate - but i can say it depends on: the file transferred, the password, and the optional key. Using long & internally varied files (preferably encrypted), as well as a long private key, works in our favour (fewer collisions).

import java.util.List;

/**
 * @author Andrzej 'neo-mahakala-108' Wysocki; email: neomahakala108@gmail.com
 */
public class ChecksumCalculator {

  // TODO: replace with a string read from a file.
  private String inputText = "a test string.";

  // TODO: replace with a string read from a file.
  private String optionalKey = "an optional key";

  // TODO: get password from user instead of using a hard-coded one.
  private String password = "a test password";

  public static void main(String[] args) {

    // 1. Find a checksum calculation function.
    // depending on a password/optional-key - we'll have a lot of simple functions & we'll
    // make a complex function from these. password/optional-key will determine how we'll
    // combine our simple functions into a complex function.

    // 2. Find constants.
    // each of 'data chunks' is a part of the message, number of these depends on
    // a password, an optional key & a message to checksum. we'll use a simple
    // function, derived from a password/optional-key, that will transform these
    // strings into long number constants.

    // 3. Calculate a checksum.
    // we'll use a checksum calculation function with the found constants, to calculate
    // a long checksum.

    // 4. Provide the checksum result to the user.
  }

  // ...

  private long sumWrapFunction(List<Long> ll) {
    long result = 0;
    for (long current : ll) {
      // we don't care about exceeding Long.MAX_VALUE - it's still a long number.
      result += current;
    }
    return result;
  }
}


Real-Time Systems.


Real-Time systems are computer systems in which successful task performance depends on two factors:
- a successful result of the computation,
- the deadline time within which the task was performed.

When a task's execution time exceeds its deadline, we can say that the system failed.

Hard, Firm & Soft Real-time Systems.

In the 'Hard Real-time Systems' missing a deadline is a total system failure. Hard real-time systems are created when missing a deadline can result in hardware damage, or can cost lives or health.

In 'Firm Real-time Systems' infrequent deadline misses are tolerable, but may degrade the system's quality of service. The usefulness of a result is zero after its deadline, but no damage or personnel loss occurs.

In 'Soft Real-time Systems' the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service.


For a computer system to meet the criteria of a 'Hard Real-time System', the requirements are:
- well understood & fast enough hardware,
- a real-time operating system (for example: RTLinux),
- every software piece must adhere to the 'hard real-time requirements'; software must generate results within the deadline time.

Software & other considerations.

In real-time systems, processor time is divided between processes by an operating system component called the 'real-time scheduler'. At the deadline time a process is 'interrupted' by the scheduler, even if it didn't complete.

Since we do not want any surprises regarding the amount of time a process uses to complete, processes should be initialized & running before giving them tasks, and should communicate with each other using real-time tools. We probably want to count all of the costs of the processor time, including rescheduling & inter-process communication. We do not wish to forget about other costs that can result in performance losses, such as memory costs, as well.

In Real-time computing, we are concerned mostly with the pessimistic times of tasks' completions, even at the cost of average performance.

Since we wish to count the exact amounts of processor cycles, probably the most reliable & best programming language for this is assembler - but perhaps the 'Ada' Programming Language & other Programming Languages can compete as well.

With the 'Ola AH' Programming Language we are concerned with the 'average time costs' of software parts' performance, not with the 'Hard Real-time Standards'. With the 'Ola AH' compiler, we'll still strive for very fast solutions.

... see also, if You wish or need, ... : 'Ola AH': a Concurrent or the Real-Time Language?


Application Design, Use Cases & Automatic Testing.

Application Design & Use Cases.

Often, when a customer orders an application, she or he orders a collection of certain functionalities.

For example:
- logging into online email application,
- deleting all spam in spam inbox,
- logging off automatically after a given time,
- configuring email sorting preferences,
- ...

Use cases are means of specifying these functionalities, defined by a number of steps (click here, scroll here, type something here, read report's field #n, etc ...).

A minimal set of use cases often determines how the user interface should look, and is often a formal requirement for the ordered application functionalities - it can be a part of the business contract between a customer & a developer company.

Automated Tests, Changes & Debugging.

Often, tests for use cases can be automated and performed after any change is introduced into the code ... just before the program's compilation, just before running the application, or at any other convenient moment.

By using Automated Use Case Tests, programmers can be comfortable that when they (or their teammates) change parts of the code, the older parts of the code (previous functionalities) that they are responsible for - won't prove erroneous after the new code additions.

Automated Use Case tests often show when part of the code is erroneous after changes, and while these are far from being 'proofs of code's correctness', these are extremely practical nevertheless. Even if not every error is caught by these - carefully designed tests can quickly find basic functionality failures. Other errors can still be found & fixed using other methods, and it's still easier to fix one error than multiple overlapping ones.

More than that - carefully designed automated tests can help programmers create a 'Mental Test Harness' that lets them make larger changes in code more boldly & quickly, without inspecting the same things over & over, and without fearing application breakages so much.

This also builds Trust & Responsibility in teams - with tests it's quick & easy to find out when someone breaks a teammate's code parts - at an early stage of failure at that, so it can be addressed before the error becomes too complex to fix quickly, before true stress, psychological dramas & employee firings start, and before the project's budget & time schedule are endangered.

In many ways, Automated Tests help to develop applications with much more speed & security, with only a small amount of extra code at the start (tests have to be designed & written too) & a small amount of maintenance (when requirements change, tests have to be modified).
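A tiny use-case test runner can be sketched in Java - an illustration of the idea only; real projects would use a testing framework such as JUnit instead:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BooleanSupplier;

// a sketch of an automated use-case test runner: named checks, run all at once.
public class UseCaseTests {

  private final Map<String, BooleanSupplier> useCases = new LinkedHashMap<>();

  // register a named use case with a check that returns true on success.
  public void add(String name, BooleanSupplier check) {
    useCases.put(name, check);
  }

  // run every use case - for example just before compilation or deployment -
  // and return the names of the failing ones.
  public List<String> runAll() {
    List<String> failures = new ArrayList<>();
    for (Map.Entry<String, BooleanSupplier> e : useCases.entrySet()) {
      if (!e.getValue().getAsBoolean()) {
        failures.add(e.getKey());
      }
    }
    return failures;
  }
}
```

In practice each check would drive the application through a use case's steps (log in, delete spam, ...) and verify the observable result.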


Selector Methods & Dynamically Created Methods in 'Ola AH' Programming Language.


In 'Ola AH' Programming Language - methods - will be properly typed objects that can be 'passed around', most probably with a proper syntax support.

Method objects can be statically or dynamically constructed as well, with an elegant syntax support.

Any of the instructions can be dynamically created as well, and will be properly typed objects that can be 'passed around' - even the 'asm' instruction, after that functionality is added to the interpreter or compiler, respectively. With 'Ola AH', every high-level instruction and standard library method will be modelled & documented properly.

Selector Methods.

Selector methods are methods that return a value of the method type, and are given a method collection in their arguments list, as well as other arguments.

Selector methods will use the existing method syntax, with no extra syntactic support - however, when a method fulfills the requirements of a selector method, we can talk about it as a 'selector method', for easier communication.


One of the most basic use cases for a selector method is to select a method from a 'method space' depending on the other argument(s), then return it.

An Example.

void m1(char c1) { ... };
char m2(char c2, char c3) { ... };
char m3(string s1, int64 i1) { ... };

method m4 := new method();
m4.setReturnType((char, string, int64)); // replacing a string return type with a tuple.
m4.addArgument(int64, "i1");
m4.addArgument(char, "c1");
m4.insertArgument(int32, 0, "i1");
m4.addInstruction( ... );
m4.addInstruction( ... );
m4.addInstruction( ... );
m4.insertInstruction(0, ... );

method alist mal := #{ m1, m2, m3, m4 };

method mySelectorMethod(method alist mal, int8 i) {

  assert i >= 0;
  assert i <= 3;

  return mal.get(i);
}


method m := mySelectorMethod(mal, 1);
char c := m.execute('a', 'b');
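Since 'Ola AH' is not implemented yet, a rough Java analogue of the selector-method idea can be sketched with the java.util.function types - methods as typed objects in a 'method space', selected by another argument; this is my illustration, not 'Ola AH' itself:

```java
import java.util.List;
import java.util.function.BinaryOperator;

// a sketch of a selector method: pick a method object from a 'method space'.
public class SelectorDemo {

  // a 'method space': a list of methods sharing one type.
  static final List<BinaryOperator<Integer>> METHODS = List.of(
      (a, b) -> a + b,   // m1: addition
      (a, b) -> a * b,   // m2: multiplication
      (a, b) -> a - b);  // m3: subtraction

  // the selector method: picks a method from the space by its other argument.
  public static BinaryOperator<Integer> select(int i) {
    if (i < 0 || i >= METHODS.size()) {
      throw new IllegalArgumentException("no method at index " + i);
    }
    return METHODS.get(i);
  }
}
```

The selected method is then executed like any other value of its type, for example select(1).apply(2, 3).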

See also, if You wish or need, ... : 'Ola AH' Programming Language Syntax.


Inductive Learning.


This article is a theoretical introduction to more practical examples of inductive learning algorithms, a part of Machine Learning & Artificial Intelligence.

As a prerequisite for the complete understanding of this article, familiarity with Predicate Logic is assumed.

Inductive Inference.

Let's consider the following logical consequence statement:

P ∧ W ⊧ K.

- P is premises set,
- W is student's inborn knowledge,
- K is conclusions set,
- ⊧ is logical consequence operator.

This statement says that knowledge represented by conclusions K is logical consequence of inborn knowledge W and premises set P.

For our needs, it's convenient to interpret this statement backwards: assume that the conclusion is training information T, and that the student, in the process of inductive inference, acquires a certain hypothesis h.

It's also convenient to give following meanings & notations to the statement parts:
- P : knowledge generated during the learning process, also called hypothesis h,
- W : student's inborn knowledge,
- K : training information, noted as T.

Then our statement assumes form:

h ∧ W ⊧ T.

We can say that:
- hypothesis: 'h ∧ W' explains conclusion 'T'.
- 'T' is logical conclusion of hypothesis 'h ∧ W'.
- in this statement: hypothesis explains logical conclusion.

In more elaborate words:

- training information acquired by a student is logical consequence of inborn knowledge and generated hypothesis h.
- inductive hypothesis with student's inborn knowledge explains acquired training information.

Of course, logical consequence occurs when the inborn knowledge, the training information and the hypothesis are correct. In practice we often have to depart from this assumption and settle for approximate consequence.

Finding a correct hypothesis, in light of the above, means detecting in the training data certain general regularities that, when joined with the inborn knowledge, explain that data well.

This approaches popular understanding of inductive inference as getting from facts and individual observations to generalizations.

These facts & observations are called 'training examples', and training information given to a student is a 'training examples set'.

The hypothesis that a student has to find, given the training information, is a generalization of the 'training examples'; its purpose is not only explaining that information correctly (or approximately correctly), but more importantly predicting new facts & observations.

There are three types of inductive learning:
- learning concepts (a way of objects classification),
- creating concepts (objects grouping, describing groups),
- learning approximations (mapping objects on real numbers).

Main Types of Inductive Learning.

The goal of inductive learning may assume different forms, depending mostly on the knowledge that has to be acquired by the inductive learning and the form of training information given to a student.

We'll demonstrate the three most important forms of training information, on which the most of the theoretical & practical work focuses, which have the most practical uses as well.

In each form of inductive learning, acquired knowledge is a certain type of mapping of input information on output information.

1. Domain.

Domain is an objects set X. Objects in X are related with knowledge acquired by a student. These objects may represent things, people, events, situations, states of things, etc. - anything that can be argument of a mapping that student has to learn.

2. Examples.

Each of objects, each element of a domain x ∈ X, we'll call an example.

3. Attributes.

We'll assume that examples are described using attributes. An attribute is any function specified on a domain. We'll assume that a description of every example x ∈ X consists of the values of n ≥ 1 attributes, a1: X → A1, a2: X → A2, ... , an: X → An.

A set of all attributes specified on a domain we'll note as A = { a1, a2, ... , an } and call it the 'attributes space'.

In practice we sometimes identify an example x with attributes vector:

< a1(x), a2(x), ... , an(x) >,

so we'll call an example any element of the cartesian product of the attribute codomains A1 × A2 × ... × An; this simplification might be misleading, but it has its uses.

For convenience, we'll note this attribute vector of an example x as <x>A.

Depending on the codomain (a values set), attributes can be divided into types.

Most basic, sufficient for learning purposes is an attributes division as follows:
- nominal attributes: with a finite set of unordered discrete values,
- ordinal attributes: with a countable set of ordered discrete values,
- continuous attributes: with values from a real numbers set.

For each examples set P ⊆ X, attribute a : X → A and its value v ∈ A, we'll designate as Pav the set of those examples from P for which attribute a has value v, thus:

Pav = { x ∈ P | a(x) = v }.
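The Pav definition can be sketched in Java - a small generic illustration; the class & method names are my assumptions:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// a sketch of Pav: the subset of examples from P for which attribute a has value v.
public class AttributeSubset {

  // an attribute a is any function on the domain X; v is one of its values.
  public static <X, A> List<X> pav(List<X> p, Function<X, A> a, A v) {
    return p.stream()
        .filter(x -> a.apply(x).equals(v))
        .collect(Collectors.toList());
  }
}
```

For instance, with words as examples and 'first letter' as the attribute, pav selects exactly the words starting with a given letter.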

Example 1. Points on a plane:

Let's consider a domain X = R2 that is a two-dimensional plane. Examples are points on that plane. Each of examples can be described using two continuous attributes:

a1 : X → R and a2: X → R

that specify the proper cartesian coordinates of this point relative to the assumed coordinate system.

Similarly, a domain can be assumed to be space Rn for any specified value n ≥ 1.

Example 2. Binary Strings.

Let's consider a domain X = {0,1}n for a given value n ≥ 1. We can assume, that all examples from this domain are n-element binary strings.

Examples are naturally described as n attributes:

a1: X → {0,1}, a2: X → {0,1}, ..., an: X → {0,1}, where:

for each x ∈ X and for each i = 1, 2, ..., n - value ai(x) describes i-th element of a string x.

In this example, we can equalize examples with attribute vectors, and for convenience we can use notation xi instead of ai(x).

Example 3. Geometric Shapes.

Let's consider a domain consisting of colorful geometrical shapes with differing sizes and shapes. Examples from this domain we can describe with the following attributes:

size: ordinal attribute with values: small, medium, large,
color: nominal attribute with values: red, blue, green,
shape: nominal attribute with values: circle, square, triangle.

Example 4. Weather.

Let's consider a domain consisting of possible weather states. Each of examples from this domain we can describe with following attributes:

aura: nominal attribute with values: sunny, cloudy, rainy,
temperature: ordinal attribute with values: cold, moderate, warm,
humidity: ordinal attribute with values: normal, high,
wind: ordinal attribute with values: weak, strong.

Example 5. Cars.

As another example, we'll consider a domain whose elements are car models available on the market. We'll assume that examples from this domain can be described with the following attributes:

class: ordinal attribute with values: small, compact, large,
price: ordinal attribute with values: small, moderate, high,
performance: ordinal attribute with values: weak, average, good,
unfailability: ordinal attribute with values: small, average, high.

Learning Concepts.

Concepts are one of the forms of our knowledge about the world, used to describe & interpret sensory observations & abstract ideas.

With a concept of 'chair', we can point out, in a large set of various furniture, those pieces that are 'chairs' and those that are not - even if both groups contain furniture pieces of differing size, color, number of legs, and materials.

In the most basic case, a concept specifies a division of the set of all considered objects, or domain, into two categories:
- objects belonging to a concept (positive examples),
- objects not belonging to a concept (negative examples).

Sometimes it's convenient to consider multiple concepts defined on the same domain; we'll call this a 'multiple concept'.

A 'multiple concept' describes a domain division into categories, each of which corresponds to one of the 'single concepts'.

def. Concept: Let's assume that on a domain there may be specified a class of concepts, denoted CC. Each concept c ∈ CC is a function c : X → C, where C denotes a finite set of categories of concepts of the class CC.

In the case of 'single concepts' we'll assume C = {0,1}. In the case of 'multiple concepts' C might be any finite set of categories with |C| > 2.

A 'single concept' describes a subset of a domain, consisting of the 'positive examples' of this concept:

XC = { x ∈ X | c(x) = 1 }.

In the general case, for a category d ∈ C, a certain concept c and any set of examples P ⊆ X, we adopt the notation Pcd for those examples from P that belong to category d, thus:

Pcd = { x ∈ P | c(x) = d }.

We may omit c in this notation and use Pd.

In particular, for a single concept c, the set X1 = XC is the set containing all of its positive examples, and the set X0 = X - X1 is the set of all of its negative examples.
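
The notation above can be sketched as a small filtering function; the 'is even' concept is only an assumed toy illustration:

```python
# P_cd = { x in P | c(x) = d }: those examples from P that concept c
# assigns to category d.

def category_subset(c, d, P):
    return {x for x in P if c(x) == d}

def c(x):
    """A toy single concept on integers: x is a positive example if even."""
    return 1 if x % 2 == 0 else 0

domain = {1, 2, 3, 4, 5, 6}
positives = category_subset(c, 1, domain)  # X1: all positive examples
negatives = category_subset(c, 0, domain)  # X0 = X - X1
```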

Example 6. (Rectangles on a plane).

For the domain of points on the plane R² introduced in example 1, we can consider a concepts class CC represented by all rectangles with sides parallel to the coordinate system's horizontal and vertical axes.

With a rectangle representing any concept c ∈ CC we can associate the coordinates of its 'bottom-left' and 'top-right' points, respectively (lC, dC) and (rC, uC).

Then the positive examples set of a concept c is defined as the set of all points inside or on the border of this rectangle:

XC = { x ∈ X | lC ≤ a1(x) ≤ rC ∧ dC ≤ a2(x) ≤ uC }.

A concept represented by a rectangle, along with several positive examples (filled circles) and negative examples (empty circles), is shown in the following image.
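
A membership test for such a rectangle concept, as a minimal sketch in Python (the constructor name is an assumption):

```python
# A rectangle concept c with 'bottom-left' (l, d) & 'top-right' (r, u):
# x is a positive example when l <= a1(x) <= r and d <= a2(x) <= u.

def rectangle_concept(l, d, r, u):
    def c(x):
        return 1 if l <= x[0] <= r and d <= x[1] <= u else 0
    return c

c = rectangle_concept(0.0, 0.0, 4.0, 2.0)
# points on the border count as positive examples, e.g. c((0.0, 2.0)) == 1
```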


Example 7. (Boolean Functions).

For the domain of n-element binary strings introduced in example 2, concepts might be represented by n-argument Boolean functions.

Definitions of these functions have the form of a logical formula, in which literals (atomic formulas joined into complex formulas by logical connectives) are attributes (whose values '0' or '1' are interpreted as logical 'false' or 'true').

More precisely, in a definition of c(x) there can occur positive literals ai(x) and negative literals ¬ai(x) for each i = 1, 2, ..., n.

Positive examples are those domain elements for which the corresponding formula is satisfied.

For n = 5, example definitions might be:

  c1(x) = a1(x) ∨ ¬a3(x) ∧ (a4(x) ∨ a5(x)),
  c2(x) = ¬a5(x),
  c3(x) = a2(x) ∧ a4(x).
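
These three definitions can be sketched directly as Python predicates over 5-element binary tuples; note that '∧' binds tighter than '∨', so c1 reads as a1 ∨ (¬a3 ∧ (a4 ∨ a5)):

```python
# x is a 5-element binary tuple; x[i-1] plays the role of ai(x).

def c1(x):
    return int(x[0] or (not x[2] and (x[3] or x[4])))

def c2(x):
    return int(not x[4])

def c3(x):
    return int(x[1] and x[3])
```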

Example 8.

For the domain of geometric shapes introduced in example 3, we can consider a concepts class CC consisting of all possible single concepts for this domain.

If we assume that this domain is finite, then |CC| = 2^|X|.

Certain of these concepts might have a reasonable interpretation for us, for example 'shapes that resemble fruits' or 'small shapes'.
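
For the shapes domain this count can be checked directly; with 3 values per attribute the domain has 3·3·3 = 27 examples, hence 2^27 possible single concepts:

```python
# Every subset of a finite domain X is a possible single concept,
# so |CC| = 2 ** |X|.

from itertools import product

domain = list(product(["small", "medium", "large"],
                      ["red", "blue", "green"],
                      ["circle", "square", "triangle"]))
concept_count = 2 ** len(domain)  # number of possible single concepts
```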

Example 9.

For the domain of weather states introduced in example 4, we'll consider a concept class CC consisting of all single concepts that can be specified for this domain. A selected few of these concepts might have meaningful interpretation from our perspective - such as 'typical Mediterranean weather' or 'weather good for sailing'.

Hypotheses for Learning Concepts.

For a given domain and concepts class there is specified - depending on the learning algorithm used - a space of possible hypotheses, denoted HH.

The hypotheses space consists of all hypotheses that the student can construct.

Every hypothesis h ∈ HH, just like every concept, is a function that assigns categories to examples, so we can write:

h: X → C.

The result of learning is always a selection of a hypothesis from HH, considered the best for the given training examples (and possibly inborn knowledge as well).

Precise learning of every target concept c ∈ CC is possible only if CC ⊆ HH. Then it's true that c ∈ HH - that the hypotheses space contains a hypothesis identical to the target concept.

In practice, for certain algorithms, we have however HH ⊂ CC, and we have no certainty that we can learn a target concept. This does not mean, however, that we should strive to equip the student with the richest hypotheses space possible - because this would hinder the learning process.


(to be continued/rewritten as needed or necessary, when/if i can).


Artificial Intelligence Research Fields.


In this article we'll cover the four main areas of the AI Research.


One of the earliest & still developed currents in AI Research is 'automatic inference mechanisms'.

Based on the achievements of 'formal logic', these strive for 'effective deduction algorithms'.

It's about the mechanical formulation of logical consequences of a knowledge base, with the use of 'inference rules'.

This field uses many formal systems, such as:
- propositional calculus,
- first-order predicate calculus,
- non-classical logics.

Automatic inference methods are the base for expert systems and for theorem-proving systems.


Issues of searching large spaces in an effective manner have uses in Artificial Intelligence Research.

The problem-solving paradigm used in this field assumes that a problem is characterized by a space of possible states, with a certain number of distinct end states that represent acceptable solutions, and by a set of operators that allow movement through this space.

Finding the (best) solution can be reduced to finding a (best according to certain criteria) sequence of operators that leads from the initial problem state to one of the end states.

Search is about finding the optimal solution at the smallest possible cost, including memory use & computing time. In the case of board games this might be about finding a move that maximizes the chance of victory, considering the actual situation on the board & possible player moves. This leads to searching the 'game tree' constructed by considering possible player moves, then possible opponent moves, etc.

Both in searching for a problem solution and in board games, exhaustive search (of the complete state space or the complete game tree) is beyond our means (with the exception of trivially small problem spaces or boards). That's why heuristic methods are developed for searching; these do not guarantee an effectiveness increase in the pessimistic case, but significantly improve effectiveness on average. They are based on the use of 'heuristic functions', carefully designed by a system's constructor for numeric estimation of states' quality (based on the distance from an acceptable end state) or for numeric estimation of a game situation (considering the chances of a win).
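
A minimal greedy best-first search sketch, assuming a toy problem; the function names & the integer state space are illustrative only:

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Expand states in order of the heuristic value h(state)."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:          # reconstruct the operator sequence
            path = []
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy problem: reach state 7 from state 0 with operators +1 & +2;
# h estimates the remaining distance to the goal state.
path = best_first_search(0, 7,
                         neighbors=lambda s: [s + 1, s + 2],
                         h=lambda s: abs(7 - s))
```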


Planning is to a certain extent a result of joining automatic inference mechanisms with problem solving.

From a 'planning system' we expect - in the most basic case - to find a plan of a problem solution in a manner more effective than when using the 'search method', even with heuristics. A 'planning system' can search the space using knowledge about individual operator effects that is provided to it. This knowledge, contained usually in a certain logical formalism (for example, in a subset of a predicate language), describes problem state changes that come into effect after using a certain operator. This enables inference about states reachable from the initial state using different operator sequences. Occasionally this eliminates the need for search, and in most cases it at least significantly reduces the search scope.

It's said about Intelligent Planning Systems that they 'infer about their own actions'.


We can have possible logical formulas (individually representable, for example, as expression trees or series of 0's & 1's) represented as a finite state automaton graph, with each node a possible state (a possible logical formula). This graph has certain 'operator transitions' that transform a single formula or many formulas into other formula(s) - moving from a certain graph state into another. Knowledge of our operators can describe certain 'shortcuts' in this state graph, which can make searching for an acceptable end state more effective. A single 'operator' can define zero, one or many 'shortcuts' as well.


The AI Learning Field, described in the article 'Artificial Intelligence Learning Aspects', can be used to construct a Weak AI that 'behaves rationally' to fulfill its task.

Systems that learn can adapt to a situation and can be autonomous in action.

There are connections & common points between AI Learning & other AI Research Fields, mostly with 'automatic inference' & 'heuristic search'.

See also, if You wish or need, ... : Finite State Automatons and Regular Expression Basics.

Source: [52].


'Weak' & 'Strong' Artificial Intelligence.

Weak Artificial Intelligence.

'Weak Artificial Intelligence' can be described as:
- System that 'thinks' rationally,
- System that 'behaves' rationally.

Most interested in a 'Weak AI' are Computer Scientists & Developers.

What is 'rational' in this context?

Let's assume that it's succeeding in solving certain complex tasks well - tasks that, when handled by a human, would require a significant amount of work & intellect.

Strong Artificial Intelligence.

'Strong Artificial Intelligence' can be described as:
- System that 'thinks' as a human,
- System that 'behaves' as a human.

Most interested in a 'Strong AI' are Psychologists & Philosophers.

What is 'human' in this context?

Scientists of the 'Strong AI' field have ambitious & far-reaching goal of constructing artificial systems with a near-human or human-exceeding intellect.

This artificial system is assumed to work in a similar, but not necessarily identical, way as human thinking processes.

Certain opinions explore the idea of a 'Strong AI' being aware of its existence and being capable of fighting for its continued survival.

A 'Strong AI' can use 'Weak AI(s)' advice as well, as input data.

i read in an article on the Internet that the goal of 'Strong AI' construction will be achieved in about three decades.

Is it true?

Can't confirm or deny this hypothesis, as of yet.


There are many ways to achieve both a 'Strong AI' and a 'Weak AI', and both can be achieved using similar technology.

For example, the 'Neural Network' technology can be used to create both 'Strong AI' and 'Weak AI'.

The difference between creating a 'Weak AI' and the 'Strong AI' - both using Neural Network technology - is essentially a question of how complex the Neural Network is.

A simple Neural Network is capable of a 'Weak AI', and we would require a complex Neural Network for a 'Strong AI'.

Source: [52], the Internet.


If you want peace, prepare for war.

'If you want peace, prepare for war'.

It's a truth known to the whole World.

i think it's true, because showing weakness invites enemy forces to strike earlier.

That's why blog's author wishes to study:
- Hacking & Cryptography, Quantum Cryptography as well,
- Cyber Terrorism; Intelligence & Counterintelligence efforts, including Sabotage,
- Artificial Intelligence,
- Nuclear & War Politics of Arabian Countries, as well as Missiles & Interception Missiles Developments - Mostly Long Range Missiles, NATO SHIELD & related, as well,
- Advanced Mathematics, Quantum Physics & Nanotech.

These are Very Dangerous fields, not to be neglected.

i (Andrzej Wysocki, neomahakala108@gmail.com) am an amateur hacker, but i wish to work in Cyber Security in the EU NATO Structure, probably in a small Company or a Corporation soon - located in Warsaw, Poland - if necessary, in more than one Company or Corporation.

My main concern is Cyber Terrorism & its threat to World's Peace, especially when Quantum Computers start to create the Cipher Crisis, resulting probably in Economic Crisis & other threats as well.

Later in life (in about 12-20 years, i think) i wish to create a Corporation, NIDAN Software.


Machine Learning Theory & Related Sciences.


Artificial Intelligence Theory is a broad knowledge field; it's a field of Computer Sciences as well.

There are many approaches & theories of AI Software Construction & Research; these knowledge sets are largely distinct, but still overlap to a certain degree.

This means that a single AI Application can use many theories as appropriate for a given solution, joining them together as needed & necessary.

Three Streams in Machine Learning.

There are three streams in Machine Learning Theory:
- Theoretical Stream,
- Biological Stream,
- Systems Stream.

Theoretical Stream covers abstracted & simplified common thinking patterns found in many approaches of the machine learning field; it approximates & estimates the difficulty of learning, the amount of time needed for AI to learn, the amount & quality of information needed for learning, and the quality of knowledge possible to be acquired & learned by the AI.

Biological Stream covers modelling systems that learn, found in living organisms - in humans & animals - on different levels of their structure, from a single cell to the central nervous system; this stream is closer to Biology & Psychology than to strictly-understood Computer Sciences.

Systems Stream covers algorithm design & use, as well as the construction, research & use of their implementations; in simpler words, it's a catalog of basic AI algorithms, as well as their analysis.

Related Sciences.

While an Artificial Intelligence field can be joined with any of the sciences to a certain degree, there are sciences that have much in common with AI works.

AI Research has been most affected by:
- Probability Theory,
- Information Theory,
- Formal Logic,
- Statistics,
- Machine Control & Automation Theory,
- Psychology,
- Neurophysiology.

Probability Theory has uses both in the Theoretical Stream and in the Systems Stream; it is part of the mathematical apparatus used in learning algorithm analysis, and it is the basis for many different probabilistic inference mechanisms that have many uses.

Information Theory has uses in the design of certain learning algorithms. Occasionally hypothesis selection uses Information Theory for setting hypotheses' quality criteria. The Learning by Induction problem can also be seen as one of properly coding training information.

Formal Logic affected machine learning systems research, mostly as the basis for symbolic representation methods, especially those using rules; there are parts of machine learning that use Formal Logic more directly, however - for example, Logic Programming with Induction, or Learning by Clarification, which uses elements of logical deduction.

Statistics provides methods of data analysis & related conclusions about data; training information can be represented in the form of a statistic, using a rich mathematical apparatus for that.

Machine Control & Automation Theory is about methods of automatic control & design for different classes of objects & processes; a regulator is a machine part that affects a controlled object or process to achieve or maintain a certain desired state; a regulator's task is often to send a sequence of instructions that leads to meeting certain task criteria, so it's related to 'AI Skill' in that respect.

Psychology, among other things, studies the learning process found in humans & animals; the most important discovery for machine learning that psychology provided is 'learning with reinforcement (amplification)'.

Neurophysiology is a science of nervous systems, both human & animal; there are significant traces of its discoveries found in subsymbolic data representations; neurophysiology is an inspiration for neural networks, as well as for the loosely related 'approximation function'.


Risk Management in IT Sector.


IT Projects are involved with Risks, especially larger, ambitious, more complex Projects.

An IT Project is a failure when it exceeds its set amount of time or cost.

The majority of IT Projects are failures in that respect (about 70-90% of them all).

IT Projects Risk should be Managed, as a part of Project Management efforts.

There are other threats as well, for example writing an incorrect (not meeting the customer's requirements), insecure, or bugged application.

Low Code Quality also poses a risk, for this type of software - even if cheap at first - carries greater modification & correction costs - measured in time & cash.

As Reality & Markets change, Companies need to Adapt, software needs to be rewritten anyway.

How to handle the Risks?

1. At the beginning, do a brainstorming session with employees, not only developers & management; involve customers if You can (address not only risks of the technological / development type - address miscommunication, currency conversion & other risks as well),
2. Address greatest of risks first, prototyping software, checking libraries & tools as well,
3. Prepare for extra time & other costs as well; especially for the initial preparatory project phases, as well as for the last debugging ones,
4. Care for work quality & team integrity; do not change personnel too often, and do not hire untrusted personnel either,
5. Adopt Iterative Development Procedures (split the project into phases according to risks & priorities, complete one after another, modifying software between iteration versions, keep a versions archive as well), instead of a Waterfall Model.
6. Enter unproven, dangerous or Risky fronts (markets, tools) with extra assurance, insurance, time & skill,
7. Have a competent team that can cooperate & communicate efficiently over a long time,
8. Adopt a Quality Assurance Policy as well; make sure that the team adheres to it,

See also, if You wish, or need, ... : Threat Analysis, Common Problems in Software Company, Application Design, Use Cases & Automatic Testing.


Artificial Intelligence Learning Aspects.


Different types of Machine Learning Software use different building blocks. These consist of certain common parts, joined differently.

Learning Aspects.

Most basic parts of a Learning Machine are:
- a knowledge or a skill representation method,
- methods of use of a knowledge or a skill,
- a source & a form of a training information,
- a method of acquiring & perfecting a knowledge or a skill.

Knowledge Representation.

There are many methods of the Knowledge Representation for Artificial Intelligence.

These include:
- decision trees,
- rules,
- predicate logic formulas,
- probability distribution,
- finite state automatons (FSMs).

Knowledge can be represented in two ways:
- symbolic representation,
- subsymbolic representation.

Symbolic representation is direct; it can be easily understood by a human & interpreted by a machine.

Subsymbolic representation is metaphorical & complex, not so easily understood or interpreted.

Metaphorically, a symbolic representation of an animal might be its name stored as a word in a database; a subsymbolic one might be an image with a hand-written name.

Knowledge Use.

The representation method used does not always determine how knowledge is used, even if it narrows the choices in this respect.

The way of using knowledge in the context of AI is generally determined both by the representation & the AI goal(s) - the tasks of the system that learns.

Typical AI tasks include:
- classification,
- approximation.

Classification is determining where objects belong, which patterns they match.

Approximation is representing objects by members of the real numbers set.

In the case of classification we can speak about learning concepts; a concept is knowledge of belonging to a domain or to a category.

There are other AI tasks as well, among these:
- problem solving,
- sequential decision choices,
- environment modelling.

It can be that knowledge use is just passing the data representation to a user in a clean & readable form, letting the user decide what to do next, if at all.

Training Information.

The most basic forms & sources of training information can be abstracted & simplified, reducing them to the two main aspects:
- training with a supervision,
- training without a supervision.

There are more precise terms for these as well; they will be elaborated later.

It's convenient to illustrate a learning system as a machine that accepts input data in the form of information vectors (tuples), as well as responding to these with proper output. The learning process is then specifying the algorithm responsible for output data generation.

Training information instructs the student directly or indirectly.

In the case of training with supervision, where the source of training information is called a teacher, a student acquires information that specifies, in a way, correct reactions for a certain set of input vectors, as 'behavior examples' expected from the student.

In the case of training without supervision, training information is not available at all; only input vectors are given, and the student has to learn only by observing their sequences; training information is then a part of the learning algorithm - we can say that a teacher is 'built into' the student.

Often, however, it's fairly difficult to classify precisely whether we have training with supervision or without.

Fairly close to learning with supervision is 'learning with questions'; in this case training information comes from a teacher, but only as answers to questions asked by the student.

Learning by experimentation is when the student acquires information as it experiments with its environment: it can be performing certain actions (generating output) & observing the consequences.

Learning with reinforcement (amplification) is learning by experimentation that uses an additional source of training information - often called a 'critic' - that provides signals assessing the quality of the student's behavior; in this case the information is more 'valuating' (giving value, meaning, significance, direction) than training.

We do not make any of the above-mentioned learning ways too precise, because their borders are not too precise anyway; it's best to wait with details & assumptions for certain solution algorithms & uses.

Knowledge Acquisition Methods.

Using acquired training information, the system that learns generates new knowledge bits or perfects knowledge known before, to perform better at its task(s).

The Knowledge Acquisition & Perfection Mechanism is most often determined by the knowledge representation & the form of training information.

Usually there are many available algorithms for learning, associated with appropriate knowledge acquisition mechanisms.

The most popular learning mechanism is induction, a conclusion-drawing approach that abstracts unit(s) of knowledge to acquire more generalized (abstract) knowledge.

Non-induction mechanisms mostly explain & clarify knowledge bits, specifying details of the student's initial knowledge; this is used with reinforcement (amplification) - rewarding accomplishment for successful behaviors; there's analysis of the effects of certain action(s) & rewards.

An attempt at unifying many types of knowledge creation is the inferential theory of learning, which perhaps will be elaborated in future articles on this blog.

Source: [52].

Understanding & Tools such as Eclipse IDE.

Software Development & Construction uses tools that speed this process up - more than significiantly.

One of my favourite tools is Eclipse IDE, even if i use only a small part of its features.

An advice to less experienced students of this blog - there's no need to learn all of the IDE, only necessary or needed parts. i do not know most of it myself - yet i am a Professional still.

Eclipse IDE is great because of simplicity & great design, as well as thanks to its plugin system.

A recent update of Eclipse IDE (Eclipse Che 4.7) includes splitting editors into parts, potentially a very convenient tool.

When a program part is contained in a single screen, it's easier to understand - no need to scroll the window & lose part of the code from sight.

That's one of reasons why we use functions, procedures & methods in our elegant coding as well.

Splitting windows allows one to see & analyze longer code portions more easily, i think & feel.

See also if You wish, or need, ... : Few thoughts on code quality (for professionals), How code quality can be measured.


Neural Networks.

Artificial Neural Networks, also called Neural Networks, are a field of Artificial Intelligence Research, used in Computer Sciences & many other Sciences.

NNs have practical uses: they are a universal approximation system for representing multi-dimensional data sets, have the ability to learn & adapt to changing environment conditions, and the ability to abstract (generalize) acquired knowledge.

Research on living organisms' nervous systems is the basis for this systems theory & its uses in practice.

There are models of a neuron cell with:
- weighted addition of input signals - with weighted synaptic importance; input signals stimulate or reduce the neuron, and how much depends on the signal's weight measure,
- an activation function.

As neurons activate, the weights of their synaptic signals increase as well, depending on a weight measure.
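
The described model - weighted addition of inputs plus an activation function - can be sketched like this (the step activation & the numbers are assumptions):

```python
# A single model neuron: weighted sum of input signals, then a binary
# step activation. Positive weights stimulate, negative weights reduce.

def neuron(inputs, weights, threshold=0.0):
    s = sum(i * w for i, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

out = neuron([1.0, 1.0, 1.0], [0.5, 0.5, -0.4], threshold=0.5)  # fires: 0.6 >= 0.5
```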

Neurons (cells, graph nodes) & Synapses (connections) can be used in brain modelling as well, even if it's a model of a non-human, artificial machine brain with different design & properties.

Living beings' brain activity interacts both ways with a nervous system & spine, affecting muscles & more.

In machines, an artificial synaptic-neuronal system called an Artificial Brain can interact via electricity with external devices such as cameras - or more abstractly, sensors - or other electronic machines such as engines & more, as well.

Certain aspects of NN Research caught my attention for now:
- Signal Flow Graphs,
- Initial & Modified Weighted Measures of Data Importance for AI's tasks,
- Networks that Adapt,
- Patterns Recognition,
- Neuronal Competition.

See also, if You wish, or need, ... : Token Game, Stitie Space & Stitie Machine 'Sunsail'.

Source: [53], insights.


Idempotence, Invariants, State, Observable Moments.

Idempotence & State.

There's confusion & ambiguity with idempotence definitions in Computer Sciences & Mathematics.

In Mathematics, an unary operation (or function) is idempotent if, whenever it is applied twice to any value, it gives the same result as if it were applied once; i.e., ƒ(ƒ(x)) ≡ ƒ(x). For example, the absolute value function, where abs(abs(x)) ≡ abs(x).

There's more in this Wikipedia article as well.

In the context of this blog, we can call an object's method call 'strongly idempotent' when calling it multiple times - any number of calls - gives the same result, on condition that the passed parameters are the same.

In the context of this blog, we can call an object's method call 'weakly idempotent' when calling it multiple times - any number of calls - gives the same result, on condition that the object's state is the same between (& at) the method calls & the passed parameters are the same; that is, this requires independence from external conditions such as database data or the CPU Clock's state, for example.
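
A small sketch of the two notions, assuming Python; abs() is strongly idempotent in the sense above, while Counter.peek() is only weakly idempotent (it depends on the object's state), & Counter.next() is not idempotent at all, because it changes that state between calls:

```python
class Counter:
    def __init__(self):
        self.n = 0

    def peek(self):
        # weakly idempotent: same result as long as state & parameters are same
        return self.n

    def next(self):
        # not idempotent: the call itself changes the object's state
        self.n += 1
        return self.n

# strongly idempotent, independent of any object state: abs()
assert abs(abs(-5)) == abs(-5)
```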


Invariants are properties of a class or method that hold all the time, or at least in all of the 'observable moments'.

For example we can have an invariant that states:
- during the observable moments, the state variable 'a' has a value of less than 5.

Observable Moments, Unobservable State.

Observable moments have something to do with Physics, but in Computing they're also related to concurrency mechanisms.

In Computer Sciences, objects' method accessors can be 'synchronized' using the Monitor Mechanism, for example; then only one thread or process can enter the method at once, during an 'observable moment'.

Other processes or threads have to wait in a queue until the initial thread or process leaves; during that wait, other threads or processes cannot access the method & state, so it's a 'nonobservable moment' for these - at least when all state-accessing methods are synchronized with the same wait queue; there are other causes for 'nonobservable moments' as well - for example, when an inner or external mechanism blocks access to the object's state & methods.
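
A sketch of the Monitor idea in Python, using a lock so that only one thread at a time is inside the state-accessing method; the class & counts are illustrative:

```python
import threading

class SafeCounter:
    def __init__(self):
        self._lock = threading.Lock()
        self.value = 0

    def increment(self):
        # only one thread can hold the lock; others wait in a queue,
        # so the state is 'unobservable' to them during the update
        with self._lock:
            self.value += 1

counter = SafeCounter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter.value is exactly 4000, regardless of thread interleaving
```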

This slightly vague explanation can be reduced to process/thread synchronization & concurrency, even if it's internally managed by an interpreter or included by a compiler in executable code. This works the same both with time division between processes or threads on a nonconcurrent CPU, and with multi-core or multi-processor architecture concurrency.

There's also the possibility of 'hardware blocks' that make parts of the code 'nonresponsive', its state 'unobservable'.


Relations & Similarity in Computing & Mathematics.

Data Collections & Relations, Relation Criteria.

We can organize data, for example in databases, then check how these relate to each other.

For example we can have a set of 'words':
 { 'plane', 'helicopter', 'plate', 'dish', 'cup', 'star', 'knife', 'place' }.

We can define a 'greater length of word' relation, for example, that states that a 'word' is in this relation with another 'word' if it consists of more 'characters'.

For example, the word 'helicopter' with its 10 'characters' is in the 'greater length' relation with 'plate' with its 5 'characters'; but 'plate' is not in the same relation with the word 'helicopter' - we can say that this relation is 'not reversible'.

We can define other relations as well, for example a minimal amount of the same 'characters' at the same positions in a 'word'.

We can define a 'minimum of 3 same characters in a same-length word' relation; then the words 'plane', 'plate' & 'place' are in this relation; this relation is reversible.

Let's note that the amount of the same 'characters' at the same positions is 4 for the above 'words' - therefore lesser-amount criteria regarding character similarity at positions are met as well.

We can define a lot of different relation criteria - for similarity relations or for other relations - using programming languages for example.

A relation criterion is a function that accepts zero, one, two, or more objects (their ordering is meaningful & important in this case), then returns a boolean value (true or false) depending on whether the criteria are met or not; this has uses in data categorization & sorting, for example, as well as in defining conditional instructions & preconditions; perhaps more as well.
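
A relation criterion sketched in Python for the word examples above (function names are assumptions):

```python
def greater_length(w1, w2):
    """w1 is in the 'greater length of word' relation with w2 (not reversible)."""
    return len(w1) > len(w2)

def min_same_chars(w1, w2, minimum=3):
    """Same-length words with at least `minimum` same characters by position."""
    return (len(w1) == len(w2)
            and sum(c1 == c2 for c1, c2 in zip(w1, w2)) >= minimum)
```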

Relations & Similarity in the Relational Databases.

In relational databases we have a 'like' operator & more.

Relations are also called 'tables', for data meeting the 'in relation' criteria can be grouped into well-defined tables; the semantics (meaning) is that data are in relation if they are in the same table or in the same view.

If we have a table with semantic similarity, then this can be used as well.

For example, we can have an 'air vehicles' table - a similarity relation nevertheless - that can contain the { 'plane', 'helicopter' } data set; we can say that both 'plane' & 'helicopter' are similar semantically (according to their meanings) because they are 'air vehicles'.

If something is not in this table (for example: 'air drone') then this does not mean so strongly that it is dissimilar yet - only that it's in an 'uncertain dissimilarity relation'; we can call an 'uncertain dissimilarity relation' a 'weak dissimilarity relation' as well; Computers can fairly well see semantic similarities if they are programmed well & have enough well-structured data; there are fields of Artificial Intelligence responsible for data discovery & learning from databases, to handle a database's unknown or partly-unknown structure.

There's much more of mathematics in that as well, including 'Relational Model' Theory.

Similarity Relation with Regular Expressions.

Character strings can be analyzed using Regular Expressions.

Regular Expressions can be used to form 'Patterns'; then we can analyze whether a character string 'matches this pattern'.

When two or more character strings match the same pattern, then we can say they are similar that way.
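A minimal sketch of this pattern-based similarity, assuming Python's `re` module; the pattern itself is illustrative:

```python
import re

# pattern: 'pla' followed by exactly two lowercase letters
pattern = re.compile(r'pla[a-z]{2}')

def similar_by_pattern(*strings: str) -> bool:
    """Strings are similar this way when all match the same pattern."""
    return all(pattern.fullmatch(s) for s in strings)

# 'plane', 'plate' & 'place' all match the pattern,
# so they are similar that way; 'helicopter' does not match
```
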


Programs Learning.


Attempts to create programs that learn are not motivated by a desire to eliminate the effort of programmers & designers.

Attempts to create programs that learn are not a challenge to more classic software engineering.

Attempts to create programs that learn are not motivated by the challenges of software complexity - that is solved by modern analysis & design instead.

Attempts to create programs that learn are motivated by the complexity of certain types of tasks given to be solved by software, which hinders or makes it impossible to formulate correct & fully detailed algorithms that solve these problems.

Intuition & Imagination.

A program that learns can be imagined as an abstract algorithm that can be parametrized to become complete. Learning is then acquiring the proper parameters that make it a detailed, concrete algorithm that solves the tasks given by a software constructor. Parameter acquisition uses historical data.
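This intuition can be sketched with a deliberately trivial task - a minimal sketch, where the abstract algorithm is a classifier and the acquired parameter is a threshold; all names & data are illustrative:

```python
def classify(x: float, threshold: float) -> bool:
    """Abstract algorithm; `threshold` is the parameter to be learned -
    it is not known beforehand."""
    return x >= threshold

def learn_threshold(history: list[tuple[float, bool]]) -> float:
    """Acquires the parameter from historical (value, label) pairs:
    the midpoint between the largest 'small' value and the
    smallest 'large' value."""
    large = min(x for x, label in history if label)
    small = max(x for x, label in history if not label)
    return (large + small) / 2

# historical data plays the role of 'training information'
history = [(1.0, False), (2.0, False), (8.0, True), (9.0, True)]
threshold = learn_threshold(history)
# `classify` is now a concrete, detailed algorithm
```
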

Hypothesis, Knowledges & Skills.

Parameters acquired during the learning process are called - depending on their type & on the assumed point of view - 'knowledge' or 'skill'.

Each of the parameters acquired during autonomous learning is often called a 'hypothesis', coming from a 'hypothesis space' that contains every hypothesis that a student can use to perform the task(s). This terminology emphasizes both the uncertain state of the knowledge or skill acquired by a student, as well as the fact that it is acquired autonomously. The uncertain state of a skill or knowledge makes it non-infallible for the task(s) given.

The difference between knowledge & skill is fairly fluid - it has an unstrict character.

It's more a skill than knowledge when we require the program to perform a certain sequence of operations that is acquired during the learning process; this is also called 'procedural knowledge'.

It's more a knowledge when we can say it's a selection for a certain case, a choice for a single decision. We can discover how to interpret certain input data - for example, its type or how it's related to other objects; this is also called 'declarative knowledge'.

Knowledge or skill alone is also called knowledge in the lesser sense; knowledge together with skill is also called knowledge in the wider sense.

Initial & Acquired Parameters.

We do not know the perfect initial parameters for the task(s) - otherwise we would not need AI; we still use initial parameters to start the learning with.

During the learning process a change occurs - parameters are acquired, stored & used; this change occurs because of 'experiences', and we can treat these 'experiences' as 'training information' in this case, at least.

Source: [52].

See also, if You wish or need, ... : Learning Definition.

Learning Definition.

Artificial Intelligence is a software component of a computer system that learns.

def. Learning of a computer system is any autonomous change in this system that occurs because of experiences & leads to a performance increase.

An Autonomous Change is a change that the system introduces in itself, not because of an external factor such as recompilation using a compiler with better optimization, hardware changes, etc.

A Performance Increase depends on well-defined Quality Criteria; it should & can be measured using strict & logical means.

A Performance Decrease is, for example, forgetting - data loss; not every change leads to a performance increase.

Experience is data acquired from observation or experimentation performed by the AI.

A system that learns is also called: a Student.
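A Quality Criterion & a measured performance increase can be sketched as follows - a minimal sketch, assuming accuracy on a fixed test set as the criterion; the student's parameters & data are illustrative:

```python
def accuracy(predict, test_set) -> float:
    """Quality criterion: the fraction of correct answers on a test set -
    a strict, measurable quantity."""
    correct = sum(1 for x, label in test_set if predict(x) == label)
    return correct / len(test_set)

test_set = [(1, False), (4, False), (6, True), (9, True)]

# quality before the change (initial parameter) vs after (acquired one)
before = accuracy(lambda x: x >= 8, test_set)
after  = accuracy(lambda x: x >= 5, test_set)
# a performance increase: after > before; a change in the other
# direction (e.g. forgetting) would measurably decrease accuracy
```
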

Source: [52].


Systems that Learn, Artificial Intelligence.

'Machine Learning' is a field of Computer Science that considers Artificial Intelligence Software & its construction.

This software has the capability to learn, or in simple terms: the capability to increase the quality of its task performance, based on past experiences.

A program that learns can be imagined as a program that uses an abstract algorithm that needs concretization to perform certain tasks. Such an algorithm needs to be filled with details that are not known beforehand.

Learning is transforming these 'empty places', by choosing the proper parameters (details) to fill them, into an algorithm that fulfills the needs of the constructor.

These parameters, acquired during the learning process, are named 'knowledges' or 'skills'.

Algorithms for acquiring or perfecting 'knowledges' or 'skills' are named 'Learning Algorithms'.

In the Literature there are very many learning algorithms. These can be categorized according to the data representation of the 'knowledges' or 'skills', the types of tasks for which the 'knowledges' or 'skills' are used, as well as by the method(s) of acquiring the 'experiences' - named 'training information'.

The main motivation for AI that learns is to handle algorithms too complex & unknown to formulate precisely, including software handling unknown environment influences - for example, robot navigation on real streets, where there are too many unknown factors, such as holes in the ground, wind, animals, weather changes, etc.

The second main motivation is handling tasks with too many parameters to handle, too expensive to execute precisely.

The third main motivation is that it is a rewarding subject, well documented in the proper literature, but still challenging. There are a lot of possibilities to earn praise, from scientific works to PhD subjects.

Source: [52].

See also, if You wish: Learning Definition, Programs Learning, Neural Networks, Artificial Intelligence Learning Aspects, An Example Design of Artificial Intelligence for Martial Arts.


Modelling the Tree of Life.

The Qabalistic Tree of Life,
in the Servants of the Light organisation's Hermetic theory.

Anything can be modelled, including the Qabalistic Tree of Life of the Hermetic Qabbalah.

Anything can be modelled using Stitie Space, if/when computing resources allow.

Mindful Imaging module can be custom, well-tailored for this task as well.

(an unfinished article, the coding will follow as well).


Token Game.

i am not sure what exactly the 'Token Game' is in the context of the Petri Nets, but in the context of the 'Ola AH' Programming Language & Modelling it is a software construction method that involves:

- designing software models,
- filling software models with code & the initial data,
- designing the data flow (tokens can contain 'payload' data, can be in different places at different times),
- checking & managing properties of the model graph with tokens, such as deadlock possibility, bottleneck relief management; detection, management & handling of the graph's cycles, etc.,
- model/tokens application management at the runtime,
- conditionality & events,
- probably more.

The Space that contains code & data (state) - including the above-mentioned 'tokens' - might be distributed, using GRIDs & clusters (either or both), using message queues as well.

Probably Tuple Space(s), also called 'Linda', might be used to represent conditions; the presence of a condition tuple or the lack of one might inform us whether a condition holds or not. Events occurring might model cause(s) happening. A cause-event occurrence might make condition(s) appear or disappear from the Tuple Space(s), or just trigger code to start running.

Let's think that for certain code parts to start, there's need for:
- Cause(s) occurrence(s) at the proper moment(s) in time,
- Condition(s) holding at the proper moment(s) in time.
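The causes & conditions idea above can be sketched with a deliberately minimal tuple space - a sketch only, not 'Linda' itself; all names are illustrative:

```python
class TupleSpace:
    """Minimal condition store: a condition holds while its tuple
    is present in the space."""
    def __init__(self):
        self.tuples = set()
    def put(self, t):       # a condition appears
        self.tuples.add(t)
    def take(self, t):      # a condition disappears
        self.tuples.discard(t)
    def holds(self, t) -> bool:
        return t in self.tuples

space = TupleSpace()
fired = []

def on_cause(cause, required_conditions):
    """A cause-event occurrence triggers a code part only if all of
    the required conditions hold at that moment in time."""
    if all(space.holds(c) for c in required_conditions):
        fired.append(cause)   # the code part 'starts'

space.put(('door', 'open'))
on_cause('guest arrives', [('door', 'open')])   # cause + condition: fires
on_cause('package drop', [('ramp', 'clear')])   # condition missing: no-op
```
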

'Token Game' is an 'Ola AH' Programming Language's 4GL method.

for some non-strict ideas inspired by Abstracted Petri Nets, see: Decision Filters.

see also, if You wish or need, ... : 'Talking Objects' Solutions, Object Relationships Modelling & Analysis, Neural Networks, Stitie Space & Stitie Machine 'Sunsail', Stitie Machine 1.2 'Satellite', Causes & Conditions.

(probably an unfinished post, probably will be edited in the future).

MATEN, Prism & Modelling Software Parts.

Every Software Part can be modelled.

Every Software Part can be modelled in 3D using Stitie Space.

MATEN & Prism Functionalities can be used to reform graphs, to invoke or transform different forms - for security, for speed, for task changes.

Mindful Imaging can be used to Visualize; it can be interactive, to manage forms manually, with AI hints, or to oversee management automation - with or without a proper Artificial Intelligence.

All of the above ways will have a place in the Idiomatic Programming with the 'Ola AH' Programming Language.

Both the 'Ola AH' Programming Language Concurrency Nicety & the Decision Filters are examples of Modelling Software Parts in 3D using Stitie Space.

See also, if You wish or need, ... : Agile Transformation of Information State.


Decision Filters.


There are very many ways of modelling decision-making process using computers.

Decision filters are one of these.

Weighted Precondition Sets.

We can have a set of tuples (there's a tool called Linda, also known as a Tuple Space).

Each of the tuples in this set consists of: (precondition, score).

A WPS (Weighted Precondition Set) is connected with precondition producers, as well as with postcondition receivers.

When a producer decides that a precondition is met, its tuple is added to the set.

When the sum of the score values in the set reaches or exceeds a threshold value, the set is completed - a completion event fires & the chosen tuples (the postcondition receiver decides which tuples to take; it takes as few as possible as well) are transferred to a postcondition receiver node. When all required postcondition tuples reach a receiver or a receiver/producer, a decision is made. This decision might be to produce another precondition, or something else as well.

Data flows in one direction only, in this model.
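A minimal sketch of a Weighted Precondition Set, simplified from the description above - it transfers all collected tuples on completion rather than letting the receiver pick a minimal subset; names, scores & the threshold are illustrative:

```python
class WeightedPreconditionSet:
    def __init__(self, threshold: float, on_complete):
        self.threshold = threshold
        self.on_complete = on_complete   # postcondition receiver callback
        self.tuples = []                 # collected (precondition, score)

    def add(self, precondition: str, score: float):
        """A producer decided this precondition is met."""
        self.tuples.append((precondition, score))
        if sum(s for _, s in self.tuples) >= self.threshold:
            # completion event fires; data flows one way, to the receiver
            self.on_complete(self.tuples)
            self.tuples = []

decisions = []
wps = WeightedPreconditionSet(threshold=3.0,
                              on_complete=lambda ts: decisions.append(ts))

# a game-flavored example: react when there are enough warning
# signals & forces ready
wps.add('scout reports enemy', 1.0)
wps.add('radar contact', 1.0)    # total 2.0 - below threshold, no decision
wps.add('forces ready', 1.5)     # total 3.5 - threshold exceeded, fires
```
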

Decision Filters.

By designing & connecting producers, WPS-es & receivers properly, we can model decision making in a way.

For example, we can model a part of a computer game's decision filter that way:

We can react when we have enough forces & enough warning signals, in a proper way.

Abstract Preconditions/Causes Collection.

The WPS part can be abstracted, to include other ways of handling preconditions or causes.

For example:
- whether a producer can remove a tuple when it's no longer valid or not,
- whether there's a score part in a precondition tuple at all,
- whether a tuple contains additional payload data,
- what the abstract precondition collection's completion requirements are,
- probably more options as well.

Other parts (producers, receivers) can be abstracted as well.

See also, if You wish or need, ... : Causes & Conditions.

(to be continued, probably).


Managed Concurrency Nicety in 'Ola AH' Programming Language.


Nice Concurrency in 'Ola AH' Programming Language is about processor time loans.

These loans & repayments can be managed by a 'leader' machine.

Managed Concurrency Flow Graph.

Using Stitie Space, the Concurrency Flow Graph can be built dynamically, during the application's runtime - then it can be visualized using Stitie Space's Mindful Imaging part - then it can be managed & modified as necessary.

Such Management can be automatic, or it can be done manually by an administrator during the program's runtime, to relieve the application's bottlenecks.

Threads start when preconditions are met - when there are enough input data tokens waiting for the Threads' consumption.

Threads run concurrently then, producing output data tokens at their end.

Such Thread Communication can be modelled & managed, rearranged as necessary - new producer/consumer threads can be added at processing bottlenecks; their loan/repayment strategy can be managed as well - one can think of it as of a 'slider' that changes nicety priority.

Stitie Space can be used to model processing threads graph, and data flow routes as well.

That way, concurrency bottlenecks can be relieved by assigning additional resources, which either depletes 'processing reserves' or puts stress on other application parts.

This extra 'stress cost' can occasionally be reasonable - for example, when the rest of the application waits for the bottlenecks to finish their tasks.

'Ola AH' Programming Language & Nice Concurrency Automation.

In the 'Ola AH' Programming Language, the Thread Object will have two extra properties:
- a 'nicety' value - which determines how much processor time it gets during a time period,
- a 'nicety group' value - which determines with which threads it engages in loans/repayments.
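The two properties above can be sketched together with a 'leader' allocating processor time - a minimal sketch only; 'Ola AH' does not exist yet, so the names & the slice arithmetic are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class NiceThread:
    name: str
    nicety: int        # lower nicety -> more processor time
    nicety_group: str  # loans/repayments happen within one group

def leader_allocate(threads, group: str, total_slices: int):
    """The 'leader' splits `total_slices` of processor time among one
    nicety group's threads, inversely proportional to their nicety."""
    members = [t for t in threads if t.nicety_group == group]
    weights = {t.name: 1.0 / t.nicety for t in members}
    scale = total_slices / sum(weights.values())
    return {name: round(w * scale) for name, w in weights.items()}

threads = [NiceThread('producer', 1, 'io'),
           NiceThread('consumer', 2, 'io'),
           NiceThread('logger',   4, 'background')]
shares = leader_allocate(threads, 'io', total_slices=90)
# within the 'io' group, 'producer' (nicety 1) gets twice the
# share of 'consumer' (nicety 2); 'logger' is in another group
```

Moving the 'slider' is then just changing a thread's nicety value & re-running the allocation.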

'Ola AH' Programming Language Nicety Automation for a GRID.

For GRIDs, the 'Ola AH' Programming Language will include:
- a 'worker object' class - the basic unit of code execution in a GRID, which consumes 'input data tokens' & produces 'output data tokens',
- a 'worker strategy' class for transferring code to run on a different 'worker object',
- a 'foreign strategy' field on a 'worker object', with a foreign IP Address, for a guest strategy coming from another distributed 'worker object',
- a 'workload dispatcher' object that manages 'worker strategy' exchanges & time loans/repayments between the distributed 'worker objects', according to their 'nicety' & 'nicety group'. A 'workload dispatcher' is also a 'worker object'.

See also, if You wish or need, ... : Stitie Machine 1.1 'Sunsail' & Stitie Space, Stitie Machine 1.2 'Satellite' & Stitie Space, Clusters.


The Linguistic Parser & AI.

... a year or years ago i had an insight that i should (later in my life) write the 'linguistic parser'.

what is the 'linguistic parser', however?

it's a parser that can analyze natural language - either with dictionary words, written or spoken, images, films ... - then build & model data structures in memory more or less correctly.

the 'linguistic parser' should take the dictionary meanings as well as the natural grammar into consideration.

... i imagine that both Statistics as well as Artificial Intelligence are useful with the 'linguistic parser' - i should study these as well.

Example: http://nlp.stanford.edu:8080/parser/.

i also read that in about three decades Artificial Intelligence will reach or go past the point of the human brain's capacity - it's both an exciting as well as a terrifying possibility.

with properly modelled data structures, AI can process & act on these, to fulfill its goals.


Derivative of a Function.

Definition & Notation.

def. The Derivative of a Function y = f(x) at a point x is the limit approached by the ratio of the increment of the function Δy to the increment of the variable Δx, as the increment of the variable Δx approaches zero.

if such a limit does not exist, then the function has no derivative at this point.
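The definition can also be written as a formula, in standard notation, with Δy = f(x + Δx) - f(x):

```latex
f'(x) = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}
      = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}
```
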

the derivative of a function y = f(x) we can note as:

y', f'(x), dy/dx, df(x)/dx, d/dx f(x).

Geometrical interpretation.

Geometrically, the derivative of a function is equal to the tangent of the angle α between the tangent line touching the graph of the function at the point x and the positive direction of the OX axis.

Additional notes.

Finding the derivative of a function is called 'function differentiation'.

From the definition we can see that the derivative of a function is the quickness (rate) of change of the function f(x), as x changes.


Let's calculate a few function derivatives, using the definition.

1. y = sin(x).

(sin x)' = lim_{Δx→0} [sin(x + Δx) - sin(x)] / Δx = lim_{Δx→0} [2 cos(x + Δx/2) sin(Δx/2)] / Δx = cos(x).

We've used one of the trigonometrical formulas (the last one, for the sines difference):

sin(a) - sin(b) = 2 cos((a + b)/2) sin((a - b)/2),

... as well as the limit:

lim_{t→0} sin(t)/t = 1.

Similarly: (cos x)' = -sin(x);

2. y = ax^3, where a is any constant.

(ax^3)' = lim_{Δx→0} [a(x + Δx)^3 - ax^3] / Δx = lim_{Δx→0} a(3x^2·Δx + 3x·Δx^2 + Δx^3) / Δx = 3ax^2.

We've used Newton's Formula (the binomial expansion):

(x + Δx)^3 = x^3 + 3x^2·Δx + 3x·Δx^2 + Δx^3.

3. y = x^n, n - a natural number.

(x^n)' = lim_{Δx→0} [(x + Δx)^n - x^n] / Δx = n·x^(n-1).

We've used Newton's Formula (the binomial expansion) again:

(x + Δx)^n = x^n + n·x^(n-1)·Δx + ... + Δx^n.

The same formula for the derivative, (x^k)' = k·x^(k-1), we can use for any real k.

(an unfinished article, to be continued).