
9/2/16

Artificial Intelligence Learning Aspects.

Introduction.

Different types of Machine Learning software use different building blocks. These consist of certain common parts, joined in different ways.


Learning Aspects.

The most basic parts of a Learning Machine are:
- a knowledge or skill representation method,
- methods of using the knowledge or skill,
- a source & form of training information,
- a method of acquiring & perfecting the knowledge or skill.


Knowledge Representation.

There are many methods of Knowledge Representation for Artificial Intelligence.

These include:
- decision trees,
- rules,
- predicate logic formulas,
- probability distributions,
- finite state machines (FSMs).
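
For example, a decision tree & an equivalent rule set can be written down directly as plain data structures; a minimal sketch in Python (the attributes & values below are made-up illustrations, not from the source):

# A tiny decision tree for deciding whether to play outside,
# represented as nested dictionaries (an assumed, illustrative schema).
tree = {
    "attribute": "weather",
    "branches": {
        "sunny": {"decision": "play"},
        "rainy": {"attribute": "wind",
                  "branches": {"strong": {"decision": "stay_in"},
                               "weak": {"decision": "play"}}},
    },
}

# The same knowledge flattened into if-then rules: (conditions, decision).
rules = [
    ({"weather": "sunny"}, "play"),
    ({"weather": "rainy", "wind": "strong"}, "stay_in"),
    ({"weather": "rainy", "wind": "weak"}, "play"),
]

def classify(example, node):
    # Walk the tree until a leaf with a decision is reached.
    while "decision" not in node:
        node = node["branches"][example[node["attribute"]]]
    return node["decision"]

print(classify({"weather": "rainy", "wind": "weak"}, tree))  # -> play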

Knowledge can be represented in two ways:
- symbolic representation,
- subsymbolic representation.

Symbolic representation is direct, can be easily understood by a human & interpreted by a machine.

Subsymbolic representation is metaphorical & complex, not so easily understood or interpreted.

Metaphorically, a symbolic representation of an animal might be its name stored as a word in a database, while a subsymbolic representation might be an image with a hand-written name.
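
A rough sketch of the difference (the numbers in the vector are invented placeholders): the same 'cat' held symbolically as a readable label & subsymbolically as a numeric feature vector:

# Symbolic representation: a readable label a human can inspect directly.
symbolic_cat = "cat"

# Subsymbolic representation: a vector of numbers (e.g. pixel intensities
# or learned features); the individual values carry no obvious meaning.
subsymbolic_cat = [0.12, 0.98, 0.05, 0.77, 0.31]

print(symbolic_cat)       # immediately meaningful to a human
print(subsymbolic_cat)    # meaningful only to the model that produced it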


Knowledge Use.

The representation method used does not always determine how the knowledge is used, even if it narrows the choices in this respect.

The way knowledge is used, in the context of AI, is generally determined both by the representation & by the goal(s) & tasks of the system that learns.

Typical AI tasks include:
- classification,
- approximation.

Classification is determining where objects belong, which patterns they match.

Approximation is mapping objects to members of the set of real numbers.

In the case of classification we can speak of learning concepts; a concept is knowledge of belonging to a domain or to a category.
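
A minimal sketch of the two tasks (the thresholds & formula below are illustrative assumptions): classification returns a category, approximation returns a real number:

# Classification: assign an input to one of a finite set of categories.
def classify_temperature(celsius):
    return "cold" if celsius < 10 else "warm" if celsius < 25 else "hot"

# Approximation: map an input to a real number, here a rough
# Celsius -> Fahrenheit estimate fitted elsewhere.
def approximate_fahrenheit(celsius):
    return 1.8 * celsius + 32.0

print(classify_temperature(7))      # -> 'cold'  (a category)
print(approximate_fahrenheit(7))    # -> 44.6    (a real number)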

There are other AI tasks as well, among these:
- problem solving,
- sequential decision choices,
- environment modelling.

It may also be that knowledge use amounts to passing the data representation to a user in a clean & readable form, letting the user decide what to do next, if anything.


Training Information.

The most basic form & source of training information can be abstracted & simplified, reducing it to its two main aspects:
- training with supervision,
- training without supervision.

There are more precise terms for these as well; they will be elaborated later.

It's convenient to picture a learning system as a machine that accepts input data in the form of information vectors (tuples) & responds to these with a proper output. The learning process is then the process of specifying the algorithm responsible for generating the output data.
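
A minimal sketch of this view (the class & method names are assumptions made for illustration): the machine maps input vectors to outputs, & learning means adjusting that mapping:

# A learning system viewed as a machine: input vector in, output out.
# Here the 'knowledge' is just a weight vector & learning adjusts it.
class LearningMachine:
    def __init__(self, n_inputs):
        self.weights = [0.0] * n_inputs   # the acquired knowledge

    def respond(self, x):
        # Knowledge use: produce an output for an input vector.
        return sum(w * xi for w, xi in zip(self.weights, x))

    def learn(self, x, target, rate=0.1):
        # Knowledge acquisition: nudge weights toward the desired output.
        error = target - self.respond(x)
        self.weights = [w + rate * error * xi for w, xi in zip(self.weights, x)]

machine = LearningMachine(n_inputs=2)
for _ in range(50):
    machine.learn([1.0, 2.0], target=5.0)
print(round(machine.respond([1.0, 2.0]), 2))  # approaches 5.0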

Training information instructs the student either directly or indirectly.

In the case of training with supervision, where the source of training information is called the teacher, the student acquires information that specifies, in a way, correct reactions for a certain set of input vectors - the 'behavior examples' expected from the student.

In the case of training without supervision, such training information is not available at all; only input vectors are given & the student has to learn by observing their sequences alone; training information is then a part of the learning algorithm, so we can say that the teacher is 'built-in' into the student.
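
The difference in the form of training information can be sketched like this (the data values are made-up placeholders):

# Training with supervision: (input vector, expected reaction) pairs.
supervised_data = [
    ([0.9, 0.1], "class_a"),
    ([0.2, 0.8], "class_b"),
]

# Training without supervision: input vectors only; any structure
# (clusters, regularities) must be discovered by the student itself.
unsupervised_data = [
    [0.9, 0.1],
    [0.2, 0.8],
    [0.85, 0.15],
]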

Often, however, it's fairly difficult to classify precisely whether we have training with supervision or without it.

Fairly close to learning with supervision is 'learning with questions'; in this case training information comes from a teacher, but only as answers to questions asked by the student.
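
A small sketch of this setting (the teacher function, threshold & numbers are invented for illustration): the student asks the teacher only about inputs it is genuinely unsure of:

# The teacher answers the student's questions: here, an oracle that
# labels numbers as 'small' or 'large' using a hidden threshold.
def teacher(x):
    return "large" if x >= 10 else "small"

known = {2: "small", 30: "large"}      # labels the student already has
candidates = [5, 8, 12, 25]

# The student asks only where it is uncertain: between its largest
# known 'small' example & its smallest known 'large' example.
lo = max(x for x, label in known.items() if label == "small")
hi = min(x for x, label in known.items() if label == "large")
for x in candidates:
    if lo < x < hi:                    # uncertain region -> ask a question
        label = teacher(x)             # training information = the answer
        known[x] = label
        lo, hi = (max(lo, x), hi) if label == "small" else (lo, min(hi, x))

print(known)   # 25 is never asked about - by then it lies outside the uncertain region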

Learning by experimentation is when the student acquires information as it experiments with its environment; this can mean performing certain actions (generating output) & observing the consequences.

Learning with reinforcement (amplification) is learning by experimentation that uses an additional source of training information - often called the 'critic' - which provides signals assessing the quality of the student's behavior; in this case the information is more 'valuating' (giving value, meaning, significance, direction) than instructing.
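
A minimal sketch of the critic's role (the actions & reward values are invented for illustration): the student tries actions, the critic only scores them & never reveals the correct action outright:

import random

# The critic: scores behavior, but never tells the student what to do.
def critic(action):
    return 1.0 if action == "right" else -1.0   # hidden preference

actions = ["left", "right"]
value = {a: 0.0 for a in actions}               # student's estimated values

random.seed(0)
for _ in range(200):
    # Mostly exploit the best-looking action, sometimes explore.
    action = random.choice(actions) if random.random() < 0.2 else max(value, key=value.get)
    reward = critic(action)                     # valuating training information
    value[action] += 0.1 * (reward - value[action])

print(max(value, key=value.get))                # -> 'right'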

We do not make any of the above-mentioned ways of learning too precise, because their borders are not too precise anyway; it's best to leave details & assumptions for particular solution algorithms & uses.


Knowledge Acquisition Methods.

Using the acquired training information, the system that learns generates new bits of knowledge or perfects the knowledge it had before, to perform better at its task(s).

The Knowledge Acquisition & Perfection Mechanism is most often determined by the knowledge representation & the form of training information.

Usually there are many available algorithms for learning, associated with appropriate knowledge acquisition mechanisms.

The most popular learning mechanism is induction, a conclusion-drawing approach that abstracts unit(s) of knowledge to acquire more generalized (abstract) knowledge.
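
A small sketch of induction (the attributes & examples are made up): from specific positive examples the student keeps only what they all share, producing a more general description:

# Inductive generalization: keep only the attribute values shared by
# all positive examples (a tiny find-S-style sketch).
examples = [
    {"color": "red", "size": "small", "shape": "round"},
    {"color": "red", "size": "large", "shape": "round"},
    {"color": "red", "size": "small", "shape": "round"},
]

concept = dict(examples[0])
for ex in examples[1:]:
    for attribute in list(concept):
        if concept[attribute] != ex[attribute]:
            del concept[attribute]          # drop details that vary -> generalize

print(concept)   # {'color': 'red', 'shape': 'round'} - more general than any single example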

Noninductive mechanisms mostly explain & clarify bits of knowledge, specifying details of the student's initial knowledge; they are used with reinforcement (amplification), rewarding successful behaviors & analyzing the effects of certain action(s) & the rewards received.

An attempt at unifying many types of knowledge creation is the inferential theory of learning, which perhaps will be elaborated in future articles on this blog.


Source: [52].
