9/29/16

Real-Time Systems.

Introduction.

Real-Time systems are computer systems in which successful task performance depends on two factors:
- a correct result of computation,
- completing the task before its deadline.

When a task's execution time exceeds its deadline, we can say that the system has failed.


Hard, Firm & Soft Real-time Systems.

In 'Hard Real-time Systems', missing a deadline is a total system failure. Hard real-time systems are created when missing a deadline can result in hardware damage, or cost lives or health.

In 'Firm Real-time Systems' infrequent deadline misses are tolerable, but may degrade the system's quality of service. The usefulness of a result is zero after its deadline, but no damage or personnel loss occurs.

In 'Soft Real-time Systems' the usefulness of a result degrades after its deadline, thereby degrading the system's quality of service.


Requirements.

For a computer system to meet the criteria of a 'Hard Real-time System', the requirements are:
- well-understood & fast enough hardware,
- a real-time operating system (for example: RTLinux),
- every software piece must adhere to the 'hard real-time requirements'; software must generate results within the deadline.


Software & other considerations.

In real-time systems, processor time is divided between processes by an operating system component called the 'real-time scheduler'. At its deadline a process is 'interrupted' by the scheduler, even if it didn't complete.

Since we do not want any surprises regarding the amount of time a process takes to complete, processes should be initialized & running before being given tasks, and should communicate with each other using real-time tools. We should account for all of the costs of processor time, including rescheduling & inter-process communication. We should not forget other costs that can result in performance losses, such as memory costs, as well.

In real-time computing, we are concerned mostly with the worst-case (pessimistic) task completion times, even at the cost of average performance.
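A classic illustration of this worst-case reasoning is the Liu & Layland schedulability test for a rate-monotonic scheduler: n periodic tasks, each with a worst-case execution time C and period T, are guaranteed to meet their deadlines when total utilization stays within n(2^(1/n) - 1). A minimal sketch in Python; the task set below is a made-up example, and this test is standard scheduling theory rather than part of this article:

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.

    tasks: list of (wcet, period) pairs; wcet = worst-case execution time.
    Returns True when total utilization is within the guaranteed bound.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Hypothetical task set: (worst-case execution time, period), e.g. in milliseconds.
tasks = [(1, 4), (1, 5), (2, 10)]
print(rm_schedulable(tasks))  # True: utilization 0.65 vs bound ~0.78
```

Note that this test is sufficient, not necessary: a task set over the bound may still be schedulable, but then it needs a more exact analysis.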

Since we wish to count exact amounts of processor cycles, probably the most reliable programming language for this is assembler - but the 'Ada' Programming Language & other Programming Languages can compete as well.

With the 'Ola AH' Programming Language we are concerned with 'average time costs' of software parts' performance, not with the 'Hard Real-time Standards'. With the 'Ola AH' compiler, we'll still strive for very fast solutions.


... see also, if You wish or need, ... : 'Ola AH': a Concurrent or the Real-Time Language?

9/27/16

Application Design, Use Cases & Automatic Testing.

Application Design & Use Cases.

Often, when a customer orders an application, she or he orders a collection of certain functionalities.

For example:
- logging into online email application,
- deleting all spam in spam inbox,
- logging off automatically after a given time,
- configuring email sorting preferences,
- ...

Use cases are a means of specifying these functionalities, each defined by a number of steps (click here, scroll here, type something here, read the report's field #n, etc. ...).

A minimal set of use cases often determines how the user interface should look, and is often a formal requirement for the ordered application's functionalities - it can be a part of the business contract between a customer & a developer company.


Automated Tests, Changes & Debugging.

Often tests for use cases can be automated, and can be performed after any change is introduced into the code ... just before the program's compilation, just before running the application, or at any other convenient moment.
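A minimal sketch of such an automated use case test, using Python's unittest module; the MailApp class & its operations are invented here purely for illustration:

```python
import unittest

class MailApp:
    """Toy stand-in for the ordered application; names are hypothetical."""
    def __init__(self):
        self.logged_in = False
        self.spam = ["offer!", "prize!"]

    def log_in(self, user, password):
        self.logged_in = (user == "anna" and password == "secret")
        return self.logged_in

    def delete_all_spam(self):
        if not self.logged_in:
            raise PermissionError("log in first")
        self.spam.clear()

class UseCaseTests(unittest.TestCase):
    def test_login_and_spam_deletion(self):
        # Use case: log in, then delete all spam in the spam inbox.
        app = MailApp()
        self.assertTrue(app.log_in("anna", "secret"))
        app.delete_all_spam()
        self.assertEqual(app.spam, [])
```

Such a test can be run with `python -m unittest` before every compilation or deployment, so a broken use case is caught immediately after a code change.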

By using Automated Use Case Tests, programmers can be confident that when they (or their teammates) change parts of the code, the older parts of the code (previous functionalities) that they are responsible for won't prove erroneous after the new code additions.

Automated Use Case tests often show when a part of the code is erroneous after changes, and while these are far from being 'proofs of code's correctness', they are extremely practical nevertheless. Even if not every error is caught by them, carefully designed tests can quickly find basic functionality failures. Other errors can still be found & fixed using other methods, and it's still easier to fix one error than multiple overlapping ones.

More than that - carefully designed automated tests help a programmer create a 'Mental Test Harness' that lets them more boldly & quickly make larger changes in code without inspecting the same things over & over, and without fearing application breakage so much.

This also builds Trust & Responsibility in teams - with tests it's quick & easy to find out when someone breaks another teammate's code - at an early stage of failure at that, so it can be addressed before the error grows too complex to fix quickly, before true stress, psychological dramas & employee firings start, and before the project's budget & time schedule are endangered.

In many ways, Automated Tests help to develop applications with much more speed & security, with only a small amount of extra code at the start (tests have to be designed & written too) & with a small amount of maintenance (when requirements change, tests have to be modified).

9/26/16

Selector Methods & Dynamically Created Methods in 'Ola AH' Programming Language.

Introduction.

In the 'Ola AH' Programming Language, methods will be properly typed objects that can be 'passed around', most probably with proper syntax support.

Method objects can be statically or dynamically constructed as well, with elegant syntax support.

Any of the instructions can be dynamically created as well; they will be properly typed objects that can be 'passed around' - even the 'asm' instruction, after that functionality is added to the interpreter or compiler, respectively. In 'Ola AH', every high-level instruction & standard library method will be modelled & documented properly.


Selector Methods.

Selector methods are methods that return a value of the method type, and are given in their arguments list a method collection, as well as other arguments.

Selector methods will use the existing method syntax, with no extra syntactic support - however, when a method fulfills the requirements of a selector method, we can refer to it as a 'selector method', for easier communication.


Uses.

One of the most basic use cases for a selector method is to select a method from a 'method space' depending on other argument(s), then to return it.


An Example.

void m1(char c1) { ... };
char m2(char c2, char c3) { ... };
char m3(string s1, int64 i1) { ... };

method m4 := new method();
m4.setReturnType(string);
m4.setReturnType((char, string, int64)); // replacing a string return type with a tuple.
m4.addArgument(int64, "i1");
m4.addArgument(char, "c1");
m4.removeArgument(0);
m4.insertArgument(int32, 0, "i1");
m4.addInstruction( ... );
m4.addInstruction( ... );
m4.addInstruction( ... );
m4.removeInstruction(2);
m4.insertInstruction(0, ... );

method alist mal := #{ m1, m2, m3, m4 };

method mySelectorMethod(method alist mal, int8 i) {

  assert i >= 0;
  assert i <= 3;

  return mal.get(i);

}

method m := mySelectorMethod(mal, 1);
char c := m.execute('a', 'b');
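The same idea can be sketched in a language where methods are already 'passed around' as values - here a rough Python analogue of the example above, with the dynamically constructed m4 simplified to an ordinary function:

```python
def m1(c1):
    return None

def m2(c2, c3):
    # Stand-in body: return the 'larger' of two characters.
    return max(c2, c3)

def m3(s1, i1):
    return s1[i1]

def m4(i1):
    # Stand-in for the dynamically constructed method from the example.
    return ('x', str(i1), i1)

mal = [m1, m2, m3, m4]  # the method collection

def my_selector_method(mal, i):
    # Select a method from the collection, depending on the index argument.
    assert 0 <= i <= 3
    return mal[i]

m = my_selector_method(mal, 1)
c = m('a', 'b')  # executes m2; c == 'b'
```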


See also, if You wish or need, ... : 'Ola AH' Programming Language Syntax.

9/22/16

Inductive Learning.

Introduction.

This article is a theoretical introduction to more practical examples of inductive learning algorithms, a part of Machine Learning & Artificial Intelligence.

As a prerequisite for a complete understanding of this article, familiarity with Predicate Logic is assumed.


Inductive Inference.

Let's consider the following logical consequence statement:

P ∧ W ⊧ K.

Where:
- P is the premises set,
- W is the student's inborn knowledge,
- K is the conclusions set,
- ⊧ is the logical consequence operator.

This statement says that the knowledge represented by the conclusions K is a logical consequence of the inborn knowledge W and the premises set P.

For our needs, it's convenient to interpret this statement backwards: assume that the conclusion is the training information T, and that the student, in the process of inductive inference, acquires a certain hypothesis h.

It's also convenient to give the following meanings & notations to the statement's parts:
- P : the knowledge generated during the learning process, also called hypothesis h,
- W : the student's inborn knowledge,
- K : the training information, denoted T.

Then our statement assumes the form:

h ∧ W ⊧ T.

We can say that:
- the hypothesis 'h ∧ W' explains the conclusion 'T',
- 'T' is a logical conclusion of the hypothesis 'h ∧ W',
- in this statement, the hypothesis explains the logical conclusion.

In more elaborate words:

- the training information acquired by a student is a logical consequence of the inborn knowledge and the generated hypothesis h,
- the inductive hypothesis, together with the student's inborn knowledge, explains the acquired training information.

Of course, logical consequence occurs when the inborn knowledge, training information and hypothesis are correct. In practice, we often have to depart from this assumption and settle for approximate consequence.


Finding a correct hypothesis, in light of the above, means detecting in the training data certain general regularities that, when joined with the inborn knowledge, explain that data well.

This approaches the popular understanding of inductive inference as moving from facts and individual observations to generalizations.

These facts & observations are called 'training examples', and the training information given to a student is a 'training examples set'.

The hypothesis that a student has to find, given the training information, is a generalization of the 'training examples'; its purpose is not only to explain them correctly (or approximately correctly), but more importantly, to predict new facts & observations.

There are three types of inductive learning:
- learning concepts (a way of objects classification),
- creating concepts (objects grouping, describing groups),
- learning approximations (mapping objects on real numbers).


Main Types of Inductive Learning.

The goal of inductive learning may assume different forms, depending mostly on the knowledge that has to be acquired by the inductive learning and the form of the training information given to a student.

We'll present the three most important forms of training information, on which most of the theoretical & practical work focuses, and which have the most practical uses as well.

In each form of inductive learning, the acquired knowledge is a certain type of mapping of input information onto output information.

1. Domain.

A domain is an objects set X. Objects in X are related to the knowledge acquired by a student. These objects may represent things, people, events, situations, states of things, etc. - anything that can be an argument of a mapping that the student has to learn.

2. Examples.

Each of the objects, each element of the domain x ∈ X, we'll call an example.

3. Attributes.

We'll assume that examples are described using attributes. An attribute is any function specified on a domain. We'll assume that the description of every example x ∈ X consists of the values of n ≥ 1 attributes, a1: X → A1, a2: X → A2, ... , an: X → An.

The set of all attributes specified on a domain we'll denote A = { a1, a2, ... , an } and call the 'attributes space'.

In practice we sometimes identify an example x with its attributes vector:

< a1(x), a2(x), ... , an(x) >,

So we'll call an example any element of the cartesian product of the attributes' codomains A1 × A2 × ... × An; this simplification might be misleading, but has its uses.

For convenience, this vector for an example x we'll denote <x>A.

Depending on the codomain (a values set), attributes can be divided into types.

The most basic division of attributes, sufficient for learning purposes, is as follows:
- nominal attributes: with a finite set of unordered discrete values,
- ordinal attributes: with a countable set of ordered discrete values,
- continuous attributes: with values from a real numbers set.


For each examples set P ⊆ X, attribute a : X → A and its value v ∈ A, we'll denote by Pav the set of those examples from P for which the attribute a has the value v, thus:

Pav = { x ∈ P | a(x) = v }.
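This set-builder definition translates directly into a set comprehension - a minimal sketch in Python, with a toy domain & attribute invented for illustration:

```python
def examples_with_value(P, a, v):
    """Pav = { x in P | a(x) = v } for attribute a and value v."""
    return {x for x in P if a(x) == v}

# Toy domain: integers; toy nominal attribute: parity.
parity = lambda x: "even" if x % 2 == 0 else "odd"
P = {1, 2, 3, 4, 5}
print(examples_with_value(P, parity, "even"))  # {2, 4}
```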


Example 1. Points on a plane:

Let's consider a domain X = R2, that is, a two-dimensional plane. Examples are points on that plane. Each of the examples can be described using two continuous attributes:

a1 : X → R and a2: X → R

that specify the cartesian coordinates of the point relative to an assumed coordinate system.

Similarly, the domain can be assumed to be the space Rn for any specified value n ≥ 1.


Example 2. Binary Strings.

Let's consider a domain X = {0,1}n for a given value n ≥ 1. We can assume that all examples from this domain are n-element binary strings.

Examples are naturally described by n attributes:

a1: X → {0,1}, a2: X → {0,1}, ..., an: X → {0,1}, where:

for each x ∈ X and for each i = 1, 2, ..., n, the value ai(x) describes the i-th element of the string x.

In this example, we can identify examples with attribute vectors, and for convenience we can use the notation xi instead of ai(x).


Example 3. Geometric Shapes.

Let's consider a domain consisting of colorful geometrical shapes with differing sizes and shapes. Examples from this domain we can describe with the following attributes:

size: ordinal attribute with values: small, medium, large,
color: nominal attribute with values: red, blue, green,
shape: nominal attribute with values: circle, square, triangle.


Example 4. Weather.

Let's consider a domain consisting of possible weather states. Each of examples from this domain we can describe with following attributes:

aura: nominal attribute with values: sunny, cloudy, rainy,
temperature: ordinal attribute with values: cold, moderate, warm,
humidity: ordinal attribute with values: normal, high,
wind: ordinal attribute with values: weak, strong.


Example 5. Cars.

As another example, we'll consider a domain whose elements are car models available on the market. We'll assume that examples from this domain can be described with the following attributes:

class: ordinal attribute with values: small, compact, large,
price: ordinal attribute with values: low, moderate, high,
performance: ordinal attribute with values: weak, average, good,
reliability: ordinal attribute with values: low, average, high.


Learning Concepts.

Concepts are one of the forms of our knowledge about the world, used to describe & interpret sensory observations & abstract ideas.

With the concept of 'chair', we can point out, in a large set of various furniture, those pieces that are 'chairs' and those that are not - even if both groups contain furniture pieces of differing size, color and number of legs, made of differing materials.

In the most basic case, a concept specifies a division of the set of all considered objects, the domain, into two categories:
- objects belonging to a concept (positive examples),
- objects not belonging to a concept (negative examples).

Sometimes it's convenient to consider multiple concepts defined on the same domain; we'll call this a 'multiple concept'.

A 'multiple concept' describes a domain division into categories, of which each category corresponds to one of the 'single concepts'.


def. Concept: Let's assume that on a domain a class of concepts may be specified, denoted CC. Each of the concepts c ∈ CC is a function c : X → C, where C denotes a finite set of categories of concepts of class CC.

In the case of 'single concepts' we'll assume C = {0,1}. In the case of 'multiple concepts', C might be any finite set of categories with |C| > 2.

A 'single concept' describes a subset of the domain, consisting of the 'positive examples' of this concept:

XC = { x ∈ X | c(x) = 1 }.

In the general case, for a category d ∈ C, a certain concept c and any examples set P ⊆ X, we adopt the notation Pcd for those examples from P that belong to the category d, thus:

Pcd = { x ∈ P | c(x) = d }.

We may omit c in this notation, and use Pd.

In particular, for a single concept c, the set X1 = XC is the set containing all of its positive examples, and the set X0 = X - X1 is the set of all of its negative examples.


Example 6. (Rectangles on a plane).

For the domain of points on a plane R2 introduced in example 1, we can consider a concepts class CC represented by all rectangles with sides parallel to the coordinate system's horizontal and vertical axes.

With the rectangle representing any concept c ∈ CC we can associate the coordinates of its 'bottom-left' and 'top-right' points, respectively (lC, dC) and (rC, uC).

Then the positive examples set of the concept c is defined as the set of all points inside or on the border of this rectangle:

XC = { x ∈ X | lC ≤ a1(x) ≤ rC ∧ dC ≤ a2(x) ≤ uC }.

A concept represented by a rectangle, along with several positive examples (filled circles) and negative examples (empty circles), is shown in the following image.
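The membership test of such a rectangle concept is simple to sketch in Python (the rectangle's coordinates below are invented for illustration):

```python
def rectangle_concept(l, d, r, u):
    """Return a single concept c: point -> {0, 1} for the rectangle
    with bottom-left corner (l, d) and top-right corner (r, u)."""
    def c(point):
        x, y = point  # a1(x), a2(x): the point's cartesian coordinates
        return 1 if (l <= x <= r and d <= y <= u) else 0
    return c

c = rectangle_concept(0.0, 0.0, 2.0, 1.0)
print(c((1.0, 0.5)))  # 1 - a positive example
print(c((3.0, 0.5)))  # 0 - a negative example
```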






Example 7. (Boolean Functions).

For a domain of n-element binary strings introduced in an example 2, concepts might be represented by n-argument boolean functions.

Definitions of these functions have the form of a logical formula, in which the literals (atomic formulas joined into complex formulas by logical functors) are attributes (whose values '0' or '1' are interpreted as the logical values 'false' or 'true').

More precisely, in a definition of c(x) there can occur positive literals ai(x) and negative literals ¬ai(x), for each i = 1, 2, ..., n.

The positive examples of this domain are those domain elements for which the corresponding formula is satisfied.

For n = 5, example definitions might be:

  c1(x) = a1(x) ∨ ¬a3(x) ∧ (a4(x) ∨ a5(x)),
  c2(x) = ¬a5(x),
  c3(x) = a2(x) ∧ a4(x).
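These example definitions can be sketched directly in Python, treating an example as a tuple of 0/1 attribute values, so that ai(x) becomes x[i-1]; following the usual convention, '∧' binds tighter than '∨' in c1:

```python
# x is an n-element binary string, here a tuple of 0/1 values; ai(x) == x[i-1].
c1 = lambda x: 1 if x[0] or (not x[2] and (x[3] or x[4])) else 0
c2 = lambda x: 1 if not x[4] else 0
c3 = lambda x: 1 if x[1] and x[3] else 0

x = (0, 1, 0, 1, 0)  # an example from X = {0,1}^5
print(c1(x), c2(x), c3(x))  # 1 1 1
```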


Example 8.

For a domain of geometric shapes introduced in an example 3, we can consider a concepts class CC consisting of all possible single concepts for this domain.

If we assume that this domain is finite, then |CC| = 2^|X|.

Certain of these concepts might have a reasonable interpretation for us, for example: 'shapes that resemble fruits' or 'small shapes'.


Example 9.

For the domain of weather states introduced in example 4, we'll consider a concepts class CC consisting of all single concepts that can be specified for this domain. A selected few of these concepts might have a meaningful interpretation from our perspective - such as 'typical mediterranean weather' or 'weather good for sailing'.


Hypotheses for Learning Concepts.

For a given domain and concepts class there is specified - depending on the learning algorithm used - a space of possible hypotheses, denoted HH.

The hypotheses space consists of all hypotheses that the student can construct.

Every hypothesis h ∈ HH, like every concept, is a function that assigns examples their categories, so we can write:

h: X → C.


The result of learning is always a selection of a hypothesis from HH, considered best for the given training examples (and possibly inborn knowledge as well).

Precise learning of every target concept c ∈ CC is possible only if CC ⊆ HH. Then it's true that c ∈ HH - the hypotheses space contains a hypothesis identical to the target concept.

In practice, for certain algorithms, we have however HH ⊂ CC, and we have no certainty that we can learn the target concept. This does not mean, however, that we should strive to equip the student with the richest hypotheses space possible - because this would hinder the learning process.


(...)


(to be continued/rewritten as needed or necessary, when/if i can).

9/18/16

Artificial Intelligence Research Fields.

Introduction.

In this article we'll cover the four main areas of AI Research.


Inference.

One of the earliest & still developed currents in AI Research is 'automatic inference mechanisms'.

Based on the achievements of 'formal logic', these strive for 'effective deduction algorithms'.

It's about the mechanical formulation of logical consequences of a knowledge base, with the use of 'inference rules'.

This field uses many formal systems, such as:
- propositional calculus,
- first-order predicate calculus,
- non-classical logics.

Automatic inference methods are the basis for expert systems, and for theorem proving systems.
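A minimal sketch of such an inference mechanism - forward chaining with the modus ponens inference rule over propositional 'if premises then conclusion' rules; the knowledge base below is a made-up example:

```python
def forward_chain(facts, rules):
    """Derive all consequences of a propositional knowledge base.

    facts: set of known-true atoms; rules: list of (premises, conclusion),
    meaning: if all premises hold, conclude the conclusion (modus ponens).
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

facts = {"rains"}
rules = [({"rains"}, "wet_ground"), ({"wet_ground"}, "slippery")]
print(forward_chain(facts, rules))  # {'rains', 'wet_ground', 'slippery'} (in some order)
```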


Search.

Issues of searching large spaces in an effective manner have uses in Artificial Intelligence Research.

The problem-solving paradigm used in this field assumes that a problem is characterized by a space of possible states, with a certain number of distinct end states that represent acceptable solutions, and by a set of operators that allow moving through this space.

Finding the (best) solution can be reduced to finding a (best according to certain criteria) sequence of operators that leads from the initial problem state to one of the end states.

Search is about finding an optimal solution at the smallest possible cost, including memory use & computing time; in the case of board games this might be about finding a move that maximizes the chance of victory, considering the actual situation on the board & possible player moves. This leads to searching the 'game tree', constructed by considering possible player moves, then possible opponent moves, etc.

Both in searching for a problem solution, as well as in board games, exhaustive search (of the complete state space or the complete game tree) is beyond our means (with the exception of trivially small problem spaces or boards). That's why heuristic methods are developed for searching; these do not guarantee an effectiveness increase in the pessimistic case, but significantly improve effectiveness on average. They are based on the use of 'heuristic functions', carefully designed by the system's constructor for numeric estimation of a state's quality (based on the distance from an acceptable end state) or of a game situation (considering the chances of a win).
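The game tree search described above can be sketched with the classic minimax procedure, cut off at a fixed depth & falling back on a heuristic evaluation function; the tiny game tree below is a made-up example:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Minimax with a depth cutoff; `moves(state)` yields successor states,
    `evaluate(state)` is the heuristic estimation of a game situation."""
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state)
    values = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in successors]
    return max(values) if maximizing else min(values)

# Made-up game tree: states are labels, leaves carry heuristic values.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_value = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
moves = lambda s: tree.get(s, [])
evaluate = lambda s: leaf_value.get(s, 0)

print(minimax("root", 2, True, moves, evaluate))  # 3
```

Real systems add pruning (alpha-beta) & much better evaluation functions, but the skeleton stays the same.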


Planning.

Planning is, to a certain extent, the result of joining automatic inference mechanisms with problem solving.

From a 'planning system' we expect it to find - in the most basic case - a plan for a problem's solution in a manner more effective than using the 'search method', even with heuristics. A 'planning system' can search the space using the knowledge about individual operator effects that is provided to it. This knowledge, usually contained in a certain logical formalism (for example, in a subset of a predicate language), describes the problem state changes that come into effect after using a certain operator. This enables inference about states reachable from the initial state using different operator sequences. Occasionally this eliminates the need for search, and in most cases it at least significantly reduces the search scope.

It's said about Intelligent Planning Systems that they 'infer about their own actions'.


Example:

We can have possible logical formulas (individually representable, for example, as expression trees or as series of 0's & 1's) represented as a finite state automaton graph, with each node a possible state (a possible logical formula). This finite state automaton graph has certain 'operator transitions' that transform one or many formulas into other formula(s) - moving from a certain graph state into another graph state. Knowledge of our operators can describe certain 'shortcuts' in this state graph, which can make searching for the acceptable end state more effective. A single 'operator' can define zero, one or many 'shortcuts' as well.


Learning.

The AI Learning Field is described in the article 'Artificial Intelligence Learning Aspects'; it can be used to construct a Weak AI that 'behaves rationally' to fulfill its task.

Systems that learn can adapt to a situation, and can be autonomous in action.

There are connections & common points between AI Learning & other AI Research Fields, mostly 'automatic inference' & 'heuristic search'.


See also, if You wish or need, ... : Finite State Automatons and Regular Expression Basics.


Source: [52].

9/15/16

'Weak' & 'Strong' Artificial Intelligence.

Weak Artificial Intelligence.

'Weak Artificial Intelligence' can be described as:
- System that 'thinks' rationally,
- System that 'behaves' rationally.

Most interested in 'Weak AI' are Computer Scientists & Developers.

What is 'rational' in this context?

Let's assume that it's succeeding in solving certain complex tasks well - tasks that, when handled by a human, would require a significant amount of work & intellect.


Strong Artificial Intelligence.

'Strong Artificial Intelligence' can be described as:
- System that 'thinks' as a human,
- System that 'behaves' as a human.

Most interested in 'Strong AI' are Psychologists & Philosophers.

What is 'human' in this context?

Scientists of the 'Strong AI' field have the ambitious & far-reaching goal of constructing artificial systems with a near-human or human-exceeding intellect.

This artificial system is assumed to work in a similar, but not necessarily identical, way to human thinking processes.

Certain opinions explore the idea of a 'Strong AI' being aware of its existence and being capable of fighting for its continued survival.


A 'Strong AI' can use 'Weak AI(s)' advice as input data as well.


i read in an article on the Internet that the goal of 'Strong AI' construction will be achieved in about three decades.

Is it true?

Can't confirm or deny this hypothesis, as of yet.


Technology.

There are many ways to achieve both a 'Strong AI' and a 'Weak AI', and both can be achieved using similar technology.

For example, the 'Neural Network' technology can be used to create both 'Strong AI' and 'Weak AI'.

The difference between creating a 'Weak AI' and the 'Strong AI' - both using Neural Network technology - is essentially a question of how complex the Neural Network is.

A simple Neural Network is capable of a 'Weak AI', and we would require a much more complex Neural Network for a 'Strong AI'.


Source: [52], the Internet.

9/10/16

If you want peace, prepare for war.

'If you want peace, prepare for war'.

It's a truth known to the whole World.

i think it's true, because showing weakness invites enemy forces to strike earlier.

That's why blog's author wishes to study:
- Hacking & Cryptography, Quantum Cryptography as well,
- Cyber Terrorism; Intelligence & Counterintelligence efforts, including Sabotage,
- Artificial Intelligence,
- Nuclear & War Politics of Arabian Countries, as well as Missiles & Interception Missiles Developments - mostly Long Range Missiles, NATO SHIELD & related, as well,
- Advanced Mathematics, Quantum Physics & Nanotech.

These are Very Dangerous fields, not to be neglected.

i (Andrzej Wysocki, neomahakala108@gmail.com) am an amateur hacker, but i wish to work in Cyber Security in the EU NATO Structure, probably in a small Company or a Corporation soon - located in Warsaw, Poland - if necessary, in more than one Company or Corporation.

My main concern is Cyber Terrorism & its threat to the World's Peace, especially when Quantum Computers start to create the Cipher Crisis, resulting probably in Economic Crisis & other threats as well.

Later in life (in about 12-20 years, i think) i wish to create a Corporation, NIDAN Software.

9/5/16

Machine Learning Theory & Related Sciences.

Introduction.

Artificial Intelligence Theory is a broad knowledge field; it's a field of Computer Science as well.

There are many approaches & theories of AI Software Construction & Research; these knowledge sets are distinct, but still overlap to a certain degree.

This means that a single AI Application can use many theories, as is appropriate for a given solution, joining them together as needed & necessary.


Three Streams in Machine Learning.

There are three streams in Machine Learning Theory:
- Theoretical Stream,
- Biological Stream,
- Systems Stream.

Theoretical Stream covers abstracted & simplified common thinking patterns found in many approaches in the machine learning field; it approximates & estimates the difficulty of learning, the amount of time for an AI to learn, the amount & quality of information needed for learning, and the quality of knowledge that the AI can acquire & learn.

Biological Stream covers modelling of the learning systems found in living organisms - in humans & animals - on different levels of their structure, from a single cell to the central nervous system; this stream is closer to Biology & Psychology than to strictly-understood Computer Science.

Systems Stream covers algorithm design & use, as well as the construction, research & use of their implementations; in simpler words, it's a catalog of basic AI algorithms, together with their analysis.


Related Sciences.

While the Artificial Intelligence field can be joined with any science to a certain degree, there are sciences that have much in common with AI work.

AI Research has been most affected by:
- Probability Theory,
- Information Theory,
- Formal Logic,
- Statistics,
- Machine Control & Automation Theory,
- Psychology,
- Neurophysiology.

Probability Theory has uses both in the Theoretical Stream and in the Systems Stream; it is part of the mathematical apparatus used in learning algorithm analysis, and it is the basis for many probabilistic inference mechanisms that have many uses.

Information Theory has uses in the design of certain learning algorithms. Occasionally, hypothesis selection uses Information Theory for setting hypotheses' quality criteria. The problem of learning by induction can also be seen as one of properly coding the training information.

Formal Logic affected machine learning systems research mostly as the basis for symbolic representation methods, especially those used in rules; there are parts of machine learning that use Formal Logic more directly, however - for example, Inductive Logic Programming, or Explanation-Based Learning, which uses elements of logical deduction.

Statistics provides methods of data analysis & related inference about data; training information can be represented in the form of statistics, using a rich mathematical apparatus.

Machine Control & Automation Theory is about methods of automatic control & design for different classes of objects & processes; a regulator is a machine part that affects a controlled object or process to achieve or maintain a certain desired state; the regulator's task is often to send a sequence of instructions that leads to meeting certain task criteria, so it's related to 'AI Skill' in that respect.

Psychology, among other things, studies the learning process found in humans & animals; the most important discovery that psychology provided for machine learning is 'learning with reinforcement (amplification)'.

Neurophysiology is the science of nervous systems, both human & animal; there are significant traces of its discoveries in subsymbolic data representations; neurophysiology is the inspiration for neural networks, as well as for the loosely related 'function approximation'.

9/3/16

Risk Management in IT Sector.

Introduction.

IT Projects involve Risks, especially larger, ambitious, more complex Projects.

An IT Project is a failure when it exceeds its set amount of time or cost.

The majority of IT Projects are failures in that respect (about 70-90% of them all).

IT Project Risks should be Managed, as a part of Project Management efforts.

There are other threats as well, for example writing an incorrect (not meeting the customer's requirements), insecure, or bugged application.

Low Code Quality also poses a risk, for this type of software - even if cheap at first - carries greater modification & correction costs, measured in time & cash.

As Reality & Markets change, Companies need to Adapt, software needs to be rewritten anyway.


How to handle the Risks?

1. At the beginning, do a brainstorming session with employees - not only developers & management; involve customers if You can (address not only risks of the technological / development type - address miscommunication, currency conversion & other risks as well),
2. Address greatest of risks first, prototyping software, checking libraries & tools as well,
3. Prepare for extra time & other costs as well; especially for the initial preparatory project phases, as well for last debugging ones,
4. Care for the work quality & team integrity, do not change personnel too often, do not hire too untrusted personnel as well,
5. Adapt Iterational Development Procedures (split project into phases according to risks & priorities, complete one after another, modifying software between iteration versions, keep versions archive as well), instead of a Waterfall Model.
6. Enter unproven, dangerous or Risky fronts (markets, tools) with extra assurance, insurance, time & skill,
7. Have a competent team that can cooperate & communicate efficiently over a long time,
8. Adapt Quality Insurance Policy as well, make sure that team adheres to it,

See also, if You wish, or need, ... : Threat Analysis, Common Problems in Software Company, Application Design, Use Cases & Automatic Testing.

9/2/16

Artificial Intelligence Learning Aspects.

Introduction.

Different types of Machine Learning Software use different building blocks. These consist of certain common parts, joined differently.


Learning Aspects.

The most basic parts of a Learning Machine are:
- a knowledge or skill representation method,
- methods of using the knowledge or skill,
- a source & a form of training information,
- a method of acquiring & perfecting the knowledge or skill.


Knowledge Representation.

There are many methods of Knowledge Representation for Artificial Intelligence.

These include:
- decision trees,
- rules,
- predicate logic formulas,
- probability distributions,
- finite state automata (FSMs).
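Two of the representations above - rules & decision trees - can be contrasted with a minimal sketch. The animal attributes & the rules themselves are illustrative assumptions.

```python
# A minimal sketch of two symbolic knowledge representations.
# The animal data & rules are illustrative assumptions.

# Knowledge as rules: an ordered list of (condition, conclusion) pairs.
rules = [
    (lambda a: a["legs"] == 4 and a["barks"], "dog"),
    (lambda a: a["legs"] == 4 and not a["barks"], "cat"),
    (lambda a: a["legs"] == 2, "bird"),
]

def classify_with_rules(animal):
    """Fire the first rule whose condition matches the object."""
    for condition, conclusion in rules:
        if condition(animal):
            return conclusion
    return "unknown"

# The same knowledge as a decision tree: nested attribute tests.
def classify_with_tree(animal):
    if animal["legs"] == 4:
        return "dog" if animal["barks"] else "cat"
    if animal["legs"] == 2:
        return "bird"
    return "unknown"
```

Both forms are symbolic: a human can read off exactly why an object was assigned to a category, which is the key contrast with subsymbolic representations described below.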

Knowledge can be represented in two ways:
- symbolic representation,
- subsymbolic representation.

Symbolic representation is direct; it can be easily understood by a human & interpreted by a machine.

Subsymbolic representation is metaphorical & complex, not so easily understood or interpreted.

Metaphorically, a symbolic representation of an animal might be its name stored as a word in a database; a subsymbolic one might be an image with a hand-written name.


Knowledge Use.

The representation method does not always determine how knowledge is used, even if it narrows the choices in this respect.

The way knowledge is used in an AI context is generally determined both by the representation & by the AI goal(s) - the tasks of the system that learns.

Typical AI tasks include:
- classification,
- approximation.

Classification is determining where objects belong - which patterns they match.

Approximation is representing objects by members of the set of real numbers.

In the case of classification we can speak about learning concepts; a concept is the knowledge of belonging to a domain or to a category.
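The two typical tasks - classification & approximation - can be sketched minimally: one maps objects to categories, the other to real numbers. Both example functions are illustrative assumptions.

```python
# A minimal sketch of the two typical AI tasks.
# The concrete functions are illustrative assumptions.

def classify(x):
    """Classification: map an object to a category (pattern)."""
    return "positive" if x >= 0 else "negative"

def approximate(x):
    """Approximation: map an object to a real number
    (here an assumed target function 2x + 1)."""
    return 2.0 * x + 1.0
```

The output type is the essential difference: a label from a finite set versus a value from the real numbers.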

There are other AI tasks as well, among them:
- problem solving,
- sequential decision choices,
- environment modelling.

It can also be that the knowledge use is just passing the data representation to a user in a clean & readable form, letting the user decide what to do next, if at all.


Training Information.

The form & source of training information can be abstracted & simplified, reducing it to its two main aspects:
- training with a supervision,
- training without a supervision.

There are more precise terms for these as well; they will be elaborated later.

It's convenient to illustrate a learning system as a machine that accepts input data in the form of information vectors (tuples), and responds to these with a proper output. The learning process is then specifying the algorithm responsible for output data generation.
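This view can be sketched minimally: a student with adjustable internal state maps input vectors to outputs, and learning nudges that state using 'behavior examples'. The linear form, learning rate & target values are illustrative assumptions.

```python
# A minimal sketch of a learning machine: input vectors in, output out,
# and a learning step that adjusts the output-generating algorithm.
# The linear model & all numbers are illustrative assumptions.

def predict(weights, vector):
    """Generate the output for an input vector (here: a dot product)."""
    return sum(w * x for w, x in zip(weights, vector))

def learn_step(weights, vector, target, rate=0.1):
    """Nudge the weights so the output moves toward the target."""
    error = target - predict(weights, vector)
    return [w + rate * error * x for w, x in zip(weights, vector)]

# Repeatedly show the student one behavior example: input [1.0, 2.0],
# expected output 5.0.
weights = [0.0, 0.0]
for _ in range(100):
    weights = learn_step(weights, [1.0, 2.0], 5.0)
```

After training, the machine's output on that input vector has converged to the expected value; the 'knowledge' here is the weight vector itself.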

Training information instructs the student directly or indirectly, as shown on an image:

(image: the flow of training information from its source to the student, direct vs. indirect)

In the case of training with supervision, where the source of training information is called a teacher, the student acquires information that specifies correct reactions for a certain set of input vectors - 'behavior examples' expected from the student.

In the case of training without supervision, training information is not available at all; only input vectors are given, and the student has to learn only by observing their sequences; training information is then a part of the learning algorithm - we can say that a teacher is 'built into' the student.
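Training without supervision can be sketched minimally as well: the student only observes a sequence of input values and, here, groups them around two centers with a small online-clustering update. The data values, the number of centers & the rate are illustrative assumptions.

```python
# A minimal sketch of training without supervision: no teacher, only
# a sequence of observed inputs. All values are illustrative assumptions.

def nearest(centers, x):
    """Index of the center closest to the observation."""
    return min(range(len(centers)), key=lambda i: abs(centers[i] - x))

def observe(centers, x, rate=0.2):
    """Move the nearest center a little toward the observed input."""
    i = nearest(centers, x)
    centers[i] += rate * (x - centers[i])
    return centers

# Observations come from two loose groups (around 1.0 & around 9.0),
# but no labels are ever given to the student.
centers = [0.0, 10.0]
for x in [1.0, 9.0, 1.5, 8.5, 0.5, 9.5] * 20:
    observe(centers, x)
```

The centers drift toward the two groups purely from the observed sequence - the 'teacher' is the update rule built into the student.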

Often, however, it's fairly difficult to classify precisely whether we have training with supervision or without it.

Fairly close to learning with supervision is 'learning with questions'; in this case training information comes from a teacher, but only as answers to questions asked by the student.

Learning by experimentation is when the student acquires information as it experiments with its environment; it can be performing certain actions (generating output) & observing the consequences.

Learning with reinforcement (amplification) is learning by experimentation that uses an additional source of training information - often called a 'critic' - that provides signals assessing the quality of the student's behavior; in this case the information is more 'valuating' (giving value, meaning, significance, direction) than training.
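A minimal sketch of the critic idea: the student never sees the correct answer, only a reward signal assessing the action it chose, and keeps per-action value estimates. The two-action setup, the reward values & the deterministic exploration schedule are illustrative assumptions.

```python
# A minimal sketch of learning with reinforcement (amplification).
# The critic & all numbers are illustrative assumptions.

def critic(action):
    """Reward signal: action 1 is the better one, but the student is
    never told why - it only receives this valuating signal."""
    return 1.0 if action == 1 else 0.0

values = [0.0, 0.0]                 # estimated value of each action
for step in range(200):
    if step % 10 == 0:              # occasionally explore an action
        action = (step // 10) % 2
    else:                           # otherwise exploit the best estimate
        action = max((0, 1), key=lambda a: values[a])
    reward = critic(action)
    # Move the estimate for the chosen action toward the observed reward.
    values[action] += 0.1 * (reward - values[action])
```

After enough steps the student's value estimate for the rewarded action dominates, and it keeps choosing it - behavior shaped by valuating signals rather than by behavior examples.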

We do not specify any of the above-mentioned learning ways too precisely, because their borders are not too precise anyway; it's best to leave details & assumptions for certain solution algorithms & uses.


Knowledge Acquisition Methods.

Using acquired training information, a system that learns generates new bits of knowledge or perfects knowledge known before, to perform better at its task(s).

The Knowledge Acquisition & Perfection Mechanism is most often determined by the knowledge representation & the form of training information.

Usually there are many learning algorithms available, associated with appropriate knowledge acquisition mechanisms.

The most popular learning mechanism is induction, a conclusion-drawing approach that abstracts unit(s) of knowledge to acquire more generalized (abstract) knowledge.
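Induction can be sketched minimally: from individual positive examples the student generalizes to the smallest interval covering them all - a more abstract piece of knowledge than the examples themselves. The one-dimensional data & the interval hypothesis are illustrative assumptions.

```python
# A minimal sketch of induction: generalizing concrete examples into
# a more abstract hypothesis. The data values are illustrative assumptions.

def induce_interval(examples):
    """Generalize positive examples into the smallest covering interval."""
    return (min(examples), max(examples))

def covers(hypothesis, x):
    """Apply the generalized knowledge to a new object."""
    low, high = hypothesis
    return low <= x <= high

# Four concrete observations become one general rule.
hypothesis = induce_interval([3.0, 5.0, 4.2, 3.7])
```

The hypothesis now classifies objects never seen during learning, which is exactly what generalization buys over merely memorizing the examples.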

Non-induction mechanisms mostly explain & clarify bits of knowledge, specifying details of the student's initial knowledge; they are used with reinforcement (amplification) - rewarding accomplishments for successful behaviors; the effects of certain action(s) & rewards are analyzed.

An attempt at unifying many types of knowledge creation is the inferential theory of learning, which perhaps will be elaborated in future articles on this blog.


Source: [52].

Understanding & Tools such as Eclipse IDE.

Software Development & Construction uses tools that speed this process up - more than significantly.

One of my favourite tools is Eclipse IDE, even if i use only a small part of its features.

A piece of advice to less experienced students of this blog - there's no need to learn all of the IDE, only the necessary or needed parts. i do not know most of it myself - yet i am a Professional still.

Eclipse IDE is great because of its simplicity & great design, as well as thanks to its plugin system.

A recent update of Eclipse IDE (Eclipse Che 4.7) includes splitting editors into parts, potentially a very convenient tool.

When a program part fits on a single screen, it's easier to understand - no need to scroll the window & lose part of the code from sight.

That's one of reasons why we use functions, procedures & methods in our elegant coding as well.

Splitting windows allows one to see & analyze longer code portions more easily, i think & feel.




See also if You wish, or need, ... : Few thoughts on code quality (for professionals), How code quality can be measured.