Artificial Intelligence Theory is a broad field of knowledge and a branch of Computer Science.
There are many approaches and theories of AI software construction and research; these bodies of knowledge are largely distinct, yet overlap to a certain degree.
This means that a single AI application can use many theories as appropriate for a given solution, joining them together as needed.
Three Streams in Machine Learning
There are three streams in Machine Learning Theory:
- Theoretical Stream,
- Biological Stream,
- Systems Stream.
The Theoretical Stream covers abstracted and simplified thinking patterns common to many approaches in the machine learning field; it approximates and estimates the difficulty of learning, the time an AI needs to learn, the amount and quality of information needed for learning, and the quality of knowledge the AI can acquire.
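As a sketch of the kind of estimate the Theoretical Stream produces, the classic PAC bound states how many training examples suffice to learn a finite hypothesis class to error eps with confidence 1 - delta. The concrete numbers below are invented purely for illustration.

```python
# PAC sample-complexity bound for a finite hypothesis class H:
# m >= (1/eps) * (ln|H| + ln(1/delta)) examples suffice for a consistent
# learner to reach error <= eps with probability >= 1 - delta.
import math

def pac_sample_bound(hypothesis_count, eps, delta):
    """Smallest integer m satisfying the PAC bound above."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / eps)

# Illustrative numbers: 2**10 hypotheses, 10% error, 95% confidence.
print(pac_sample_bound(hypothesis_count=2**10, eps=0.1, delta=0.05))  # 100
```

Note how the bound quantifies exactly what the text describes: the amount of information (examples) needed as a function of the difficulty of the learning task (size of the hypothesis space) and the desired quality of the acquired knowledge.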
The Biological Stream covers the modelling of learning systems found in living organisms, in humans and animals, at different levels of their structure, from a single cell to the central nervous system; this stream is closer to Biology and Psychology than to strictly understood Computer Science.
The Systems Stream covers the design and use of algorithms, as well as the construction, research, and use of their implementations; in simpler words, it is a catalog of basic AI algorithms together with their analysis.
While the Artificial Intelligence field can be joined with any of the sciences to a certain degree, some sciences have especially much in common with AI work.
AI Research has been most affected by:
- Probability Theory,
- Information Theory,
- Formal Logic,
- Statistics,
- Machine Control & Automation Theory,
- Psychology,
- Neurophysiology.
Probability Theory has uses both in the Theoretical Stream and in the Systems Stream; it supplies the mathematical apparatus used in the analysis of learning algorithms, and it is the basis for many widely used probabilistic inference mechanisms.
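A minimal sketch of one such probabilistic inference mechanism is a Naive Bayes classifier, which picks the class maximizing the prior times the product of per-feature likelihoods. The tiny spam/ham dataset and binary features below are invented purely for illustration.

```python
import math

def train(samples):
    """samples: list of (binary_feature_tuple, label)."""
    counts = {}          # label -> number of samples with that label
    feature_counts = {}  # label -> per-feature count of 1-valued features
    for features, label in samples:
        counts[label] = counts.get(label, 0) + 1
        fc = feature_counts.setdefault(label, [0] * len(features))
        for i, f in enumerate(features):
            fc[i] += f
    return counts, feature_counts

def predict(model, features):
    """Return the label maximizing log P(label) + sum log P(feature|label)."""
    counts, feature_counts = model
    total = sum(counts.values())
    best_label, best_score = None, float("-inf")
    for label, n in counts.items():
        score = math.log(n / total)  # log prior
        for i, f in enumerate(features):
            # likelihood of feature i being 1, with Laplace smoothing
            p_one = (feature_counts[label][i] + 1) / (n + 2)
            score += math.log(p_one if f else 1 - p_one)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

data = [((1, 1), "spam"), ((1, 0), "spam"), ((0, 1), "ham"), ((0, 0), "ham")]
model = train(data)
print(predict(model, (1, 1)))  # spam
```

The log-domain arithmetic is a standard design choice: multiplying many small probabilities underflows, while summing their logarithms does not.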
Information Theory has uses in the design of certain learning algorithms. Occasionally, hypothesis selection uses Information Theory to set quality criteria for hypotheses. The problem of learning by induction can also be viewed as the problem of properly encoding the training information.
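One common information-theoretic quality criterion is information gain, the reduction in label entropy achieved by a candidate hypothesis (for instance, a decision-tree split). The labels and the split below are invented purely for illustration.

```python
import math

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    probs = [labels.count(l) / total for l in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(labels, partition):
    """Entropy reduction from splitting `labels` into the parts of `partition`."""
    total = len(labels)
    remainder = sum(len(part) / total * entropy(part) for part in partition)
    return entropy(labels) - remainder

labels = ["+", "+", "-", "-"]
print(entropy(labels))                                     # 1.0 bit
print(information_gain(labels, [["+", "+"], ["-", "-"]]))  # 1.0 (perfect split)
```

A hypothesis that separates the classes completely recovers the full 1.0 bit of uncertainty; a useless split would score 0.0, giving a numeric criterion for comparing hypotheses.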
Formal Logic has affected machine learning systems research mostly as a basis for symbolic representation methods, especially those expressed as rules; some parts of machine learning use Formal Logic more directly, however, for example Inductive Logic Programming, or explanation-based learning, which uses elements of logical deduction.
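The rule-based symbolic representation mentioned above can be sketched as a minimal forward-chaining engine over Horn-style rules, repeatedly applying modus ponens until no new facts follow. The facts and rules are invented purely for illustration.

```python
def forward_chain(facts, rules):
    """Derive the closure of `facts` under `rules`.

    Each rule is (premises, conclusion): if all premises are known facts,
    the conclusion becomes a fact (modus ponens), until a fixed point.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("bird", "healthy"), "can_fly"),
    (("penguin",), "bird"),
]
# Derives 'bird' from 'penguin', then 'can_fly' from 'bird' and 'healthy'.
print(forward_chain({"penguin", "healthy"}, rules))
```

Systems such as Inductive Logic Programming work in the opposite direction, inducing rules like these from examples rather than merely applying them.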
Statistics provides methods of data analysis and of drawing conclusions about data; training information can be represented in statistical form, with a rich mathematical apparatus available for working with it.
Machine Control & Automation Theory concerns methods of automatic control and of designing controllers for different classes of objects and processes; a regulator is the component that acts on the controlled object or process to achieve or maintain a certain desired state; the regulator's task is often to send a sequence of instructions that leads to meeting certain task criteria, which relates it to 'AI Skill' in that respect.
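The regulator's role can be sketched as a minimal proportional (P) controller driving a process variable toward a desired setpoint. The plant model (the value simply absorbs each control action) and the gain are invented purely for illustration.

```python
def regulate(setpoint, value, gain=0.5, steps=50):
    """Run a proportional control loop and return the final process value.

    Each step the regulator measures the error (desired state minus current
    state) and emits a control action proportional to it; the controlled
    process then responds to that action.
    """
    for _ in range(steps):
        error = setpoint - value
        action = gain * error   # the regulator's instruction to the plant
        value = value + action  # the plant's (toy) response to the action
    return value

print(regulate(setpoint=100.0, value=0.0))  # converges toward 100.0
```

The loop illustrates the definition in the text: a sequence of actions that achieves and then maintains a desired state, which is exactly the "skill" framing used for AI agents.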
Psychology, among other things, studies the learning process found in humans and animals; the most important discovery that psychology provided to machine learning is 'learning with reinforcement'.
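Learning with reinforcement can be sketched with minimal tabular Q-learning: an agent on a small corridor learns, from reward alone, to walk toward the goal. The environment and hyperparameters are invented purely for illustration.

```python
import random

N_STATES = 5           # corridor cells 0..4; reward on reaching cell 4
ACTIONS = (-1, 1)      # step left, step right

def step(state, action):
    """Deterministic toy environment: move, clamp to the corridor, reward at the end."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning with an epsilon-greedy exploration policy."""
    random.seed(0)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            if random.random() < eps:                       # explore
                action = random.choice(ACTIONS)
            else:                                           # exploit
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # reinforcement update: nudge Q toward reward + discounted future value
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = q_learn()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy moves right in every state
```

The agent is never told the correct action; behaviour that eventually leads to reward is reinforced through the value updates, mirroring the psychological notion of reinforcement.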
Neurophysiology is the science of nervous systems, both human and animal; significant traces of its discoveries are found in subsymbolic data representations; neurophysiology is also an inspiration for neural networks, as well as for the loosely related idea of function approximation.
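The neuron-inspired models mentioned above can be sketched with a minimal perceptron: a weighted sum with a threshold, trained by the classic perceptron rule. Learning an AND gate is a standard toy example; the learning rate and epoch count are illustrative choices.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Train a single threshold neuron with the perceptron learning rule."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # perceptron rule: shift weights toward reducing the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(model, inputs):
    weights, bias = model
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Truth table of logical AND as training data.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train_perceptron(samples)
print([predict(model, x) for x, _ in samples])  # [0, 0, 0, 1]
```

The same weighted-sum unit, stacked in layers with smooth activations, is what turns this biological inspiration into a general function approximator.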