Particle Interactions on Quantum Level.

Particles spread in Space.

Can particles be modelled as 'pixels' in Stitie Space?

As far as i know, it seems they are Objects in Space with Physical Properties that determine what they are.

An example of such is a Photon, which has no mass, no electric charge, and is stable. In empty space, the photon moves at c (the speed of light) and its energy and momentum are related by E = pc, where p is the magnitude of the momentum vector p...

But perhaps it's oversimplification.

Still trying to explore the related world of ideas enough.

Probabilistic Quantum Thinking.

As far as i know, in Physics on the quantum level the outcomes of many physical processes are not precisely determined, and the best we can do is to predict the likelihood or 'probability' of various possible events.

'Wave function' plays an important role in determining these probabilities: for example, its strength, or intensity, at any point represents the probability that we would detect a particle at or near that point.

Particle Interactions.

There is a small probability, when two photons of sufficient energy interact, that they can undergo reverse annihilation and create an electron–positron pair.

Electron–positron annihilation occurs when an electron (e−) and a positron (e+, the electron's antiparticle) collide. The result of the collision is the annihilation of the electron and positron, and the creation of gamma ray photons or, at higher energies, other particles...

Electrons & Positrons have mass (9.10938291 × 10⁻³¹ kilograms each); a Photon is massless.

Energy & Mass.

'matter is form of energy'.

'matter is basically massive. the distinction separates it from the unmassive stuff that propagates at light speed'.

Mass can be converted to energy and vice versa, as in the famous E = mc² equation by Albert Einstein.
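As a small worked example of E = mc² (a sketch in plain Java; the class and method names are mine, not from any library): the rest energy of one electron, and the total energy released when an electron and a positron annihilate at rest.

```java
// Illustrative sketch: mass-energy equivalence, E = m * c^2.
class MassEnergy {
    // approximate physical constants
    static final double ELECTRON_MASS_KG = 9.10938291e-31;
    static final double C = 2.99792458e8; // speed of light in vacuum, m/s

    // Rest energy of one electron in joules: E = m * c^2.
    static double electronRestEnergyJoules() {
        return ELECTRON_MASS_KG * C * C; // about 8.19e-14 J (0.511 MeV)
    }

    public static void main(String[] args) {
        // Electron-positron annihilation at rest releases 2 * m * c^2,
        // carried away by (at least two) gamma photons.
        System.out.printf("electron rest energy: %.4e J%n", electronRestEnergyJoules());
        System.out.printf("annihilation energy:  %.4e J%n", 2 * electronRestEnergyJoules());
    }
}
```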

Probabilistic Finite State Objects & Stitie Space.

Perhaps a Physical System on the Quantum Level can be modelled as Finite State Objects in Stitie Space that, when observed, give probabilistic results: either Summaries with Probabilities, or random Values from their Random Event Spaces.

Statistics then can be used.
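A minimal sketch of such a Finite State Object in plain Java (names are illustrative, not the Stitie Machine API): the object can return its 'Summary with Probabilities', or an observation can collapse it to one random value from its Random Event Space.

```java
import java.util.Map;
import java.util.Random;

// Sketch of a 'Probabilistic Finite State Object'.
class ProbabilisticStateObject {
    private final Map<String, Double> eventSpace; // value -> probability
    private final Random random;

    ProbabilisticStateObject(Map<String, Double> eventSpace, long seed) {
        this.eventSpace = eventSpace;
        this.random = new Random(seed);
    }

    // A 'Summary with Probabilities' is the event space itself.
    Map<String, Double> summary() { return eventSpace; }

    // Observation yields a single random value from the event space.
    String observe() {
        double r = random.nextDouble(), cumulative = 0.0;
        for (Map.Entry<String, Double> e : eventSpace.entrySet()) {
            cumulative += e.getValue();
            if (r < cumulative) return e.getKey();
        }
        return eventSpace.keySet().iterator().next(); // rounding fallback
    }

    public static void main(String[] args) {
        ProbabilisticStateObject spin =
            new ProbabilisticStateObject(Map.of("up", 0.5, "down", 0.5), 42L);
        System.out.println(spin.summary()); // the probabilistic summary
        System.out.println(spin.observe()); // one random observation
    }
}
```

Statistics can then be gathered over many observations, as the text suggests.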

See also: [28], [29], Stitie Machine 1.1 'Sunsail' for Stitie Space, Quantum Physics Energy Engine, Energy, Matter, Particle, Wave.


P2P Robot Cloud Control System.


i think but am no expert.

i think that Peer-to-Peer System with the System Coordinator can efficiently control 'Clouds' of Robots, either of nano-, pico-, or of other scale.

Election Algorithm.

Election Algorithm might differ from classic Election Algorithm(-s), for example robotic units might have priorities in which they take over responsibility of the System Coordinator, or there might be a hybrid algorithm for the 'Election' / 'Succession'.
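A sketch of such a priority-based Election (all names are illustrative, not a real protocol): instead of the classic 'highest id wins', each robotic unit carries an explicit priority for taking over the System Coordinator role, with the classic rule as a tie-breaker.

```java
import java.util.List;

// Priority-based election sketch: the online unit with the highest
// priority becomes System Coordinator; ties are broken by highest id.
class PriorityElection {
    record Unit(int id, int priority, boolean online) {}

    static Unit elect(List<Unit> units) {
        Unit best = null;
        for (Unit u : units) {
            if (!u.online()) continue; // offline units cannot be elected
            if (best == null
                || u.priority() > best.priority()
                || (u.priority() == best.priority() && u.id() > best.id())) {
                best = u;
            }
        }
        return best; // null if no unit is online
    }
}
```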

How are these Robots Coordinated?

the idea is that robots have a certain degree of autonomy, and the System Coordinator gives them tasks and provides them with more or less accurate coordinates of other Objects - for example, coordinates of the robots in our 'swarm', or of other objects non-affiliated with our 'swarm'.

even imprecise coordinates might be useful, as the System Coordinator does not need Total Control - it can let robots attempt to find each other based on vague information, and receive reports such as the last known coordinates of certain robotic units.

such robots have Roles and Collaborate in smaller or larger groups, or alone... some of them Communicating with the System Coordinator over various distances, others performing different tasks.

Communication Rays.

Establishing communication can happen for example in a two-phase process using the directional energy rays such as Lasers or Xasers, Coherent X-Rays.

First, a 'search ray', aiming at & searching a fairly wide area, is sent until the sought object is located.

It can start as a thin line aimed at a location, then gradually switch to a cone up to a maximum angle, perhaps with 'searching movements', perhaps more.

The sought object answers with ray communication aimed at the source, and the second phase of communication is started, using much thinner rays, so the details of the communication are not spread widely over the Space.

Established communication should include a maintained connection session, keeping track of the current coordinates & the destination points headed for. If necessary, a little of the search movements & widening of the communication ray can occur. Retransmissions can occur as well.

Coherent X-Rays.

Why are coherent X-Rays an option?

i've read & deduced that sooner or later, such coherent X-Rays will be available for communication purposes.

From the Internet perspective it's just another physical medium, so we require just a transceiver (transmitter-receiver).

They have the valuable properties of being able to pass through walls while remaining coherent, thin & directional, perhaps more.

i can easily imagine that the unique properties of X-Rays can prove them more successful where other electromagnetic wavelengths, such as radio, might fail.

They are as good as other electromagnetic wavelengths; it's just an example. They are just a part of multi-frequency communication links. Wavelength determines frequency, and vice versa - as far as i know.

See also: [3], [12], Digital X-Ray Signal Transmission, The Internet and Physical Communication Media, Hidden Communications Threat, Stitie Grid, Tactical Forms & Formations in 3D, Internet of Things, the Distributed Machine, perhaps more.

Peer-to-Peer Systems.


A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is usually to and from a central server.

Architecture with the System Coordinator.

Using Election Algorithm(-s), one of the peers can take the responsibility of the System Coordinator.

The System Coordinator can organize all peers into a structure optimal for a given task and give them subtasks etc., or, for example, just provide a service for Information Exchange - such as a list of all peers and the resources available at each, so they can participate in resource exchange on their own.

When the System Coordinator goes offline or fails in any other way, peer-to-peer networks have the capability of ignoring the current System Coordinator and initiating an Election, choosing a new System Coordinator as needed from the remaining peer set.

Chord Structure in a P2P Architecture with System Coordinator.

Parts of the Chord Article may have uses in a P2P Architecture with the System Coordinator.

For example: the System Coordinator might organize a selected number of peers into an independent Chord DHT Circle Structure, then wait for exceptions / errors & manage the whole system.

See also: [12].

Distributed Shared Memory.

Distributed Shared Memory can be seen as a Service.

A Server allows many users - on the Internet, but not only - to access the data in a Safe, Concurrent way.

Distributed Shared Memory is an Abstraction of Data Sharing Service for Computer Systems with separate Physical Memory.

Distributed Hash Tables (DHTs), used for example in the Torrent peer-to-peer file sharing systems, are examples of concrete implementations of the DSM Abstraction.

Concurrency is done per memory block, addressed properly.

Data can be replicated for speed of access & loss prevention.

An Events System can be used for replicating data. Once a memory block within a group has its data changed - including removal of data, which is also seen as a change - all memory blocks in the group can be notified and the data can propagate properly, using Election Algorithm(-s) to select a Process Coordinator if necessary. This includes proper Process Synchronization & Transactions - as in databases; financial Transactions are optional. Distributed Transactions have their uses here as well; they use the idea of Two-Phase Commits.
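A toy in-process sketch of the event-style propagation described above (illustrative names; a real DSM would send messages over the network and use the mentioned Election / Two-Phase Commit machinery):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: a change to one memory block (including removal) is
// propagated to every replica in its group.
class ReplicatedBlockGroup {
    private final List<Map<String, String>> replicas = new ArrayList<>();

    Map<String, String> addReplica() {
        Map<String, String> r = new HashMap<>();
        replicas.add(r);
        return r;
    }

    // A write event: applied to every replica in the group.
    void put(String address, String data) {
        for (Map<String, String> r : replicas) r.put(address, data);
    }

    // Removal is also a change, propagated the same way.
    void remove(String address) {
        for (Map<String, String> r : replicas) r.remove(address);
    }
}
```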

See also: [7], [13], [12], perhaps also [8], perhaps also [11] & [21], perhaps also [3], perhaps more.


Quantum Physics Energy Engine.

i think that Stitie Space can serve as a framework for modelling Energy Particle Interactions, perhaps more.

i think that Energy Particle, such as Photon for example, is an Object that has position in Space, and other Physical Properties.

i read that from a certain perspective, a particle of light, a photon, has a size of about half a fermi: 0.5·10⁻¹⁵ m, 'since you start to feel it pushing back against you if you get too close', which happens at about that size.

Thus we can Model, Compute & Analyze Waves of Such Energy Particles as well, i think.

Stitie Machine scales well, so we can have plenty of Computing Power & Resources if needed, even now... much more in few years, i think.

Ambitious plan would be to create the Simple Quantum Physics Engine, which involves Deeper Knowledge of Physics, perhaps a little of Mathematics as well.

i think this can have use in Device Drivers for the Three-Dimensional Multi-Wavelength Energy Projectors, if there are or would be such in future. Or perhaps even N-Dimensional.

See also: [28], [29], Stitie Machine 1.1 'Sunsail' for Stitie Space, Laws of Thermodynamics, Energy interactions on Quantum Level, Energy & Light; Fields & Waves.


A Quick Method for Mid-High Complexity Software Design.


This is a fairly easy, intuitive, top-down method of software solution design.

Forming Algorithm.

Plan for forming an Algorithm, a solution:

1. Understand & make precise what we wish to do,
1.1. Name the 'problem' and / or 'solution' properly,
1.2. We can use OOA Domain Modelling,
1.3. We can explore & learn the basics of the Domain Theory by performing Research: from books, films, the Internet, Courses, etc.,
1.4. Basics of the Domain Theory let us know the topic well enough to talk with experts if necessary,
2. Plan the 'solution', perhaps writing it down, either as algorithmic steps, or as model parts to be used in these steps,
2.1. Identify the things to do - the parts of the 'solution' that need to be done to achieve the desired result,
2.2. For each of the identified parts, repeat the whole process, naming 'subproblems' or 'partial solutions'.
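The recursive decomposition of steps 2.1-2.2 can be sketched as a data structure (plain Java; the names are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of top-down decomposition: each 'solution' has a name and is
// decomposed into named sub-solutions, recursively.
class Solution {
    final String name;
    final List<Solution> parts = new ArrayList<>();

    Solution(String name) { this.name = name; }

    // Add and return a named 'partial solution'.
    Solution part(String childName) {
        Solution child = new Solution(childName);
        parts.add(child);
        return child;
    }

    // The leaves of the tree are the parts we actually implement.
    int leafCount() {
        if (parts.isEmpty()) return 1;
        int sum = 0;
        for (Solution p : parts) sum += p.leafCount();
        return sum;
    }
}
```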

Realization & Implementation.

Once we have Algorithm, we can:

- implement each of the 'solution parts',
- abstract & simplify similar parts, creating reusable components,
- integrate reusable components into the working solution(-s),
- document contracts.

Use of Packages In Java, perhaps in other languages as well.

Packages are containers for software building blocks, classes.

From classes, objects can be instantiated, 'assembled' into graphs and 'configured' with an 'initial state', to form working, running applications.

Each of such assemblies / configurations is a different application, with different version numbers and different behaviors.

Changes in classes between versions imply different assembly, therefore a different version.

Packages have 'qualified names', for example: 'car.engine.model', which might imply that:

- the package is responsible for the car engine's model,
- a car engine's model is a 'partial solution' for the 'car engine' functionality, and the 'car engine' is a 'partial solution' for the 'car' functionality.

Every 'class' used in a 'partial solution' should belong to a package used for the 'partial solution'.

When the same package is used in many places, it should be renamed, abstracted and moved somewhere outside of a 'partial solution'.

Object Model Decomposition & Analysis.

An Application, a Graph of Objects, can be decomposed and drawn as a Model, preferably showing fields & methods precisely (the strategy, in the case of Stitie Machine), with dependencies visible, perhaps with the initial state shown as well.

Then this Graphical Model can be examined & analysed.

After analysis, further abstraction & simplifying, or corrections, can be done.

There are Modelling Languages such as UML, particularly with Class Diagrams & Object Diagrams, which can be used here, i think.

Modelling Concurrency.

This can be done using Petri Nets.

In the case of Stitie Space, Stitie Machines at Odd & Even coordinates can represent Places & Transitions respectively.
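A minimal Petri Net sketch in Java (the odd/even coordinate mapping is left out; this only shows places, transitions and the firing rule):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal Petri Net: Places hold tokens; a Transition is enabled when
// every input Place has at least one token, and firing it moves one
// token out of each input and one into each output.
class PetriNet {
    private final Map<String, Integer> tokens = new HashMap<>();

    void setTokens(String place, int count) { tokens.put(place, count); }
    int tokens(String place) { return tokens.getOrDefault(place, 0); }

    // Fire a transition; returns false (changing nothing) if not enabled.
    boolean fire(List<String> inputs, List<String> outputs) {
        for (String p : inputs) if (tokens(p) < 1) return false;
        for (String p : inputs) tokens.put(p, tokens(p) - 1);
        for (String p : outputs) tokens.put(p, tokens(p) + 1);
        return true;
    }
}
```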

Modelling Protocols.

Protocol modelling can be realized using a Tree.

Source: My experience, [6], Wikipedia.

See also: Objects, Classes, Modelling, Object Relationships Modelling & Analysis.

Basics of Linux & its Kernel.

Linux is an Operating System similar to Classic Unix.

There are many different distributions, or software configurations.

Linux consists of:

- Kernel,
- System Libraries,
- System Tools.


The Kernel provides the realization of all important Operating System abstractions, including such elements as Virtual Memory & Processes.

The Kernel uses the idea of Dynamically Loaded Modules.

Modules are Kernel parts which can be loaded into the Kernel and unloaded from it as needed or necessary.

We do not need the CD-ROM driver functionality in the kernel if we do not have a CD-ROM.

When we plug hardware in or out, we can load or unload the driver responsible for that hardware as needed.

Not all modules are drivers responsible for hardware, however... there can also be other functionalities, such as software tools for scheduling processes, or other parts of the Operating System's 'software infrastructure'. By changing or loading / unloading a module, we can change the implementation of the Operating System's Process Scheduler as we wish.

Changing the Operating System's behavior at such a basic level is risky, but we do not need to recompile the whole Kernel to achieve the change. We can write a Kernel Module, or use / modify an existing Kernel Module, then unload / load it as needed.

Hackers can rewrite a Kernel Module to hide chosen files from the Operating System's users, as an example. Or to log (record) keystrokes and the exact details of mouse moves - including the system clock's reading, logged user information and screenshots - in a hidden file on a hard drive, as another example.

System Libraries.

System Libraries define the standard set of functions that can be used in users' programs to access the Kernel.

System Libraries also provide certain functionalities that do not require the full privileges of Kernel Code, such as Mathematical Functions, Sorting Algorithms, String Operations, etc... There are application standards, such as UNIX, POSIX, that require functionalities provided by System Libraries in order for applications to meet them.

If necessary, even a Driver for a Mathematical Unit (such as a Processor part, or a Graphics Card) can be part of the System's Kernel. System Libraries can use this possible Kernel modification. Perhaps there are standards, however, deciding what is put in the Kernel and what in the System Libraries - what needs 'privileged Kernel mode', and what does not.

Operating System as Abstraction over Hardware. Operating System as the Software Infrastructure.

Applications are usually written not for particular Hardware, but for a certain Computing System, a combination of both Hardware & Operating System, perhaps other Software as well.

It's easier to depend on an abstract hardware driver & system libraries, than to include code for all possible concrete CD-ROM models, sorting algorithms & such, in our software, if we want to read data from a CD-ROM as part of our functionality.

System Tools.

System tools are programs useful in managing or using a Computer System.

They are executed & used as many times as we prefer, to edit a text file for example... or are constantly working in background (daemons), to respond to signals from the Network, for example.

System tools can be executed from a console terminal as written commands, or via a Graphical User Interface. Their execution can also be automated, and happen as part of Operating System startup.

Source: [12], Wikipedia.



Modularity.

Modularity is dividing a program into small components, or modules, each of them with closed functionality and with a stable architecture - working and complete, but perhaps, desirably, still extensible.

Modularity allows for software parts reuse and extending.

In my opinion, the best module is an implementation class(-es) with an abstract interface(-es).

In my opinion, modules should be:

1. Interface-dependent: modules should depend on a simple interface, a set of operations that can be used by other software parts.

In an Interface, we do not specify how we do things, but what we do.

In an Implementation, we specify how things are done. Different implementations of a given interface can do some things differently - for example: compute slower, but with more precision - as long as it's not fixed by the interface requirements.

No unnecessary features & functions should be available to the module's users; a module should work like a black box which does not show every detail of its functionality to the user - only its key functions, an interface.

This simplifies everyone's life, makes software cheaper, faster, safer, more robust.

Interfaces should be responsible for only one thing, even if that thing consists of many other smaller things.

Module users should depend on abstract interfaces, which allow for substitution of the concrete implementation.

We can choose between implementations as we like when composing software from parts; for example: we can choose a car's engine, & change it later for a more expensive but faster one if needed.

For example: as we model Car, we say - abstractly - that it has an Engine, among other parts. Concrete Engine Model Choice depends on how we compose car, but design should allow us to choose as we prefer, or change as we need - thanks to relying upon abstract engine, not a concrete model or concrete family of models.

2. Composable: modules should be able to be composed from smaller modules, which represent things,

For example: a car consists of many parts, such as an engine or wheels. Car, engine, and wheel are each a single thing,

3. Well-documented: modules & their interfaces should be easy to understand, thus easier to modify or repair,

4. Loosely-Coupled: a change or error in one module, should not cause other modules to be broken,

This can be achieved by managing dependencies between modules, avoiding dependency cycles, and isolating modules from each other, designing them so they can work independently, as well as in a collaboration,

Example of a dependency cycle: module A depends on module B, module B depends on module C, module C depends on module A,

Dependency is something that when changed causes a change or error in other part(-s) of software.

5. Open/Closed: Modules should be open for extension, but closed for modification.

We cannot change what a module does, but we can either demand less for it to fulfill its function, or give more for the same, or a lower, price.

For example: we won't change a Car into a Pillow, but we can either provide extra functionality, such as an air-conditioning capability, or remove some demands on the client, such as Processor, Internet bandwidth, or Memory usage.

For example: we can attempt to optimize a Car's functioning to use less electricity, without otherwise changing its functionality - as specified by the functional requirements for an interface.
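The Car / Engine example from point 1 can be sketched like this (illustrative names, not a real API): Car depends only on the abstract Engine interface, so the concrete engine can be chosen or swapped at composition time.

```java
// The interface says WHAT an engine does, not HOW.
interface Engine {
    int power();
}

// Two interchangeable concrete implementations.
class BasicEngine implements Engine {
    public int power() { return 100; }
}

class SportEngine implements Engine {
    public int power() { return 300; }
}

// Car depends only on the abstraction; the concrete engine
// is chosen when the car is composed.
class Car {
    private final Engine engine;

    Car(Engine engine) { this.engine = engine; }

    int power() { return engine.power(); }
}
```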

See also: SOLID.

Source: [27], my own experience.


Secure Communication over the Internet.

On most basic level, we can identify the following desirable properties of secure communication:

* Confidentiality: Only the sender and intended receiver should be able to understand the contents of the transmitted message. Because eavesdroppers may intercept the message, this necessarily requires that the message be somehow encrypted (its data disguised) so that an intercepted message cannot be decrypted (understood) by an interceptor. Cryptographic methods are responsible for message encryption & decryption.

* End-point authentication: Both the sender and receiver should be able to confirm the identity of the other party involved in the communication - to confirm that other party is indeed who or what they claim to be. Face-to-face human communication solves this problem easily by visual recognition, over the Internet it's not so simple. Authentication protocols & cryptography are often used for that.

* Message Integrity: Even if the sender and receiver are able to authenticate each other, they also want to ensure that the content of their communication is not altered, either maliciously or by accident, in transmission. Checksums and cryptographic hashes such as MD5 (today considered weak), or Message Authentication Codes, are often used to ensure the integrity of messages, perhaps with more Cryptography & other methods.

* Operational security: Security of the infrastructure, hardware & software used in communication, the other tools we use & the security discipline of the people / organizations involved. Firewalls, Intrusion Detection / Prevention Systems, VPNs, more or less secure physical communication links, etc... all of this can help to ensure this quality of secure communication, but not only... Security should be addressed as a whole, instead of just putting 'A Metal Vault Door in a Tent' - an expensive security toy that covers only part(s) of the security issues. Even the most expensive Cipher won't help if people fail to secure keys or passwords, for example. Or if erroneous software is hacked, for example. If 'the secure physical link' is tapped (a bug placed along the line to record the transmission), a Cipher can help, but this is still a weakness, a security hole. Ciphers can be attacked if we have samples, especially both encrypted and decrypted ones... and there are cases when both unencrypted text and encrypted text are sent... or at least standard parts of messages are transmitted and are easily guessed by cryptographers (for example: known HTML parts).
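As a sketch of the Message Integrity point, using the JDK's standard HmacSHA256 algorithm (the class and method names are mine; a MAC is the keyed, stronger relative of a plain checksum like MD5): sender and receiver share a secret key, and the receiver recomputes the tag and compares.

```java
import java.security.MessageDigest;
import java.util.HexFormat;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Message integrity sketch with an HMAC (keyed hash).
class IntegrityCheck {
    // Compute the HMAC-SHA256 tag of a message, as a hex string.
    static String tag(byte[] key, byte[] message) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return HexFormat.of().formatHex(mac.doFinal(message));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Receiver side: recompute and compare in constant time.
    static boolean verify(byte[] key, byte[] message, String expectedTag) {
        return MessageDigest.isEqual(
            tag(key, message).getBytes(), expectedTag.getBytes());
    }
}
```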

See also: Communication via 3 nodes, How to arrange safer route via the Internet?, What is VPN?.

Source: [3], [8], Insights, perhaps more.


Atomic Multicast, Reliable Multicast.

Atomic Multicast is a group communication method in which exact copies of a single message are sent to a group of receivers, and the mechanism guarantees that the message is received either by all of the receivers or by none of them.

Example of use: Replicated Stateful Services Fault Tolerance mechanism requires Atomic Multicast for synchronization.

Reliable Multicast is a group communication method in which an effort is made to deliver exact copies of a single message to all receivers in a group, but the mechanism does not guarantee it. It is possible for a message to reach only part of the receivers.

Example of use: Reliable Multicast is often enough for finding Distributed Objects of a given type, if we are not too selective.

Source: [13].

See: Broadcast and Multicast routing, Group Communication in Distributed Systems, AI, Stateful, Stateless, Services.


'Supervised Learning', Stitie Space.

this is just an experiment with more or less nondeterministic AI learning, author does not guarantee anything just yet.

in Supervised Learning, we are given a set of example pairs (x, y), where x belongs to a set X, y belongs to a set Y, and the aim is to find a function f : X -> Y, in the allowed class of functions, that matches the examples.

i think that (x,y) are states of a Neuron.

* x in X is initial state of a Neuron,
* y in Y is target state of a Neuron, achieved after receiving a message from outside,
* token is data in message received from outside. it might include information about message's source, and / or from message's source,
* f(x,token) is a transition function which transforms x into y, depending on x and token.
f can have side effects, such as sending messages to other Neurons.

f might be more or less nondeterministic transition, with random data included either or both in x and in token.

accuracy in reaching desired y-states, measured with a statistical apparatus (for now we use only a random event space and the simplest tools for it), can be expressed in % (for example: using function f, desired y-states were reached from x-states in 84% of 108 tries - in 90-91 cases out of 108).
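a tiny sketch of such an experiment (the concrete f here, a noisy parity function that misfires 10% of the time, is purely illustrative, not the author's method):

```java
import java.util.Random;

// Sketch of the 'Neuron' transition experiment: f(x, token) -> y,
// nondeterministic, with accuracy measured over many tries.
class NeuronExperiment {
    // f: intended transition is (x + token) mod 2, but 10% of the
    // time it misfires - the nondeterministic part.
    static int f(int x, int token, Random random) {
        int intended = (x + token) % 2;
        return random.nextDouble() < 0.10 ? 1 - intended : intended;
    }

    // Accuracy: fraction of tries in which the desired y-state was reached.
    static double accuracy(int x, int token, int desiredY, int tries, long seed) {
        Random random = new Random(seed);
        int hits = 0;
        for (int i = 0; i < tries; i++)
            if (f(x, token, random) == desiredY) hits++;
        return (double) hits / tries;
    }
}
```

with enough tries, the measured accuracy should settle around 90%, the complement of the misfire rate.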

then we can risk an attempt to extrapolate, to extend solved problem domain past examples.

i think that f can be chosen more or less randomly from possible functions (interpreter's instruction tree can be generated more or less randomly then analyzed. simple or complex constructs can be used in generation of more complex constructs that way, under supervision of a Real Scientist).

for now with one Neuron, later with few and more, perhaps.

See also: Neural Networks, Stitie Space.

Stateful, Stateless, Services.

Services, both in Distributed Environment and in Single Computer System's Memory can be either Stateful or Stateless.

a Stateless Service does not change its state between handling 'requests' from 'Service Clients'. The 'Response' depends only on the 'Current Request', not on 'previous operations' in that Service.

a Stateful Service may change its state between handling 'requests' from 'Service Clients'. The 'Response' depends on, and is determined by, the 'Current Request' as well as the 'previous operations' in that Service.

Previous operations might be internal operations not triggered by requests (not a very elegant solution to use such), as well as 'previous requests' from 'Service Clients'.

in the better case, with a better System Design, only 'previous requests' from Service Clients are the 'previous operations'.

Generally, Stateless services are more reliable, cheaper, faster, more secure and more easily duplicated to provide reserve components. Stateless services should be preferred wherever it is possible and not unreasonable. There are 'elegant solutions' that use Stateful Services however.

Stitie Machines (in Stitie Space or not; within a Single Computer System or in the Internet), can provide capabilities of either Stateless Services or Stateful Services.
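the difference can be sketched with one interface and two implementations (illustrative names):

```java
// A 'request' is an int here, for simplicity.
interface SimpleService {
    int handle(int request);
}

// Stateless: the response is a pure function of the current request;
// repeating a request always yields the same response.
class StatelessDouble implements SimpleService {
    public int handle(int request) { return request * 2; }
}

// Stateful: the response depends on the current request AND on
// previous requests, through state kept between them.
class StatefulSum implements SimpleService {
    private int state = 0;

    public int handle(int request) {
        state += request;
        return state;
    }
}
```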

Group Communication in Distributed Systems, AI.

one-to-one message exchanges are useful, but often not enough.

to communicate something by one process to a group of processes, we use multicast messaging.

in the context of Distributed Applications, it has the following uses:

* Fault Tolerance: by having & using 'reserve services',
* Finding Objects in Distributed Services: by sending a message to many services at once, with a query, and waiting for answers,
* Performance increase, sometimes: by data multiplication. sometimes the data is closer (fewer hops in the Internet, for example), or we just have fewer overburdened systems, perhaps more,
* Multicast Updates: sometimes we can notify many systems at once of an event occurrence,
* perhaps more.

the above have uses in Service Integration or Systems Integration (putting the pieces together so they work well together).

Systems Integration: Service can be provided by a Computer System (hardware with software),

Services Integration: multiple Services can be Integrated within a single Computer System, but not only within a single Computer System.

i think that Stitie Space, together with this post, has uses in Distributed Artificial Intelligence - for communication (sending messages) between 'Neurons' (machines such as Computer Systems), connected as in a Graph by the Internet. Each of the Computer Systems can also consist of a 'Graph of Neurons and Connections', together providing a 'larger' Service of a more complex 'Neuron' - it's 'Recurrent'.

Source: [13], Wikipedia (links are above), my experience.

See also: Distributed System, What is cluster?, Petri Nets, Stitie Space and Distributed Machines, Simple Artificial Intelligence, Hardware, Operating System and Computer System (for everyone), Atomic Multicast, Reliable Multicast.


Vectors Matrix, Stitie Space.

i think it's very easy to compute Vector Matrices using Stitie Space.

just put a vector object (one or more) into a state at the desired coordinate(s).

then send a strategy to the given coordinate(-s) and do something with it.

see also: Stitie Machine 1.1 'Sunsail' for Stitie Space.

i think it's also easy to compute such things as a 'Force Wave' (composed of 'Force Wave Particles'), because mass can also be put into a state at a given coordinate(s), not only a vector of acceleration... force then follows from Newton's second law, F = m · a.
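a sketch of the 'state at a coordinate' idea (plain Java; this is not the Stitie Machine API, all names are mine): each cell of a 3D grid can hold a mass and an acceleration vector, from which force follows by F = m · a.

```java
// Toy 3D grid of states, each with a mass and an acceleration vector.
class VectorGrid {
    record Vec3(double x, double y, double z) {
        Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
    }

    record State(double mass, Vec3 acceleration) {
        // Newton's second law: F = m * a, componentwise.
        Vec3 force() { return acceleration.scale(mass); }
    }

    private final State[][][] space;

    VectorGrid(int size) { space = new State[size][size][size]; }

    void put(int x, int y, int z, State s) { space[x][y][z] = s; }
    State get(int x, int y, int z) { return space[x][y][z]; }
}
```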


Conditional Software Tiering.

Conditionality & Buddhism.


'Arise through the force of conditions'.

The meaning of Pratītyasamutpāda is that which arises in dependence upon conditions, in reliance upon conditions, through the force of conditions.

'This is, because that is.
This is not, because that is not.
This ceases to be, because that ceases to be.'


Conditional Software Tiering.

i think it's possible to give Software such a quality.

to depend on conditions, perhaps on surplus, including reserve components, so it's more robust, sturdy & secure... in case of a single component failing, we can use other, spare ones... this is nothing new in Software Development, i think.

software can detect & report which software components are operational, assemble itself from such, upgrading functionality to higher layer if possible, and in case of component malfunction(s), reducing itself to more basic functionality as needed.

because the proper working of components is a precondition for the proper working of the other components which arise from them.

higher layer functionality reduction can also be caused by other conditions such as need of resources for other causes & software parts, including security on a 'different front'.

Conditions & Causes.

it can be that something arises because of a cause(s) occurring while precondition(s) hold. a cause's occurrence can be modelled using event(s); conditions holding or not can be represented as data in the database, or as tuple(s) in the Tuple Space.

the ceasing or arising of conditions might also be events; that's how decisions can be made for software to advance to higher tiers, or to degrade itself to lower functionalities, as necessary.


see also, if You wish: Conditions, Security Harness, Software Complexity, 'Events' in Programming, Conditionality, Token Game.


Conditions, Security Harness, Software Complexity.

(EN) precondition = (PL) warunek wstępny.
(EN) postcondition = (PL) warunek końcowy.
(EN) invariant (it can be modified) = (PL) niezmiennik (jakby co można go zmienić).

condition: a precondition, a postcondition, or an invariant is a quality of software that does or does not hold at certain moments in time.

example 1: did the user type '3' in a box? we can include this as a precondition for letting certain things happen via the software's execution (we do not have to).

but we do not know for example if it's the user or someone else, who pretends... many times.

example 2: is user provided image green enough according to our standards?

there can be other conditions as well.

when preconditions cease to hold, software parts cease to work properly, depending on the perspective.

when invariants are modified, software loses certain qualities perhaps... gaining others perhaps, and so on...

when code part is executed (after all preconditions are met), after execution we can check if desired postconditions are met (to check if interferences such as hacking or programming errors caused this to fail). we can also react, for example by reporting this to software's user(s).

it's part of formal thinking about software, security harness of formal mathematical proofs (which are often too expensive to use), but also of automated unit testing / test cases that help Software Devs to 'tackle the Software Complexity' ... to write more complex applications faster, cheaper, without so many errors.

For the formal mathematical proof details, see also: [4], [19], perhaps more.
Examples of Tools used for Testing are: JUnit, EasyMock (for Java; perhaps more).
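a small sketch of run-time condition checks (the withdraw example and its conditions are mine, purely illustrative; real projects would use asserts or JUnit test cases as mentioned above):

```java
// Pre/postconditions and an invariant, checked at run time.
class Account {
    private int balance;

    Account(int initialBalance) { balance = initialBalance; }

    int balance() { return balance; }

    void withdraw(int amount) {
        // precondition: the amount must be positive and covered.
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("precondition failed");
        int before = balance;
        balance -= amount;
        // postcondition: the balance decreased by exactly 'amount'.
        if (balance != before - amount)
            throw new IllegalStateException("postcondition failed");
        // invariant: the balance is never negative.
        if (balance < 0)
            throw new IllegalStateException("invariant broken");
    }
}
```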

See also, if You wish: Design by Contract, Conditional Software Tiering.

Causal Layers of Software.

Software Layers Causality.

i think but am no expert, not very meek either way as well...

i think that software is built of more basic, smaller often, software components.

the proper working of such building blocks is a precondition for the working of larger components.

whether such preconditions hold can be tested using a security harness such as automated test cases, or (more or less often) proven mathematically.

when the preconditions of a software component cease to hold, the software falls apart.

software projects often compete, and a precondition for more resources for a given project is superiority, which can be achieved in many ways, for example by somehow causing the preconditions of rival projects to fall apart.

basic components are 'lower' layers, complex components (larger, built from smaller) form 'higher' layers, at least from a certain perspective.
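a sketch of this layering, assuming made-up component names: a component 'works' only if it is not itself broken and every smaller building block below it works (its preconditions hold):

```python
# Illustrative, made-up component names; lower layers are dependencies.
DEPENDS_ON = {
    "application": ["parser", "storage"],
    "parser": ["string-utils"],
    "storage": [],
    "string-utils": [],
}


def is_working(component, broken):
    """A component works only when it is not broken itself and all of
    its building blocks (the layer below) are working as well."""
    if component in broken:
        return False
    return all(is_working(dep, broken) for dep in DEPENDS_ON[component])
```

breaking "string-utils" (a low layer) makes "parser" and then "application" fall apart, while "storage" keeps working, which is the causality between layers described above.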

See also: [6], perhaps.


Formation of Sixty, Eighty-Four.

i think their sum is another very important number when Forms & Formations are considered.

60 + 84 = 144 = 12 * 12 = 3 * 4 * 3 * 4 = 3 * 2 * 2 * 3 * 2 * 2 = 36 * 4 = 36 * (3 + 1) = 36 * 3 + 36 = 108 + 36 = 108 + 3 * 12 ...

these numbers can translate to forces that form a larger force, for example 108 + 3 * 12 can mean the Task Force of 108 units, with 3 reserve forces of 12 units (operational, all of them).

how to transform such numbers in this system?

in many ways, but i'd rely mostly on shortest, quickest, most secure of moves (Martial Arts Philosophy).

i'd spread these forces into 3's and 2's then use these groups to form larger numbers.

it's based on the quality of the number 108, which is composed of 3's and 2's (triads & pairs).

108 = 1^1 * 2^2 * 3^3 = 1 * 4 * 27.
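the decomposition into 2's and 3's can be checked mechanically; a small sketch (the function name is an assumption):

```python
def factorize(n):
    """Return the prime factors of n in ascending order (trial division)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors
```

factorize(108) gives [2, 2, 3, 3, 3], the pairs & triads above; factorize(144) gives [2, 2, 2, 2, 3, 3].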

Form & Formation of 6 is very important; it can be spread into 2 * 3 or 3 * 2. there's a difference between a triad of units cooperating and a pair cooperating. this forms the basis for subtle differences between the cooperation of a pair of triads and a triad of pairs, for example in how they can be split, to perform how many tasks at once, and of what type. a Formation of 6 can reform between 2 * 3 and 3 * 2 fluidly, if the unit is prepared, i think.
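the pair-of-triads versus triad-of-pairs split can be sketched like this (the names and unit labels are assumptions):

```python
def group(units, size):
    """Split units into consecutive groups of the given size."""
    return [units[i:i + size] for i in range(0, len(units), size)]


six = ["u1", "u2", "u3", "u4", "u5", "u6"]
pair_of_triads = group(six, 3)   # 2 groups of 3: fewer tasks, heavier each
triad_of_pairs = group(six, 2)   # 3 groups of 2: more tasks, lighter each
```

reforming fluidly between the two is just regrouping the same six units with a different group size.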


108 = 36 * 3 = 6 * 6 * 3. i'd call it FormTri108, a very important Form, and a core of Formational Strategy.


108 = 27 * 4 = 3^3 * 2^2. i'd call it FormQuad108, also a very important Form.

this transformation of Forms & Formations can be called graph(s) transformation(s), for graph data structure(s) can describe relative positions and communication lines between units, on the Internet or not. absolute coordinate positions as well, in the case of Stitie Space, for example.


but still 108 is Most Beautiful.

108 + 108 + 84 = 300 = 144 + 144 + 12 ...

that's how these forces can be transformed from one Form to another... 108 + 108 + 84 can be joined into 300, then spread into 144 + 144 + 12, for example.
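a minimal sketch of such a join-and-spread, assuming the only rule checked is that total strength is conserved (the function name is an assumption):

```python
def reform(forces, new_forces):
    """A Formation may reform into another grouping only if total
    strength is conserved; returns the new grouping, or None if not."""
    if sum(forces) != sum(new_forces):
        return None
    return list(new_forces)


joined = reform([108, 108, 84], [300])    # join three forces into 300
spread = reform([300], [144, 144, 12])    # spread 300 into 144 + 144 + 12
```

an attempt to reform into a grouping with a different total (for example 144 + 144 + 13) is rejected.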

in a context of the Internet Warfare, using Stitie Space, MATEN & Prism, for example.

see also: Shapes, Forms, 108 Number & Strategic Botnet, Factorization Tree, Factoring Numbers, Reforming Shapes, Stitie Machine 1.1 'Sunsail', Spartan Formation of 300.


Reductional Fighting.

i think, but am no expert in Abstract Strategy on a Diversified Front that includes the Internet Front + the Strategic Front...

When full operability is reached at a certain scale.

Scale would be the size/mass of a unit, not necessarily its armoring.

For example (scale 4, with forces ratio 6:1):

- 1 unit of scale 4;
- 6 units of scale 3;
- 36 units of scale 2;
- 216 units of scale 1;

it's called full operability of 4 scales.
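the example above follows a simple formula: with a forces ratio r and S scales, scale s gets r**(S - s) units. a sketch (the function name is an assumption):

```python
def full_operability(scales, ratio):
    """Units needed per scale for 'full operability': each unit of a
    higher scale is escorted by `ratio` units of the scale below it."""
    return {s: ratio ** (scales - s) for s in range(scales, 0, -1)}
```

full_operability(4, 6) reproduces the listing above: 1 unit of scale 4, 6 of scale 3, 36 of scale 2, 216 of scale 1.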


Its weak point is a lack of reserves, which can be supplied by an allied force.

Otherwise, scale 1 can be attacked to reduce the full operability of the 4-scale formation.


i think that in such a case, we should fight with full force, except for the highest scale we have (scale 4 in this case).

In case of a gap in the scales (for example, a lack of scale-2 units in this case), higher-scale forces should retreat as well.


in other words: heavy units of a given scale should not fight without escort, mechanical or not.

so, for example, i think that Supercomputers should not fight without an escort of smaller-scale internet devices, for example desktop computers connected to its Local Area Network.

nothing like a light counter with surveillance and Police involvement... it should deter longer, recurring attacks quite efficiently... and is still very useful in a Real fight, i think.

Strategicity Formational.

Stitie Space has the quality of Strategicity Formational.

Strategicity means we can use Strategy.

Which part of Strategy? In this case, we can change Formations (of LightPoints, or Code/Data pairs, or Strategy/State pairs - code is called Strategy because it can execute a Strategy inside a computer, which can have side effects outside) in Space (in this case, Stitie Space).

Stitie Space is, in a way, a method of addressing memory, one of many such methods.
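a toy sketch of such an addressing scheme, assuming nothing about the real Stitie Space API: a 3D-coordinate-addressed space of cells, each holding a Strategy/State (code/data) pair:

```python
class Cell:
    """One addressable point: a Strategy (code) paired with a State (data)."""

    def __init__(self, strategy, state):
        self.strategy = strategy
        self.state = state


class Space:
    """Memory addressed by (x, y, z) coordinates instead of a flat index."""

    def __init__(self):
        self.cells = {}

    def put(self, x, y, z, strategy, state):
        """Place a Strategy/State pair at a coordinate."""
        self.cells[(x, y, z)] = Cell(strategy, state)

    def run(self, x, y, z):
        """Execute the Strategy stored at a coordinate on its own State."""
        cell = self.cells[(x, y, z)]
        return cell.strategy(cell.state)


space = Space()
space.put(0, 0, 0, lambda state: state * 2, 21)
```

changing a Formation then amounts to moving such cells between coordinates, which is the graph transformation mentioned earlier.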