

Design for Volatile Parallel RAM Server for Objects.

This design describes a Volatile Parallel Random Access Memory server, written in the Java Programming Language.

Stored data is lost when the service goes down.

The memory can be represented as a sequentially indexed array.

Communication can happen via a standard socket interface, with fast queues for each of the cells and for infrastructure services such as returning the memory size and the cell size.

The size of a memory cell can be configured before the service starts, and byte sequences can be stored in or retrieved from a cell with a given address.
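
A minimal sketch of such a cell store, assuming the cell size is fixed at construction and addresses index a plain array; class and method names are illustrative, not from the design itself.

```java
import java.util.Arrays;

// Sketch of a volatile cell store: a fixed cell size is chosen at start-up,
// and byte sequences are written to / read from cells by address.
class VolatileMemory {
    private final byte[][] cells;   // memory as a sequentially indexed array
    private final int cellSize;

    VolatileMemory(int cellCount, int cellSize) {
        this.cells = new byte[cellCount][];
        this.cellSize = cellSize;
    }

    int cellSize() { return cellSize; }
    int size() { return cells.length; }

    // Store a byte sequence in the cell at the given address.
    void store(int address, byte[] data) {
        if (data.length > cellSize)
            throw new IllegalArgumentException("data exceeds cell size");
        cells[address] = Arrays.copyOf(data, data.length);
    }

    // Retrieve the byte sequence stored at the given address (null if empty).
    byte[] load(int address) {
        byte[] d = cells[address];
        return d == null ? null : Arrays.copyOf(d, d.length);
    }

    public static void main(String[] args) {
        VolatileMemory mem = new VolatileMemory(1024, 64);
        mem.store(7, new byte[]{1, 2, 3});
        System.out.println(Arrays.toString(mem.load(7))); // [1, 2, 3]
    }
}
```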

Objects can be stored in memory if there's space available for them; otherwise an error is reported.

First, objects are marshalled (serialized) to a byte sequence, then stored in the server's memory, including the data necessary for unmarshalling. This happens as an atomic operation and can span more than one memory cell. The Java Reflection API should be used. Marshalling and unmarshalling of objects should happen on the Client, not on the Server, but there should also be a server for a 'stored objects map', and access to stored objects should go through it as an atomic operation.
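
A sketch of the marshalling path. It uses Java's built-in serialization rather than a hand-rolled Reflection-based marshaller, purely to keep the example short; the splitting across cells and the reassembly work the same way either way. The class name and cell size are illustrative.

```java
import java.io.*;
import java.util.*;

// Sketch: marshal an object to bytes, split the bytes across fixed-size
// cells, then reassemble and unmarshal.
class ObjectCells {
    static final int CELL_SIZE = 8;   // illustrative cell size in bytes

    // Serialize (the stream includes data needed for unmarshalling),
    // then split into cell-sized chunks.
    static List<byte[]> marshal(Serializable obj) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            byte[] bytes = bos.toByteArray();
            List<byte[]> cells = new ArrayList<>();
            for (int off = 0; off < bytes.length; off += CELL_SIZE)
                cells.add(Arrays.copyOfRange(bytes, off,
                        Math.min(off + CELL_SIZE, bytes.length)));
            return cells;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Concatenate the cells back and deserialize.
    static Object unmarshal(List<byte[]> cells) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            for (byte[] cell : cells) bos.write(cell);
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                return ois.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        List<byte[]> cells = marshal("hello");
        System.out.println(cells.size() + " cells; restored: " + unmarshal(cells));
    }
}
```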

See also: Concurrent Algorithms & PRAM.



Merger is a tool for merging two sorted sequences of the same length and of a comparable data type into one sorted sequence.

For example: < 2, 3, 4 > can be merged with < 3, 5, 7 > into < 2, 3, 3, 4, 5, 7 >.

Merger uses a Bitonic Sorter and a stage that compares the i-th element with the (n - i + 1)-th, for i = 1, 2, ..., n/2.
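
A sketch of this construction in Java: the i-th vs (n - i + 1)-th pairing is realized by writing the second input in reverse, which turns the whole array into a bitonic sequence, and the Bitonic Sorter part is a recursion of half-cleaners. The construction requires power-of-two lengths, so the example merges sequences of length 4 rather than 3; names are illustrative.

```java
import java.util.Arrays;

// Sketch of the merger built from a bitonic merge of half-cleaners.
class Merger {
    // One half-cleaner pass: compare-exchange a[i] with a[i + n/2].
    static void halfCleaner(int[] a, int lo, int n) {
        int half = n / 2;
        for (int i = lo; i < lo + half; i++)
            if (a[i] > a[i + half]) { int t = a[i]; a[i] = a[i + half]; a[i + half] = t; }
    }

    // Bitonic merge: half-cleaner, then recurse on both halves.
    static void bitonicMerge(int[] a, int lo, int n) {
        if (n < 2) return;
        halfCleaner(a, lo, n);
        bitonicMerge(a, lo, n / 2);
        bitonicMerge(a, lo + n / 2, n / 2);
    }

    // Merge two sorted arrays of equal, power-of-two length.
    static int[] merge(int[] x, int[] y) {
        int n = x.length;
        int[] a = new int[2 * n];
        for (int i = 0; i < n; i++) { a[i] = x[i]; a[2 * n - 1 - i] = y[i]; }
        bitonicMerge(a, 0, 2 * n);
        return a;
    }

    public static void main(String[] args) {
        int[] merged = merge(new int[]{2, 3, 4, 9}, new int[]{3, 5, 7, 8});
        System.out.println(Arrays.toString(merged)); // [2, 3, 3, 4, 5, 7, 8, 9]
    }
}
```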

Merger Construction Algorithm, for n = 8.

Merger Example, for n = 8.

Merger is used in a Parallel Sorting Algorithm with time complexity of O(log² n).

See also: Bitonic Sorting Net.

Source: [4].


The Internet Search Engine for Anti-Terror.

How to make an Internet Search Engine?

The easiest way is to write an application that visits web pages either at random, by following clues, in an ordered pattern, or using a mix of these.

After visiting a page, its contents can be downloaded for analysis and categorization.

This process needs to be repeated after a given time period, to check whether the web page's content has changed and needs to be re-categorized.
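
The crawl loop described above can be sketched as follows. Fetching is injected as a function so the sketch stays off the network, and the link-extraction regex is a deliberate simplification; all names are illustrative.

```java
import java.util.*;
import java.util.regex.*;

// Sketch of a crawl loop: fetch a page, extract its links,
// queue unseen links for a later visit.
class Crawler {
    interface Fetcher { String fetch(String url); }

    private static final Pattern LINK = Pattern.compile("href=\"(http[^\"]+)\"");

    // Extract absolute links from a page's HTML (simplified).
    static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = LINK.matcher(html);
        while (m.find()) links.add(m.group(1));
        return links;
    }

    // Visit up to 'limit' pages breadth-first, returning the visit order.
    static List<String> crawl(String seed, Fetcher fetcher, int limit) {
        Deque<String> queue = new ArrayDeque<>(List.of(seed));
        Set<String> seen = new HashSet<>(List.of(seed));
        List<String> visited = new ArrayList<>();
        while (!queue.isEmpty() && visited.size() < limit) {
            String url = queue.poll();
            visited.add(url);
            for (String link : extractLinks(fetcher.fetch(url)))
                if (seen.add(link)) queue.add(link);
        }
        return visited;
    }

    public static void main(String[] args) {
        Fetcher fake = url -> url.equals("http://a") ? "<a href=\"http://b\">b</a>" : "";
        System.out.println(crawl("http://a", fake, 10)); // [http://a, http://b]
    }
}
```

Re-crawling after a time period amounts to re-running the same loop over already-seen pages and comparing the new content against the stored categorization.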

This takes resources such as computer memory, CPU cycles and Internet bandwidth, and is a race against time... but even without a large amount of resources, parts of the Internet can still be visited, and it's cheaper if we know what we are looking for.

Then the downloaded and categorized data can be presented to users, who write 'queries' in a search field and get search results in response, picked from the data prepared for them in the previous steps.
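
A minimal sketch of this prepare-then-query step, assuming a plain inverted index from words to page URLs; all names are illustrative.

```java
import java.util.*;

// Sketch: page text is indexed ahead of time, and a user's query is
// answered from the prepared index rather than by searching the live web.
class SearchIndex {
    private final Map<String, Set<String>> index = new HashMap<>();  // word -> pages

    // Prepare: index every word of a downloaded page.
    void add(String url, String text) {
        for (String word : text.toLowerCase().split("\\W+"))
            index.computeIfAbsent(word, k -> new TreeSet<>()).add(url);
    }

    // Query: return the pages containing every query word.
    Set<String> query(String q) {
        Set<String> result = null;
        for (String word : q.toLowerCase().split("\\W+")) {
            Set<String> pages = index.getOrDefault(word, Set.of());
            if (result == null) result = new TreeSet<>(pages);
            else result.retainAll(pages);
        }
        return result == null ? Set.of() : result;
    }

    public static void main(String[] args) {
        SearchIndex idx = new SearchIndex();
        idx.add("http://a", "parallel sorting nets");
        idx.add("http://b", "parallel random access memory");
        System.out.println(idx.query("parallel sorting")); // [http://a]
    }
}
```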

A Sorting Server, parallel or not, can be useful for preparing large amounts of data for categorization, presentation, and perhaps more.

Various Internet Search Engines can serve different purposes; depending on the purpose, the methods of search, categorization and presentation can vary.

To write a Search Engine that seeks terrorist threats on the Internet, the application writer needs to understand terrorism - its language, thinking patterns, methods of communication and symbolism, perhaps more. He or she also needs to be prepared to handle that properly. This is what I lack, at least for now.

Knowledge of Cryptography helps too.

Parallel Sort Server.

I think that Stitie Space can be used to model a Sorting Net or its parts as necessary.

When combined with a Database, a really large amount of data can be processed in parts, then combined into one whole sorted data structure.

The cost would be O(log² n) plus the cost of modelling and instantiating the Sorting Nets, which is more or less O(m²), where m is the size of the data parts to be processed, m ≤ n.

Because multiple threads are used, multiple processors or cores are used efficiently (I will have to check and correct the code if necessary, once I have a Highly Parallel Machine to work with).
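
Until such hardware is available, the multi-core behaviour can at least be exercised with the JDK's built-in multi-threaded sort, which spreads work across available cores via the common fork/join pool. This is not a sorting net, only a quick way to see multiple cores used for sorting.

```java
import java.util.Arrays;
import java.util.Random;

// Demo: the JDK's parallel sort splits the array across worker threads.
class ParallelSortDemo {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(1_000_000).toArray();
        Arrays.parallelSort(data);   // forks sub-ranges onto the fork/join pool
        System.out.println("sorted: " + isSorted(data)); // sorted: true
    }

    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
        return true;
    }
}
```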

Specifically, I am looking for Highly Parallel ARM Processors with Java Bytecode support.

See also: [4], Stitie Machine 1.1 'Sunsail' for Stitie Space, Advanced RISC Machines (ARM) & Java, perhaps more.


Advanced RISC Machines (ARM) & Java.

'ARM is a family of instruction set architectures for computer processors based on a reduced instruction set computing (RISC) architecture developed by British company ARM Holdings.

A RISC-based computer design approach means ARM processors require significantly fewer transistors than typical CISC x86 processors in most personal computers. This approach reduces costs, heat and power use. These are desirable traits for light, portable, battery-powered devices, including smartphones, laptops, tablet and notepad computers, and other embedded systems.

A simpler design facilitates more efficient multi-core CPUs and higher core counts at lower cost, providing improved energy efficiency for servers.

ARM Holdings develops the instruction set and architecture for ARM-based products, but does not manufacture products. The company periodically releases updates to its cores. Current cores from ARM Holdings support a 32-bit address space and 32-bit arithmetic; the ARMv8-A architecture, announced in October 2011, adds support for a 64-bit address space and 64-bit arithmetic. Instructions for ARM Holdings' cores are fixed-length 32-bit instructions, but later versions of the architecture also support a variable-length instruction set that provides both 32-bit and 16-bit instructions for improved code density.

Some cores can also provide hardware execution of Java bytecodes.'

Source: Wikipedia.

Forces Classification.

In the context of this blog:

- Task Forces: forces that are prepared to perform certain tasks.
- Supporting Forces: forces that are prepared for supporting other Forces in their tasks.
- Repelling Forces: forces that are prepared to repel attacks (make opposing forces retreat or stop attacking in any way).

(Perhaps more to come later.)

Examples in hacking warfare:

- Rootkit installers (Task Forces),
- DoS projectors (Supporting Forces),
- Remote Firewall installers (Repelling Forces).

There can be combinations of force composition, for example:

- Task / Support Forces are prepared to perform their Tasks or to Support other Forces in their tasks.


Bitonic Sorting Net.

A sorting network for bitonic sequences can be constructed from 'half-cleaners'.


For an input sequence of size n, the i-th input is connected by a comparator with the (i + n/2)-th input,
for i = 1, 2, ..., n/2. We'll assume that n is even.


Bitonic Sorter.

Bitonic Sorter Construction Algorithm.

Using a Half-Cleaner and two smaller Bitonic Sorters, we can create a larger Bitonic Sorter recursively, as shown in the image above.
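
The recursion above can be sketched as follows, with the half-cleaner as an inline compare-exchange pass: each half is sorted in the opposite direction, which makes the whole sequence bitonic, and a bitonic merge of half-cleaners finishes the job. n must be a power of two; names are illustrative.

```java
import java.util.Arrays;

// Sketch of a recursive bitonic sorter built from half-cleaners.
class BitonicSorter {
    // Compare-exchange toward the given direction (true = ascending).
    static void cmp(int[] a, int i, int j, boolean up) {
        if (up == (a[i] > a[j])) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }

    // Bitonic merge: one half-cleaner pass, then recurse on both halves.
    static void merge(int[] a, int lo, int n, boolean up) {
        if (n < 2) return;
        for (int i = lo; i < lo + n / 2; i++) cmp(a, i, i + n / 2, up);
        merge(a, lo, n / 2, up);
        merge(a, lo + n / 2, n / 2, up);
    }

    // Sort the halves in opposite directions, then merge the bitonic result.
    static void sort(int[] a, int lo, int n, boolean up) {
        if (n < 2) return;
        sort(a, lo, n / 2, true);
        sort(a, lo + n / 2, n / 2, false);
        merge(a, lo, n, up);
    }

    public static void main(String[] args) {
        int[] a = {7, 1, 6, 2, 8, 3, 5, 4};
        sort(a, 0, a.length, true);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```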

A Complete Bitonic Sorter Example.

Bitonic Sorting is used in Sorting with time complexity of O(log² n) - 'good behavior' for parallel sorting algorithms.

For Sequential Sorting Algorithms, 'good behavior' is a time complexity of O(n · log n).

Source: [4], Wikipedia.

See also: Sorting Nets, Comparing Net, Bitonic Sequence, Bitonic Sorting in Cryptography.

Parallel PRAM Algorithm: List Ranking.

There are many ways we can rank list elements & there are many algorithms that enable us to do so.

The List Rank of a List Element is its distance from the end of the List.

The classic sequential algorithm works in O(n) time.

The quoted parallel algorithm for a PRAM EREW machine works in O(log n) time on n processors.

Work done is measured as the number of processors times the running time, here O(n · log n), so there's a cost for that speed.

Algorithm proceeds as follows:

List-Rank (pseudocode):

1. for each processor i, in parallel
2.   do if next[i] = NIL
3.     then d[i] := 0
4.     else d[i] := 1

5. while there exists an object i such that next[i] != NIL
6.   do for each processor i, in parallel
7.     do if next[i] != NIL
8.       then d[i] := d[i] + d[next[i]]
9.            next[i] := next[next[i]] // the list's integrity is 'broken' here
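
The rounds above can be simulated sequentially in Java: each round reads the old d and next arrays and writes new ones, which mimics the synchronous parallel steps of the EREW pseudocode. A sketch; names are illustrative.

```java
import java.util.Arrays;

// Sequential simulation of the parallel pointer-jumping list-ranking rounds.
class ListRank {
    static final int NIL = -1;

    // next[i] = index of i's successor, or NIL at the end of the list.
    // Returns d, where d[i] is i's distance from the end of the list.
    static int[] rank(int[] next) {
        int n = next.length;
        next = Arrays.copyOf(next, n);         // work on a copy: pointers get broken
        int[] d = new int[n];
        for (int i = 0; i < n; i++) d[i] = (next[i] == NIL) ? 0 : 1;
        boolean any = true;
        while (any) {
            any = false;
            int[] d2 = Arrays.copyOf(d, n);    // simulate synchronous parallel writes
            int[] next2 = Arrays.copyOf(next, n);
            for (int i = 0; i < n; i++) {
                if (next[i] != NIL) {
                    d2[i] = d[i] + d[next[i]];
                    next2[i] = next[next[i]];  // the jump that 'breaks' the list
                    if (next2[i] != NIL) any = true;
                }
            }
            d = d2; next = next2;
        }
        return d;
    }

    public static void main(String[] args) {
        // List 0 -> 1 -> 2 -> 3 -> 4 -> NIL
        System.out.println(Arrays.toString(rank(new int[]{1, 2, 3, 4, NIL}))); // [4, 3, 2, 1, 0]
    }
}
```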

The algorithm's execution:

Pointers are destroyed during the algorithm's execution, but they can be restored easily, in constant time, if we keep copies of them (for example, a copy of the proper pointer in each list element).

There are many similar algorithms, and many tree algorithms can be transformed into this class of algorithms (pointer-jumping algorithms).

Source: [4].

See also: Concurrent Algorithms & PRAM.


Forces Equivalence.

In the context of this blog:

A Force consists of units.

If we have more units of a given type than the enemy, we can attempt to equivalize the (amount of) enemy units by preparing/arming an equal amount of units with weapons meant to counter that force, and by having the rest of our forces use mixed weapons, partly against that type of unit, partly against other target(s).

That way we have a countering force with equivalent firepower and additional supporting force(s).

Countering units are fully specialized against a given target.

Support units are not fully specialized against a given target.

How easily a unit defeats another (% of successful engagements) under given conditions can be called its efficiency against that unit under those conditions.

If we have units, each of which can defeat n units of a given type 'easily' (efficiency of at least 90% of engagements), we can counter an enemy force of m units with m/n units of that type of our own.

That way we can reach 'countering equivalence' against the given force with m/n units of the given type. For example, if each of our units counters n = 3 enemy units, an enemy force of m = 30 units is countered by 10 of ours.

Efficiency of at least 90% we can call 'countering'.

Efficiency of more than 50% we can call 'superior'.

Efficiency of 50% we can call 'equivalent'.

Efficiency of less than 50% we can call 'inferior'.

Efficiency of less than 10% we can call 'negligible'.

If we do not have countering efficiency, we can combine multiple units into a force and statistically assess their combined efficiency against the given target.

See also: Hacking firepower and operatibility.