Thursday, 27 August 2020

'Three Spies Problem'.

By 'node' in this post we mean an internet device; a network of such devices forms a graph.

Communication from node A to node B can go through another, transitory node T, or through
transitory nodes T1, T2, ..., Tn.

Then we can deliver a message to node B with a 100% success rate, as long as the transitory node(s) do not fail.

We can send messages via transitory nodes in many ways.

Let's assume we have three transitory nodes, that is - T1, T2 and T3.

Then we can transmit a message:

1. Via random or predetermined node.

If that node fails, the signal needs to be retransmitted.

2. Via all nodes at once, the whole message through each.

It succeeds as long as at least one transitory node does not fail, but the signal is easier to capture.

3. Via all nodes, 1/3 of the message through each.

Transmission fails if even one node fails, but the lost part(s) of the message can be resent later. There's also less risk of opposing forces capturing the whole signal.

4. Via all nodes, 2/3 of the message through each (a different 2/3 via each).

Transmission succeeds every time at least two nodes do not fail. The whole signal is captured only if 2 transitory nodes are captured. When just one node does not fail, 2/3 of the signal gets through, and the rest can be transmitted again.

When splitting the signal into 2/3 shares, the whole has to be encrypted first, then split into 3 parts, the parts concatenated (joined) appropriately, and each resulting 2/3 encrypted again. It's difficult to decrypt the message or its parts that way if one has no private keys.
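The split-and-pair step above can be sketched in a few lines. This is a minimal illustration under stated assumptions: a toy XOR stands in for the encryption step (the post assumes real ciphers with private keys), and the particular pairing of parts into shares is my own choice, not fixed by the post.

```python
# Sketch of the "2/3 via each node" scheme. XOR with a fixed byte stands in
# for real encryption, purely to keep the example self-contained.

def toy_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    return bytes(b ^ key for b in data)  # XOR is its own inverse

def make_shares(message: bytes):
    whole = toy_encrypt(message)
    third = (len(whole) + 2) // 3
    p1, p2, p3 = whole[:third], whole[third:2 * third], whole[2 * third:]
    # Each transitory node carries a different 2/3 of the encrypted whole:
    return [(p1, p2), (p2, p3), (p3, p1)]

# Which parts (by index 0, 1, 2) each share holds:
LAYOUT = {0: (0, 1), 1: (1, 2), 2: (2, 0)}

def reassemble(i, share_i, j, share_j) -> bytes:
    # Any two surviving shares together contain all three parts.
    parts = {}
    for idx, share in ((i, share_i), (j, share_j)):
        a, b = LAYOUT[idx]
        parts[a], parts[b] = share
    return toy_encrypt(parts[0] + parts[1] + parts[2])

msg = b"attack at dawn"
shares = make_shares(msg)
assert reassemble(0, shares[0], 2, shares[2]) == msg  # node T2 failed, still recoverable
```

Any single captured share reveals only 2/3 of the ciphertext, while any two surviving shares suffice to rebuild the whole - matching the success and capture conditions described above.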

For security, transmissions can occur via different nodes (different internet route paths, for example), and not at the same time. This can be done via three different internet cafes, for example.


See also, if You wish, ... :
- 'Incoming Cipher Crisis'.

Sunday, 23 August 2020

Object State & Context.

About.

State / pl: 'stan' / can be defined in many ways.


Finite State Machine.

A finite-state machine (FSM) / pl: automat skończenie stanowy / is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition
/ pl: 'przejście', 'funkcja przejścia' /. An FSM is defined by a list of its states, its initial state / pl: 'stan początkowy' /, optionally acceptance (final) state(s) / pl: 'stan(y) akceptujący(-e)', 'stan(y) końcowy(-e)' /, transitions, and the inputs that trigger each transition.

... for more, feel free to look: [ Finite-state machine on Wikipedia ].


Finite State Object.

One can look at an object, at its variables. Variables can be - and are - represented as lists of 0's and 1's. Variable values can be concatenated into one list. All possible patterns of 0's and 1's in this list represent all of this object's possible states.

Similarly, an object's methods can be seen as 'families' of transition functions / pl: 'rodziny funkcji przejścia' / between the object's possible states.

In this way, an object can be seen as a finite state automaton, also called an FSM - Finite State Machine.

... i call FSM implemented as object an FSO - Finite State Object.

In an FSO, each possible state can be represented graphically as a dot labelled with its list of 0's and 1's ... and methods are arrows leading from one state to one or more other states. Obviously, not every state must be connected with every other possible state. Not every method must change the object's state, either ... then there's no arrow associated with that method.
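As a toy illustration of an FSO (my own example, in Python): an object with two boolean fields has 2² = 4 possible states, and each method is a transition between some of them - including methods that, for some states, draw no arrow at all.

```python
# A Finite State Object: its state is the bit-list of its fields,
# its methods are transition functions between those states.

class Lamp:
    def __init__(self):
        self.powered = 0
        self.lit = 0

    def state(self) -> tuple:
        # the object's state as a list of 0's and 1's
        return (self.powered, self.lit)

    def plug_in(self):      # transition: (0, x) -> (1, x)
        self.powered = 1

    def switch_on(self):    # guarded transition: only a powered lamp lights up
        if self.powered:
            self.lit = 1

lamp = Lamp()
lamp.switch_on()            # no arrow from (0, 0) for this call: state unchanged
assert lamp.state() == (0, 0)
lamp.plug_in()
lamp.switch_on()
assert lamp.state() == (1, 1)
```

Drawing the four dots (0,0), (0,1), (1,0), (1,1) and the arrows for plug_in and switch_on gives exactly the state-transition diagram described above.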


Context.

We can select a group of objects and contain it in another object; then the states of the contained objects are part of the grouping object's state. Anything outside that is connected to the grouping object can be named 'context' / pl: kontekst /. That is ... context is 'external state'.


Mind States.

I think and feel that Buddhism's concept of Mind can be explained as similar to Finite State Machine(s).

For more details, feel free to look: [ Self-Improvement and Mind States ].

Wednesday, 1 April 2020

Tuple Data Type.

Nice Notation.

Must admit that i love these parts of Computer Sciences that have elegant, scientific notation.

Tuples / pl: 'Krotki' / are nice that way, as well.

For example:
  (char, int8, int8, string) t1 := ('a', 1, 3, "sample string");
  (int8, int8) t2 := (2,3);


Uses.

Was thinking about how to implement Tuple Data Type in 'Ola' Programming Language.


Preconditions.

i think that every object used with the 'Ola' Programming Language ... should have a 'name' string as part of its runtime state, randomly generated at first, with the Random Number Generator initialized with a Time Stamp.

i think it should be possible to rename any/every object.

Successful working of the Tuple Data Type depends on the above preconditions. Without them, this article should be either edited or removed.


Implementation Considerations.

Was thinking about how to implement the Tuple Data Type in the 'Ola' Programming Language - and concluded that a tuple - internally - should consist of:
- String Array with runtime object names,
- Map where object name string is key and TupleValue is value.
  Invariant: During Map's observable moments, name strings in Map are sorted alphabetically.
  TupleValue has:
  - object's type / not sure yet how to encode it /,
  - object's value / according to its type /.
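A rough Python sketch of that internal layout, under stated assumptions: 'Ola' itself doesn't exist here, a Python dict stands in for the Map (the alphabetical-ordering invariant of the Map is not modeled), and all names (Tuple, fresh_name, get_by_name, ...) are hypothetical.

```python
# Sketch of the proposed Tuple internals: an array of runtime object names
# (in positional order) plus a map from name to (type, value).

import itertools
import random

_counter = itertools.count()
random.seed()  # in the post: RNG initialized with a Time Stamp

def fresh_name() -> str:
    return f"obj-{random.getrandbits(32):08x}-{next(_counter)}"

class Tuple:
    def __init__(self, *typed_values):
        # typed_values: (type_name, value) pairs
        self._keys = [fresh_name() for _ in typed_values]   # the String Array part
        self._map = dict(zip(self._keys, typed_values))     # the Map part

    def get_key(self, n: int) -> str:          # O(1): array indexing
        return self._keys[n]

    def get_value(self, n: int):               # O(1) array + O(1) avg. map lookup
        return self._map[self._keys[n]]

    def get_keys(self) -> list:                # O(1)-ish: hand out a copy of the array
        return list(self._keys)

    def set_name(self, n: int, name: str):     # array and map updated together
        old = self._keys[n]
        self._keys[n] = name
        self._map[name] = self._map.pop(old)

    def get_by_name(self, name: str):          # O(1) average map lookup
        return self._map[name]

t = Tuple(("int8", 1), ("int8", 5), ("int8", 3))
t.set_name(2, "name for 3")
assert t.get_value(2) == ("int8", 3)
assert t.get_by_name("name for 3") == ("int8", 3)
```

Note that set_name touches both the String Array and the Map in one method, mirroring the post's remark that both updates need to form a single atomic operation.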


Benefits.

1. Using a String Array and a Map, we provide support for object graphs that can contain cycles.

Names in objects' runtime state are also an important tool for avoiding 'infinite recursion' loops in graphs with cycles.


2. Thanks to String Array and Map data types used, it's cheap and easy to find n-th value in a tuple ... both n-th key, as well as n-th TupleValue.

/ Algorithmic time: Worst: O(1) for retrieving key, and Worst: O(1) for retrieving value /



For example:
  (int8, int8, int8) t := (1,5,3);
  String tk := t.getKey(2);
  TupleValue tv := t.getValue(2);


3. We can also quickly get String Array part of a Tuple ... in which an unsorted copy of keyset is kept.

/ Algorithmic time: Worst: O(1) /


For example:
  (int32, int8, int8, int8) t := (15,6,5,3);
  String[] keys := t.getKeys();


4. Giving a name to a tuple value is also cheap and easy,

/ Algorithmic time: Worst: O(1) /

For example:
  (int8, int8, int8) t := (2,4,3);
  t.setName(2, "name for 3");
  // both String Array, as well as Map's key are updated ... both need to be a part of a single atomic operation.


5. Searching for a tuple value by its name is also easy and cheap.

/ Algorithmic time of insertion/lookup in Java's LinkedHashMap: expected amortized O(1) /

For example:
  (char, int8, string, int8) t := ('2',4,"one",0);
  t.setName(2, "name for one");
  TupleValue tv := t.getTupleValueByName("name for one");


Links.
- Linda & Tuple Space,
- Algorithms' Time-Performance Metrics.

Thursday, 19 March 2020

Algorithms' Time-Performance Metrics.

What is Algorithm?

An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions.

Algorithms + Data Structures = Programs.

The more we know about the input data, the better-fitting algorithms we can choose - which may significantly improve the program's performance.


Computational Complexity.

An Algorithm's Computational Complexity is defined as the amount of computer resources needed for the given algorithm's execution.

The most basic such resources are:
- Processor Time,
- Amount of Memory.

Let's notice that usually it's not possible to express computational complexity as a function of the input data itself / such as: strings, arrays, trees or graphs /.

Usually what we have is only the data size, understood as the amount of input data.

For example:
- In a sorting problem - as data size we consider the number of elements in the input sequence,
- In a binary tree traversal problem - as data size we consider the number of nodes in the tree.

To be able to define Computational Complexity of Algorithm, we must agree on units in which it's measured.

Computational Complexity consists of:
- Time Complexity,
- Memory Complexity.

Time Complexity should depend on the Algorithm itself - it should be independent of the computer, the programming language, and the coding details it's realized with.

For this, we select certain operations characteristic of such an algorithm - we'll call these 'Dominant Operations'. Dominant Operations must also have the following quality: the number of Dominant Operation executions is proportional to the total number of operations executed by any of the computer program's realizations.

For sorting algorithms, as the dominant operation we usually choose the comparison of two elements of the input data, but occasionally also the operation that swaps two elements of the input data.

As the unit of time complexity we take the execution of one dominant operation.

We treat an Algorithm's Computational Complexity as a function of the data size n.


We can consider:
- the Algorithm's Pessimistic Complexity - defined as the amount of resources needed for the 'worst case' of input data,
- the Algorithm's Expected Complexity - defined as the amount of resources needed for the 'typical case' of input data.


Definitions.

To define the concepts of pessimistic and expected time complexity functions, we'll use the following symbols:

Dn - a set of input data sets, each of size n;
t(d) - the number of dominant operations for an input data set d;
Xn - a random variable whose value is t(d) for d ∊ Dn;
pnk - the probability distribution of the random variable Xn: the probability that, for data size n, the algorithm will execute k dominant operations (k ≥ 0).


Algorithm's pessimistic time complexity, we understand as the function:

W(n) = sup { t(d): d ∊ Dn },

Where 'sup' means: the least upper bound (supremum) of the set.


Algorithm's expected time complexity, we understand as the function:

A(n) = ∑ { k ∙ pnk : k ≥ 0 },

Meaning: the expected value ave(Xn) of the random variable Xn.


Notation for Bounds.

Big O (O()) describes an upper bound on the complexity.
Omega (Ω()) describes a lower bound on the complexity.
Theta (Θ()) describes the 'exact' bound: the complexity is bounded both above and below, i.e. it is both O() and Ω().
Little o (o()) describes a strict upper bound - an upper bound that is not tight.

For example, let's consider Hoare's QuickSort Algorithm, Randomized version*:
  QuickSort's expected complexity is linear-logarithmic: Θ(n∙log n),
  QuickSort's worst-case complexity is quadratic: O(n²).

* Scientific proofs for given algorithm's complexity values can be found in 'Introduction to Algorithms' book by:
- Thomas H. Cormen,
- Charles E. Leiserson,
- Ronald L. Rivest.


See also if You wish:
- What is Big O Notation Explained: Space and Time Complexity.


Running Time.

The actual time complexity of an Algorithm / its running time / when used as a program differs, in theory, by a proportionality factor that depends on the particular realization of the given algorithm.

Therefore, the important part of the information in the complexity functions W(n) and A(n) is their order of magnitude - their asymptotic behavior as n tends to infinity.

Most often we try to give the simplest possible function that characterizes the order of magnitude of W(n) and A(n), for example: n, n∙log n, n², n³.


Typical Proportions.

Most of the algorithms considered have time complexity proportional to one of the following functions:


1. log(n) - logarithmic complexity / pl: złożoność logarytmiczna /.

Logarithmic time complexity occurs - for example - with algorithms of the type:

a task of size n is reduced to a task of size n/2, plus a certain constant number of operations.

For example: binary search in a sorted input data set: a1 ≤ a2 ≤ ... ≤ an.
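Binary search is a standard illustration of this: each step halves the search range, so a sorted input of n elements needs only O(log n) comparisons. A minimal Python version:

```python
def binary_search(a, x):
    """Return an index i with a[i] == x in sorted list a, or -1 if absent.
    Each iteration halves the remaining range: O(log n) comparisons."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1      # x can only be in the upper half
        else:
            hi = mid - 1      # x can only be in the lower half
    return -1

assert binary_search([1, 3, 5, 7, 9, 11], 7) == 3
assert binary_search([1, 3, 5, 7, 9, 11], 4) == -1
```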


2. n - linear complexity / pl: złożoność liniowa /.

Linear time complexity occurs - for example - with algorithms in which we execute a constant number of operations for each of the n elements of the input data.

Example of such algorithm is Horner's Algorithm for determining polynomial's values.
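Horner's rule evaluates a degree-n polynomial with one multiplication and one addition per coefficient - a constant amount of work per input element, hence linear time:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x, coefficients given from highest degree down.
    One multiply + one add per coefficient: O(n)."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^2 + 3x + 1 at x = 4:  2*16 + 3*4 + 1 = 45
assert horner([2, 3, 1], 4) == 45
```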


3. n•log(n) - linear-logarithmic complexity / pl: złożoność liniowo - logarytmiczna /.

4. n² - quadratic complexity / pl: złożoność kwadratowa /.

5. n³, n⁴ - polynomial complexity / pl: złożoność wielomianowa /.

6. 2ⁿ - exponential complexity 2ⁿ / pl: złożoność wykładnicza 2ⁿ /.

7. n! - exponential complexity n! / pl: złożoność wykładnicza n! /.


Time-Consuming Algorithms.

Let's notice that algorithms with exponential complexity can solve their problems only for small input data sizes.

There's a threshold, starting from which the exponential function increases its value so fast that realizing the algorithm on a computer becomes impossible.

As an example, let's assume that we have:
- an input data set of size n,
- an algorithm with time complexity 2ⁿ,
- two computers:
  - on the first computer, the dominant operation executes in 10⁻⁶ seconds,
  - on the second computer, the dominant operation executes in 10⁻⁹ seconds.


Time needed to perform calculations by our algorithm is presented in a following table:

  Size n                     | 20      | 50         | 100              | 200
  Calculation time (2ⁿ/10⁶)  | 1.04 s  | 35.7 years | 4∙10¹⁴ centuries | 5∙10⁴⁴ centuries
  Calculation time (2ⁿ/10⁹)  | 0.001 s | 13 days    | 4∙10¹¹ centuries | 5∙10⁴¹ centuries



That is, for time-consuming exponential algorithms, using even 1000-times-faster computers may mean almost nothing in practice.
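The magnitudes in the table are easy to reproduce (taking a year as roughly 3.16∙10⁷ seconds):

```python
# Reproducing the 2^n runtime table: time = number of operations / ops per second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7

def runtime_seconds(n, ops_per_second):
    return 2 ** n / ops_per_second

# n = 20 on the slower (10^6 op/s) machine: about one second.
assert round(runtime_seconds(20, 10**6), 2) == 1.05

# n = 50 on the same machine: decades.
years_50 = runtime_seconds(50, 10**6) / SECONDS_PER_YEAR
assert 35 < years_50 < 36

# A 1000x faster machine barely moves the threshold: n = 50 still takes ~13 days.
days_50_fast = runtime_seconds(50, 10**9) / 86400
assert 13 < days_50_fast < 14
```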

It doesn't matter much that precise time differs on different computers, with different software and hardware.

This is only an example, presented here to convey idea of how important Algorithmic Speed is.


Wednesday, 11 March 2020

Causal Analysis: Causes, Results & Correlations.

Causal Analysis.

Causal analysis is the field of experimental design and statistics pertaining to establishing cause and effect.

Typically it involves establishing three elements: correlation; sequence in time / that is, causes must occur before their proposed effects /; and a plausible physical or information-theoretical mechanism for an observed effect to follow from a possible cause, often involving one or more experiments.


Causation, functional relation.

When we have:
  a Cause Ca1,
  a Result R1,
  a Proof that Ca1 causes R1.

Then we can say that we have a Cause - Result relation between Ca1 and R1.


Correlation, probabilistic relation.

'Correlation is not causation' means that just because two things correlate does not necessarily mean that one causes the other.

As a seasonal example, just because people in the UK tend to spend more in the shops when it's cold and less when it's hot doesn't mean cold weather causes frenzied high-street spending.

A more plausible explanation would be that cold weather tends to coincide with Christmas and the new year sales.


See also, if You wish: Causal Notation.

Tuesday, 28 January 2020

Conditional Instruction & Expressions.

Conditional Instruction 'if' consists of:


1. Conditional expression that evaluates to a boolean value (truth or falsehood).

for example: ( o.getA() > 0 ) might be an expression that evaluates to a boolean value.
for example: o.containsData() might be an expression that evaluates to a boolean value.
for example: ( o != null && o.containsData() && o.getA() > 0 ) might be an expression that evaluates to a boolean value.


2. Instruction to execute in case of truth.

for example: return; // a valueless return statement.
for example: { } // empty instruction.
for example: doSomething(); // function call.
for example: { doSomething(1); doSomething(); } // a compound instruction consisting of two function calls,
// with similar function signatures.


3. Syntax.

Syntax is the keywords, reserved words - the building blocks of a programming language.

Syntax can vary between Programming Languages, but it has to be unambiguous.

for example: if (a = 0) then doSomething();
for example: if (a == 0) { doSomething(); }
for example: if ( (a == 0) iff (b > 3) ) { doSomethingUseful() }
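As an aside on ambiguity: in C-family languages `if (a = 0)` compiles but assigns instead of comparing - a classic bug. Python (used here only as an illustration) resolves this at the syntax level by rejecting assignment where an expression is expected:

```python
# Python forbids `=` inside an `if` condition outright, catching the
# C-family assignment-vs-comparison bug at compile time.
try:
    compile("if a = 0: pass", "<example>", "exec")
    syntax_ok = True
except SyntaxError:
    syntax_ok = False

assert syntax_ok is False     # `if a = 0:` is not valid Python

a = 0
assert (a == 0) is True       # comparison: unambiguous
```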


See also: Computer Thinking.

Monday, 27 January 2020

Computer Thinking.

About.

This article is a modest attempt at answering the following two questions:
1. Do computers think?
2. How decisions are made in a Computer Program?

... there's a lot more of related knowledge, answers provided here are just a short introduction to the topic.


Beings & Phenomena.

Buddhist definition is as follows:

def. Beings have Mind, Phenomena don't have Mind. That's how these differ.


Computation & Calculation.

Computers do not think, only calculate & compute.

Calculation is a Mathematical / not only algebra / operation that takes input value(s) and produces result(s),

For example:
- 2 + 2 = 4,
- not true = false,
- cos 60° = 0.5.

Computation is calculation with side effects, as for example:
- Printing an image on a Printer,
- Sending a message via the Internet,
- Saving a file on a USB Pendrive.


Conditional instruction 'if' semantics.

Let's see how 'conditional instruction if ' works internally.

Let's see how processor instruction with assembler mnemonic jne may work.
/ 'jne' is mnemonic for 'jump if not equal' /.

We should also be aware that processors have 'registers' - a small number of low-capacity, high-access-speed memory cells.

Code:

101. cmp val1 val2;
102. jne :my_section;
103. jmp :my_section_equal;
104. ...
105. ...

(...)

:my_section: 205. ...

(...)

:my_section_equal: 320. ...

(...)


Above code is easy to explain:

1. At line 101, cmp compares val1 with val2.

If they are equal, the 'eq register' / or 'zero flag register' / stores 1 for 'true'; otherwise it stores 0 for 'false'.

2. At line 102, if val1 is not equal to val2, then jne jumps to the memory address to which the :my_section label points: line 205.

Otherwise, the code at line 103 is executed in the next processor step.


How does it work?

This 'magic' of conditional instruction is explained with few simple tricks:

1. Compare eq.

eq = (val1 - val2 == 0);

This is a logical expression with a boolean value, which at the low level evaluates to 1 for 'true' or 0 for 'false'.

This 0-or-1 value is stored in the 'eq register'.

2. Jump if not equal.

We consider here: 'instruction pointer register', or in short: 'ip register'.

If the 'eq register' stores 0 - that is, when the compare's result is false - then the 'ip register' is filled with a number: the memory address to which the :my_section label points, at line 205.

Otherwise, the 'ip register' is advanced by one instruction, to line 103.

3. Next instruction.

The processor continues with the next instruction, as pointed to by the instruction pointer register ... executing either the instruction at line 205 or the instruction at line 103.

3a. Line 205.

Processor continues with instruction at this memory address.

3b. Line 103.

... a 'jmp' instruction is executed, storing in 'ip register' a value of 320.

/ 'jmp' is a mnemonic for unconditional 'jump' instruction /.

4. Summary.

Depending on the compare operation's result, we end up jumping either to line 103 and then to the line pointed to by the :my_section_equal label, or to the line pointed to by the :my_section label.

That's it - this is a conditional branch. We end up executing different code depending on whether val1 is equal to val2 or not.
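The whole mechanism can be simulated in a few lines. This sketch (in Python, with registers modeled as plain variables and the example's line numbers used as addresses) is an illustration, not real processor behavior:

```python
# Simulating the eq/ip register mechanics of the cmp/jne/jmp example above.

def run(val1, val2):
    ip = 101          # instruction pointer register
    eq = 0            # eq / zero-flag register
    while True:
        if ip == 101:                         # cmp val1 val2
            eq = 1 if (val1 - val2 == 0) else 0
            ip += 1
        elif ip == 102:                       # jne :my_section
            ip = 205 if eq == 0 else ip + 1
        elif ip == 103:                       # jmp :my_section_equal
            ip = 320
        elif ip == 205:
            return "my_section"               # values differ
        elif ip == 320:
            return "my_section_equal"         # values are equal

assert run(1, 2) == "my_section"
assert run(7, 7) == "my_section_equal"
```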


See also, if You wish: Conditional Instruction & Expressions.


P.S. Please note that a memory address and an assembly language line are different things, but they are related and work in a similar way.

Sunday, 26 January 2020

Events & Notifications.

Events, Notifications & Responses.

Events occur when something in Reality happens.

For example:
- When a key is pressed on a keyboard,
- When a certain pattern in electromagnetic radiation is detected by a sensor,
- When a printer machine malfunctions.

... these are all Events occurring in Reality - events of a certain type, with a given state: additional information & details.


Software parts can handle event occurrences and can 'notify' other code parts about events happening ... making the hardware-software infrastructure respond to such event occurrences somehow.


Also, when objects in memory are created, when objects in memory are destroyed, or when objects change state - this too can be a cause of an event notification happening, of code parts executing a response.


Registration.

In Object Oriented Programming Languages, often there are objects that observe 'sensors', and notify objects that are registered for notifications.

How to register for event notifications?

... often there's a method in the sensor-observing object that handles registrations; other objects call it and give a 'reference' to themselves. Such references are stored in a List or another data structure; then, when an event occurs ... a method is called on each registered object.

/ References are similar to pointers, handles, etc ... and are used in Java Programming Language, for example /.
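A minimal sketch of such registration (in Python; the class and method names are hypothetical, and the 'sensor' here is simply triggered by hand):

```python
# Observer-style registration: objects register themselves with a
# sensor-observing object, which notifies each registered reference on an event.

class KeySensor:
    def __init__(self):
        self._listeners = []           # stored references to registered objects

    def register(self, listener):
        self._listeners.append(listener)

    def key_pressed(self, key):        # an event occurs...
        for listener in self._listeners:
            listener.on_key(key)       # ...and registered objects are notified

class Logger:
    def __init__(self):
        self.seen = []

    def on_key(self, key):             # the notification method
        self.seen.append(key)

sensor = KeySensor()
logger = Logger()
sensor.register(logger)                # logger passes a reference to itself
sensor.key_pressed("a")
assert logger.seen == ["a"]
```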



Publish–Subscribe Pattern & Message Broker / Event Bus.

In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers, but instead categorize published messages into classes without knowledge of which subscribers, if any, there may be.

Such Messages have type, and place in type-hierarchy, in inheritance-tree.

Publishers publish messages of a given type(s), Subscribers subscribe for messages of a given type(s), and message broker / event bus handles messages of all types that fit in its type-hierarchy.

/ there's root message type, every other message is either of that type, or inherits from it, either directly or indirectly /.

Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are.

This is a nice example of loose class coupling, of reducing dependencies in code.

Code parts - publishers & subscribers - do not have to 'know' about each other; they are concerned only with the messages they send and/or receive.

This pattern can provide Advantages of Loose Coupling & Scalability.
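A compact sketch of such a broker (hypothetical API, in Python), using the language's class hierarchy as the message type-hierarchy - so subscribing to a base type also delivers its subtypes, as described above:

```python
# A minimal event bus: publishers and subscribers know only message types,
# never each other. Subscribing to a base class also receives its subclasses.

class Message:                         # the root message type
    pass

class OrderPlaced(Message):            # a message type inheriting from the root
    def __init__(self, order_id):
        self.order_id = order_id

class EventBus:
    def __init__(self):
        self._subs = []                # (message_type, handler) pairs

    def subscribe(self, msg_type, handler):
        self._subs.append((msg_type, handler))

    def publish(self, message):
        for msg_type, handler in self._subs:
            if isinstance(message, msg_type):   # type-hierarchy match
                handler(message)

bus = EventBus()
received = []
bus.subscribe(Message, lambda m: received.append(("any", type(m).__name__)))
bus.subscribe(OrderPlaced, lambda m: received.append(("order", m.order_id)))
bus.publish(OrderPlaced(42))
assert ("order", 42) in received
assert ("any", "OrderPlaced") in received
```

Neither the publisher of OrderPlaced(42) nor the two handlers reference one another - only the bus and the message types - which is the loose coupling the pattern provides.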

Thursday, 16 January 2020

Internet.

Introduction.

The Internet is a network of networks. Computers, as well as other Internet Devices running programs ... can connect and communicate. It's like a graph / nodes and connections - or dots and the lines that connect them /.


i think one can speak abstractly about networks, as well.

... what is a 'human internet'?


Structure.



-=- Edges & Core. -=-



Network Edge consists of the devices that we are most familiar with / computers, PDAs, cellphones, tablets and other MIDs [Mobile Internet Devices] / and that we use on a daily basis. They are connected together into networks that are themselves interconnected / via the Network Core - a hierarchy of ISPs [Internet Service Providers] /.

Software running on Internet Devices communicates using protocols. Protocols organize the way software interacts - the order and content of the messages exchanged.


Protocols are separated into Layers, each Layer providing certain services to the infrastructure.


Five-Layer Internet Protocol Stack.

... it consists of:

1. Application Layer: communication/interaction between applications.

2. Transport Layer: makes sure that the right program running on a device receives the right information / data is sent to a certain port, using a certain protocol /, and provides additional services such as 'guaranteed' delivery of messages, information flow speed control, and other internal technical services such as congestion control and breaking application-layer messages into segments. Depending on the transport layer protocol / TCP or UDP /, some services may be provided and some may not.

3. Network Layer / Internet Layer: responsible for moving information packets from the source device to the destination device [possibly with more than one Internet Device on the path to traverse through]. The Internet's network layer includes the IP Protocol, which defines how transmitted information is categorized and how systems react to this information. The Internet's network layer also contains routing protocols that determine the routes that information packets [in this layer named datagrams] take between sources and destinations.

4. Link Layer / Network Access Layer: software element, responsible for moving packets [named 'frames' in this layer] from one Internet Device [router is also Internet Device] to another in one hop.

5. Physical Layer: while the link layer is software, physical layer consists of hardware infrastructure - network cards, transmission medium of the link [for example: twisted-pair copper wire, fiber optics, communication satellites].




-=- Layers. -=-

Wednesday, 15 January 2020

Linda & Concurrency.

There are four classic concurrency problems:

1. Critical Section / Mutual Exclusion,
2. Readers & Writers,
3. Producers & Consumers,
4. Five Dining Philosophers.


Linda - Tuple Space - has uses in Concurrency.

To achieve required results, not only code needs to be written ... tuple space also needs to be filled with initial state.


Critical Section / Mutual Exclusion. Readers & Writers.

Let's assume that two or more processes want to access a critical section of code.

The critical section is the program fragment that can be executed by at most one process at a time.

We assume that each process which enters critical section, will leave it in finite time.

process PCS;
begin
  repeat
    personal_affairs();
    Input("ticket");

    critical_section();

    Output("ticket");
  until false
end;

--
initial state:
* ("ticket");


The readers-writers problem relates to an object such as a file that is shared between multiple processes. Some of these processes are readers i.e. they only want to read the data from the object and some of the processes are writers i.e. they want to write into the object.

The readers-writers problem is used to manage synchronization so that there are no problems with the object data. For example - If two readers access the object at the same time there is no problem. However if two writers or a reader and writer access the object at the same time, there may be problems.

To solve this situation, a writer should get exclusive access to an object i.e. when a writer is accessing the object, no reader or writer may access it. However, multiple readers can access the object at the same time.

process Reader;
var
  r: int32;
begin
  repeat
    personal_affairs();
    Input (r: int32, 0);
    Output (r+1, 0);

    do_read();

    Input (r: int32, w: int32);
    Output (r-1, w);
  until false
end;

process Writer;
begin
  repeat
    personal_affairs();
    Input (r: int32, 0);
    Output (r, 1);
    Input (0, 1);

    do_write();

    Output (0, 0);
    until false
end;

--
initial state:
* ( 0, 0 );

Producers & Consumers. Five Dining Philosophers.

There are P > 0 processes in the system which produce certain data, and C > 0 processes which receive data from the producers.

Between producers and consumers there can be buffer B with capacity BCAPACITY, whose task is balancing temporary differences during processes execution time.

Processes that produce data we'll call producers, and processes receiving data - consumers.

Let's notice that a Tuple in Linda - Tuple Space - is not the Buffer/FIFO Queue that do_produce() & do_consume() operate on.

A Tuple in Tuple Space is only used to store the number of used 'places' in the Buffer/FIFO Queue.

BCAPACITY is a constant: the total number of 'places' in the Buffer/FIFO Queue.


process Producer;
const
  // B Capacity: 0 ... 10^7.
  int32 BCAPACITY = 10000000;
begin
  repeat
    personal_affairs();
    Input(n: int32);
    if (n < BCAPACITY) {

      do_produce();

      Output(n+1);
    } else {
      Output(BCAPACITY);
    }
  until false;
end;

process Consumer;
begin
  repeat
    personal_affairs();
    Input(n: int32);
    if (n > 0) {

      do_consume();

      Output(n-1);
    } else {
      Output(0);
    }
  until false;
end;

--
initial state:
* (0);

Five philosophers dine at a round table. Before each there's a plate. Between the plates lie forks. In the middle of the table there's a serving dish with fish.

Each philosopher thinks.

When he gets hungry, he reaches for the forks lying at his right and left side, then starts eating.

When he's done eating, he puts the forks away and again devotes himself to thinking.

Following conditions have to be met:
* No two philosophers eat with the same fork at the same time.
* Each philosopher eats only and always with two forks (and always those which lie near his plate).
* No philosopher will die of starvation.
* Also, we want each of the philosophers to act the same way.

process Philosopher(i);
var
  int32 i;
begin
  repeat
    think();

    Input("room ticket");
    Input("fork", i);
    Input("fork", (i+1) mod 5);

    eat();

    Output("fork", i);
    Output("fork", (i+1) mod 5);
    Output("room ticket");
  until false;
end;

--
initial state:
* ("fork", 0);
* ("fork", 1);
* ("fork", 2);
* ("fork", 3);
* ("fork", 4);
* ("room ticket");
* ("room ticket");
* ("room ticket");
* ("room ticket");

Sunday, 12 January 2020

Transcending Limits of Computer Sciences.

Can computers solve any problem?

... certainly not.


For example:

1. Computers can't store a number of infinite length, because there's not enough matter & energy in the Universe.

Storing numbers requires memory / either persistent or volatile / ... and memory is produced from silicon.

Storing a number of infinite length would require an infinite amount of memory.

... as far as it seems now, amount of energy & matter in the Universe is finite.


2. Some calculations are too time-consuming.

... it's related to 'Algorithmic Speed' & 'Big O Notation'.

Sometimes passwords need to be cracked within hours, while 'naive' approach may take tens of centuries to decrypt a message.


3. ...


Transcending Limits.

Regarding memory issues - perhaps discoveries of physics & other sciences will lead to more of memory being available for computers, both home PCs, as well as for Laptops and Supercomputers.

... a workaround for storing infinite-length numbers is writing them in a different form:

0.333333... = 0.(3) = 1/3.
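Python's fractions module is one concrete instance of this workaround - the infinite decimal expansion is stored as a finite numerator/denominator pair:

```python
from fractions import Fraction

one_third = Fraction(1, 3)     # the repeating decimal 0.(3), stored exactly
assert one_third * 3 == 1      # exact arithmetic, no rounding

# Floating point, by contrast, can only approximate such values:
assert 0.1 + 0.1 + 0.1 != 0.3
```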



Where algorithmic speed is concerned, one may investigate common bottlenecks in a group of problems - for example: quantum cryptography - find algorithms for solving those bottlenecks with Advanced Mathematics, then invest in creating hardware support / pl: wsparcie sprzętowe / for these purposes.


What is Hardware Support?

... it is a piece of hardware that supports/enables certain functionalities, as for example:
- blitter chips built into computers,
- a sound chip built into a computer's motherboard,
- graphics cards attached to the motherboard,
- ...

... there are more steps of course, software drivers, libraries, etc. need to be written as well.


Decreasing algorithmic costs, increasing algorithmic speed - as in 'Algorithms' Time-Performance Metrics' - allows software to perform calculations very fast - even if there's a lot of data to process.