Thursday, 19 March 2020

Algorithms' Time-Performance Metrics.

What is an Algorithm?

An algorithm is a procedure or formula for solving a problem, based on conducting a sequence of specified actions.

Algorithms + Data Structures = Programs.

The more we know about the input data, the better-fitting algorithm we can choose - which may significantly improve the program's performance.


Computational Complexity.

An algorithm's computational complexity is defined as the amount of computer resources needed to execute the algorithm.

The most basic such resources are:
- Processor Time,
- Amount of Memory.

Let's notice that usually it's not possible to express computational complexity as a function of the input data itself / such as: sequences, arrays, trees or graphs /.

Usually, all we have is the data size, understood as the amount of input data.

For example:
- In a sorting problem, as the data size we consider the number of elements in the input sequence,
- In a binary tree traversal problem, as the data size we consider the number of nodes in the tree.

To be able to define an algorithm's computational complexity, we must agree on the units in which it's measured.

Computational Complexity consists of:
- Time Complexity,
- Memory Complexity.

Time complexity should depend on the algorithm itself; it should be independent of the computer, the programming language, or the coding details of its realization.

For this, we select certain operations characteristic of the algorithm - we'll call these 'Dominant Operations'. Dominant operations must also have the following quality: the number of dominant operation executions is proportional to the total number of operations executed by any computer program realizing the algorithm.

For sorting algorithms, as the dominant operation we usually choose the comparison of two elements of the input sequence, but occasionally also the operation that swaps two elements of the input sequence.

As the unit of time complexity we take the execution of one dominant operation.

We treat an algorithm's computational complexity as a function of the data size n.
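
As an illustration, here's a minimal Python sketch / an assumed example, not from the original text / that uses selection sort and counts comparisons as the dominant operation:

def selection_sort_counting(data):
    # Sort a copy of the input and count comparisons - the dominant operation.
    a = list(data)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):
            comparisons += 1                       # one dominant operation
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]
    return a, comparisons

print(selection_sort_counting([5, 2, 4, 1, 3]))
# ([1, 2, 3, 4, 5], 10) - for data size n, always n*(n-1)/2 comparisons

Whatever computer runs this code, the number of comparisons depends only on the data size n - which is exactly why we measure complexity in dominant operations.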


We can consider:
- Pessimistic complexity of an algorithm - defined as the amount of resources needed for the 'worst case' of input data,
- Expected complexity of an algorithm - defined as the amount of resources needed for a 'typical case' of input data.


Definitions.

To define the concepts of the pessimistic and expected time complexity functions, we'll use the following symbols:

Dₙ - the set of input data sets, each of size n;
t(d) - the number of dominant operations executed for an input data set d;
Xₙ - a random variable whose value is t(d) for d ∊ Dₙ;
pₙₖ - the probability distribution of the random variable Xₙ, i.e. the probability that for data size n the algorithm will execute k dominant operations (k ≥ 0).


By the algorithm's pessimistic time complexity we understand the function:

W(n) = sup { t(d): d ∊ Dₙ },

where 'sup' denotes the supremum / the least upper bound of the set /.


By the algorithm's expected time complexity we understand the function:

A(n) = Σ k•pₙₖ  / the sum taken over k ≥ 0 /,

meaning: the expected value ave(Xₙ) of the random variable Xₙ.
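
As a small illustration / an assumed example, not from the original text /: for a successful linear search of a key located at a uniformly random position in a sequence of size n, the pessimistic complexity is W(n) = n and the expected complexity is A(n) = (n + 1)/2 comparisons. A minimal Python sketch confirms this by enumerating all n positions:

def linear_search_comparisons(a, key):
    # Count comparisons made by a linear search until the key is found.
    comparisons = 0
    for x in a:
        comparisons += 1                           # one dominant operation
        if x == key:
            break
    return comparisons

n = 10
data = list(range(n))
counts = [linear_search_comparisons(data, key) for key in data]
print(max(counts))                                 # W(n) = n         -> 10
print(sum(counts) / len(counts))                   # A(n) = (n + 1)/2 -> 5.5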


Notation for Bounds.

Big O (O()) describes the upper bound of the complexity.
Omega (Ω()) describes the lower bound of the complexity.
Theta (Θ()) describes the 'exact' (tight) bound of the complexity - both an upper and a lower bound at once.
Little o (o()) describes a strict upper bound - an upper bound that is not tight.
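
For instance: the function 3n² + 5n is Θ(n²) - and therefore also O(n²) and Ω(n²); it is o(n³), but it is not o(n²).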

For example, let's consider Hoare's QuickSort algorithm, in its randomized version*:
  QuickSort's expected complexity is linear-logarithmic: Θ(n•log n),
  QuickSort's worst-case complexity is quadratic: O(n²).

* Proofs of these complexity values can be found in the book 'Introduction to Algorithms' by:
- Thomas H. Cormen,
- Charles E. Leiserson,
- Ronald L. Rivest.
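
For intuition, here's a minimal Python sketch of a randomized QuickSort / a simple out-of-place version, assumed here for illustration - not the in-place Hoare partition scheme analysed in the book /:

import random

def quicksort(a):
    # Expected running time Θ(n•log n); worst case O(n²) for unlucky pivots.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                       # randomization: pivot chosen at random
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 7, 1, 9, 2, 8]))               # [1, 2, 3, 7, 8, 9]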


See also if You wish:
- What is Big O Notation Explained: Space and Time Complexity.


Running Time.

The actual time complexity of an algorithm / its running time / when realized as a program differs, in theory, only by a proportionality factor that depends on the particular realization of the algorithm.

Therefore, the important part of the information carried by the complexity functions W(n) and A(n) is their order of magnitude - their asymptotic behavior as n tends to infinity.

Most often we try to give the simplest possible function that characterizes the order of magnitude of W(n) and A(n), for example: n, n•log n, n², n³.


Typical Proportions.

Most of the algorithms considered here have time complexity proportional to one of the following functions:


1. log(n) - logarithmic complexity / pl: złożoność logarytmiczna /.

Logarithmic time complexity occurs - for example - in algorithms of the type:

A task of size n is reduced to a task of size n/2, plus a constant number of additional operations.

For example: binary search in a sorted input sequence: a₁ ≤ a₂ ≤ ... ≤ aₙ.
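
A minimal Python sketch of binary search / an assumed illustration /: each step halves the search range, so at most about log₂(n) comparisons are made:

def binary_search(a, key):
    # a must be sorted; returns an index of key in a, or -1 if key is absent.
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == key:                          # dominant operation: comparison
            return mid
        elif a[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))       # 3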


2. n - linear complexity / pl: złożoność liniowa /.

Linear time complexity occurs - for example - in algorithms that execute a constant number of operations for each of the n elements of the input data.

An example of such an algorithm is Horner's algorithm for evaluating a polynomial's value.
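
A minimal Python sketch of Horner's scheme / an assumed illustration /: one multiplication and one addition per coefficient, so the amount of work is linear in the number of coefficients:

def horner(coefficients, x):
    # Coefficients are given from the highest to the lowest power of x.
    result = 0
    for c in coefficients:
        result = result * x + c
    return result

print(horner([2, 3, 5], 4))                        # 2*4² + 3*4 + 5 = 49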


3. n•log(n) - linear-logarithmic complexity / pl: złożoność liniowo-logarytmiczna /.

4. n² - quadratic complexity / pl: złożoność kwadratowa /.

5. n³, n⁴ - polynomial complexity / pl: złożoność wielomianowa /.

6. 2ⁿ - exponential complexity 2ⁿ / pl: złożoność wykładnicza 2ⁿ /.

7. n! - exponential complexity n! / pl: złożoność wykładnicza n! /.


Time-Consuming Algorithms.

Let's notice that algorithms with exponential complexity can solve their problems only for small input data sizes.

There's a threshold beyond which the exponential function grows so fast that running the algorithm on a computer becomes practically impossible.

As an example, let's assume that we have:
- an input data set of size n,
- an algorithm with time complexity 2ⁿ,
- two computers:
  - on the first computer, a dominant operation executes in 10⁻⁶ of a second,
  - on the second computer, a dominant operation executes in 10⁻⁹ of a second.


The time needed by our algorithm to perform its calculations is presented in the following table:

Size n                     | 20      | 50         | 100              | 200
Calculation time (2ⁿ/10⁶)  | 1.04 s  | 35.7 years | 4∙10¹⁴ centuries | 5∙10⁴⁴ centuries
Calculation time (2ⁿ/10⁹)  | 0.001 s | 13 days    | 4∙10¹¹ centuries | 5∙10⁴¹ centuries



That is, for time-consuming exponential algorithms, using even a 1000 times faster computer may mean almost nothing in practice.
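
For the curious, a small Python sketch / assuming the same per-operation times as above / that reproduces the order of magnitude of these numbers:

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for n in (20, 50, 100, 200):
    operations = 2 ** n                            # dominant operations for size n
    for seconds_per_op in (1e-6, 1e-9):
        seconds = operations * seconds_per_op
        years = seconds / SECONDS_PER_YEAR
        print(f"n = {n:3}, {seconds_per_op:.0e} s/op: "
              f"{seconds:.3g} s  (about {years:.3g} years)")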

It doesn't matter much that the precise times differ between computers with different software and hardware.

This is only an example, presented here to convey the idea of how important algorithmic speed is.

