Saturday, June 10, 2017

Decreasing Complexity Means Increasing Performance

What is wrong with the piece of C code below?
for (i = 0; i < strlen(str); i++) {
    // Do some stuff
}
The length of the string is recomputed at every iteration, which is a major waste of time.

Indeed, by the first iteration we already know how many times we need to iterate; yet when it's time for the second iteration, the already known value is recomputed once again, and we keep doing this n times.

Hence, complexity here is unnecessarily high.

Detailing the code above in a more explicit manner, it may be translated as follows:
int n = 0;
while (str[n] != '\0') {
    n++;
}
for (i = 0; i < n; i++) {
    // Do some stuff
    n = 0;
    while (str[n] != '\0') {
        n++;
    }
}
At a glance, not only does the code become longer and redundant, but the fragment's time complexity is now obviously O(n²).

The easy optimisation is to introduce a variable n that holds the length of the string, so that no additional computation is needed anymore:
int n = strlen(str);
for (i = 0; i < n; i++) {
    // Do some stuff
}

This easy fix takes us from a quadratic time complexity down to a linear one, O(n).
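
To see the difference concretely, here is a minimal benchmark sketch; the character counting in the body is just an arbitrary stand-in for "do some stuff", and an optimising compiler may hoist the strlen() call by itself, so compile without optimisations to observe the gap:

#include <stdio.h>
#include <string.h>
#include <time.h>

// O(n^2): strlen() is re-evaluated at every loop test.
long long slowScan(const char *str) {
    long long hits = 0;
    for (size_t i = 0; i < strlen(str); i++)
        if (str[i] == 'a') hits++;
    return hits;
}

// O(n): the length is computed once, outside the loop.
long long fastScan(const char *str) {
    long long hits = 0;
    size_t n = strlen(str);
    for (size_t i = 0; i < n; i++)
        if (str[i] == 'a') hits++;
    return hits;
}

int main(void) {
    enum { N = 50000 };       // large enough to feel the difference
    static char buf[N + 1];
    memset(buf, 'a', N);
    buf[N] = '\0';

    clock_t t0 = clock();
    long long s = slowScan(buf);
    clock_t t1 = clock();
    long long f = fastScan(buf);
    clock_t t2 = clock();

    printf("slow: %lld hits in %.3f s\n", s, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("fast: %lld hits in %.3f s\n", f, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}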

That said, such enhancements hardly matter when the data is small.

Sunday, February 28, 2016

Indirect Recursion - A Natural Phenomenon

Indirect recursion can be observed in seeds and trees: a seed eventually becomes a tree, and the tree in turn produces many seeds, which themselves become trees, and so forth.

An interesting case was beautifully described in the Qur'an, chapter 2, verse 261, a plain description that is easy to represent:

"The example of those who spend their wealth in the way of Allah is like a seed [of grain] which grows seven spikes; in each spike is a hundred grains. And Allah multiplies [His reward] for whom He wills. And Allah is all-Encompassing and Knowing". (Source).

Regardless of the religious aspect, one can notice the exponential growth of seeds, which is welcome in agriculture but seriously dreaded in Computer Science.


If we represent this phenomenon (seeds and spikes) as a Java routine (a generation counter is added as a base case; without it, the mutual recursion would never terminate):


void seed(int x, int generations) {
    if (generations == 0) {           // Base case: stop planting
        return;
    }
    System.out.println("Seed " + x);
    for (int i = 0; i < 7; i++) {     // A seed grows seven spikes
        spike(i, generations);
    }
}

void spike(int y, int generations) {
    System.out.println("Spike " + y);
    for (int j = 0; j < 100; j++) {   // Each spike carries a hundred grains
        seed(j, generations - 1);     // ... and each grain is a new seed
    }
}

Let's refer to seed as f(x) and spike as g(x):

f(x) performs 7 calls to g, and g(y) performs 100 calls to f.

Clearly, a single call to f(x) triggers 700 further "calls" to itself, in an indirect fashion.
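
To make the growth explicit, let C(d) denote the number of calls to f across d generations (the generation bound d is the assumption added above so that the recursion terminates):

C(0) = 1
C(d) = 7 × 100 × C(d − 1) = 700 · C(d − 1)

hence C(d) = 700^d: the number of calls explodes exponentially with the depth.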

Good deeds with exponential seeds are surely a nice reward, but such exponentially growing subroutine calls will only have a negative impact on performance.


One should be careful with recursion, direct or indirect.

Tuesday, June 24, 2014

Triple Nested Loops - Two Linear Loops and One Logarithmic with a Non-Constant Base - The Monster

Obtaining the function that gives the exact number of iterations performed by a set of nested loops never seemed to be a tedious exercise. However, I came across this interesting case:
- Linear outer loop
- Dependent first level linear inner loop
- Dependent second level logarithmic inner loop with a general base.

for ( i = 1 ; i <= n ; i ++ ) {
    for ( j = 1; j <= i ; j ++ ) {
        for ( k = 1 ; k <= j ; k *= a ) {
// Some constant time instructions
        }
    }
}

Floored and ceilinged logarithmic functions inside summations turn out to have very lengthy closed forms (see my previous post).

For a given j, the innermost loop performs ⌊log_a j⌋ + 1 iterations, so the total number of iterations of the fragment is:

T(n) = Σ_{i=1}^{n} Σ_{j=1}^{i} (⌊log_a j⌋ + 1)

We can dissect the formula above (for any a > 1 and n ≥ 1). First, split the inner sum:

Σ_{j=1}^{i} (⌊log_a j⌋ + 1) = i + Σ_{j=1}^{i} ⌊log_a j⌋

Then apply the closed form of the floored logarithmic summation (see the previous post):

Σ_{j=1}^{i} ⌊log_a j⌋ = (i + 1)·⌊log_a i⌋ − (a^(⌊log_a i⌋ + 1) − a) / (a − 1)

Therefore,

T(n) = n(n + 1)/2 + Σ_{i=1}^{n} [ (i + 1)·⌊log_a i⌋ − (a^(⌊log_a i⌋ + 1) − a) / (a − 1) ]

This link points to a plain Java program that implements the above computations (functions f(), g(), h(), u(), v(), w()) and gives the exact number of iterations performed by the algorithm covered earlier (it receives two arguments, n and a).
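
As a sanity check, the following minimal C sketch (independent of the linked Java program; the helper names are mine) counts the triple loop's iterations directly and compares them against the semi-closed form above:

#include <stdio.h>

// Direct count of the triple loop's iterations.
long long countIterations(long long n, long long a) {
    long long count = 0;
    for (long long i = 1; i <= n; i++)
        for (long long j = 1; j <= i; j++)
            for (long long k = 1; k <= j; k *= a)
                count++;
    return count;
}

// Semi-closed form: n(n + 1)/2 plus the per-i closed form of the inner sum.
long long semiClosedForm(long long n, long long a) {
    long long total = n * (n + 1) / 2;
    for (long long i = 1; i <= n; i++) {
        long long m = 0, p = a;            // m = floor(log_a i), p = a^(m + 1)
        while (p <= i) { m++; p *= a; }
        total += (i + 1) * m - (p - a) / (a - 1);
    }
    return total;
}

int main(void) {
    for (long long a = 2; a <= 5; a++)
        for (long long n = 1; n <= 50; n += 7)
            printf("n = %lld, a = %lld: loop = %lld, formula = %lld\n",
                   n, a, countIterations(n, a), semiClosedForm(n, a));
    return 0;
}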

Sunday, May 25, 2014

Logarithmic Summations and Discrete Loops - Ceiling and Floor Functions

Reading the paper "Discrete Loops and Worst Case Performance" (by Dr. Johann Blieberger), I came across some interesting summations, such as:

Σ_{i=1}^{n} (⌊log₂ i⌋ + 1) = (n + 1)·⌊log₂ n⌋ − 2^(⌊log₂ n⌋ + 1) + n + 2

Useful here:

for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j <= i ; j *= 2) {
        // Some constant time C instructions.
    }
}

Because, for a given i, j takes the values 2⁰, 2¹, …, 2^⌊log₂ i⌋, that is, the inner loop performs ⌊log₂ i⌋ + 1 iterations.

Likewise:

Σ_{i=1}^{n} ⌈log₂ i⌉ = n·⌈log₂ n⌉ − 2^⌈log₂ n⌉ + 1

Applicable here:


for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j < i ; j *= 2) {
        // Some constant time C instructions.
    }
}

Because, for a given i, the strict condition j < i makes the inner loop perform exactly ⌈log₂ i⌉ iterations.


And more generally:


for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j <= i ; j *= a) {
        // Some constant time C instructions.
    }
}

We can proceed as follows:

Σ_{i=1}^{n} (⌊log_a i⌋ + 1) = (n + 1)·⌊log_a n⌋ − (a^(⌊log_a n⌋ + 1) − a) / (a − 1) + n


Further:


for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j < i ; j *= a) {
        // Some constant time C instructions.
    }
}

With:

Σ_{i=1}^{n} ⌈log_a i⌉ = n·⌈log_a n⌉ − (a^⌈log_a n⌉ − 1) / (a − 1)


Thus, while trying to refine the results of some nested-loop running-time analyses, I had to face a particular summation using both the logarithm and the ceiling function, but this time as an exponent. I finally deduced a general solution for the range [1, n]:

Σ_{i=1}^{n} a^⌈log_a i⌉ = n·a^⌈log_a n⌉ − a^(2⌈log_a n⌉ − 1) + (a^(2⌈log_a n⌉) − a²) / (a(a + 1)) + 1

And similarly:

Σ_{i=1}^{n} a^⌊log_a i⌋ = n·a^⌊log_a n⌋ − a^(2⌊log_a n⌋) + a^⌊log_a n⌋ + (a^(2⌊log_a n⌋) − 1) / (a + 1)

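These identities are easy to check empirically. Below is a minimal C sketch (the helper names are mine, not Blieberger's) that compares the direct loop counts with the two general-base closed forms:

#include <stdio.h>

// How many times does "for (j = 1; j <= i; j *= a)" iterate? (floor(log_a i) + 1)
long long countLE(long long i, long long a) {
    long long c = 0;
    for (long long j = 1; j <= i; j *= a) c++;
    return c;
}

// And the strict variant "for (j = 1; j < i; j *= a)"? (ceil(log_a i))
long long countLT(long long i, long long a) {
    long long c = 0;
    for (long long j = 1; j < i; j *= a) c++;
    return c;
}

int main(void) {
    long long a = 3, n = 1000;

    long long sumLE = 0, sumLT = 0;
    for (long long i = 1; i <= n; i++) {
        sumLE += countLE(i, a);
        sumLT += countLT(i, a);
    }

    // m = floor(log_a n) and p = a^(m + 1), computed with integers only.
    long long m = 0, p = a;
    while (p <= n) { m++; p *= a; }
    // M = ceil(log_a n) and q = a^M; n is an exact power of a iff p / a == n.
    long long M = (p / a == n) ? m : m + 1;
    long long q = 1;
    for (long long k = 0; k < M; k++) q *= a;

    long long closedLE = (n + 1) * m - (p - a) / (a - 1) + n;
    long long closedLT = n * M - (q - 1) / (a - 1);

    printf("sum of (floor(log_a i) + 1): loop = %lld, closed form = %lld\n", sumLE, closedLE);
    printf("sum of ceil(log_a i):        loop = %lld, closed form = %lld\n", sumLT, closedLT);
    return 0;
}
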
I hope this was useful to you. Enjoy!

Saturday, April 19, 2014

Asymptotic Analysis - Vehicles Race and Algorithm Running Time

First, watch this video:

[Embedded video: a race between a jet, a car, and a motorcycle]

Now, let's assume the vehicles, namely the jet, the car, and the motorcycle, are your algorithms.
Also, suppose the length of the racecourse is the size of the data, n.

Consequently, the speed of the vehicles plays the role of the running time of the algorithms.

If we had to model the circumstances of this interesting race, we would obtain, approximately, the following:

[Plot: the running-time curves of the three vehicles]

Suppose the vehicles' respective speed closed forms are (the constants cᵢ are illustrative):

TmotorC(n) = c₁·n,  Tcar(n) = c₂·n²,  Tjet(n) = c₃·2ⁿ

Which means:

TmotorC(n) ∈ Θ(n),  Tcar(n) ∈ Θ(n²),  Tjet(n) ∈ Θ(2ⁿ)

Formal Analysis:

Let the instant somewhere between 0:24 and 0:30 be represented by n0, the instant 0:33 by n'0, and the instant 0:36 by n''0.

The most interesting case to study is Tcar(n), as it grows faster than TmotorC(n) but more slowly than Tjet(n).

- For n ≥ n0: Tjet(n) ≥ Tcar(n) (the jet passes the car);
- For n ≥ n'0: Tjet(n) ≥ TmotorC(n) (the jet passes the motorcycle);
- For n ≥ n''0: Tcar(n) ≥ TmotorC(n) (the car passes the motorcycle).

Critique:

We notice that, if the length of the racecourse were small, the motorcycle would have won this contest.

The speed of the jet grows exponentially compared with both the car and the motorcycle, but we can only realize this fact over a sufficient distance (large data).

Indeed, the motorcycle is fast at the beginning, but it gets overtaken by the jet (instant 0:33 of the video).

On the other hand, the car too, once it reached its top speed, overtook the motorcycle (instant 0:36 of the video); yet it stayed faster than the jet until the latter passed it at some instant between 0:24 and 0:30.

We can posit, then, that the speed of:

- the jet is exponential,
- the car is quadratic,
- the motorcycle is linear.
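
To make this concrete, here is a minimal C sketch with purely illustrative speed functions; the constants (100n, n², 2^(n/4)/4) are my own assumptions, chosen only so that the crossovers happen in the same order as in the video. It locates n0, n'0 and n''0 numerically:

#include <stdio.h>
#include <math.h>

// Hypothetical speed functions; the real constants are unknown.
double tMotorC(double n) { return 100.0 * n; }               // linear
double tCar(double n)    { return n * n; }                   // quadratic
double tJet(double n)    { return pow(2.0, n / 4.0) / 4.0; } // exponential

// Smallest n at which f(n) reaches g(n), i.e. a crossover point.
int crossover(double (*f)(double), double (*g)(double)) {
    for (int n = 1; n <= 1000; n++)
        if (f(n) >= g(n)) return n;
    return -1; // no crossover within the range
}

int main(void) {
    printf("n0   (jet passes car):        %d\n", crossover(tJet, tCar));
    printf("n'0  (jet passes motorcycle): %d\n", crossover(tJet, tMotorC));
    printf("n''0 (car passes motorcycle): %d\n", crossover(tCar, tMotorC));
    return 0;
}

With these constants, the program prints n0 = 55, n'0 = 59 and n''0 = 100: the jet passes the car first, then the motorcycle, and the car passes the motorcycle last, just like in the video.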

Also, we can assert that we don't really need such an apparatus (like the one in the video) to show that a bike can be faster or slower within a certain distance range.

But we may imagine that, for instance, TmotorC(n) = 100n and Tcar(n) = n²; in that case the motorcycle stays ahead of the car over every distance n < 100, and falls behind for good once n > 100.

Acknowledgment:

A big thank you to Dr. Naeem Nizar Sheikh, who mentioned such an example during a conversation a couple of years ago.

Monday, December 23, 2013

Asymptotic Analysis - Approaching Algorithms to Mathematics (I)

Complexity theory is the field of mathematics that allows us to determine the asymptotic order of growth of a given mathematical function.

Let's first take a look at different possible orders of growth:

[Plot: common orders of growth, from 10n (cheapest) up to n! (most expensive)]

We notice that n! is the most expensive order of growth in this plot; indeed, for an input of size n = 10 it already amounts to 10! = 3,628,800 units of time, whilst 10n (the cheapest order of growth) would take only 100 units for an input of the same size (n = 10).

Moreover, we notice, for example, that 2n² stays below 20n within the interval [0, 10), and the two curves intersect at n = 10. In other words, an algorithm can be faster than another one when values are small, which might be misleading and deceiving.

Thanks to complexity theory, determining the order of growth of any mathematical function (remember: any algorithm can be represented as such) helps us formally decide whether the algorithm we are going to use is suitable or not.

So, how do we formally prove the order of growth?
We make use of the following fundamental notations below:

- O(n): Big Oh of n       --> the upper bound
- Ω(n): Big Omega of n    --> the lower bound
- Θ(n): Big Theta of n    --> when the upper bound and the lower bound coincide

Below, the formal definitions:
- T(n) ∈ O(f(n)) iff there exist c > 0 and n0 > 0 such that T(n) ≤ c·f(n) whenever n > n0 (upper bound);
- T(n) ∈ Ω(f(n)) iff there exist c > 0 and n0 > 0 such that T(n) ≥ c·f(n) whenever n > n0 (lower bound);
- T(n) ∈ Θ(f(n)) iff T(n) ∈ O(f(n)) and T(n) ∈ Ω(f(n)) (upper and lower bound);
- T(n) ∈ o(f(n)) iff T(n) ∈ O(f(n)) and T(n) ∉ Ω(f(n)) (upper but not lower bound).

Let's take advantage of the execution time of the plain for loop covered in a previous post: 3n + 2.

Upper bound:

3n + 2 ≤ 4n for all n ≥ 2; hence, with c = 4 and n0 = 2, T(n) ∈ O(n).

Lower bound:

3n + 2 ≥ 3n for all n ≥ 1; hence, with c = 3 and n0 = 1, T(n) ∈ Ω(n).

Plotting the above yields the following curves:

[Plot: T(n) = 3n + 2 squeezed between the bounds 3n and 4n]

It is obvious that the order of growth of T(n) = 3n + 2 is linear, since its prevailing term is n; hence the intuitive choice of O(n). As a matter of fact, as long as T(n) is O(n), it is also O(n²), O(n³), O(n⁴), and so forth. We can assert, in exactly the same way but in the other direction, that T(n) is Ω(n), Ω(√n), and then Ω(n^(1/3)), Ω(n^(1/4)), Ω(log₂(n)), and so on.

On the other hand, we may notice that T(n) is both O(n) and Ω(n); therefore, we may use the Theta notation for T(n) as follows:

T(n) ∈ Θ(n)

The previously covered for loop fragment will execute exactly (3n + 2) steps, whatever the value of n.
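
As an illustration, here is a small C sketch that tallies every elementary operation of such a plain loop. The accounting convention (one step each for the initialisation, every test, every increment, and every body execution) is an assumption that reproduces the 3n + 2 total:

#include <stdio.h>

// Count the elementary steps of: for (i = 0; i < n; i++) { body }
long long countedLoop(long long n) {
    long long steps = 0;
    long long i = 0;
    steps++;                      // initialisation i = 0 : 1 step
    for (;;) {
        steps++;                  // test i < n           : n + 1 steps
        if (!(i < n)) break;
        steps++;                  // loop body            : n steps
        i++;
        steps++;                  // increment i++        : n steps
    }
    return steps;                 // total: 3n + 2
}

int main(void) {
    for (long long n = 0; n <= 4; n++)
        printf("n = %lld -> %lld steps (3n + 2 = %lld)\n",
               n, countedLoop(n), 3 * n + 2);
    return 0;
}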

Sunday, December 8, 2013

Delving into Algorithm Analysis - Deducing the Mathematical Representation of a Determined Study Case (III) [Recursive Algorithms and Recurrence Relations]

We have seen previously how we could represent an iterative algorithm as a mathematical function.
But how are we going to do so for recursive algorithms?
Let's take a look at the renowned recursive algorithm below:


void hanoiTowers(int n, char left, char center, char right) {
     if (n == 0) {     // Base Case: no disk left to move
          return;
     } else {          // General Case
           hanoiTowers(n - 1, left, right, center);  // Move the n - 1 upper disks from left to right
           printf("\nDisk: %d from %c to %c", n, left, center);  // Move disk n from left to center
           hanoiTowers(n - 1, right, center, left);  // Move the n - 1 disks from right onto center
     }
}
We notice that there are two recursive calls in the general case of the hanoiTowers() routine.

I invite you to attempt to execute the following stripped-down version (the printf() removed), once with n = 10 and once with n = 30:
void hanoiTowers(int n) {
     if (n == 0) {     // Base Case
          return;
     } else {          // General Case
           hanoiTowers(n - 1);
           hanoiTowers(n - 1);
     }
}

When n = 10, this algorithm executed within a few milliseconds, but when n = 30, it took 12 seconds (Intel Core i5, 4 threads, 6 GB of memory).
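
Here is a minimal harness to reproduce that experiment; timings will of course vary by machine, and the call counter both exposes the growth and keeps the compiler from optimising the empty recursion away:

#include <stdio.h>
#include <time.h>

static unsigned long long calls = 0;

void hanoiTowers(int n) {
    calls++;
    if (n == 0) return;     // Base Case
    hanoiTowers(n - 1);     // General Case
    hanoiTowers(n - 1);
}

int main(void) {
    int sizes[] = { 10, 20, 25, 30 };
    for (int s = 0; s < 4; s++) {
        calls = 0;
        clock_t start = clock();
        hanoiTowers(sizes[s]);
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("n = %d: %llu calls, %.3f s\n", sizes[s], calls, elapsed);
    }
    return 0;
}

The call count is exactly 2^(n+1) − 1 (as derived below), so each extra unit of n doubles the work; going from n = 10 to n = 30 multiplies the running time by roughly a million.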

Actually, the conventional technique (recurrence relations) to solve this goes as follows:

T(0) = a
T(n) = 2·T(n − 1) + c,  for n ≥ 1

where c is a constant representing the running time of the printf() statement, and a the constant cost of the base case.

Unfolding the recurrence gives:

T(n) = 2ⁿ·T(0) + (2ⁿ − 1)·c = 2ⁿ·a + (2ⁿ − 1)·c

We will assume that a = 1 and c = 1:

T(n) = 2ⁿ + 2ⁿ − 1 = 2^(n+1) − 1

Thus, this algorithm's running time is non-polynomial, i.e. exponential in n (a very expensive algorithm).

Now that we know how to infer the "mathematical representation" of both recursive and iterative algorithms, let's dig into time complexity a little bit in the next posts.