Sunday, January 17, 2021

Understanding Complexity (Urban Traffic VS Highway Traffic)

When approached purely academically or mathematically, algorithmic complexity can drift away from its literal meaning.

Indeed, algorithmic complexity is described with theoretical mathematical expressions, most commonly Big O notation.

Yet algorithmic complexity also has a picturesque, concrete meaning.

Often, a bad algorithm is simply one with pointlessly high complexity, and any experienced programmer can easily find room for optimisation.

Reducing complexity sometimes means eliminating an unnecessary recalculation that is performed repeatedly; sometimes a value can be computed directly with a formula rather than with (nested) loops.

A real-life illustration is traffic in the city versus traffic on the highway.

The distance between home and downtown is 10 km, yet it takes me 45 minutes to reach my destination, taking into account traffic lights, intersections, the tramway, and so on.

By contrast, the distance between home and a former job in a suburban area is 20 km, yet it takes me only 20 minutes to get there using the highway.

As a deduction, complexity in the city is very high and its bounds are far apart: an upper bound on business days and a much lower bound on holidays and weekends. The travel time is only loosely bounded, the way an O (upper) bound and an Ω (lower) bound can be far apart.

On the highway, complexity is very low and the lower bound equals the upper bound: a tight bound, which is what Big Theta (Θ) notation expresses.

Some techniques exploit space complexity for the benefit of time complexity.
The following example describes a structure used to connect two neighbourhoods separated by railways: in Rabat, a section of the highway had become an inherent part of the city, so travellers had to slow down and stop at traffic lights or stop signs. During the last decade, however, projects were carried out to build underground tunnels so that travellers no longer have to stop, and the speed limit was raised from 60 km/h to 80 km/h.
 
Indeed, building the tunnels considerably reduced the time complexity of this section of the highway.

In summary, the discussion above offers a concrete picture of what complexity can be, with the intention of simplifying the understanding of algorithmic complexity and its purpose in general.


Monday, October 19, 2020

Booth's Algorithm - C Implementation

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define WORD 4
#define VERBOSE 1 /* 0 to disable the trace */

/*
 * 02 DEC 2011 (CSC2304)
 * Implementation of Booth's Algorithm.
 */

void twosComplementAddition(char[], char[]);
void rightShift(char[], char);
void addition(char[], char[]);

char* twosComplementMultiplication(char M[], char Q[]) {
  char *A = (char*) malloc(sizeof(char) * (2 * WORD + 1));
  char Q0, Q_1 = '0';
  int i, j;
  strcpy(A, "0000"); /* accumulator starts at zero (WORD bits) */
  if (VERBOSE) {
    printf("\n  A  |  Q  |  M  |");
    printf("\n  %s  |  %s  |  %s  |  Initial", A, Q, M);
    printf("\n-------------------------------------------------------------");
  }
  for (i = 0, j = 1; i < WORD; i++, j++) {
    Q0 = Q[WORD - 1];
    if (VERBOSE) {
      printf("\n  %s  |  %s  |  %s  |  Cycle %d", A, Q, M, j);
    }
    if (Q0 == '0' && Q_1 == '1') {
      addition(A, M);
      if (VERBOSE) {
        printf("\n  %s  |  %s  |  %s  |  Addition", A, Q, M);
      }
    } else if (Q0 == '1' && Q_1 == '0') {
      twosComplementAddition(A, M);
      if (VERBOSE) {
        printf("\n  %s  |  %s  |  %s  |  Two's Complement", A, Q, M);
      }
    }
    Q_1 = Q[WORD - 1];
    rightShift(Q, A[WORD - 1]); /* Q receives the low bit of A */
    rightShift(A, A[0]);        /* arithmetic shift: sign bit is kept */
    if (VERBOSE) {
      printf("\n  %s  |  %s  |  %s  |  Right Shift", A, Q, M);
      getchar(); /* pause between cycles (getch() is non-standard) */
    }
    printf("\n-------------------------------------------------------------");
  }
  strcat(A, Q); /* result is the concatenation A:Q (2 * WORD bits) */
  return A;
}

void rightShift(char reg[], char bit) {
  int i;
  for (i = WORD - 1; i > 0; i--) {
    reg[i] = reg[i - 1];
  }
  reg[0] = bit;
}

/* Bitwise addition of two WORD-bit binary strings; the result goes into A. */
void addition(char A[], char M[]) {
  int i;
  char c = '0'; /* carry */
  for (i = WORD - 1; i >= 0; i--) {
    if (A[i] == '0' && M[i] == '0') {
      A[i] = c;
      c = '0';
    } else if ((A[i] == '1' && M[i] == '0') || (A[i] == '0' && M[i] == '1')) {
      A[i] = (c == '0') ? '1' : '0';
    } else { /* A[i] == '1' && M[i] == '1' */
      A[i] = c;
      c = '1';
    }
  }
}

/* Adds the two's complement of M to A. */
void twosComplementAddition(char A[], char M[]) {
  int i;
  char temp[WORD + 1];
  for (i = 0; i < WORD; i++) { /* one's complement of M */
    temp[i] = (M[i] == '0') ? '1' : '0';
  }
  temp[WORD] = '\0';
  addition(temp, "0001"); /* plus one: two's complement */
  addition(A, temp);
}

int main() {
  char QQ[WORD + 1];
  char M[WORD + 1];
  char Q[WORD + 1];
  char *result;
  printf("\nBooth's Algorithm");
  printf("\n*****************");
  printf("\nEnter M: ");
  scanf("%4s", M); /* field width must match WORD */
  printf("\nEnter Q: ");
  scanf("%4s", Q);
  strcpy(QQ, Q);
  result = twosComplementMultiplication(M, Q);
  printf("\n%s * %s = %s", M, QQ, result);
  printf("\n");
  free(result);
  return 0;
}

Tuesday, July 24, 2018

N-ary Tree Data Structure API in Java

TreeDS4j is a Java (6+) API that allows handling n-ary tree data structure in such a flexible way. Indeed, the user is subject to enter structured and coherent data, and the latter would be stored in an hierarchy, accordingly. The link below contains two samples that show how to take advantage of the API.


Saturday, June 10, 2017

Decreasing Complexity Means Increasing Performance

What is wrong with the piece of C code below?
for (i = 1; i <= strlen(str); i++) {
    // Do some stuff
}
There is a recalculation of the length of the string at every iteration, which is a major waste of time.

Indeed, at the first iteration we already know how many times we need to iterate; at the second iteration, the already-known value is recalculated once again ... and we keep doing this n times.

Hence, complexity here is unnecessarily high.

Spelling the code out more explicitly, it may be translated as follows:
int n = 0, k = 0;
while (str[k] != '\0') {
    n++;
    k++;
}
for (i = 1; i <= n; i++) {
    // Do some stuff
    n = 0;
    k = 0;
    while (str[k] != '\0') {
        n++;
        k++;
    }
}
At a glance, not only is the code longer and more redundant, its time complexity is obviously O(n²).

The easy optimisation is to store the length once in a variable n, so that no additional computation is needed:
int n = strlen(str);
for (i = 1; i <= n; i++) {
    // Do some stuff
}

This easy fix takes us from a quadratic time complexity, O(n²), down to a linear one, O(n).

That said, such enhancements make little difference on small data.

Sunday, February 28, 2016

Indirect Recursion - A Natural Phenomenon

Indirect recursion can be observed in seeds and trees: a seed eventually becomes a tree, the tree produces many seeds, and those seeds themselves become trees, and so forth.





An interesting case is beautifully described in the Qur'an, chapter 2, verse 261, a plain description that is easy to represent:

"The example of those who spend their wealth in the way of Allah is like a seed [of grain] which grows seven spikes; in each spike is a hundred grains. And Allah multiplies [His reward] for whom He wills. And Allah is all-Encompassing and Knowing."

Regardless of the religious aspect, one can notice the exponential growth of seeds, which is prized in agriculture but dreaded in Computer Science.


If we represent this phenomenon (seeds and spikes) as a Java algorithm:


// Note: there is no base case, so this mutual recursion never terminates,
// just as nothing in the phenomenon itself stops the seed-tree cycle.
void seed(int x) {
    System.out.println("Seed " + x);
    for (int i = 0; i < 7; i++) {
        spike(i);
    }
}

void spike(int y) {
    System.out.println("Spike " + y);
    for (int j = 0; j < 100; j++) {
        seed(j);
    }
}

Let's refer to seed as f(x) and spike as g(y):




Clearly, a single call to f(x) triggers 7 × 100 = 700 further calls to itself, in an indirect fashion, and each of those calls does the same in turn.

Good deeds with exponential seeds are surely a nice reward, but such exponentially growing subroutine calls can only hurt performance.


One should be careful with recursion, direct or indirect.

Tuesday, June 24, 2014

Triple Nested Loops - Two Linear Loops and One Logarithmic with a non-Constant Base - The Monster

Deriving the function that gives the exact number of iterations a set of nested loops performs usually isn't a tedious exercise. However, I came across this interesting case:
- a linear outer loop;
- a dependent, first-level linear inner loop;
- a dependent, second-level logarithmic inner loop with a general base.

for ( i = 1 ; i <= n ; i ++ ) {
    for ( j = 1; j <= i ; j ++ ) {
        for ( k = 1 ; k <= j ; k *= a ) {
            // c
        }
    }
}

Ceilinged/floored logarithmic functions inside summations turn out to have very lengthy closed forms (see my previous post). Here, the innermost loop runs ⌊log_a j⌋ + 1 times for a given j, so the total number of iterations is:

T(n, a) = Σ_{i=1}^{n} Σ_{j=1}^{i} (⌊log_a j⌋ + 1)

We will dissect the formulas above (for any a > 1 and n >= 1):












Therefore,



This link points to a plain Java program that implements the functions above (f(), g(), h(), u(), v(), w(), respectively) and gives the exact number of iterations performed by the algorithm covered earlier (it takes two arguments, n and a).

Sunday, May 25, 2014

Logarithmic Summations and Discrete Loops - Ceiling and Floor Functions

Reading the paper "Discrete Loops and Worst Case Performance" (by Dr. Johann Blieberger), I came across some interesting summations:

Σ_{i=1}^{n} ⌊log₂ i⌋ = (n + 1)⌊log₂ n⌋ - 2^(⌊log₂ n⌋ + 1) + 2

Σ_{i=1}^{n} ⌈log₂ i⌉ = n⌈log₂ n⌉ - 2^(⌈log₂ n⌉) + 1

Useful here:

for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j <= i ; j *= 2) {
        // Some constant time C instructions.
    }
}

Because the inner loop runs ⌊log₂ i⌋ + 1 times for each i:

Σ_{i=1}^{n} (⌊log₂ i⌋ + 1) = (n + 1)⌊log₂ n⌋ - 2^(⌊log₂ n⌋ + 1) + n + 2

Likewise:

Σ_{i=1}^{n} ⌈log₂ i⌉ = n⌈log₂ n⌉ - 2^(⌈log₂ n⌉) + 1

Applicable here:


for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j < i ; j *= 2) {
        // Some constant time C instructions.
    }
}

Because the inner loop now runs ⌈log₂ i⌉ times for each i (the number of powers of 2 strictly below i):

Σ_{i=1}^{n} ⌈log₂ i⌉ = n⌈log₂ n⌉ - 2^(⌈log₂ n⌉) + 1

And more generally:


for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j <= i ; j *= a) {
        // Some constant time C instructions.
    }
}

We can proceed as follows: the inner loop runs ⌊log_a i⌋ + 1 times for each i, and

Σ_{i=1}^{n} (⌊log_a i⌋ + 1) = (n + 1)⌊log_a n⌋ - (a^(⌊log_a n⌋ + 1) - a)/(a - 1) + n

which reduces to the base-2 formula when a = 2.

Further:


for ( i = 1; i <= n ; i ++ ) {
    for (j = 1; j < i ; j *= a) {
        // Some constant time C instructions.
    }
}

With:

Σ_{i=1}^{n} ⌈log_a i⌉ = n⌈log_a n⌉ - (a^(⌈log_a n⌉) - 1)/(a - 1)

Thus, while trying to refine the results of some nested-loop running-time analyses, I had to face a particular summation using both the logarithm and the ceiling function, this time as an exponent. I finally deduced a general solution for the range [1, n] (writing M = ⌈log_a n⌉):

Σ_{i=1}^{n} a^(⌈log_a i⌉) = n·a^M - a^(2M - 1) + (a^(2M) - a²)/(a(a + 1)) + 1

And similarly (writing m = ⌊log_a n⌋):

Σ_{i=1}^{n} a^(⌊log_a i⌋) = n·a^m - a^(2m) + a^m + (a^(2m) - 1)/(a + 1)

I hope this was useful to you. Enjoy!