Saturday, June 10, 2017

Decreasing Complexity Means Increasing Performance

What is wrong with the piece of C code below?
for (size_t i = 0; i < strlen(str); i++) {
    // Do some stuff
}
The length of the string is recomputed at every iteration, which is a serious waste of time.

Indeed, after the first evaluation of the condition we already know how many times we need to iterate; yet before the second iteration the same, already known value is recomputed, and so on for all n iterations.

Hence, complexity here is unnecessarily high.

Spelled out more explicitly, the code above translates to something like this:
size_t n = 0;
while (str[n] != '\0') {
    n++;
}
for (size_t i = 0; i < n; i++) {
    // Do some stuff

    // The loop condition re-evaluates the length, so the count starts over:
    n = 0;
    while (str[n] != '\0') {
        n++;
    }
}
Not only does the code become longer and redundant, but this fragment's time complexity is clearly O(n²): each of the n iterations walks the entire string again to recount its length, so roughly n × n character comparisons are performed, assuming the loop body itself is O(1).

The easy optimisation is to introduce a variable n that holds the length of the string, so that no further length computations are needed:
size_t n = strlen(str);
for (size_t i = 0; i < n; i++) {
    // Do some stuff
}
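
An equivalent variant, shown here only as a common C idiom (it is not part of the original snippet), computes the length in the loop's initialisation clause, so it is still evaluated just once:
for (size_t i = 0, n = strlen(str); i < n; i++) {
    // Do some stuff; n stays fixed for the whole loop
}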

Either way, hoisting the length computation out of the loop takes us from quadratic time complexity to linear, O(n).
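
To make the difference tangible, here is a minimal benchmark sketch; the 200 000-character test string and the timing with clock() are my own illustrative choices, not part of the original example. Also note that at higher optimisation levels some compilers hoist strlen() out of the loop condition on their own, so the gap is most visible when compiling without optimisations:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    /* Build a large test string: 200 000 'a' characters (arbitrary size for illustration). */
    size_t len = 200000;
    char *str = malloc(len + 1);
    if (str == NULL) {
        return 1;
    }
    memset(str, 'a', len);
    str[len] = '\0';

    /* A volatile accumulator keeps the compiler from deleting the loop bodies. */
    volatile unsigned long sink = 0;
    clock_t t0, t1;

    /* Naive version: strlen() sits in the condition and may be re-evaluated every iteration. */
    t0 = clock();
    for (size_t i = 0; i < strlen(str); i++) {
        sink += str[i];
    }
    t1 = clock();
    printf("strlen() in the condition: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Hoisted version: the length is computed once, before the loop. */
    t0 = clock();
    size_t n = strlen(str);
    for (size_t i = 0; i < n; i++) {
        sink += str[i];
    }
    t1 = clock();
    printf("length hoisted:            %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    free(str);
    return 0;
}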

That said, such optimisations make little difference on small inputs.