Complexity theory is the branch of mathematics that lets us determine the asymptotic order of growth of a given mathematical function.
Let's first take a look at some possible orders of growth:
We notice that n! is the most expensive order of growth in this plot: it takes a very large number of seconds, or even minutes, to process an input of size n = 10, whilst 10n (the cheapest order of growth) would take less than a hundred seconds for an input of the same size.
Moreover, we notice, for example, that 2n² lies below 20n on the interval [0 .. 10[, and the two curves intersect at n = 10; beyond that point, 2n² dominates. In other words, an algorithm can be faster than another one when inputs are small, which can be misleading.
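We can sanity-check that crossover with a few lines of Python (a minimal sketch using the two cost functions from the plot above; the function names are ours):

```python
# Compare the two cost functions from the plot: 2n^2 vs 20n.
def quadratic_cost(n):
    return 2 * n * n

def linear_cost(n):
    return 20 * n

for n in (5, 10, 50):
    q, l = quadratic_cost(n), linear_cost(n)
    winner = "2n^2" if q < l else ("20n" if l < q else "tie")
    print(f"n={n:3}: 2n^2={q:6}, 20n={l:5} -> cheaper: {winner}")
```

At n = 5 the quadratic function is cheaper, at n = 10 the two are equal, and at n = 50 the linear one wins by a factor of five: exactly the ranking flip described above.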
Thanks to complexity theory, we can determine the order of growth of any mathematical function (remember: any algorithm's running time can be expressed as one), which helps us formally decide whether the algorithm we are about to use is suitable or not.
So, how do we formally prove the order of growth?
We make use of the following fundamental notations:
- O(n): Big Oh of n --> The upper bound
- Ω(n): Big Omega of n --> The lower bound
- Θ(n): Big Theta of n --> When the upper bound and lower bound are alike
Below is the formal analysis:
- T(n) ∈ O(f(n)) iff there exist constants c > 0 and n0 > 0 such that T(n) ≤ c·f(n) for all n > n0 (upper bound);
- T(n) ∈ Ω(f(n)) iff there exist constants c > 0 and n0 > 0 such that T(n) ≥ c·f(n) for all n > n0 (lower bound);
- T(n) ∈ Θ(f(n)) iff T(n) ∈ O(f(n)) and T(n) ∈ Ω(f(n)) (upper and lower bound);
- T(n) ∈ o(f(n)) iff T(n) ∈ O(f(n)) and T(n) ∉ Ω(f(n)) (upper but not lower bound).
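The definitions above can be probed empirically (a finite check over sampled values is evidence, not a proof — the proof is the inequality itself; the helper name below is ours):

```python
# Empirically check a Big-O witness: T(n) <= c * f(n) for all sampled n > n0.
def holds_upper_bound(T, f, c, n0, n_max=10_000):
    return all(T(n) <= c * f(n) for n in range(n0 + 1, n_max))

# T(n) = 3n + 2 is O(n), witnessed by c = 4 and n0 = 2 ...
print(holds_upper_bound(lambda n: 3 * n + 2, lambda n: n, c=4, n0=2))   # True
# ... but c = 3 is too small: 3n + 2 > 3n for every n.
print(holds_upper_bound(lambda n: 3 * n + 2, lambda n: n, c=3, n0=2))   # False
```

The same helper, with the inequality flipped, would check an Ω witness.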
Let's use the execution time of the plain for loop, covered in a previous post: T(n) = 3n + 2.
Upper bound: 3n + 2 ≤ 4n for all n ≥ 2, so taking c = 4 and n0 = 2 gives T(n) ∈ O(n).
Lower bound: 3n + 2 ≥ 3n for all n ≥ 1, so taking c = 3 and n0 = 1 gives T(n) ∈ Ω(n).
Plotting the above yields the following curves:
It is obvious that the order of growth of T(n) = 3n + 2 is linear, since its dominant term is n; hence the intuitive choice of O(n). As a matter of fact, since T(n) is O(n), it is also O(n²), O(n³), O(n⁴), and so forth. In exactly the same way, but in the other direction, T(n) is Ω(n), and therefore also Ω(√n), Ω(n^(1/3)), Ω(n^(1/4)), Ω(log₂(n)), and so on.
On the other hand, since T(n) is both O(n) and Ω(n), we may use the Theta notation for T(n) as follows: T(n) ∈ Θ(n).
The previously covered for loop fragment executes exactly 3n + 2 operations for an input of size n, regardless of the input's contents.
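That count can be verified by instrumenting the loop. The exact fragment from the earlier post is not reproduced here, so the loop below is a plausible reconstruction (one initialisation, n + 1 comparisons, n body statements, and n increments, totalling 3n + 2):

```python
# Count the primitive operations of a plain counting loop (a hypothetical
# reconstruction of the earlier post's fragment, not the original code):
#   1 init + (n + 1) comparisons + n body statements + n increments = 3n + 2.
def count_loop_operations(n):
    ops = 0
    i = 0
    ops += 1            # i = 0          (initialisation)
    while True:
        ops += 1        # i < n          (comparison, runs n + 1 times)
        if not (i < n):
            break
        ops += 1        # loop body      (one statement, runs n times)
        i += 1
        ops += 1        # i = i + 1      (increment, runs n times)
    return ops

print(count_loop_operations(10))  # 3*10 + 2 = 32
```

Even the empty input pays the fixed cost: count_loop_operations(0) returns 2, matching 3·0 + 2.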