Many beginners struggle with Big O notation. I’ll show you a simple way to understand it through examples. The approach isn’t mathematical, professional or academic, but hopefully it will give you a working intuition.

What is Big O

You’re writing your code and you run into a problem that can be solved with several different algorithms. You have to pick one, and you’re interested in the fastest. How would you determine which one is best in this case? Technically you could just run your app and track execution time, but that would be dependent on hardware, compiler version and so on, so performance can differ between computers. You need a way to describe algorithm performance in relative units, and that’s why we have Big O.

Big O notation is a way to measure the relative complexity of a function. It expresses how fast the number of operations grows as the input grows. We use it to describe the execution time of an algorithm in relative terms.

To classify a given algorithm (describe how fast its running time grows with the size of its input) we use the following orders of growth (best to worst):

  • constant – O(1)
  • logarithmic – O(log n) (see the binary search sketch after this list)
  • linear – O(n)
  • quasilinear – O(n log n)
  • polynomial – O(n ^ c)
  • exponential – O(c ^ n)
  • factorial – O(n!)
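
To give you a feel for the logarithmic class before we dig into the details, here is a minimal sketch of binary search over a sorted array (my own illustration, not one of the numbered examples below). Each comparison discards half of the remaining range, so for n elements it needs only about log n comparisons:

int BinarySearch(int[] sortedNumbers, int target)
{
    // validation skipped
    int low = 0;
    int high = sortedNumbers.Length - 1;

    while (low <= high)
    {
        int middle = low + (high - low) / 2;

        if (sortedNumbers[middle] == target)
        {
            return middle;
        }

        if (sortedNumbers[middle] < target)
        {
            low = middle + 1;  // discard the lower half
        }
        else
        {
            high = middle - 1; // discard the upper half
        }
    }

    return -1; // not found
}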

How to understand it?

In Big O notation we assume the input data size to be unknown, and we call it n. It can be the length of a string, the number of elements in an array, the size of a stack, etc. The first step is to determine what describes the size of the problem.

Secondly, we need to understand what operation we’re measuring. If we have two algorithms doing the same job, we have to find out what the basic operation is that will be called most often, e.g. for sorting algorithms this operation would be the comparison of numbers. It can be a comparison, a method call, an intensive calculation: basically whatever affects performance the most and is (or can be) executed for every input item.

Thirdly, we need to determine how many times our algorithm executes this operation, e.g. we have an array of size n (n is the size of our problem) and our algorithm compares each number once, which ends up as n operations. For this algorithm Big O is O(n), because it will always end up executing n operations for a problem of size n.

What would happen to Big O if our algorithm compared every number twice? It would perform n + n comparisons, so how would our Big O change? It wouldn’t; it is still O(n), because with this notation we’re only interested in the most significant factor (which is n, because n + n = 2n = O(2n) = O(n)). We do this because we don’t want to be exact, we just want to know the general order of growth. Imagine replacing n with infinity: the difference between 2n and n becomes negligible. What matters is that the algorithm solves the problem in linear time. Similarly, if our algorithm performs n! + log n operations, the log n becomes negligible, so we end up with O(n!).
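
To make the 2n = O(n) point concrete, here is a small sketch (my own illustration, not one of the numbered examples) that scans the array twice, once for the minimum and once for the maximum. It performs 2n comparisons, yet it is still O(n):

(int Min, int Max) GetMinMax(int[] numbers)
{
    // validation skipped
    int min = numbers[0];
    int max = numbers[0];

    // first pass: n comparisons to find the minimum
    for (int i = 1; i < numbers.Length; i++)
    {
        if (numbers[i] < min)
        {
            min = numbers[i];
        }
    }

    // second pass: another n comparisons to find the maximum
    for (int i = 1; i < numbers.Length; i++)
    {
        if (numbers[i] > max)
        {
            max = numbers[i];
        }
    }

    // 2n comparisons in total, but the constant is dropped: O(n)
    return (min, max);
}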

Up until now I’ve used Big O to describe only the worst case scenario, but it can also be used to describe the best and expected case scenarios. It works similarly, except that in the best case we’re determining the minimal number of operations our algorithm performs, and in the expected case we’re estimating the average number of operations. I’ll leave the details for your own exercise and research.
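
As a starting point for that exercise, consider linear search (again my own sketch): in the best case the target is the very first element and we do a single comparison, O(1); in the worst case the target is missing and we do n comparisons, O(n); in the expected case, for a target at a random position, we check about n / 2 elements, which is still O(n).

int IndexOf(int[] numbers, int target)
{
    // validation skipped
    for (int i = 0; i < numbers.Length; i++)
    {
        if (numbers[i] == target)
        {
            return i; // best case: found immediately, O(1)
        }
    }

    return -1; // worst case: scanned everything, O(n)
}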

First example – O(1) the fastest

O(1) is the best in terms of speed. It means that the algorithm runs in constant time regardless of the problem size.

int GetFirst(int[] numbers)
{
    // validation skipped
    return numbers[0];
}

Second example – O(n) the linear

O(n) is pretty fast on our scale. It’s linear, which means that complexity grows proportionally to the size of the problem.

Let’s consider this simple example:

int GetMax(int[] numbers)
{
    if (numbers == null || numbers.Length == 0)
    {
        throw new ArgumentException("Array must not be null or empty.", nameof(numbers));
    }
    
    int max = numbers[0];
    for (int i = 1; i < numbers.Length; i++)
    {
        if (numbers[i] > max)
        {
            max = numbers[i];
        }
    }
    
    return max;
}

If you were to compare this function to a different one (which still returns the maximum number from the array), what is the primary operation? The operation is the comparison numbers[i] > max. Since we have n numbers in the array, how many operations would that be? Roughly n (strictly n - 1, since we start from the second element, but as we saw, such differences don’t matter). So we end up with O(n) complexity for this algorithm.

Third example – O(n ^ 2) the quadratic

void Sort(int[] numbers)
{
    if (numbers == null || numbers.Length == 0)
    {
        throw new ArgumentException("Array must not be null or empty.", nameof(numbers));
    }
    
    for (int i = 0; i < numbers.Length; i++)
    {
        for (int j = 0; j < numbers.Length; j++)
        {
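            // compare every pair (i, j); the swaps end up ordering the array in descending order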
            if (numbers[i] > numbers[j])
            {
                int tmp = numbers[i];
                numbers[i] = numbers[j];
                numbers[j] = tmp;
            }
        }
    }
}

Let’s see how this algorithm works. The size of the problem is the size of the input array, and the primary operation is the comparison numbers[i] > numbers[j], but we have two nested loops here. The outer loop always goes through the whole array, so it runs n times. The inner loop does the same for every iteration of the outer loop. We end up with n * n comparisons in total, so our complexity in this case is O(n ^ 2).
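
If you want to convince yourself, you can count the operations empirically. The sketch below is my own illustration: it keeps the same nested loops as Sort but only counts how many times the comparison would run:

long CountComparisons(int n)
{
    // array contents don't matter here, only the loop structure
    long comparisons = 0;

    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            comparisons++; // one comparison per (i, j) pair
        }
    }

    return comparisons;
}

For n = 10, 100 and 1000 this returns 100, 10000 and 1000000: multiplying n by 10 multiplies the number of comparisons by 100, which is the signature of O(n ^ 2).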

Summary

Big O helps determine the complexity of an algorithm. Usually we’re interested in the worst case scenario, but we can also use it to determine the best / expected cases too. I’ve focused on performance in this article, but we can also use Big O to describe space complexity. Software developers use these principles even when they don’t know Big O notation, and even understanding its basics helps in algorithm optimization. We can often make performance improvements in exchange for space complexity, so it’s useful to know how and when we can redesign algorithms.
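
A classic illustration of that trade-off (my own example, not from the article): checking whether an array contains duplicates takes O(n ^ 2) with two nested loops, but spending O(n) extra memory on a HashSet reduces the time to O(n):

bool HasDuplicates(int[] numbers)
{
    // validation skipped
    var seen = new HashSet<int>();

    foreach (int number in numbers)
    {
        // Add returns false when the value is already in the set
        if (!seen.Add(number))
        {
            return true;
        }
    }

    return false;
}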

Sites worth checking: