Big O Notation: The Basics

Brian Cheung
5 min read · Apr 19, 2021

One of the fundamental concepts in computer science and programming is something called “Big O Notation”. In this blog, I’ll be going over what Big O is, why it is important when it comes to programming, and various examples of Big O.

Big O, bagel, same thing, right?

What is Big O Notation?

Big O notation is used to describe the runtime, or time complexity, of different algorithms. Although we use the word “runtime”, Big O does not describe how long an algorithm takes in milliseconds or any other unit of time. Instead, it describes how the number of operations an algorithm performs grows with the size of its input. A simple way to think of Big O is as a measure of an algorithm’s efficiency. Big O notation is written as O(f(n)), where n is the input size and f(n) describes how the work grows with it. We use the letter O because the growth rate of a function is also known as the order of the function.

Importance of Big O

Big O is important because as programmers, we always want to write code that is not only easy to write and understand, but also efficient. If you are just starting to learn programming, you probably won’t think much about the runtime of your programs, as many of the basic programs you write early on take milliseconds to execute. But once your career gets started, you may find yourself working with big data. Imagine working for Instagram and having to parse the data of every user. That’s a lot of data, and you don’t want slow code to interrupt the user experience.

Calculating Big O


Before we get to calculating the Big O of computer programs, we need to know how to calculate the Big O of algorithms in general. Bear with me as I bring out some algebra.

If there’s one thing to remember when it comes to calculating the Big O of an algorithm, it is that we are only concerned with the part that will take the longest. Why is this? Consider the following function:

T(n) = 6n² + 3n − 5

As n approaches infinity, the n² term will dominate, making all the other terms negligible. For example, when n = 1,000, the 6n² term is 2,000 times as large as the 3n term (6(1,000)² = 6,000,000, while 3(1,000) = 3,000). In general, as n approaches infinity, ignoring the 3n term has essentially no effect on the outcome of the function. We can also drop any coefficients for the same reason: if F(n) = 1,000,000n² and G(n) = n³, the output of G(n) will eventually exceed the output of F(n) once n passes a certain threshold (namely n = 1,000,000). So after keeping only the dominant term and removing its coefficient, we can say that the Big O of the function T(n) = 6n² + 3n − 5 is n². Often, you may see this written as:

T(n) = O(n²)

but this is misleading because “=” is not meant to express “is equal to”, but rather just “is”. A more accurate expression would use the “∈” symbol from set notation, so we would write:

T(n) ∈ O(n²)
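
To see this dominance concretely, here’s a quick Ruby sketch (purely illustrative) that prints the ratio of T(n) to its leading term as n grows:

[10, 1_000, 100_000].each do |n|
  t = 6 * n**2 + 3 * n - 5   # T(n) = 6n² + 3n − 5
  leading = 6 * n**2         # the dominant term
  puts "n = #{n}: ratio = #{(t.to_f / leading).round(6)}"
end

The ratio heads toward 1 as n grows, which is exactly why the lower-order terms can be thrown away.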

Now let’s apply this to programming, and I’ll show you the runtimes of various common algorithms.

Constant Time: O(1)

Constant time is the programming dream. It’s the best-case scenario, and you cannot beat it. Algorithms that run in O(1) time take the same amount of time to execute no matter what the input size is. Examples of this include:

  • Accessing a specific element in an array:

def get_array_index(arr, idx)
  arr[idx]  # a single index lookup, regardless of arr's size
end

  • Determining whether a number is even or odd:

def even_or_odd(n)
  n % 2 == 0  # true if n is even, false if odd
end
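
Calling either method costs the same whether the array has four elements or four million:

arr = [10, 20, 30, 40]
get_array_index(arr, 2)  # => 30, a single lookup
even_or_odd(7)           # => false, a single modulo operation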

Logarithmic Time: O(log n)

Logarithmic time means the runtime grows far more slowly than the input: each time the input size doubles, the algorithm only needs one more step. The best example of an algorithm that runs in logarithmic time is a binary search, which requires a sorted array:

def binary_search(arr, n)
  first = 0
  last = arr.length - 1
  while first <= last
    i = (first + last) / 2   # index of the middle element
    if arr[i] == n
      return true
    elsif arr[i] > n
      last = i - 1           # discard the right half
    else
      first = i + 1          # discard the left half
    end
  end
  false                      # n was never found
end
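
Each pass through the loop throws away half of the remaining elements, so even a miss is decided in just a handful of comparisons:

sorted = [1, 3, 5, 7, 9, 11, 13, 15]
binary_search(sorted, 7)   # => true, found on the first pass (the middle element)
binary_search(sorted, 4)   # => false, decided in three passes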

Linear Time: O(n)

Linear time means that the runtime of the algorithm is proportional to the size of the input. The classic example is iterating over an array to find a target value. This is O(n) because we do not know where in the array the target is, and if it is not there at all, we end up checking every element anyway:

def linear_search(arr, n)
  for i in 0...arr.length   # three dots: stop before arr.length
    if arr[i] == n
      return true
    end
  end
  false
end
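
In the worst case, every element gets checked:

linear_search([4, 8, 15, 16, 23, 42], 15)  # => true, found on the third check
linear_search([4, 8, 15, 16, 23, 42], 99)  # => false, after checking all six elements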

Quadratic Time: O(n²)

Quadratic time is not the most ideal scenario, but sometimes programmers have to roll with it, as there might not be a better solution. The runtime of an O(n²) algorithm grows with the square of the input size. You can imagine how this can be problematic: an input array of length 1,000 means on the order of 1,000,000 operations! The most common example of a quadratic time algorithm is a nested for-loop:

def quadratic_time(arr)
  for i in 0...arr.length        # outer loop: one pass per row
    for j in 0...arr[i].length   # inner loop: one pass per element in the row
      puts arr[i][j]             # for an n × n grid, this line runs n² times
    end
  end
end
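
The example above walks a two-dimensional array. For a case where the n² comes from a single input of size n, a common sketch is checking an array for duplicates by comparing every pair (has_duplicate? is just an illustrative helper):

def has_duplicate?(arr)
  for i in 0...arr.length
    for j in (i + 1)...arr.length   # compare each pair exactly once
      return true if arr[i] == arr[j]
    end
  end
  false
end

has_duplicate?([1, 2, 3, 2])  # => true, after roughly n²/2 comparisons at worst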

Here are various runtimes, listed from most efficient to least efficient:

  • O(1)
  • O(log log n)
  • O(log n)
  • O(n)
  • O(n log* n)
  • O(n²)
  • O(nᵃ), a > 2
  • O(n!)
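
To get a feel for the gaps between these classes, here’s a rough sketch of operation counts at n = 20 (numbers are illustrative):

n = 20
puts Math.log2(n).ceil   # O(log n): ~5 steps
puts n                   # O(n): 20 steps
puts n**2                # O(n²): 400 steps
puts (1..n).reduce(:*)   # O(n!): 2,432,902,008,176,640,000 steps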

The Big Deal

As we can see, different algorithms have different runtimes, some more efficient than others. As you journey into the programming world, calculating the runtimes of your programs will eventually become a daily routine.

Hope this helps!

Thanks for reading!
