Big O notation made simple

If you ever do a technical interview, in 99.9% of cases you'll be asked to write some code or an algorithm and give its algorithmic complexity. You need to know this stuff, and it isn't just for Python: it applies to any language! I'm going to explain this in the simplest way possible while still giving you a good understanding of the topic.

Selection sort and Insertion sort, for example, are what we call quadratic sorting algorithms.

This means they both have a big O time complexity of O(n^2).

Time complexity refers to the time it takes for an algorithm to complete as a function of its input size. There are generally three cases to consider: the best case (big Omega), the average case (big Theta), and the worst case (big O).

Big O deals with the worst case, and it is the case we are usually concerned with!

When we say that both of these algorithms have a time complexity of O(n^2), we are essentially saying that the performance of the algorithms will grow proportionally to the square of the size of the input (lists in this case).
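To make the quadratic behaviour concrete, here is a minimal selection sort in Python. This is a standard textbook sketch, not code taken from the book; the two nested loops are what give the n * n growth:

```python
def selection_sort(items):
    """Selection sort: roughly n * n comparisons, hence O(n^2)."""
    items = list(items)  # work on a copy, leave the input untouched
    n = len(items)
    for i in range(n):
        # Inner loop scans the unsorted remainder to find the smallest
        # element, so the total work is about n * n comparisons.
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

Notice that the inner loop runs regardless of whether the list is already sorted, which is why selection sort is O(n^2) even in the best case.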

I've taken the time to run some algorithms with different time complexities on lists of varying sizes, so we can look at how each algorithm takes longer to complete as the size of the list grows. You can see this in the diagram below:

[Figure: algorithmic complexity comparison, "Time complexity"]

In this diagram, the y-axis represents how long it took the algorithm to finish in seconds and the x-axis shows the number of elements in the list.

The blue line is for an O(n^2) algorithm and the orange line is for an O(n) (linear time) algorithm.

I have a fast computer, so the orange line barely appears to change (it does, just very slowly).

However, it is very clear that the O(n^2) algorithm becomes very slow once the size of the input becomes large.
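A rough way to reproduce this kind of timing experiment yourself is shown below. This is a minimal sketch, not the author's exact setup; the two toy workloads (a single pass vs. an all-pairs comparison) are my own illustrative stand-ins for O(n) and O(n^2) algorithms:

```python
import time

def time_algorithm(fn, n):
    """Roughly time fn on a list of n elements and return the seconds taken."""
    data = list(range(n, 0, -1))
    start = time.perf_counter()
    fn(data)
    return time.perf_counter() - start

def linear(items):
    # O(n): a single pass over the list
    return sum(items)

def quadratic(items):
    # O(n^2): compares every pair of elements
    return sum(1 for a in items for b in items if a < b)

for n in (1000, 2000, 4000):
    print(n, time_algorithm(linear, n), time_algorithm(quadratic, n))
```

Doubling n should roughly double the linear timings but roughly quadruple the quadratic ones, which is exactly the gap the diagram above shows.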

An algorithm's time complexity is usually put into one of the following categories, starting with the best complexity:

O(1) - Constant time
O(log n) - Logarithmic time
O(n) - Linear time
O(n log n) - Linearithmic time
O(n^2) - Quadratic time
O(2^n) - Exponential time
O(n!) - Factorial time

The algorithms whose complexity has a logarithmic component are a little trickier to explain. Take binary search, for example. If we have a sorted list of integers and we are searching for a specific number in that list, the number we are looking for is either in the first half or the second half of the list. If it is in the first half, say, we can cut out the second half and not bother searching it, as that would be a waste of time. Essentially, what we have done is cut the input size in half. We repeat this process until we have our number, each time halving the input.

So, an O(log n) algorithm is one in which we iteratively halve the input. The plot of this can be seen below.

[Figure: logarithmic complexity]
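The halving process described above is easy to write down in Python. This is a minimal textbook sketch of binary search, not code from the book; it assumes the input list is already sorted:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining search range, so the loop
    runs at most about log2(n) times: O(log n).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

On a list of a million elements this takes at most about 20 steps, which is why the logarithmic curve stays so flat.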

I hope these explanations helped you out! You can find out more about algorithmic complexity in the book "Slither into Python" available in the navigation menu at the top of the page! Oh and did I mention it's free to read online?

Help support the author by donating or purchasing a copy of the book (not available yet)

Buy me a coffee