# Big O notation made simple

If you ever do a technical interview, in 99.9% of cases you'll be asked to write some code or an algorithm and give its algorithmic complexity. You need to know this stuff, and it isn't just for Python; this applies to any language! I'm going to explain this in the simplest way possible while still giving you a good understanding of the topic.

Selection sort and insertion sort, for example, are what we call *quadratic sorting algorithms*.

This means they both have a **big O time complexity** of *O(n^2)*.
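To make the quadratic behaviour concrete, here's a minimal sketch of a standard insertion sort in Python. The nested loops are what give it its *O(n^2)* worst case: for each of the n elements, we may have to shift up to n elements to make room.

```python
def insertion_sort(items):
    """Sort a list in place. Worst case O(n^2); best case O(n) on an already-sorted list."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements one position to the right
        # until we find where key belongs.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```

Notice that if the list is already sorted, the inner `while` loop never runs, which is exactly the best case scenario discussed below.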

**Time complexity** refers to the time it takes for the algorithm to complete. There are generally three categories:

- **Best case scenario**: For example with insertion sort, let's say our list is already sorted and we try to sort it. Then we are in a best case scenario, as we don't need to move any elements around and the algorithm finishes quickly.
- **Average case**: This is how the algorithm performs in terms of time on average.
- **Worst case scenario**: This is how the algorithm would perform in terms of time if our list was completely scrambled and out of order.

**Big O** deals with the worst case, and that is the case we are usually concerned with!

When we say that both of these algorithms have a time complexity of *O(n^2)*, we are essentially saying that the performance of the algorithms will grow proportionally to the square of the size of the input (lists in this case).
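One way to see this "grows with the square of the input" claim is to count the basic operations instead of timing them. The following is a hypothetical step-counting sketch (the function name is mine, not from any library): a pair of nested loops over n elements performs n * n steps, so doubling the input size quadruples the work.

```python
def quadratic_steps(n):
    """Count the steps a naive all-pairs algorithm performs on an input of size n."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1  # one unit of work per pair (i, j)
    return steps

print(quadratic_steps(10))  # 100
print(quadratic_steps(20))  # 400 -- double the input, four times the work
```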

I've taken the time to run some algorithms with different time complexities on lists of varying sizes and we can look at how the algorithm takes longer to complete as the size of the list grows. You can see this in the diagram below:

In this diagram, the y-axis represents how long it took the algorithm to finish in seconds and the x-axis shows the number of elements in the list.

The blue line is for an O(n^2) algorithm and the orange line is for an O(n) algorithm (a linear time algorithm).

I have a fast computer so it doesn't appear that the orange line is changing (it is, just very slowly).

However, it is very clear that O(n^2) algorithms become very slow when the size of the input becomes large.

An algorithm's time complexity usually falls into one of the following categories, starting with the best complexity:

- **O(1)** - Constant time: It takes the same time for the algorithm to complete no matter the input size.
- **O(log n)** - Logarithmic time: Will be explained below.
- **O(n)** - Linear time: An algorithm whose performance grows in direct proportion to the size of the input.
- **O(n log n)** - Linearithmic time: Will be explained below.
- **O(n^2)** - Quadratic time: An algorithm whose performance grows in direct proportion to the square of the size of the input.
- **O(2^n)** - Exponential time: An algorithm whose performance doubles with each addition to the input.
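As a quick sketch of the two simplest categories, here are a constant time operation and a linear time operation side by side (the function names are just illustrative):

```python
def first_element(items):
    """O(1) - constant time: one operation no matter how long the list is."""
    return items[0]

def total(items):
    """O(n) - linear time: touches every element exactly once."""
    running_sum = 0
    for x in items:
        running_sum += x
    return running_sum

print(first_element([7, 8, 9]))  # 7 -- same cost for a list of 3 or 3 million
print(total([1, 2, 3, 4]))       # 10 -- cost grows with the list's length
```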

Algorithms whose complexity has a logarithmic component are a little trickier to explain. Take binary search, for example. If we have a sorted list of integers and we are searching for a specific number in that list, the number we are looking for is either in the first half or the second half of the list. If it was in the first half, for example, we could cut out the second half and not bother searching it, as that would be a waste of time. Essentially, what we have done is cut the input size in half. We repeat this process until we have our number, each time halving the input.

So, an **O(log n)** algorithm is one in which we iteratively halve the input. The plot of this can be seen below.
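The binary search described above can be sketched in a few lines of Python. Note that it only works on a **sorted** list; each comparison discards half of the remaining search range, which is where the log n comes from.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent. O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target can only be in the upper half
        else:
            high = mid - 1  # target can only be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```

A sorted list of a million elements needs at most around 20 comparisons, since 2^20 is just over a million.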

I hope these explanations helped you out! You can find out more about algorithmic complexity in the book "Slither into Python" available in the navigation menu at the top of the page! Oh and did I mention it's free to read online?