Big O: Part 1

Jesse Langley
5 min read · Sep 11, 2020


Big O Notation

Big O is a very important topic in computer science and in programming. Big O is a way to describe the complexity of algorithms: it communicates to the next person how fast an algorithm runs, or how much space it will take, as its input grows. This is very important when it comes to large applications.

Why do we need it? Let's imagine we have two valid solutions to one problem. They both work, but they take totally different approaches. Let's say one uses a loop and the other uses a list to accomplish the same task. How do we know which one is best? This is what Big O is about: it's a way of generalizing code and comparing it to other pieces of code.

Let's say you're asked to write a function that takes two dates and returns the number of days between the first and second date. There are many ways to solve this problem. How do we decide which one is the fastest?
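One possible solution, as a sketch in Python (the function name `days_between` is my own, using only the standard library):

```python
from datetime import date

def days_between(first, second):
    # Subtracting two dates gives a timedelta; .days is the gap in whole days.
    return (second - first).days

print(days_between(date(2020, 9, 1), date(2020, 9, 11)))  # 10
```

Other approaches exist, such as counting the days one by one in a loop, and Big O gives us the vocabulary to compare them.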

This reminds me of trying to explain a villain's or a hero's power level in cartoons or superhero movies. How will the viewer know how much more powerful one hero is compared to the other? Using numbers to represent power levels can help explain the difference in abilities without actually seeing it or being there in person.

When I began learning to code, I initially did not care about performance; all I cared about was getting the solution to work. I did not understand Big O at first, and everyone around me made it look so easy but could barely explain it to me. This is my attempt to explain it to my past self.

What I learned is that at the time I was only working with very small sets of data. I quickly realized that when working with large sets of data, a better implementation of a function can save hours by comparison.

Some solutions really are better than others, and performance matters. It's important to have a precise vocabulary when it comes to explaining time or space complexity in code. Even if you are content with your solution, it is important to understand how it performs compared to other solutions. That is especially useful when discussing trade-offs between approaches. Learning Big O helps coders explain and talk about their code better.

So what is the definition of better code or better programming solutions? Does this mean faster code? Does it mean code that takes up less memory, or code that is more reliable or more readable? Honestly, all of this matters, but let's focus on speed specifically and check the time complexity of a function or an algorithm.

What would have helped me when I was studying this initially is if someone had told me to think of the O in Big O as standing for Operations. (Formally it stands for "order," but when it comes to time complexity, thinking "operations" really does help you understand what exactly is going on.)

Ok, so let's start with an example. Let's say we want to write a function that calculates the sum of all numbers from 1 up to and including some number n. (This example comes from this link)
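Two such solutions might look like this (a sketch in Python; the function names `add_up_to_loop` and `add_up_to_formula` are mine):

```python
def add_up_to_loop(n):
    # The longer solution: the work inside the loop repeats n times.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def add_up_to_formula(n):
    # The shorter one-line solution: three operations
    # (a multiplication, an addition, and a division), no matter how big n is.
    return n * (n + 1) // 2

print(add_up_to_loop(100))     # 5050
print(add_up_to_formula(100))  # 5050
```

Both return the same answer for any n; the difference is in how much work they do to get there.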

Ok, so which one is better? Specifically, which one is the fastest? In the shorter one-line answer there are only three operations: a multiplication, an addition, and a division. The longer solution, on the other hand, has two assignments that happen once before the loop, and then one comparison, two additions, and two assignments inside the loop. Just counting operations, this is already a lot more complex.

But on top of that, the operations inside the loop happen as many times as the variable n. Those five operations per pass, plus the two up-front assignments, come out to roughly 5n + 2 operations ("five n plus two"). The last thing here is that we completely disregard constants when talking about Big O, so we drop the 5 and the 2 and leave the time complexity for the longer solution as O(n).

When we talk about the one-line solution, there are only three operations, and they happen only once no matter how big n is. Counting those three operations would give us O(3), but again we drop the constant and write it as O(1). O(1) is the fastest a solution can be and can really save your program a lot of time.

Big O allows us to talk about the runtime of an algorithm as the input grows. The broad understanding of Big O is what is important. There are a few common classes when it comes to describing Big O. Some of them are:

· Linear O(n)

· Quadratic O(n²)

· Constant O(1)

As we saw, the above example with the loop was linear. What if we actually put a loop inside of a loop? That would make the solution quadratic. There are other classes of Big O as well, but a great cheat sheet can be found here.
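A quadratic loop-inside-a-loop might look like this (a sketch in Python; the function name `pairwise_sums` is mine):

```python
def pairwise_sums(numbers):
    # A loop inside a loop: for n items the inner body runs n * n times,
    # which is what makes nested loops quadratic, O(n^2).
    sums = []
    for a in numbers:
        for b in numbers:
            sums.append(a + b)
    return sums

print(len(pairwise_sums([1, 2, 3])))  # 3 inputs -> 9 results
```

Double the input and the work roughly quadruples, which is why quadratic solutions slow down so quickly on large data sets.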

Big O is not as complex as it seems once you really start to dig into it. I only touched the basics here, but stay tuned for more as I will dive deeper into this subject in the future.

Part 2

References:

https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
