Big O Part 2: Space Complexity

Jesse Langley
Sep 18, 2020


In my last blog, I defined Big O notation and explained why it is important, focusing on the time complexity of an algorithm. Time complexity describes how fast an algorithm runs: we analyzed an algorithm's runtime as the size of its input increases.

Now I will look at the space an algorithm takes up as the size of the input increases. We can still use Big O notation to describe what is happening; this is called space complexity.

“The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm to execute a program and produce the output.” (Wikipedia)

Space complexity is sometimes referred to as auxiliary space complexity. As N grows, the input itself grows, so the space taken up by the input is ignored. We care about the algorithm itself: what extra space it requires beyond the input. So unless otherwise noted, when we talk about space complexity, we are ignoring the input and focusing on the auxiliary space.

There are a few rules of thumb to follow for space complexity in JavaScript. The first rule is that booleans, numbers, undefined, and null all take constant space. The second rule is that strings require O(n) space, where n is the string's length. The third rule is that objects and arrays are generally O(n), where n is the array's length or the object's number of keys.
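These three rules can be sketched with a few throwaway declarations (the variable names here are my own, for illustration only):

```javascript
// Rule 1: primitives take constant space.
let flag = true;            // boolean   -> O(1)
let count = 42;             // number    -> O(1)
let nothing = null;         // null      -> O(1)

// Rule 2: strings take O(n) space, n = string length.
let name = "space complexity";

// Rule 3: arrays and objects take O(n) space,
// n = array length or number of keys.
let nums = [1, 2, 3];
let user = { id: 1, role: "admin" };
```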

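The original post showed the next function as an image. A minimal sketch matching the description below might look like this:

```javascript
// Sums all the numbers in an array.
// Only two variables (total and i) are ever created,
// no matter how long the array is -> O(1) space.
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

// sum([1, 2, 3]) returns 6
```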
The example function above takes in an array and sums all of the numbers in it. The only things taking up space are the variable total and the loop counter i. Adding into total costs time, but not new space: the variable already exists. So no matter how large the array grows, the space being used stays the same, because the function only ever creates those two variables. Since we add to total rather than creating new variables, we have constant space.

Just like constant time, constant space is described as O(1). The space complexity of sum will always be the same no matter the size of the input.

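The original snippet for this example was also an image; here is a minimal sketch of a double function as described below:

```javascript
// Returns a new array with every item from the input doubled.
// newArr grows in direct proportion to the input -> O(n) space.
function double(arr) {
  let newArr = [];
  for (let i = 0; i < arr.length; i++) {
    newArr.push(2 * arr[i]);
  }
  return newArr;
}

// double([2, 6, 8]) returns [4, 12, 16]
```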
The example above, double, takes an array and returns a new array with every item from the first array doubled. So double([2, 6, 8]) would return [4, 12, 16].

The thing taking up space in this function is the newArr variable, the empty array created at the beginning of the function. In the for-loop, newArr gets longer and longer, directly proportional to the length of the input array. So the space taken up is directly proportional to the input, and our third rule of thumb says that objects and arrays are usually O(n) space.

Strictly speaking, double would be O(n) + 2: the newArr declaration and the i declaration in the for-loop each count as one constant-space variable, while the new array being built is O(n). But as we saw in the last blog, we ignore constants in Big O, so this function is described as having O(n) space complexity.

Knowing time and space complexity is important when speaking with other programmers. A programmer should be able to notice when an algorithm has terrible time complexity but very good space complexity, or vice versa.

This subject is very important so I will continue to talk about it through a lot of my blogs. This is really an opportunity to lay the groundwork for myself and readers!

