Algorithms Cheat Sheet with JavaScript

What they don't usually teach in bootcamps. Practical implementations of common algorithms in idiomatic JavaScript.

I will introduce you to how to think about algorithms: how to estimate them, how to optimize them, recursion, and everything else you need to know as a credible JavaScript engineer. These are the things I wish I had known earlier in my career when interviewing.

Relax, take your time and enjoy.

What is an Algorithm?

  • It is a set of instructions to solve a problem.
  • As an engineer, it's your job / goal to solve problems.
  • When solving problems, you identify patterns / techniques that are more efficient / performant.
  • Remember that there are many ways to solve a single problem, but which one is the fastest when the amount of input data to process increases?

Time complexity

  • Time complexity is how many primitive operations are executed in an algorithm as data is fed to it
  • It helps to reason about the speed of an algorithm
    — It’s an approximation of the time it takes to execute an algorithm as the input data increases
    — It's an approximation because you cannot exactly predict how long an algorithm will take to execute on every possible machine architecture / environment / hardware configuration / etc.
    — So instead of using seconds or milliseconds, a growth factor is used to indicate how much the work an algorithm performs tends to grow as you feed it more data.
  • To sum up, time complexity is how many primitive operations are executed to solve a problem as the input grows, usually in the worst-case scenario.
  • Think about the cost of operations performed. The speedier an algorithm, the more data it can process in a reasonable time.

Space complexity

  • Space complexity is how much memory is used with respect to input size when executing an algorithm (see the sketch below)
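
For intuition, here is a small sketch contrasting constant extra memory with memory that grows with the input; both function names are made up for the example:

// Constant extra memory: a single accumulator, no matter how long the list is.
function sum(numbers) {
  let total = 0;
  for (const n of numbers) {
    total += n;
  }
  return total;
}

// Extra memory that grows with the input: a new array as large as the input is built.
function doubled(numbers) {
  return numbers.map(n => n * 2);
}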

Example of Time Complexity Estimation

Let's say you have a list of VPN providers on a platform. You want to find the most expensive and the cheapest ones. Remember that providers keep being added every day, so the list keeps growing.

const vpnProviders = [
  { name: 'totoVpn', price: 234 },
  { name: 'titoVpn', price: 321 },
  { name: 'tatoVpn', price: 99 },
  { name: 'tutoVpn', price: 54 },
  ...
];

Approach 1: compare all providers to one another

As you loop through the list, you compare each price against all the other prices, keeping track of the highest and the lowest.

Here is a representation of the comparisons made when searching for the most expensive price:

      |  234  |  321  |  99   |  54   |
---------------------------------------
 234  |  --   |  321  |  234  |  234  |
---------------------------------------
 321  |  321  |  --   |  321  |  321  |
---------------------------------------
 99   |  234  |  321  |  --   |  99   |
---------------------------------------
 54   |  234  |  321  |  99   |  --   |

This approach results in nested loops that compare each price against all the other ones in the list to find the highest. As the list grows, you get the following:

 # of providers (n)  |   4   |   11   |   100   |    2000     |
----------------------------------------------------------------
 # of operations     |   16  |   121  |  10,000 |  4,000,000  |

This growth is called quadratic time:

  • as the number n of providers increases, the number of operations grows as the square of n (4², 11², 100² operations).
  • so basically, the more your data grows, the more enormous the amount of work becomes (see the code sketch below)
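
Here is a minimal sketch of this brute-force approach. The function name findMostExpensiveQuadratic is made up for the example, and it assumes the vpnProviders shape shown above:

function findMostExpensiveQuadratic(providers) {
  let mostExpensive = null;
  for (const candidate of providers) {
    // Compare the candidate against every other provider, as in the table above.
    let isHighest = true;
    for (const other of providers) {
      if (other.price > candidate.price) {
        isHighest = false; // some other provider is pricier
      }
    }
    if (isHighest) {
      mostExpensive = candidate;
    }
  }
  return mostExpensive; // roughly n * n comparisons for n providers
}

Every candidate is checked against every other provider, so for n providers the work grows as n * n, which is exactly the quadratic growth shown in the table.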

Approach 2: track the minimum and maximum

      |  234  |  321  |  ...  |  99   |  54   |
-----------------------------------------------
 max? |  234  |  321  |  ...  |  321  |  321  |
-----------------------------------------------
 min? |  234  |  234  |  ...  |  99   |  54   |

  • this approach results in two loops that are not nested: the first one finds the maximum price, the second one finds the minimum price (see the sketch after this list)
  • each item in the list gets looked at only once per loop and is stored if it meets the condition (is the max, is the min)
  • here the time complexity is linear (2n) because the two distinct loops each look at all items once
  • the number of operations to perform is twice the number of items; simply put, as the data grows, the work grows proportionally (2 operations per item)

 # of providers (n)  |   4   |   11  |  100  |  2000   |
---------------------------------------------------------
 # of operations     |   8   |   22  |  200  |  4,000  |

  • you can see that 2n grows much more slowly than n², so there are far fewer operations to perform as the data grows
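
Here is a minimal sketch of this linear approach, again with made-up function names and the same vpnProviders shape assumed:

function findMostExpensive(providers) {
  let mostExpensive = providers[0];
  for (const provider of providers) {
    if (provider.price > mostExpensive.price) {
      mostExpensive = provider; // highest price seen so far
    }
  }
  return mostExpensive;
}

function findCheapest(providers) {
  let cheapest = providers[0];
  for (const provider of providers) {
    if (provider.price < cheapest.price) {
      cheapest = provider; // lowest price seen so far
    }
  }
  return cheapest;
}

Each function makes a single pass over the list, so running both performs about 2n operations in total.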

Approach 3: sorted list

const vpnProviders = [
  { name: 'tutoVpn', price: 54 },
  { name: 'tatoVpn', price: 99 },
  { name: 'totoVpn', price: 234 },
  { name: 'titoVpn', price: 321 },
  ...
];
  • in the case of a sorted list, it's straightforward to find the minimum and maximum
  • no need for loops, just 2 operations: take the first value as the minimum and the last value as the maximum (see the sketch below)
  • therefore the time complexity of the algorithm is constant, because no matter the number n of items, the number of operations stays the same

 # of providers (n)  |   4   |   11  |  100  |  2000  |
--------------------------------------------------------
 # of operations     |   2   |   2   |   2   |   2    |
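
Assuming the list is kept sorted by price in ascending order as shown above, a minimal sketch of the lookup is just a couple of statements:

// The list is sorted by ascending price, so the cheapest provider is
// the first item and the most expensive one is the last item.
const cheapest = vpnProviders[0];
const mostExpensive = vpnProviders[vpnProviders.length - 1];

Of course, keeping the list sorted as providers are added has its own cost, which is part of the trade-off discussion below.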

Introducing Big O

  • The time complexity growth is represented by the big O notation
  • so far, we have seen:
    - Quadratic time complexity represented by O(n²)
    - Linear time complexity represented by O(n)
    - Constant time complexity represented by O(1)
  • Because time complexity is an approximation, constant factors are dropped; therefore a time complexity of 2n is just O(n), not O(2n)
  • Factors are dropped:
    - 5n is O(n) [pronounced "o of n"]
    - 24n² is O(n²) [pronounced “o of n squared”]
  • here's a graphical representation of the growth (x axis = data input, y axis = time):
  • knowing the time complexity helps you understand the limits of an algorithm in terms of how much data it can withstand before it takes a million years to finish
  • It also helps to make trade-offs about choosing one algorithm over others based on certain conditions
  • the notion of algorithmic correctness is about your choice of an algorithm: is it a correct solution, and is it the most optimal one for the problem at hand?
  • notice that with a logarithmic time complexity (O(log n)), the extra work needed for each additional item keeps shrinking as you feed the algorithm more data, although for very small inputs it can still be slower than a quadratic algorithm (a binary search sketch follows the list below)
  • so from very performant to extremely slow, you have:
    - Constant time complexity represented by O(1)
    - Logarithmic time complexity represented by O(log n)
    - Linear time complexity represented by O(n)
    - Quadratic time complexity represented by O(n²)
    - Cubic time complexity represented by O(n³)
    - Exponential time complexity represented by O(kⁿ)
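
As a quick illustration of the logarithmic case, here is a minimal binary search sketch (not one of the approaches above); it assumes a sorted array of numbers and halves the search range at every step, so n items take roughly log₂(n) steps:

function binarySearch(sortedValues, target) {
  let low = 0;
  let high = sortedValues.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedValues[mid] === target) {
      return mid; // found the target, return its index
    }
    if (sortedValues[mid] < target) {
      low = mid + 1; // the target can only be in the upper half
    } else {
      high = mid - 1; // the target can only be in the lower half
    }
  }
  return -1; // the target is not in the array
}

Doubling the size of the array only adds one extra step, which is why logarithmic algorithms cope so well with growing data.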

That's it

Still learning
