Big Oh

September 7, 2024 3:08 PM

A few years ago I decided to learn more about algorithms and data structures. I had a basic understanding of them, but I decided my first mountain would be LeetCode. It took me three days to figure out my first algorithm problem. You can find a lot of that work on my GitHub as well as my LeetCode profile.

So let's talk about Big O notation. Big O notation describes the efficiency of an algorithm: it tells you how the runtime of an algorithm grows as the input size grows.

For this blog post, we'll cover the most common Big O notations. Most of the code will be in Ruby and Go, and I might add JavaScript depending on the algorithm.

Terminology

Throughout this post, n refers to the size of the input, for example the length of an array. Big O describes how an algorithm's work grows as n grows, ignoring constant factors.

Big O Notations

O(1) - Constant Time

Constant time is the best-case scenario for an algorithm. It means that the runtime does not change as the input size grows. This is the most efficient runtime an algorithm can have.

Let's assume that you have an array of numbers and you want to get the nth element of the array. The runtime of this algorithm is O(1) ("O of one") because no matter how big the array is, it always takes the same amount of time to get the nth element.

def get_nth_element(array, n)
  array[n]
end

func getNthElement(array []int, n int) int {
  return array[n]
}

O(n) - Linear Time

Linear time is when the runtime of the algorithm grows linearly with the input size: if the input doubles, the work roughly doubles. This is the most common runtime you'll see in algorithms.

Let's assume that you now have an array of numbers and you want to find out whether a specific number is in the array. The runtime of this algorithm is O(n) ("O of n") because in the worst case you have to look at every element, so the runtime grows linearly with the size of the array. A linear search like this returns a boolean: true if the number is found, false if it isn't.

def find_number(array, number)
  array.each do |n|
    return true if n == number
  end
  return false
end

func findNumber(array []int, number int) bool {
  for _, n := range array {
    if n == number {
      return true
    }
  }
  return false
}

function findNumber(array, number) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === number) {
      return true;
    }
  }
  return false;
}

O(n^2) - Quadratic Time

Here is where things start to get a little bit slower. Quadratic time is when the runtime of the algorithm grows quadratically with the input size. You can think of quadratic time as a nested loop.

For example, you have an array of numbers and you want to find all the pairs of numbers that add up to a specific target. The runtime of this algorithm is O(n^2) ("O of n squared") because for every element you loop over the array again (a nested loop). Care has to be taken to avoid duplicate pairs: in the code below, a pair is only kept when i < j, so each pair appears once and an element is never paired with itself.

def find_pairs(array, target)
  pairs = []
  array.each_with_index do |n, i|
    array.each_with_index do |m, j|
      # i < j keeps each pair once and skips pairing an element with itself
      pairs << [n, m] if n + m == target && i < j
    end
  end
  pairs
end

func findPairs(array []int, target int) [][]int {
  pairs := [][]int{}
  for i, n := range array {
    for j, m := range array {
      // i < j keeps each pair once and skips pairing an element with itself
      if n+m == target && i < j {
        pairs = append(pairs, []int{n, m})
      }
    }
  }
  return pairs
}

function findPairs(array, target) {
  let pairs = [];
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      // i < j keeps each pair once and skips pairing an element with itself
      if (array[i] + array[j] === target && i < j) {
        pairs.push([array[i], array[j]]);
      }
    }
  }
  return pairs;
}

O(log n) - Logarithmic Time

Logarithmic time is when the runtime of the algorithm grows logarithmically with the input size. After constant time, it's one of the most efficient runtimes you'll see: the runtime grows far more slowly than the input size. You'll usually find it in algorithms that use divide and conquer.

You're probably familiar with binary search. Binary search finds the position of a target value within a sorted array. The runtime of this algorithm is O(log n) ("O of log n") because each comparison halves the remaining search space; even an array of a million elements takes at most around 20 comparisons.

def binary_search(array, target)
  low = 0
  high = array.length - 1

  while low <= high
    mid = (low + high) / 2
    if array[mid] == target
      return mid
    elsif array[mid] < target
      low = mid + 1
    else
      high = mid - 1
    end
  end
  return -1
end

func binarySearch(array []int, target int) int {
  low := 0
  high := len(array) - 1

  for low <= high {
    mid := (low + high) / 2
    if array[mid] == target {
      return mid
    } else if array[mid] < target {
      low = mid + 1
    } else {
      high = mid - 1
    }
  }
  return -1
}

function binarySearch(array, target) {
  let low = 0;
  let high = array.length - 1;

  while (low <= high) {
    let mid = Math.floor((low + high) / 2);
    if (array[mid] === target) {
      return mid;
    } else if (array[mid] < target) {
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }
  return -1;
}

O(2^n) - Exponential Time

Exponential time is when the runtime of the algorithm grows exponentially with the input size: adding one more element to the input roughly doubles the work. Of the runtimes covered here, this is the one you most want to avoid.

Recursive algorithms that make multiple recursive calls per step are often exponential. The classic example is the naive recursive Fibonacci: each call spawns two more calls, so the runtime grows exponentially with n.

def fibonacci(n)
  return n if n <= 1
  fibonacci(n - 1) + fibonacci(n - 2)
end

func fibonacci(n int) int {
  if n <= 1 {
    return n
  }
  return fibonacci(n - 1) + fibonacci(n - 2)
}

function fibonacci(n) {
  if (n <= 1) {
    return n;
  }
  return fibonacci(n - 1) + fibonacci(n - 2);
}

But what does all of this mean, and how do you actually use it?

When you're writing code, you should always be thinking about its efficiency: which of these runtimes does it fall into, and can you get it into a faster one?

My biggest issue when I first started learning about things like loops and recursion was that I didn't understand the efficiency of my code. I would write code that worked but it was slow.
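
As a rough sketch of what that kind of improvement can look like (the fibonacciMemo name and the cache parameter here are just for illustration): the naive Fibonacci from the exponential section can be brought down to roughly linear time by caching results that have already been computed, a technique known as memoization.

package main

import "fmt"

// fibonacciMemo caches results that have already been computed, so each
// value of n is only calculated once. That turns the naive O(2^n)
// recursion into roughly O(n).
func fibonacciMemo(n int, cache map[int]int) int {
  if n <= 1 {
    return n
  }
  if v, ok := cache[n]; ok {
    return v
  }
  cache[n] = fibonacciMemo(n-1, cache) + fibonacciMemo(n-2, cache)
  return cache[n]
}

func main() {
  fmt.Println(fibonacciMemo(40, map[int]int{})) // 102334155
}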

Some Real-World Applications

O(1) - Constant Time

Looking up a value by its key in a hash table, or reading an array element by its index (see the sketch after this list).

O(n) - Linear Time

Scanning a collection from start to finish, for example summing every number in an array or searching an unsorted list.

O(n^2) - Quadratic Time

Comparing every item against every other item, such as a simple bubble sort or the pair-finding example above.

O(log n) - Logarithmic Time

Binary search over sorted data, which is also how database indexes keep lookups fast.

O(2^n) - Exponential Time

Brute-force approaches that try every combination of the input, like the naive recursive Fibonacci above.
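
To make the first two of those concrete, here's a small sketch (the containsSlice and containsMap names are just for this example): membership in a Go map is a single hash lookup, which is O(1) on average, while membership in a slice means scanning it element by element, which is O(n).

package main

import "fmt"

// containsSlice scans the slice element by element: O(n).
func containsSlice(values []int, target int) bool {
  for _, v := range values {
    if v == target {
      return true
    }
  }
  return false
}

// containsMap does a single hash lookup: O(1) on average.
func containsMap(values map[int]bool, target int) bool {
  return values[target]
}

func main() {
  slice := []int{2, 4, 6, 8}
  set := map[int]bool{2: true, 4: true, 6: true, 8: true}

  fmt.Println(containsSlice(slice, 6)) // true, found after scanning the slice
  fmt.Println(containsMap(set, 6))     // true, found with one lookup
}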

Conclusion

Write code, and write efficient code. Understand the efficiency of your code, of the algorithms you're using, and of the data structures you're using.
