Getting Asymptotes with the Big O

Let's Talk about Math, Baby!

Posted by 'Tine Zekis on May 31, 2015 · 12 mins read

Assignment: Technical Blog

Let's talk about you and me... Let's talk about all the good things and the bad things that make a method take too long even for a computer! So, let's say we're creating a method for our computer, and we want to know whether our method is efficient. That is to say, if we had a very large input, would the computer be able to return the desired output in a reasonable amount of time? In order to understand the efficiency of a method, let us explore the example of looking up a name in a phone book:

Start from the Very Beginning: My Nanny Says It's the Very Best Place to Start

Let's say we want to find the phone number for Christine Onayemi in a phone book. For those of you who grew up with the Internet, a phone book is a large book containing listings in alphabetical order by last name, then by first name or initial. Each name then has a phone number and possibly a street address. So, if we want to find Christine Onayemi, we would flip to the O section. But how would we tell our computer how to "flip" to a particular page? Instead, we could ask it to take each name and see if the last name starts with an "O". If not, then it goes on to the next name. And the next. Finally, when we get to a name that does start with "O", we need to check whether the second letter is "N", and so on until we have matched the last name. Then we need to look through the "Onayemi"s to find one whose first name starts with "C", and so on. As you can imagine, if this is a large phone book, this method will take a lot of time. So let's see if we can make this search more efficient.
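In code, this name-by-name approach is just a linear scan. Here is a minimal sketch, assuming the phone book is a Python list of (last name, first name, number) tuples; the names and numbers are made up for illustration:

```python
# A minimal sketch of the page-by-page search, assuming the phone book is a
# list of (last_name, first_name, number) tuples in alphabetical order.
def linear_search(phone_book, last, first):
    # Worst case, we touch every entry once, so the runtime grows in direct
    # proportion to the size of the book.
    for entry_last, entry_first, number in phone_book:
        if entry_last == last and entry_first == first:
            return number
    return None

book = [("Adams", "Ann", "555-0100"),
        ("Onayemi", "Christine", "555-0199"),
        ("Zekis", "Tine", "555-0177")]
print(linear_search(book, "Onayemi", "Christine"))  # 555-0199
```

In the worst case (a name at the very end of the book, or not in the book at all), we have to look at every single entry once.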

Start in the Middle and Go from There

If we have a phone book of, say, 1,000,000 names, then we can instruct our method to start at the halfway mark. So, we look at the name that is 500,000 in from the beginning and compare this name to "Onayemi". If this name happens to be "Onayemi, Christine", then we're done...how lucky! But in the much more likely case that this is not the end of our search, we will use what is called a binary search method to continue. If "Onayemi" comes after our halfway-point name alphabetically, then we move on to the three-fourths mark (half of the second half); if "Onayemi" comes before this name, we look at one-fourth (half of the first half). We continue to halve segments until we get to the name we are looking for. Believe it or not, using this method, in a list of 1,000,000 people, we will only need to halve a segment a maximum of 20 times before we are certain to find the name we are looking for, since 2²⁰ is just over 1,000,000. It seems this method is much more efficient than the first!
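Here is the same halving strategy as code: a sketch of binary search over the same made-up phone book list (sorted, just as a real phone book is). Each comparison throws away half of what's left, which is where the 20-comparison ceiling comes from:

```python
# A sketch of the halving strategy as binary search, assuming a sorted list
# of (last_name, first_name, number) tuples with made-up entries.
def binary_search(phone_book, last, first):
    target = (last, first)
    low, high = 0, len(phone_book) - 1
    while low <= high:
        mid = (low + high) // 2            # jump to the middle of the segment
        entry = phone_book[mid]
        if (entry[0], entry[1]) == target:
            return entry[2]
        elif (entry[0], entry[1]) < target:
            low = mid + 1                  # target is in the second half
        else:
            high = mid - 1                 # target is in the first half
    return None

book = [("Adams", "Ann", "555-0100"),
        ("Onayemi", "Christine", "555-0199"),
        ("Zekis", "Tine", "555-0177")]
print(binary_search(book, "Onayemi", "Christine"))  # 555-0199
```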

So, if there are several ways to solve the same problem, how can we compare them to determine which is the most efficient? Enter: the big O.

Big O Complexity and Notation

Mathematically speaking, each time we ask a computer to carry out a method, we are initiating an algorithm. The complexity of that algorithm will determine the run-time of the method. In order to compare algorithm complexities, big O notation was developed. With big O notation we express the runtime in terms of "how quickly it grows relative to the input, as the input gets arbitrarily large" (Parker Phinney, Interview Cake). We cannot predict runtime exactly, since there are factors we cannot take into account, such as the processor's speed or whether the computer is running other programs simultaneously. Instead, we think of runtime in terms of how quickly it grows as our inputs get bigger and bigger. For big O notation, we will call our input size n. We say that our runtime grows "on the order of the size of the input" [O(n)], or "on the order of the square of the size of the input" [O(n²)], etc. In fact, the "O" in big O stands for "order". It is important to note that we are only interested in the highest power (order) of n, since that term grows the fastest as our input becomes huge. As the input grows, the runtime curve gets closer and closer to the curve of its highest-order term alone, the way a curve approaches an asymptote. This is why "big O analysis" is sometimes referred to as "asymptotic analysis". Hopefully this will become clearer with some examples, so let's get to it!
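To see why only the highest power matters, here is a quick sketch; the runtime function f below is made up purely for illustration:

```python
# A made-up runtime: suppose a method takes 3n^2 + 5n + 2 steps on an input
# of size n. The n^2 term is the highest order, so the method is O(n^2).
def f(n):
    return 3 * n**2 + 5 * n + 2

# As n grows, f(n) / n^2 settles toward the constant 3: the lower-order
# terms (5n + 2) become negligible, which is why big O drops them.
for n in [10, 100, 1_000, 10_000]:
    print(n, f(n) / n**2)
```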

Big O Notation Examples

Addition Algorithm: Linear Complexity

Let's discuss the algorithm for adding two multi-digit numbers together. Generally, when we add such numbers, we can line them up by place value and begin to add corresponding pairs of digits from right to left. If a sum for any digit pair is greater than 9, we will carry over the digit in the tens place and add it to the next pair of digits. Using this algorithm, if we are adding two 6-digit numbers, then we will have to add six pairs of digits, and possibly carry a seventh. We will assume that the act of adding is more costly to the runtime than the carrying, so the number of pairs to add will be our focus. If we add two 100-digit numbers together, then we need to compute 100 additions. For two 10,000-digit numbers, we must compute 10,000 additions. In this case, the complexity of our algorithm is directly proportional to the number of digits, which we will call n. This is called O(n), or linear complexity. The algorithm for subtraction behaves similarly, except that we borrow digits rather than carry them.
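Here is a sketch of that grade-school addition, assuming (for simplicity) non-negative numbers with the same digit count, represented as lists of digits; the example numbers are made up:

```python
# A minimal sketch of digit-by-digit addition with carrying, assuming both
# inputs are equal-length digit lists, most significant digit first.
def add_digits(a, b):
    result, carry = [], 0
    # One pass over the digits, right to left: n digit-pairs means n
    # additions, so the work grows linearly with n -- O(n).
    for x, y in zip(reversed(a), reversed(b)):
        total = x + y + carry
        result.append(total % 10)  # keep the ones place
        carry = total // 10        # carry the tens place
    if carry:
        result.append(carry)       # the possible extra (n+1)th digit
    return list(reversed(result))

print(add_digits([9, 4, 7], [2, 8, 6]))  # 947 + 286 = [1, 2, 3, 3]
```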

Multiplication Algorithm: Quadratic Complexity

Now let's talk about how we would multiply two multi-digit numbers. We will line the numbers up in a similar fashion by place value. However, this time, we take each digit from the bottom number and multiply it by each digit in the top number (again, carrying digits beyond the ones place). We repeat this for each digit of the bottom number. So, when multiplying two 6-digit numbers, we need to calculate 36 multiplications! There will also be addition of the resulting numbers, but in this case, our multiplication will grow our runtime at a faster rate, so we do not bother ourselves with details regarding addition. If we multiply two 100-digit numbers, we must calculate 10,000 multiplications. Since this algorithm scales with n², we say that it is O(n²), or quadratic complexity.
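And a sketch of that long multiplication, under the same assumptions as the addition example (non-negative digit lists, made-up example numbers):

```python
# A sketch of grade-school long multiplication on digit lists, most
# significant digit first, assuming non-negative inputs.
def multiply_digits(a, b):
    result = [0] * (len(a) + len(b))  # indexed by place value, ones first
    # Every digit of b meets every digit of a: n * n = n^2 single-digit
    # multiplications, which is why the algorithm is O(n^2).
    for i, y in enumerate(reversed(b)):
        for j, x in enumerate(reversed(a)):
            result[i + j] += x * y
    # Normalize the carries in one linear pass (cheap next to the n^2 work).
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # drop leading zeros
    return list(reversed(result))

print(multiply_digits([1, 2], [3, 4]))  # 12 * 34 = [4, 0, 8]
```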

The Traveling Salesman

A common problem in the world of computer science is that of the Traveling Salesman: There are n towns, and each is linked to one or more other towns by a road of a particular distance. The Traveling Salesman must find the shortest route that allows him to visit each town exactly once.

Okay, this isn't so bad...let's say there are three towns: A, B, and C. If there is a road between all pairs, then the salesman's options are limited to the following:

  • A → B → C
  • A → C → B
  • B → C → A
  • B → A → C
  • C → A → B
  • C → B → A
And since (A → B → C) and (C → B → A) are really the same paths (just in reverse), one member of each reversed pair can be eliminated. Thus, there are only 3 possibilities for our salesman. No problem!

But what happens when we increase n (the number of towns)? With 4 towns, we get 12 possibilities, 5 towns give 60, 6 towns give 360. We can calculate the number of possibilities by taking half of the factorial of n.

  • 3! / 2 = 3 * 2 * 1 / 2 = 6 / 2 = 3
  • 4! / 2 = 4 * 3 * 2 * 1 / 2 = 24 / 2 = 12
  • 5! / 2 = 5 * 4 * 3 * 2 * 1 / 2 = 120 / 2 = 60
  • 6! / 2 = 6 * 5 * 4 * 3 * 2 * 1 / 2 = 720 / 2 = 360
  • ...
  • 25! / 2 = 25 * 24 * … * 2 * 1 / 2 = 15,511,210,043,330,985,984,000,000 / 2 = 7,755,605,021,665,492,992,000,000
By the time we get to only 200 towns, we no longer have enough time in the universe to solve this problem with traditional computers!
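To make the factorial blow-up concrete, here is a brute-force sketch of the salesman's search. The towns and distances are made up for illustration, and the code literally tries every ordering, so its work grows with n!:

```python
# A brute-force sketch of the Traveling Salesman search over made-up towns.
# It tries all n! orderings (including each route's reverse twin, which a
# smarter version would skip to cut the count in half).
from itertools import permutations

# Hypothetical symmetric road distances between towns A, B, and C.
distances = {
    ("A", "B"): 5, ("B", "A"): 5,
    ("A", "C"): 8, ("C", "A"): 8,
    ("B", "C"): 3, ("C", "B"): 3,
}

def shortest_route(towns):
    best_route, best_length = None, float("inf")
    for route in permutations(towns):  # n! orderings -- O(n!)
        length = sum(distances[(route[i], route[i + 1])]
                     for i in range(len(route) - 1))
        if length < best_length:
            best_route, best_length = route, length
    return best_route, best_length

print(shortest_route(["A", "B", "C"]))  # (('A', 'B', 'C'), 8)
```

Three towns finish instantly; by 25 towns, the loop above would already need to grind through those 7,755,605,021,665,492,992,000,000 routes.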

Naming Complexities

  • Constant Complexity: O(1) - 1 item takes 1 second, 10 items take 1 second, 100 items take 1 second. The input size does not impact the runtime as it grows.
  • Linear Complexity: O(n) - 1 item takes 1 second, 10 items take 10 seconds, 100 items take 100 seconds. The input and runtime have a linear relationship; as one increases by a factor of 10, so does the other.
  • Quadratic Complexity: O(n²) - 1 item takes 1 second, 10 items take 100 seconds, 100 items take 10,000 seconds. The runtime grows with the square of the input.
  • Polynomial Complexity: O(nᵃ) - Any complexity that can be written in this form has a polynomial complexity, or is solvable in polynomial time.
  • Logarithmic Complexity: O(log n) - 1 item takes 1 second, 10 items take 2 seconds, 100 items take 3 seconds, 1,000 items take 4 seconds, etc. Each tenfold increase in the input adds only one second; the runtime grows with the log of the input.
  • Factorial Complexity: O(n!) - The runtime grows with the factorial of the input (see the Traveling Salesman example above).
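To make these growth rates tangible, here is a tiny sketch that prints the step counts side by side (using base-2 logs, as in binary search; the base only changes the log column by a constant factor):

```python
import math

# Print how many abstract "steps" each complexity class costs as n grows.
print(f"{'n':>4} {'O(1)':>5} {'O(log n)':>9} {'O(n)':>5} {'O(n^2)':>7} {'O(n!)':>26}")
for n in [1, 2, 4, 8, 16, 20]:
    print(f"{n:>4} {1:>5} {math.log2(n):>9.1f} {n:>5} {n**2:>7} "
          f"{math.factorial(n):>26,}")
```

By n = 20, the constant, log, linear, and quadratic columns are still tiny, while the factorial column has already passed two quintillion.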

Well, I hope this has been helpful in your understanding of big O complexities and notation, as well as how we determine the efficiency of a method. Please stay tuned as I continue to blog about my learnings at Dev Bootcamp. I'm coming up on my last week before the on-site portion of the program, so get excited!

Resources: