Let's talk about you and me... Let's talk about all the good things and the bad things that make a method take too long even for a computer! So, let's say we're creating a method for our computer, and we want to know whether our method is efficient. That is to say, if we had a very large input, would the computer be able to return the desired output in a reasonable amount of time? In order to understand the efficiency of a method, let us explore the example of looking up a name in a phone book:

Let's say we want to find the phone number for Christine Onayemi in a phone book. For those of you who grew up with the Internet, a phone book is a large book containing listings in alphabetical order by last name, then by first name or initial. Each name then has a phone number and possibly a street address. So, if we want to find Christine Onayemi, we would flip to the **O** section. But how would we tell our computer how to "flip" to a particular page? Instead, we could ask it to take each name and see if the last name starts with an "O". If not, then it goes on to the next name. And the next. Finally, when we get to a name that does start with "O", we need to check whether the second letter is "N", and so on until we have matched the last name. Then we need to look through the "Onayemi"s to find one whose first name starts with "C", and so on. As you can imagine, if this is a large phone book, this method will take a lot of time. So let's see if we can make this search more efficient.
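Here's a minimal sketch of that brute-force scan in Python, using a made-up four-entry phone book (the names, numbers, and function name are all hypothetical):

```python
# Hypothetical phone book: (last_name, first_name, number) tuples,
# sorted alphabetically just like a printed phone book.
phone_book = [
    ("Nakamura", "Aiko", "555-0134"),
    ("Onayemi", "Adaeze", "555-0187"),
    ("Onayemi", "Christine", "555-0142"),
    ("Park", "Daniel", "555-0169"),
]

def linear_search(book, last, first):
    """Check every entry in order until we find a match."""
    for entry_last, entry_first, number in book:
        if entry_last == last and entry_first == first:
            return number
    return None  # name not in the book

print(linear_search(phone_book, "Onayemi", "Christine"))  # 555-0142
```

In the worst case (a name near the end, or not in the book at all), this checks every single entry.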

If we have a phone book of, say, 1,000,000 names, then we can instruct our method to start at the halfway mark. So, we look at the name that is 500,000 in from the beginning and compare this name to "Onayemi". If this name happens to be "Onayemi, Christine", then we're done...how lucky! But in the much more likely case that this is not the end of our search, we will use what is called a binary search method to continue. If "Onayemi" comes after our halfway-point name alphabetically, then we move on to the three-fourths mark (half of the second half); if "Onayemi" comes before this name, we look at one-fourth (half of the first half). We continue to halve segments until we get to the name we are looking for. Believe it or not, using this method, in a list of 1,000,000 people, we will only need to halve a segment a maximum of 20 times before we are certain to find the name we are looking for. It seems this method is much more efficient than the first!
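A sketch of that halving strategy in Python, reusing the same hypothetical four-entry phone book (names and numbers are made up):

```python
import math

phone_book = [
    ("Nakamura", "Aiko", "555-0134"),
    ("Onayemi", "Adaeze", "555-0187"),
    ("Onayemi", "Christine", "555-0142"),
    ("Park", "Daniel", "555-0169"),
]

def binary_search(book, last, first):
    """Repeatedly halve the search range of a sorted book."""
    target = (last, first)
    lo, hi = 0, len(book) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        entry = (book[mid][0], book[mid][1])
        if entry == target:
            return book[mid][2]
        elif entry < target:
            lo = mid + 1   # target is in the second half
        else:
            hi = mid - 1   # target is in the first half
    return None

print(binary_search(phone_book, "Onayemi", "Christine"))  # 555-0142

# Why "a maximum of 20 times" for 1,000,000 names:
print(math.ceil(math.log2(1_000_000)))  # 20
```

Each comparison throws away half of the remaining names, which is why a million entries collapse in about twenty steps.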

So, if there are several ways to solve the same problem, how can we compare them to determine which is the most efficient? Enter: the big O.

Mathematically speaking, each time we ask a computer to carry out a method, we are initiating an algorithm. The complexity of that algorithm will determine the run-time of the method. In order to compare algorithm complexities, big O notation was developed. With big O notation we express the runtime in terms of **"how quickly it grows relative to the input, as the input gets arbitrarily large"** (Parker Phinney, Interview Cake). We cannot predict runtime exactly, since there are factors we cannot take into account, such as the processor's speed or whether the computer is running other programs simultaneously. Instead, we think of runtime in terms of how quickly it grows as our inputs get bigger and bigger. For big O notation, we will call our input size *n*. We say that our runtime grows "on the order of the size of the input" [**O(n)**], or "on the order of the square of the size of the input" [**O(n^{2})**], and so on.

Let's discuss the algorithm for adding two multi-digit numbers together. Generally, when we add such numbers, we can line them up by place value and begin to add corresponding pairs of digits from right to left. If a sum for any digit pair is greater than 9, we will carry over the digit in the tens place and add it to the next pair of digits. Using this algorithm, if we are adding two 6-digit numbers, then we will have to add six pairs of digits, and possibly carry a seventh. We will assume that the act of adding is more costly to the runtime than the carrying, so the number of pairs to add will be our focus. If we add two 100-digit numbers together, then we need to compute 100 additions. For two 10,000-digit numbers, we must compute 10,000 additions. In this case, the **complexity** of our algorithm is directly proportional to the number of digits, which we will call *n*. This is called **O(n)**, or **linear time**.
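The grade-school addition above can be sketched like this in Python, working on lists of digits (the function name and digit-list representation are my own choices):

```python
def add_by_digits(a_digits, b_digits):
    """Add two equal-length digit lists (most significant digit first),
    one addition per digit pair -- so the work grows linearly with n."""
    result = []
    carry = 0
    for da, db in zip(reversed(a_digits), reversed(b_digits)):
        total = da + db + carry        # one addition per pair of digits
        result.append(total % 10)      # keep the ones place
        carry = total // 10            # carry anything over 9
    if carry:
        result.append(carry)           # possible extra leading digit
    return list(reversed(result))

print(add_by_digits([9, 5, 7], [1, 8, 6]))  # [1, 1, 4, 3] since 957 + 186 = 1143
```

Six-digit inputs mean six trips through the loop; 10,000-digit inputs mean 10,000 trips.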

Now let's talk about how we would multiply two multi-digit numbers. We will line the numbers up in a similar fashion by place value. However, this time, we take each digit from the bottom number and multiply it by each digit in the top number (again, carrying digits beyond the ones place). We repeat this for each digit of the bottom number. So, when multiplying two 6-digit numbers, we need to calculate 36 multiplications! There will also be addition of the resulting numbers, but in this case, our multiplication will grow our runtime at a faster rate, so we do not bother ourselves with details regarding addition. If we multiply two 100-digit numbers, we must calculate 10,000 multiplications. Since this algorithm scales with *n*^{2}, we say that it is **O(n^{2})**, or **quadratic time**.
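Long multiplication can be sketched the same way (again, the digit-list representation and function name are my own):

```python
def multiply_by_digits(a_digits, b_digits):
    """Long multiplication on digit lists (most significant digit first):
    every digit of one number meets every digit of the other -- n * n work."""
    a = list(reversed(a_digits))
    b = list(reversed(b_digits))
    result = [0] * (len(a) + len(b))
    for i, db in enumerate(b):
        for j, da in enumerate(a):
            result[i + j] += da * db   # the n^2 multiplications
    # Normalize: push carries up through the result digits.
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // 10
        result[k] %= 10
    while len(result) > 1 and result[-1] == 0:
        result.pop()                   # trim leading zeros
    return list(reversed(result))

print(multiply_by_digits([1, 2], [3, 4]))  # [4, 0, 8] since 12 * 34 = 408
```

The nested loops are the giveaway: two 6-digit numbers drive 6 × 6 = 36 passes through the inner line, and two 100-digit numbers drive 10,000.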

A common problem in the world of computer science is that of the **Traveling Salesman**: There are *n* towns, and each is linked to one or more other towns by a road of a particular distance. The Traveling Salesman must find the shortest route that allows him to visit each town.

Okay, this isn't so bad...let's say there are three towns: A, B, and C. If there is a road between all pairs, then the salesman's options are limited to the following:

- A → B → C
- A → C → B
- B → C → A
- B → A → C
- C → A → B
- C → B → A

But what happens when we increase *n* (the number of towns)? With 4 towns, we get 12 possibilities; 5 towns give 60; 6 towns give 360. We can calculate the number of possibilities by taking half of the factorial of *n*: there are *n*! orderings of the towns, but a route and its reverse cover the same total distance, so each pair counts only once. (That's why the six orderings listed above boil down to just three distinct distances.)

- 3! / 2 = 3 * 2 * 1 / 2 = 6 / 2 = 3
- 4! / 2 = 4 * 3 * 2 * 1 / 2 = 24 / 2 = 12
- 5! / 2 = 5 * 4 * 3 * 2 * 1 / 2 = 120 / 2 = 60
- 6! / 2 = 6 * 5 * 4 * 3 * 2 * 1 / 2 = 720 / 2 = 360
- ...
- 25! / 2 = 25 * 24 * … * 2 * 1 / 2 = 15,511,210,043,330,985,984,000,000 / 2 = 7,755,605,021,665,492,992,000,000

- **Constant Complexity: O(1)** - 1 item takes 1 second, 10 items take 1 second, 100 items take 1 second. The input size does not impact the runtime as it grows.
- **Linear Complexity: O(*n*)** - 1 item takes 1 second, 10 items take 10 seconds, 100 items take 100 seconds. The input and runtime have a linear relationship; as one increases by a factor of 10, so does the other.
- **Quadratic Complexity: O(*n*^{2})** - 1 item takes 1 second, 10 items take 100 seconds, 100 items take 10,000 seconds. The runtime grows with the square of the input.
- **Polynomial Complexity: O(*n*^{a})** - Any complexity that can be written in this form has a polynomial complexity, or is solvable in **polynomial time**.
- **Logarithmic Complexity: O(log *n*)** - 1 item takes 1 second, 10 items take 2 seconds, 100 items take 3 seconds, 1,000 items take 4 seconds, etc. The number of seconds is the exponent in this relationship, or more precisely, the runtime grows with the log of the input.
- **Factorial Complexity: O(*n*!)** - The runtime grows with the factorial of the input (see the Traveling Salesman example above).
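To see these growth rates side by side, here's a quick sketch that prints the rough number of steps each class requires as *n* grows (the step counts are idealized, not measured runtimes):

```python
import math

def steps(n):
    """Approximate step counts for each complexity class at input size n."""
    return {
        "O(1)":     1,
        "O(log n)": 1 if n <= 1 else math.ceil(math.log2(n)),
        "O(n)":     n,
        "O(n^2)":   n ** 2,
        "O(n!)":    math.factorial(n),
    }

for n in (1, 5, 10, 20):
    print(n, steps(n))
```

Even at *n* = 20, the factorial row dwarfs everything else, which is the whole story of the Traveling Salesman.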

Well, I hope this has been helpful in your understanding of big O complexities and notation, as well as how we determine the efficiency of a method. Please stay tuned as I continue to blog about my learnings at Dev Bootcamp. I'm coming up on my last week before the on-site portion of the program, so get excited!

Sources:

- Plain English Explanation of Big O
- Big O Notation: Using Not-Boring Math to Measure Code's Efficiency (Parker Phinney, Interview Cake)