Lecture 14: More on Running Times

Thinking about efficiency

To think about how “efficient” an algorithm, or a piece of code, is, we need a way to quantify how long it takes to run. Our rule of thumb is that things take either:

  • a small, constant amount of time, which we’ll approximate as ‘about one unit’, or
  • an amount of time that depends upon some variable or variables.

To simplify things, we say that almost all operators and keywords evaluate in a small, constant amount of time in Java: basic arithmetic, conditionals, assignment, array access, control flow, and method invocation. So you might look at a method like:

int add(int x, int y) {
  int sum = x + y;
  return sum;
}

and say something like: well, when this method runs, first it adds x and y (1). Then it assigns to sum (1). Then it returns (1). So it takes “about” three units of time to execute.

Or you might look at:

void honkIfEven(int x) {
  if (x % 2 == 0) System.out.println("honk");
}

and say something like, well, first x%2 is computed. Then it’s compared to zero. So the method takes at least two units. Then it might take a third to print “honk”.

Does it?

Well, that depends on the implementation of println(). To do a “real” analysis, we have to drill down into any method that’s called and check how it works, and look at methods it calls, and so on. For the purposes of this class, we’ll just state that certain methods are roughly constant time (like println), even though that’s not strictly true, in ways that will probably become clear to you as we go on.

OK, be that as it may, there’s something important to note here, which is that both of these methods take a small, fixed amount of time that doesn’t depend upon anything. Let’s look at something different:

int sum(int[] a) {
  int s = 0;
  for (int i: a) {
    s += i;
  }
  return s;
}

How long does this method take to execute? Well, about one to declare and assign 0 to s.

Then about one to update i each time through the loop, and another to update s each time through the loop.

Then one for returning s.

So what’s the answer? It depends upon the length of the array, right? It depends upon the input, in other words; it’s not a constant. Some parts of the runtime (the initial setup and the return) are constant, but some are not (the loop). Here, we might say the runtime is about 2 + 2 * (a.length). In other words, the runtime here is a function (in the mathematical sense) of the length of a. It’s proportional to the length of a.
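
Here’s the same method again, with each line annotated with its approximate cost, using the same counting conventions as above:

int sum(int[] a) {
  int s = 0;          // about 1 unit
  for (int i: a) {    // i is updated once per element: about a.length units
    s += i;           // runs once per element: about a.length units
  }
  return s;           // about 1 unit
}                     // total: about 2 + 2 * a.length units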

Generally, any time you see a loop, you have the possibility of a non-constant runtime, that is, of a runtime that’s a function of (some aspect of) the input.

Early returns

What about if the loop can return early?

boolean containsOne(int[] a) {
  for (int i: a) {
    if (i == 1) return true;
  }
  return false;
}

Like before, the runtime here varies with the input. But in a new and excitingly different way! When do we exit the loop? Who knows?!?

Since we can’t know, we generally concern ourselves with the worst case, that is, what’s the longest this loop could run?

Answer: it’s a function of the length of a, again. About 2*a, given our previous analysis.

The other kind of analysis we might do is an “average case” analysis, but we’ll mostly leave that for COMPSCI 311.

In class exercise 1

What is the approximate worst-case runtime of each of the following methods?

boolean allEven(int[] a) {
  for (int i : a) {
    if (i % 2 == 1) return false;
  }
  return true;
}

Answer: linear in the length of a.

boolean firstEven(int[] a) {
  for (int i : a) {
    if (i % 2 == 0) {
      return true;
    } else {
      return false;
    }
  }
  return false;
}

Answer: constant time. Even though there’s a loop, its body never executes more than once: the method returns during the first iteration (or right away, if the array is empty).

Nested loops

Now consider the case of for loops within for loops. Suppose we had an algorithm for duplicate detection that looked like this:

boolean containsDuplicate(int[] a) {
  for (int i = 0; i < a.length; i++) {
    for (int j = 0; j < a.length; j++) {
      if (i == j) continue;
      if (a[i] == a[j]) return true;
    }
  }
  return false;
}

How does this algorithm operate? (On board.)

How long does it take to run? For each iteration of the outer loop, we have to go through the entire inner loop. So the total is a.length iterations of the outer loop times the cost of the inner loop, and the inner loop itself takes about a.length steps – roughly a.length * a.length in all.

This method’s runtime is a function of its input, but it’s no longer a linear (first-degree polynomial) function; it’s “quadratic” – that is, its runtime is proportional to a.length squared.

That’s a lot worse, especially as a.length grows. Who cares, right? Computers are fast? 3 GHz = 3 billion operations a second, right?

Well, what if we’re working with a big array? Say, a million elements? At 10 ns per step, that’s only 10 ms total to run. Something that runs in time proportional to the array length will be manageable. What about quadratic? 1,000,000 x 1,000,000 = 1,000,000,000,000. That’s a lot of zeroes! Even if each step only takes, say, 10 ns, we’re still talking about 10,000 seconds to complete!

So generally, when we write methods or call them, and we suspect that they’re going to be used with large inputs, we should be thinking about how much time they’ll take to run. Many efficient algorithms are linear in the size of their input, though some are a little worse, and some are much worse.

OK, Marc, but we don’t need to go through the entire array inside the inner loop; we could just go through “what’s left” after the current position, since everything before it has already been checked, right?

Quadratic or not?

boolean containsDuplicate(int[] a) {
  for (int i = 0; i < a.length; i++) {
    for (int j = i + 1; j < a.length; j++) {
      if (a[i] == a[j]) return true;
    }
  }
  return false;
}

Well, again, how many steps does the inner loop take? It’s not always a.length, but it’s a function of a.length – the first time through, it’s a.length - 1; the next time, a.length - 2; and so on, down to 3, 2, 1, 0. What does that add up to? (a.length - 1) + (a.length - 2) + ... + 1 + 0 = a.length * (a.length - 1) / 2, which is still proportional to a.length squared – the improvement is only a constant factor of about 1/2. Here’s an illustration (on board), or you can run the sums if you like.
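
If you want to check this empirically, here’s a small sketch (not from the lecture; the class and method names are made up) that counts how many a[i] == a[j] comparisons each version would make in the worst case – an array with no duplicates – for a given length n:

class DuplicateComparisonCounts {

  // Comparisons made by the first version (scan the whole array for each i).
  static long countFullScan(int n) {
    long comparisons = 0;
    for (int i = 0; i < n; i++) {
      for (int j = 0; j < n; j++) {
        if (i == j) continue;
        comparisons++;   // one a[i] == a[j] comparison would happen here
      }
    }
    return comparisons;  // n * (n - 1): still about n squared
  }

  // Comparisons made by the second version (only scan “what’s left”).
  static long countHalfScan(int n) {
    long comparisons = 0;
    for (int i = 0; i < n; i++) {
      for (int j = i + 1; j < n; j++) {
        comparisons++;   // one a[i] == a[j] comparison would happen here
      }
    }
    return comparisons;  // n * (n - 1) / 2: about half, but still quadratic
  }

  public static void main(String[] args) {
    for (int n : new int[] {10, 100, 1000}) {
      System.out.println(n + ": " + countFullScan(n) + " vs " + countHalfScan(n));
    }
  }
}

For n = 1000 it reports 999000 comparisons versus 499500: the second version does about half the work, but both counts grow with the square of n.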

Implementations matter

How about the following?

boolean allEven(List<Integer> list) {
  for (int i = 0; i < list.size(); i++) {
    if (list.get(i) % 2 == 1) return false;
  }
  return true;
}

Answer? It depends. This is where it matters that you understand how the implementation (ArrayList? LinkedList? Something else?) underneath a given abstraction actually works.

If you’re using an ArrayList, this will be linear, just as it was for an array. But remember that to get to the ith element of a linked list, you have to traverse the list. So if we’re using a linked list, each time we call get here, we are doing something whose cost depends upon the length of the list. So the whole method will be quadratic!
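
To see why, here’s a simplified sketch of what get(i) has to do in a linked list. This is not the real java.util.LinkedList code (that one is doubly-linked and starts from whichever end is closer), and the Node class and head field are assumed fields of the list class – but the idea is the same:

E get(int index) {
  Node<E> current = head;           // start at the front of the list
  for (int k = 0; k < index; k++) {
    current = current.next;         // one hop per position: about index steps
  }
  return current.data;
}

Each call to get(i) costs about i steps, so calling it for every i from 0 up to list.size() - 1 adds up to roughly size squared over 2 steps in total – quadratic, just like the nested loops above.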

To further muddle this mess, the enhanced for loop:

boolean allEven(List<Integer> list) {
  for (int i: list) {
    if (i % 2 == 1) return false;
  }
  return true;
}

is actually smart enough to “remember” where it was the next time through, so it won’t be quadratic. But you wouldn’t know this unless you knew how lists were implemented. Take 187! :)
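
Roughly speaking, the enhanced for loop above behaves like this iterator-based version (a sketch of what the compiler generates for you, assuming the usual java.util.List and java.util.Iterator imports):

boolean allEven(List<Integer> list) {
  Iterator<Integer> it = list.iterator(); // the iterator remembers its position
  while (it.hasNext()) {
    int i = it.next();                    // advances one step, not a fresh traversal
    if (i % 2 == 1) return false;
  }
  return true;
}

For a linked list, each call to next() just follows one link from the current node, so the whole loop is linear in list.size().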

But usually the Javadoc will help you here. Take a look at ArrayList’s documentation to see that:

The size, isEmpty, get, set, iterator, and listIterator operations run in constant time. The add operation runs in amortized constant time, that is, adding n elements requires O(n) time. All of the other operations run in linear time (roughly speaking). The constant factor is low compared to that for the LinkedList implementation.

Compare with the LinkedList:

All of the operations perform as could be expected for a doubly-linked list. Operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index.

I guess you need to know what “as could be expected” means. Again, take 187 to be a better programmer.

Analyzing search

Let’s look at one particular method on lists: indexOf. indexOf searches a list for an element, and returns its index (or -1 if it’s not found). How long must a search take?

Well, knowing nothing else, we have to check every element of the list (or array, etc.). So? It’s linear, right? Something like:

private E[] array; // note this isn't quite true

int indexOf(E e) {
  for (int i = 0; i < array.length; i++) {
    if (e.equals(array[i])) return i;
  }
  return -1;
}

Linear. But (and this is a big but and I cannot lie) if we know something more about the list, we can leverage that to not have to search the whole list.

For example, if the list is sorted. You know, like a telephone book, or a dictionary, or your phone’s address book, or basically anything that’s long and linear but where we want fast access to an arbitrary entry.

From Downey §12.8:

When you look for a word in a dictionary, you don’t just search page by page from front to back. Since the words are in alphabetical order, you probably use a binary search algorithm:

  1. Start on a page near the middle of the dictionary.
  2. Compare a word on the page to the word you are looking for. If you find it, stop.
  3. If the word on the page comes before the word you are looking for, flip to somewhere later in the dictionary and go to step 2.
  4. If the word on the page comes after the word you are looking for, flip to somewhere earlier in the dictionary and go to step 2.

If you find two adjacent words on the page and your word comes between them, you can conclude that your word is not in the dictionary.

We can leverage this to write a faster search algorithm, called “binary search”. It’s called this because each time through the loop, it eliminates half of the possible entries, unlike a regular linear search that eliminates only one. It looks like this:

int indexOf(E e) {
  int low = 0;
  int high = array.length - 1;
  while (low <= high) {
    int mid = (low + high) / 2;   // step 1
    int comp = array[mid].compareTo(e);

    if (comp == 0) { // step 2
      return mid;
    } else if (comp < 0) { // step 3
      low = mid + 1;
    } else { // comp > 0 // step 4
      high = mid - 1;
    }
  }
  return -1;
}
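
For example, suppose array holds the sorted values {2, 5, 8, 12, 16, 23, 38} and we’re searching for 23 (values made up for illustration):

  • low = 0, high = 6, so mid = 3; array[3] is 12, which comes before 23, so low becomes 4.
  • low = 4, high = 6, so mid = 5; array[5] is 23 – found it, return 5.

Two passes through the loop, versus the six comparisons a linear search would need.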

How long does this take to run?

Each time through the loop, we cut the distance between low and high in half. After k iterations, the number of remaining cells to search is about array.length / 2^k. To find the number of iterations it takes to complete (in the worst case), we set array.length / 2^k = 1 and solve for k: k = log_2(array.length). This is sub-linear. For example, for an array of 1,000 elements, it’s about 10; a million elements, about 20; a billion elements, about 30, and so on.

The downside, of course, is that we have to keep the array sorted.
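
In practice you usually don’t write binary search yourself: the standard library already provides it, as long as you keep the data sorted. A quick sketch (the values here are just made up):

import java.util.Arrays;

public class BinarySearchDemo {
  public static void main(String[] args) {
    int[] a = {23, 5, 38, 2, 16, 8, 12};
    Arrays.sort(a);                                 // binary search requires sorted data
    System.out.println(Arrays.binarySearch(a, 23)); // prints 5: the index of 23 after sorting
    System.out.println(Arrays.binarySearch(a, 4));  // negative: 4 isn't in the array
  }
}

There’s also Collections.binarySearch for Lists, with the same caveat about keeping the list sorted.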

More next class!