21: Introduction to Recursion
DRAFT
Announcements
Almost there! 2 weeks to go!
Recursion
In common language, a “circular definition” is not useful. When you define something in terms of itself, it conveys no meaningful information: “Good music is music that’s good” tells you nothing.
But math (and computer science) often uses recursive definition to define concepts in terms of themselves. WTF?
For example:
- directories contain files and/or directories
- an if statement in Java is made of `if`, an expression that evaluates to a boolean, and a statement (plus an optional `else` and another statement). Any of the other statements could be `if`s!
- a postfix expression is either an operand or two postfix expressions followed by an operator
and so on.
How do the above differ from our bad definition of good music? While they are defined in terms of themselves, the circularity is only partial – there’s a way out of the circle. (show on board)
Recursive algorithms: the mathy example
It turns out some algorithms are most naturally expressed recursively – usually they correspond to solving problems or implementing abstractions that are also defined recursively. A big chunk of COMPSCI 250 explores the relationship between recursion and inductive proofs.
For example, consider the factorial function over non-negative integers. n-factorial (usually written n!) is the product of the numbers 1 through n: (1 * 2 * 3 * … * n)
4! = 1 * 2 * 3 * 4 = 24.
What is 0!? 1. Why? The intuition is that you’re multiplying together “no numbers”, which is like not multiplying at all. What’s the same as not multiplying at all? Multiplying by 1. The real reason for this has to do with the algebra of integers (which differs from the algebra you learned in middle/high school); take an abstract algebra or number theory course if you want to learn more.
We can recursively define the factorial function.
0! = 1
n! = n * (n-1)!
If we use this definition to compute 4!, we get 4 * 3!. Applied again, we get 4 * 3 * 2!; finally, we get 4 * 3 * 2 * 1, which is the same as the non-recursive definition.
The non-recursive definition is relatively easy to transcribe into Java (we’ll not handle invalid inputs, like -1):
static int factorial(int n) {
int x = 1;
for (int i = 1; i <= n; i++) {
x *= i;
}
return x;
}
What about the recursive one? Baby steps. Let’s define `factorial0`, then `factorial1`, and so on:
static int factorial0() {
  return 1;
}
static int factorial1() {
  return 1 * factorial0();
}
static int factorial2() {
  return 2 * factorial1();
}
static int factorial3() {
  return 3 * factorial2();
}
static int factorial4() {
  return 4 * factorial3();
}
`factorial4` calls `factorial3`; once `factorial3` returns an answer, `factorial4` returns its answer. How does `factorial3` work? The same way, using `factorial2`. In fact, everything except `factorial0` works the same way. So why not capture this pattern in a single method:
static int factorial(int n) {
  return n * factorial(n - 1);
}
That’s not quite right, because it won’t “stop” when it gets to zero. We could check for the zero case and call `factorial0`:
static int factorial(int n) {
if (n == 0) {
return factorial0();
} else {
return n * factorial(n - 1);
}
}
or just get rid of it entirely:
static int factorial(int n) {
if (n == 0) {
return 1;
} else {
return n * factorial(n - 1);
}
}
What actually happens when we call the recursive `factorial` with an argument of 4?
That call makes a call to `factorial(3)`, which calls `factorial(2)`, which calls `factorial(1)`, which calls `factorial(0)`, which returns 1 to `factorial(1)`, which returns 1 to `factorial(2)`, which returns 2 to `factorial(3)`, which returns 6 to `factorial(4)`, which returns 24, which is the right answer – the same as what happened when we had different (`factorial1`, `factorial2`, etc.) methods. But we do it all from one method!
The values are passed along on the call stack, which grows to a height of five while this recursive method operates. Why doesn’t it just blow up? Because eventually the recursion terminates. How do we know it will terminate, and how do we know the answer is right? Glad you asked.
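One way to watch the stack grow and shrink is to instrument `factorial` with an extra depth parameter used only to indent trace output. (The depth parameter and the printing are additions for illustration; they aren’t part of the method above.)

```java
public class FactorialTrace {
    // Same recursive factorial as above, plus a depth parameter used
    // only to indent the trace output (an addition for illustration).
    static int factorial(int n, int depth) {
        String indent = "  ".repeat(depth); // String.repeat requires Java 11+
        System.out.println(indent + "factorial(" + n + ") called");
        int result = (n == 0) ? 1 : n * factorial(n - 1, depth + 1);
        System.out.println(indent + "factorial(" + n + ") returns " + result);
        return result;
    }

    public static void main(String[] args) {
        factorial(4, 0); // the stack grows five frames deep, then unwinds
    }
}
```

The indentation in the output mirrors the height of the call stack at each moment: five nested calls, then five returns in reverse order.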
Three questions (about recursion)
Stop! Who would cross the Bridge of Death must answer me these questions three, ere the other side he see.
There are three questions we must answer affirmatively about a recursive algorithm if we expect it to (a) terminate and (b) produce the correct answer.
- Does it have a base case?
- Does every recursive call (more generally: every bounded sequence of recursive calls) make progress toward (or arrive at) the base case?
- Does the algorithm get the right answer if we assume the recursive calls get the right answer?
A base case is the end of the line, when there is no more recursion. `factorial`’s base case is when n == 0.
Guaranteeing progress is trickier. If our parameter is numerical, and each recursive call brings us a constant amount closer toward the base case (as is the case for `factorial`) then we can be certain we’ll eventually get there. But sometimes you need two recursive calls to make progress (see `foo` below).
Justifying correctness is something you’ll do more in COMPSCI 250, but in short, if you are proceeding from a correct recursive definition (which factorial is) then your recursive algorithm will be correct.
In-class exercise
int sumToN(int n) {
if (n == 1)
return 1;
else
return n + sumToN(n - 1);
}
What’s the base case?
On valid inputs (`n > 0`), does this method make progress toward the base case?
Another example:
<T> void clear(Stack<T> s) {
if (!s.isEmpty()) {
s.pop();
clear(s);
}
}
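Here’s the same method in runnable form, assuming `java.util.Stack` (which provides `isEmpty` and `pop`):

```java
import java.util.Stack;

public class ClearDemo {
    // Same recursive clear as above: pop one element, then recurse
    // on the (now smaller) stack until it's empty.
    static <T> void clear(Stack<T> s) {
        if (!s.isEmpty()) {
            s.pop();
            clear(s);
        }
    }

    public static void main(String[] args) {
        Stack<Integer> s = new Stack<>();
        s.push(1); s.push(2); s.push(3);
        clear(s);
        System.out.println(s.isEmpty()); // prints true
    }
}
```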
And another:
int foo(int x) {
if (x <= 1) {
return 0;
}
else if (x % 2 == 1) {
return x + foo(x + 1);
}
else { // x % 2 == 0
return x + foo(x / 2);
}
}
Does it make progress each step? No. But at least every other step it does – good enough, since the progress it makes (`x / 2`) strictly dominates the anti-progress (`x + 1`).
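To see that alternation concretely, here’s `foo` again with a worked trace of `foo(5)` in comments (5 is just an arbitrary odd starting value):

```java
public class FooTrace {
    static int foo(int x) {
        if (x <= 1) {
            return 0;
        } else if (x % 2 == 1) {
            return x + foo(x + 1); // odd: anti-progress, but the next call is even
        } else { // x % 2 == 0
            return x + foo(x / 2); // even: halving makes real progress
        }
    }

    public static void main(String[] args) {
        // foo(5) -> 5 + foo(6)
        //        -> 5 + 6 + foo(3)
        //        -> 5 + 6 + 3 + foo(4)
        //        -> 5 + 6 + 3 + 4 + foo(2)
        //        -> 5 + 6 + 3 + 4 + 2 + foo(1)
        //        -> 5 + 6 + 3 + 4 + 2 + 0 = 20
        System.out.println(foo(5)); // prints 20
    }
}
```

Notice that the odd steps move away from the base case, but each one is immediately followed by an even step that more than makes up for it.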
Recursion for control flow
Recursion is a mechanism for control flow, similar to loops in its behavior but quite dissimilar in terms of its appearance.
We talked about how, if you can define the solution to some problem recursively, you can generally rewrite that recursive definition into a recursive method in Java.
For example, we might define the “bipower of n” (that is to say, 2^n) in one of two ways. One way looks like this:
bipower(n) is 1, multiplied by 2 n times.
This leads pretty cleanly to an iterative definition of bipower:
int bipower(int n) {
int p = 1;
for (int i = 0; i < n; i++) {
p *= 2;
}
return p;
}
Or, we might define it recursively:
bipower(n) is 1 if n is 0, and is 2 * bipower(n - 1) otherwise.
Generally, any time we can write a recursive definition for a problem, we can then write the recursive method pretty easily. But how do we write these recursive definitions? It’s pretty easy if you’re a Math major and you’re used to this style of thinking, or if you’re a CS major who has passed through the purifying flame of COMPSCI 250, or you’re just a nerdy-nerd.
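Following the same pattern as `factorial`, that recursive definition transcribes into Java as something like:

```java
public class Bipower {
    // Direct transcription of: bipower(n) is 1 if n is 0,
    // and 2 * bipower(n - 1) otherwise.
    static int bipower(int n) {
        if (n == 0) {
            return 1; // base case
        } else {
            return 2 * bipower(n - 1); // recursive case
        }
    }

    public static void main(String[] args) {
        System.out.println(bipower(10)); // prints 1024
    }
}
```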
Translating from iterative to recursive
But what if you’re more comfortable thinking about things iteratively? And why wouldn’t you be, since you’ve been conditioned by AP CS and/or 121 and/or this class to always think in terms of `for` (and sometimes, `while`) loops?
That’s OK. Next we’re going to spend some time practicing translating loops from their iterative to their recursive form, so that you can see (some of) the symmetry between these two forms of control flow.
So, first things first: it turns out that while `for` is generally very useful for the iteratively-minded, it obscures some of the connection between iteration and recursion in a way that `while` does not. But good news: we can trivially rewrite any (non-enhanced) `for` loop as a `while` loop. To refresh your memory, here’s an example:
int sumToNFor(int n) {
int sum = 0;
for (int i = 1; i <= n; i++) {
sum += i;
}
return sum;
}
int sumToNWhile(int n) {
int sum = 0;
int i = 1;
while (true) {
if (i > n) {
break;
}
else {
sum += i;
}
i++;
}
return sum;
}
Notice I separated out the `break` to make the cases clear.
More generally:
for (init; condition; increment)
doStuff
is equivalent to
init
while (true):
if !condition:
break
else:
doStuff
increment
There’s still not an exact correspondence to the recursive form, though:
int sumToN(int n) {
if (n == 0) {
return 0;
}
else {
return n + (sumToN(n - 1));
}
}
Why not? Because our loop control variable (i) is still being tested against an input to the method (a variable), rather than an end point that’s independent of the input. If we rewrite our `for` loop to terminate based upon a constant rather than a variable:
for (int i = n; i >= 0; i--)
and rewrite our `while` loop the same way:
int sumToNWhile(int n) {
int sum = 0;
int i = n;
while (true) {
if (i == 0) {
break;
}
else {
sum += i;
}
i--;
}
return sum;
}
It’s maybe now a little easier to see how the base case and the recursive case of the recursive solution relate to the branches within the `while` loop of the iterative solution.
There’s a general observation to make here about translating iterative solutions to recursive ones: recursive methods must “make progress toward the base case.” For methods where “the thing that’s making progress” is a counter, that means the most natural way to write them is usually counting down rather than up, even though that’s the opposite of the way we typically write our `for` loops.
Why? Because we need to know “when to stop” our recursion. It’s easy to know when you’ve hit a constant (in other words, when `n == 0`), since there’s no external state to keep track of. But it’s less clear “how to stop” if your recursion starts at 0 and counts up to n. It’s certainly possible (we’ll get there).
More examples
Let’s look at operations over arrays and over linked lists.
Suppose you want to sum the elements of an array iteratively. Easy-peasy:
int sum(int[] a) {
int s = 0;
for (int i = 0; i < a.length; i++) {
s += a[i];
}
return s;
}
OK, now suppose you want to sum the integers contained in a linked list. Remember, a linked list is a set of `Node<E>` objects (each with a `next` reference and an `E data` variable). Lists are identified by their `head`, which is just a node. The iterative way to sum up a list of integers in a linked list would look like:
int sum(Node<Integer> head) {
int s = 0;
for (Node<Integer> current = head; current != null; current = current.next) {
s += current.data;
}
return s;
}
How might we write this recursively? Well, how would we define it recursively? The sum of the elements in an empty list is zero. The sum of the elements in a list of one element is just that element’s value. For two elements, it’s the two elements’ values summed… this is similar to how we might think of `sumToN`, where we say the sum of the first n is equal to n + the sum of (n-1). But in this case, we say the sum of the values is equal to the current node’s value plus the sum of the remaining nodes.
In other words:
sum(node) = 0 if node is null, or the current node’s value + the sum of the remaining nodes’ values (node.data + sum(node.next)) otherwise.
In-class exercise
int sum(Node<Integer> node) {
if (node == null) {
return 0;
}
else {
return node.data + sum(node);
}
}
Is this implementation correct?
Corrected, it’s an exact parallel to our `sumToN` code. We’d write it the same way:
int sum(Node<Integer> node) {
if (node == null) {
return 0;
}
else {
return node.data + sum(node.next);
}
}
Code-wise, this approach is not horribly worse than an iterative solution, even if it feels unfamiliar to the typical Java programmer. Linked structures (which are defined partially recursively) tend to be OK to write recursive algorithms for. In some languages, you get more syntax support, and writing recursive algorithms is the preferred way to do many things. And on some data structures and problems, recursive solutions, even in a language that favors iteration, like Java, are just more natural. But more about that later.
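To make this concrete, here’s a runnable version, with a minimal `Node` class assumed to match the one from earlier lectures:

```java
public class ListSum {
    // Minimal Node class, assumed to match the one from earlier lectures.
    static class Node<E> {
        E data;
        Node<E> next;
        Node(E data, Node<E> next) { this.data = data; this.next = next; }
    }

    // Recursive sum, exactly as above: null list sums to 0, otherwise
    // the current node's value plus the sum of the rest.
    static int sum(Node<Integer> node) {
        if (node == null) {
            return 0;
        } else {
            return node.data + sum(node.next);
        }
    }

    public static void main(String[] args) {
        // the list 1 -> 2 -> 3
        Node<Integer> head = new Node<>(1, new Node<>(2, new Node<>(3, null)));
        System.out.println(sum(head)); // prints 6
    }
}
```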
OK. What about the array case for summing entries: could you do it recursively?
You need to keep track of both the array and the index into the array. But the `sum` method only takes an array as a parameter, and doesn’t know anything about an index. For reasons I’m not going to bore you with, the Land of Recursion frowns upon the use of state that exists outside of the method calls themselves (it’s not disallowed, but used injudiciously it can lead to recursive methods that are impossible to reason about). Instead, all state gets passed into methods as arguments, and out of methods as return values. In that sense, the methods are “functions,” though not pure ones (since Java values are mutable).
So anyway, you would need to write what’s called a “helper method.” The purpose of the helper method is to give you access, through a new argument, to some state you wish to propagate through the recursion. What state do we need? The index of the array currently being considered. What does this look like?
In-class exercise
int sum(int[] a) {
return sum(a, 0);
}
int sum(int[] a, int i) {
  if (i == a.length) {
    return 0;
  } else {
    return a[i] + sum(a, i + 1);
  }
}
What’s going on here? `sum(a)` calls `sum(a, i)`; only the latter is recursive. What’s it saying?
sum(a, i) = 0 if the index i is past the end of the array, and the current element’s value plus the sum of the values of the elements after it (a[i] + sum(a, i+1)) otherwise
Here, we’re passing some state – the index in question – along on the recursive calls. And we need to do so because we need to keep track of “where we are” in the recursion. In linked lists, this happens naturally (because the `node` reference tracks both where we are and the structure itself), but arrays require us to track it explicitly.
More next class.