
**Question 1 (10):** (true/false with justification) If f, g, and h are positive increasing functions with f in O(h) and g in Ω(h), then the function f+g must be in Θ(h).

FALSE. The function f+g is in Ω(h), but there is no reason to believe that it is in O(h), because g could be much larger than h and still be in Ω(h). A specific counterexample is f = h = n and g = n², in which case f+g is in Θ(n²), not Θ(n).

**Question 2 (10):** (true/false with justification) Let W(n), A(n), and B(n) be the worst-case, average-case, and best-case costs respectively of a particular algorithm on inputs of size n. Then A(n) is in Θ(W(n)+B(n)).

FALSE. Again, one half of the Θ fact is guaranteed but not the other. Since A(n) is somewhere between W(n) and B(n), it is in O(W(n)+B(n)). But it is not in Ω(W(n)+B(n)) if the average case is asymptotically better than the worst case. For example, with Quicksort, W(n) is in Θ(n²) but A(n) is in Θ(n log n).

**Question 3 (20):** Let T(n) be the function defined by the following recurrence:

T(0) = 3

T(1) = 4

T(n) = (T(n-1)*T(n-2) - 2)/n, for n≥2

Give an exact solution for T(n) and prove your solution correct by induction.

T(2) is (4*3-2)/2 = 5, T(3) is (5*4-2)/3 = 6, and T(4) is (6*5-2)/4 = 7. The apparent pattern is T(n) = n+3. This rule holds for the two base cases n=0 and n=1 by inspection. Assume as inductive hypothesis that it holds for n-1 and n-2, so that T(n-1) = (n-1)+3 = n+2 and T(n-2) = (n-2)+3 = n+1. Then

T(n) = (T(n-1)*T(n-2) - 2)/n = ((n+2)(n+1) - 2)/n = (n² + 3n + 2 - 2)/n = (n² + 3n)/n = n + 3.

Using the IH, the inductive goal is proved and the rule holds for all n ≥ 0.
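As a quick sanity check on this closed form, the recurrence can also be iterated numerically (the class and method names here are my own, not part of the exam):

```java
// Sanity check for Question 3: iterate the recurrence
// T(n) = (T(n-1)*T(n-2) - 2)/n with T(0) = 3 and T(1) = 4,
// and compare against the closed form T(n) = n + 3.
public class RecurrenceCheck {
    // returns T(n) computed directly from the recurrence
    static int t(int n) {
        if (n == 0) return 3;
        if (n == 1) return 4;
        int prev2 = 3, prev1 = 4, cur = 0;
        for (int i = 2; i <= n; i++) {
            cur = (prev1 * prev2 - 2) / i;
            prev2 = prev1;
            prev1 = cur;
        }
        return cur;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 15; n++) {
            if (t(n) != n + 3)
                throw new AssertionError("closed form fails at n = " + n);
        }
        System.out.println("T(n) = n + 3 verified for n = 0..15");
    }
}
```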

**Question 4 (40):** Let A be an array of n positive `int` values, where n≥3. The *best triple* of A is the set of three distinct indices i, j, and k, with i<j<k, that makes the product A[i]*A[j]*A[k] as large as possible. You may assume that this product never causes an overflow.

- (a,20) Write a brute-force algorithm that takes A as a parameter and returns an array B such that B[0]=i, B[1]=j, and B[2]=k, where i, j, and k form the best triple of A. Analyze the worst-case running time of your algorithm in terms of n (the length of A), finding the correct Θ-class.

```java
int[] bestTriple(int[] A) {
    // returns [i,j,k] such that A[i]*A[j]*A[k] is maximized
    // assumes n >= 3, A all positive, no overflows
    int n = A.length;
    int max = 0;
    int[] result = {0, 0, 0}; // note that the first check will overwrite this
    for (int i = 0; i < n; i++)
        for (int j = i+1; j < n; j++)
            for (int k = j+1; k < n; k++)
                if (A[i]*A[j]*A[k] > max) {
                    max = A[i]*A[j]*A[k];
                    result = new int[] {i, j, k};
                }
    return result;
}
```

The body of the inner loop is clearly Θ(1) time, as only simple operations are done. Each loop has n or fewer possible values, so the number of times the innermost code is executed is clearly O(n³). But we have seen that the actual number is also Ω(n³) (in fact it is exactly n(n-1)(n-2)/6, about n³/6), and so the overall time is Θ(n³).

- (b,5) State and justify a lower bound (in Ω form) on the time needed to solve this problem.
We must use at least n steps, and hence Ω(n), because any algorithm that failed to look at one of the A[i] values would be wrong if that value were huge and thus belonged in the best triple. Since only one value can be examined on any given time step, looking at all n values requires at least n steps.

- (c,10) Informally describe and analyze an algorithm for this problem that is asymptotically faster than your brute-force algorithm in (a). For *full* credit, its time should match the lower bound in (b), though you may get up to eight points for any asymptotic improvement.

The key observation is that because the values in A are all positive, the best triple simply consists of the three largest elements. (Why? If I replace any of the factors in the product by a larger number, the product gets larger. So the best product can't leave out any number larger than any of its three factors.)

So for example we could find the largest element in O(n) steps, then the second largest in another O(n), and the third largest in another O(n). (Of course being a little more careful we could sweep A once remembering the three best so far, in a total of O(n) steps.) Then we sort these three elements by index in O(1) time and return their indices as [i,j,k]. This is O(n) time, matching the Ω(n) bound in (b).

Not quite as good as this would be to sort A, remembering the original index of each value, in O(n log n) time, then return the first three indices as [i,j,k].
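The single-sweep O(n) version can be sketched as follows (a sketch under the same assumptions as (a): n ≥ 3, all values positive, no overflow; the class name is my own):

```java
// O(n) best triple: one sweep over A tracking the indices of the
// three largest elements seen so far, then sorting those indices.
import java.util.Arrays;

public class FastTriple {
    static int[] bestTriple(int[] A) {
        int a = -1, b = -1, c = -1; // indices of 1st, 2nd, 3rd largest so far
        for (int i = 0; i < A.length; i++) {
            if (a == -1 || A[i] > A[a])      { c = b; b = a; a = i; }
            else if (b == -1 || A[i] > A[b]) { c = b; b = i; }
            else if (c == -1 || A[i] > A[c]) { c = i; }
        }
        int[] result = {a, b, c};
        Arrays.sort(result); // return the indices in increasing order
        return result;
    }

    public static void main(String[] args) {
        int[] A = {5, 1, 9, 3, 7, 2};
        // the three largest values are 9, 7, 5, at indices 2, 4, 0
        System.out.println(Arrays.toString(bestTriple(A))); // [0, 2, 4]
    }
}
```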

**Question 5 (20):** Two questions about a variant of Mergesort:

- (a,10) Suppose you have *three* sorted arrays A, B, and C, each of length n/3, and you want to merge them into a single sorted array D of length n containing the same elements. Indicate how you will do this (pseudocode or even English is fine) and determine the Θ-class of the running time of your algorithm.

```
// pseudocode
int i = j = k = written = 0;
while ((i < n/3) || (j < n/3) || (k < n/3)) {
    // find smallest of A[i], B[j], and C[k], in O(1) time
    // D[written] = (that element); written++; (that index)++
    // need some care to avoid referring to A[n/3], etc.
}
return D;
```

The loop will run until i, j, and k are each equal to n/3, that is, n times. The body of the loop is Θ(1) so the whole thing is Θ(n).
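The pseudocode above can be fleshed out as follows (a sketch; the class name and the sentinel trick for exhausted arrays are mine, not from the exam, and the sentinel assumes no element equals `Integer.MAX_VALUE`):

```java
// Three-way merge: repeatedly take the smallest front element
// among A, B, and C.  Runs in Θ(n), where n is the total length.
public class ThreeWayMerge {
    static int[] merge3(int[] A, int[] B, int[] C) {
        int[] D = new int[A.length + B.length + C.length];
        int i = 0, j = 0, k = 0;
        for (int written = 0; written < D.length; written++) {
            // care to avoid reading past the end of an exhausted array:
            // an exhausted array contributes a sentinel that never wins
            int a = (i < A.length) ? A[i] : Integer.MAX_VALUE;
            int b = (j < B.length) ? B[j] : Integer.MAX_VALUE;
            int c = (k < C.length) ? C[k] : Integer.MAX_VALUE;
            if (a <= b && a <= c) { D[written] = a; i++; }
            else if (b <= c)      { D[written] = b; j++; }
            else                  { D[written] = c; k++; }
        }
        return D;
    }
}
```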

- (b,15) Consider the variant of Mergesort where you divide the given array into *three* equal parts, sort each part, and merge them together as in (a). State and justify a recurrence for the running time T(n) of this algorithm on an input array of length n. Solve this recurrence for the case when n is a power of three. (You may quote the Master Theorem if it is applicable.) In what Θ-class is the running time of this algorithm for all n?

The sorting algorithm will make three recursive calls to itself, each on a range of size n/3. Apart from that, its running time is dominated by what is essentially the Θ(n) merging operation from (a). So the recurrence is T(n) = 3T(n/3) + Θ(n), with a base case of T(1) = Θ(1). By the Master Theorem, this solves to T(n) = Θ(n log n), because a = 3, b = 3, and d = 1, so a = bᵈ. The Smoothness Theorem allows the solution for powers of three to be extended to all n as a Θ statement. Actually, this use of the Smoothness Theorem is implicit in Levitin's statement of the Master Theorem, which only requires the recurrence rule to hold for powers of b.
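The Master Theorem answer can be checked numerically by unrolling the recurrence for powers of three (assuming unit cost 1 for both the merge constant and the base case, which makes the exact solution T(n) = n·log₃(n) + n; the class name is mine):

```java
// Unroll T(n) = 3T(n/3) + n with T(1) = 1 for powers of three and
// compare with the exact closed form n*log3(n) + n, which is Θ(n log n).
public class MergesortRecurrence {
    static long T(long n) {
        return (n == 1) ? 1 : 3 * T(n / 3) + n;
    }

    public static void main(String[] args) {
        long n = 1;
        for (int e = 0; e <= 10; e++) {
            // closed form: n*log_3(n) + n, and here log_3(n) = e
            long closed = n * e + n;
            if (T(n) != closed)
                throw new AssertionError("mismatch at n = 3^" + e);
            n *= 3;
        }
        System.out.println("T(n) = n*log3(n) + n holds for n = 3^0..3^10");
    }
}
```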

Last modified 3 October 2003
