Questions in black, answers in blue.
Directions: There are 150 total points. Each of the first four questions is a statement that may be true or false. Please state whether it is true or false -- you will get five points for a correct boolean answer and there is no penalty for a wrong guess. Then justify your answer -- the remaining points will depend on the quality and validity of your justification.
TRUE. There is a poly-time computable function f from A-instances to B-instances such that x is in A iff f(x) is in B. There is a poly-time computable function g from B-instances to C-instances such that y is in B iff g(y) is in C. So x is in A iff g(f(x)) is in C. The composition of g and f is the poly-time reduction. It is poly-time because f runs in time polynomial in the length of x, and g runs in time polynomial in the length of f(x), which is itself polynomial in the length of x.
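To make the time bound explicit (the polynomial names p and q are mine, not from the exam): if f runs in time p(n) and g runs in time q(n), then on an input x of length n,

    |f(x)| \le p(n) \quad\text{and}\quad T_{g \circ f}(n) \le p(n) + q(p(n)),

since a machine can write at most one output symbol per step, and a polynomial composed with a polynomial is again a polynomial.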
FALSE. Not only is it not always true, it's always false, since f+g is in Θ(g), and o(g) has no functions in common with Θ(g).
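As a concrete instance (the particular functions are my choice, with f in O(g) as the answer's Θ claim presupposes): take f(n) = n and g(n) = n^2. Then

    f(n) + g(n) = n^2 + n \in \Theta(n^2) = \Theta(g(n)), \qquad n^2 + n \notin o(n^2),

since (n^2 + n)/n^2 tends to 1 rather than 0.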
TRUE. Certainly you can do it in O(n) time by checking for an edge (u,v) for every u -- there are n-1 u's taking O(1) time apiece. And you need to look at every u in the worst case, which is when the first n-2 u's you look at have an edge (u,v) and so the answer depends on the last one. So the time for the problem is in Ω(n) as well, and thus in Θ(n).
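A minimal sketch of this scan in Python (my choice of language; the function name and the adjacency-matrix representation are assumptions, chosen so that each edge test is O(1)):

    def all_point_to(adj, v):
        """Return True iff every vertex u != v has an edge (u, v).

        adj is assumed to be an n-by-n adjacency matrix (a list of
        lists of booleans), so each edge test adj[u][v] is O(1).
        The loop examines the n-1 vertices other than v, for O(n)
        time overall.
        """
        n = len(adj)
        return all(adj[u][v] for u in range(n) if u != v)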
FALSE. Both reductions exist if A and B are both NP-complete problems, since every NP problem reduces to every NP-complete problem. The existence of NP-complete languages does not depend on any unproven assumptions such as P ≠ NP.
I perform the following operation n times: remove the minimum element of the heap, place it in the next entry of my array, and restore the heapness of the heap. The first two steps take O(1) each; the last takes O(log n) in the worst case, because we replace the top of the heap with the last element, which may take O(log n) moves downward to get to its correct place. The total time is thus n times O(log n), or O(n log n).
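A minimal sketch of this loop in Python using the standard heapq module (my choice; the exam fixes no language). heappop performs exactly the delete-min described: it moves the last element to the root and sifts it down in O(log n).

    import heapq

    def heap_to_sorted(heap):
        """Empty the heap into a sorted list, one delete-min at a time.

        heap is assumed to already satisfy the (min-)heap property.
        Each heappop is O(log n), so the n iterations take
        O(n log n) in total.  The heap is consumed in the process.
        """
        out = []
        while heap:
            out.append(heapq.heappop(heap))  # delete-min, restore heapness
        return out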
Given an unsorted array, we can create a heap in O(n) time using the algorithm in the book, where we place the elements bottom-up, row by row, adjusting each subheap when we place its root. If we could go from the heap to the sorted array in o(n log n) time, then combining this with the O(n) heapification would give a general sorting algorithm running in O(n) + o(n log n) = o(n log n) time, which is impossible using only comparisons and moves, by the sorting lower bound.
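A sketch of the O(n) bottom-up construction for a binary min-heap stored in an array (the helper name sift_down is mine; this follows the textbook idea rather than any particular book's code):

    def build_heap(a):
        """Turn the array a into a min-heap in place, bottom-up.

        Subheaps are fixed from the last internal node back to the
        root.  Most nodes sit near the bottom and sift down only a
        few levels, which is why the total work is O(n).
        """
        n = len(a)
        for i in range(n // 2 - 1, -1, -1):
            sift_down(a, i, n)

    def sift_down(a, i, n):
        """Move a[i] down until neither child is smaller than it."""
        while 2 * i + 1 < n:
            child = 2 * i + 1
            if child + 1 < n and a[child + 1] < a[child]:
                child += 1                    # take the smaller child
            if a[i] <= a[child]:
                break
            a[i], a[child] = a[child], a[i]
            i = child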
We let the matrix A be the adjacency matrix of a directed graph G. To get the graph corresponding to B, the reflexive transitive closure of A, we add self loops at each vertex and add an edge from u to v whenever a path from u to v exists in G.
B is equal to (A+I)^(n-1), where I is the identity matrix and the matrix multiplication is boolean, with AND for multiplication and OR for addition. We need about log n matrix multiplications if we use repeated squaring. Since each matrix multiplication takes Θ(n^3) time, the total running time is Θ(n^3 log n).
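A sketch of the repeated-squaring computation in Python (the function names are mine; lists of booleans stand in for matrices):

    def bool_mult(X, Y):
        """Boolean matrix product: AND for multiplication, OR for addition."""
        n = len(X)
        return [[any(X[i][k] and Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    def closure_by_squaring(A):
        """Compute B = (A+I)^(n-1) by repeated squaring.

        The powers of A+I stabilize once the exponent reaches n-1
        (no simple path uses more than n-1 edges), so squaring until
        the exponent is at least n-1 -- about log n squarings --
        gives exactly B.  Each bool_mult is Theta(n^3), for
        Theta(n^3 log n) overall.
        """
        n = len(A)
        C = [[A[i][j] or i == j for j in range(n)] for i in range(n)]  # A+I
        k = 1
        while k < n - 1:
            C = bool_mult(C, C)
            k *= 2
        return C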
Warshall's algorithm computes B in time O(n^3). It successively computes matrices A+I = C_0, ..., C_n = B, where the (i,j) entry of C_k is a boolean telling whether there is a path from i to j using only intermediate vertices numbered at most k. Getting C_{k+1} from C_k takes O(n^2) time, so the total time is O(n^3).
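A sketch of Warshall's algorithm under the same boolean-matrix representation:

    def warshall(A):
        """Reflexive transitive closure by Warshall's algorithm.

        C starts as A+I; after the pass for vertex k, C[i][j] is
        True iff some path from i to j uses only intermediate
        vertices numbered at most k.  Each pass is O(n^2), and there
        are n passes, for O(n^3) total.
        """
        n = len(A)
        C = [[A[i][j] or i == j for j in range(n)] for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if C[i][k] and C[k][j]:
                        C[i][j] = True
        return C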
For each vertex i, run a depth-first search of the graph starting from i and set the (i,j) entry of B to be true iff j is found during this search. Each DFS takes O(n+e) time, so we have total time O(n^2 + ne). If e is in the class o(n^2), this time is in the class o(n^3).
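A sketch of the DFS method, assuming an adjacency-list representation so that one search really costs O(n+e):

    def closure_by_dfs(adj):
        """Reflexive transitive closure via one DFS per start vertex.

        adj is assumed to be an adjacency list (adj[u] lists the
        successors of u).  Row i of B records exactly the vertices
        reached from i; the start vertex itself is marked, giving
        reflexivity.  Total time: n searches at O(n+e) apiece.
        """
        n = len(adj)
        B = [[False] * n for _ in range(n)]
        for i in range(n):
            stack = [i]
            while stack:
                u = stack.pop()
                if not B[i][u]:
                    B[i][u] = True        # u is reachable from i
                    stack.extend(adj[u])
        return B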
The guess phase guesses an ordering of the elements and guesses the number b. The evaluation phase verifies that each element in the ordering is exactly b greater than the previous one, and that there are n elements in the ordering. If the set is an arithmetic progression, the correct ordering and b might be guessed, and if there is an ordering and a b satisfying the evaluation phase, we have an arithmetic progression. We can represent the ordering as an array of indices into the input array, that is, an array of numbers from 0 through n-1. If the guess phase does not guess a permutation, the evaluation phase will fail, so we need not check element distinctness for the ordering array.
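A sketch of the evaluation phase (parameter names are mine). It insists that b be positive, which is what makes the final claim go through: with b > 0 the checked values strictly increase, so a guess that repeats an index would repeat a value and must fail.

    def verify(a, perm, b):
        """Evaluation phase for guessed ordering perm and difference b.

        a is the input array of n numbers; perm is the guessed
        ordering, an array of indices into a.  Requiring b > 0 means
        an accepted perm yields strictly increasing values, hence
        distinct indices, hence a permutation -- no separate
        distinctness check is needed.
        """
        n = len(a)
        if len(perm) != n or b <= 0:
            return False
        if any(not 0 <= p < n for p in perm):
            return False
        return all(a[perm[i]] == a[perm[i - 1]] + b for i in range(1, n))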
Make one pass through the input to find the minimum element and call this a. Make another pass to find the maximum element z. If z-a is not divisible by n-1, return false; else set b to be (z-a)/(n-1). Now make a new boolean array F of size n. For each element x of the array, if (x-a)%b is not zero, return false; else set F[(x-a)/b] to true. If you finish this pass without returning false, check whether F is now all true and return true iff it is. Each of these passes through an array of size n takes O(n) time and we needed only O(1) of them, for total time O(n).
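A sketch of this linear-time test (assuming, as the use of "set" suggests, that the input elements are distinct integers):

    def is_arithmetic_progression(a):
        """Decide in O(n) whether the numbers in a can be ordered to
        form an arithmetic progression, by the passes above.
        """
        n = len(a)
        if n < 2:
            return True
        lo, hi = min(a), max(a)          # two O(n) passes
        if (hi - lo) % (n - 1) != 0:
            return False
        b = (hi - lo) // (n - 1)
        if b == 0:                       # only possible if elements
            return False                 # repeat, so not a valid set
        F = [False] * n                  # the boolean array F
        for x in a:                      # one more O(n) pass
            if (x - lo) % b != 0:
                return False
            F[(x - lo) // b] = True      # index lies in 0..n-1
        return all(F)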
This looks like the element distinctness problem, which requires Ω(n log n) time if you may use only comparisons. But the items for which we must check distinctness are integers in a small range, and we can use a bitvector as above.
Last modified 12 December 2003