UNKNOWN. This is probably false, but would be true if NL=P.
TRUE. REACH is in P, and CVP is P-complete.
TRUE. They are recursive (in P for that matter), and all recursive languages are r.e. as well.
UNKNOWN. It is in NP but not believed to be NP-complete.
TRUE. We proved NL is contained within AC1, which is certainly inside TC3. Note that the reverse containment is unknown: for all we know even TC0 might contain all of P or even NP.
Note first that we proved in class both that FO is contained in AC0
and that AC0 is contained in NC1 which is contained
in L. To answer the exam question fully, however, you would have to justify
these results rather than merely quoting them.
The simplest proof is by induction on the number of quantifiers in the
first-order formula. If this number is zero, then evaluating the formula
on input x means looking up specified bits of the input structure, evaluating
numerical predicates on numbers of O(log n) bits, and applying boolean
operations to the results. This can be done in O(log n) space, because the
primary demand for read/write memory comes from remembering O(1) indices into
the structure and these have O(log n) bits each.
Now assume that the input formula has the form ∃x: Φ(x) and
that by the inductive hypothesis we have a log-space machine that decides
Φ(x) for any index number x. Note that x ranges only over numbers with
O(log n) bits. Our machine to decide ∃x:Φ(x) works as follows.
It uses part of its tape to keep a counter that will range over all possible
values of x. For each of these values in turn it uses the hypothesized
machine to decide whether Φ(a) is true for this value a. If it ever finds
an a for which Φ(a) is true, it returns "true". If it finishes all the values
and every call returned "false", it returns "false".
The above argument could also be expressed in terms of a recursive
algorithm, where the recursion depth is O(1) (the number of quantifiers in
the first-order formula) and each recursive call needs only O(log n) space.
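To make the recursive view concrete, here is a minimal Python sketch of such
an evaluator, under assumptions made purely for illustration: the formula is
given as a small syntax tree with invented node tags ("exists", "and", "not",
"atom"), and the structure as a dictionary of relations. For a fixed formula
the recursion depth is the constant number of quantifiers, and each frame
stores only O(log n)-bit values, matching the space bound argued above.

    # Hedged sketch of the recursive evaluator described above.
    # Node tags and the encoding are invented for illustration;
    # numerical predicates are omitted to keep the sketch short.
    def evaluate(formula, structure, n, env):
        kind = formula[0]
        if kind == "atom":                 # look up one bit of the input
            _, rel, vars = formula
            return tuple(env[v] for v in vars) in structure[rel]
        if kind == "not":
            return not evaluate(formula[1], structure, n, env)
        if kind == "and":
            return all(evaluate(f, structure, n, env) for f in formula[1:])
        if kind == "exists":               # counter ranging over all values
            _, var, body = formula
            return any(evaluate(body, structure, n, {**env, var: a})
                       for a in range(n))
        raise ValueError(kind)

    # Example: "there is an edge out of vertex x" on a 3-vertex graph.
    phi = ("exists", "y", ("atom", "E", ("x", "y")))
    print(evaluate(phi, {"E": {(0, 1), (1, 2)}}, 3, {"x": 0}))   # True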
Let M be an ASPACE(log n) machine and choose c so that M uses at most
c log n space and thus has O(n^c) possible configurations. We must define
a poly-time deterministic machine D that inputs a string x and determines
whether x is in L(M).
D begins by listing all the configurations of M on input x. It then
applies a labelling algorithm to these configurations, marking them as
"accepting" or "rejecting". Eventually it will mark the start configuration
as "accepting" or "rejecting" and this will determine its output.
On its first pass through the configurations D labels all final
configurations where no more moves are possible, according to whether they
accept or reject. Then on each successive pass it looks for:
- an unmarked existential configuration with some successor marked
  "accepting" (which it marks "accepting") or with all successors marked
  "rejecting" (which it marks "rejecting");
- an unmarked universal configuration with all successors marked
  "accepting" (which it marks "accepting") or with some successor marked
  "rejecting" (which it marks "rejecting").
By the definition of alternating Turing machines, this procedure always
marks nodes correctly. To be sure it eventually marks the start configuration,
we need to know that M always reaches a final configuration in any sequence
of legal moves from any configuration -- this is easy to enforce if M keeps
a clock and never makes more than kn^c moves for a suitable constant k.
All this can be done in polynomial time because the table of configurations
is of polynomial length and there are only polynomially many marking rounds.
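As an illustration of these marking passes, here is a hedged Python sketch
over an explicit configuration graph. The names succ and kind and the tags
"exists", "forall", "accept", and "reject" are assumptions made for this
sketch, not part of the original construction.

    # succ[c] lists the configurations reachable from c in one move;
    # kind[c] is "exists", "forall", "accept", or "reject".
    # Returns a dict mapping marked configurations to True ("accepting")
    # or False ("rejecting").
    def label_configurations(configs, succ, kind):
        mark = {}
        # First pass: final configurations with no moves left.
        for c in configs:
            if kind[c] in ("accept", "reject"):
                mark[c] = (kind[c] == "accept")
        # Successive passes: propagate marks until nothing changes.
        changed = True
        while changed:
            changed = False
            for c in configs:
                if c in mark:
                    continue
                labels = [mark[s] for s in succ[c] if s in mark]
                if kind[c] == "exists":
                    if any(labels):
                        mark[c] = True              # some accepting successor
                    elif len(labels) == len(succ[c]):
                        mark[c] = False             # all successors reject
                    else:
                        continue
                else:  # "forall"
                    if labels and not all(labels):
                        mark[c] = False             # some rejecting successor
                    elif len(labels) == len(succ[c]):
                        mark[c] = True              # all successors accept
                    else:
                        continue
                changed = True
        return mark

Since each pass either marks at least one new configuration or ends the
loop, there are at most polynomially many passes, as argued above.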
If A is L-reducible to B (A ≤ B), and B is NP-complete, then A must be NP-complete.
FALSE. A could be the empty language: it L-reduces to any NP-complete B (map every input to a fixed string outside B), but it is not NP-complete even if P=NP, since no non-trivial language can be reduced to the empty language.
If B is L-reducible to A (B ≤ A), and B is NP-complete, then A must be NP-complete.
FALSE. Now we know that every language in NP reduces to A, but we have no guarantee that A itself is in NP.
Assuming P is different from NP, there is no poly-time algorithm that can input an undirected graph G and approximate, within 10%, the minimum number of colors needed to color G.
TRUE. If we had such an algorithm, we could use it to solve the NP-complete 3-COLORABILITY problem in polynomial time, which is impossible unless P=NP. The given algorithm would have to return an answer of at most 3.3 on 3-colorable graphs, and of at least 3.6 on graphs that are not 3-colorable (since their minimum number of colors is at least 4).
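For concreteness, the reduction amounts to a one-line test; approx_chromatic
here stands for the hypothetical routine assumed to be within 10% of the
true chromatic number.

    def decides_3_colorability(G, approx_chromatic):
        # at most 3.3 on 3-colorable graphs, at least 3.6 otherwise,
        # so a threshold of 3.5 separates the two cases exactly
        return approx_chromatic(G) < 3.5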
The Solovay-Strassen randomized algorithm for PRIME (presented in lecture) never indicates that its input number may be prime if it is not prime.
FALSE. The Solovay-Strassen algorithm computes two functions of its input number m and a random number a. If the two values agree it says "m is possibly prime"; if not, it says "m is not prime". If m is composite it may still say "possibly prime" for some choices of a, so the statement is false. It is true, however, that whenever Solovay-Strassen says "not prime", it is correct.
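For reference, here is a hedged Python sketch of the test described above:
the two functions compared are the Jacobi symbol (a/m) and a^((m-1)/2) mod m.
The trial count and the encoding of the answers are choices made for this
sketch only.

    import random

    def jacobi(a, n):
        # Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def solovay_strassen(m, trials=20):
        if m in (2, 3):
            return True            # "possibly prime"
        if m < 2 or m % 2 == 0:
            return False           # "not prime" -- always correct
        for _ in range(trials):
            a = random.randrange(2, m - 1)
            x = jacobi(a, m)
            if x == 0 or pow(a, (m - 1) // 2, m) != x % m:
                return False       # "not prime" -- always correct
            # otherwise the two functions agree for this a
        return True                # "possibly prime" -- may be wrong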
TRUE. The string x is in this language if and only if all of the clauses
in Φ are satisfied by x. We can check each clause by looking up the
values of each of the three variables involved and seeing whether at least
one of the given literals is true. This takes time equal to the number of
clauses in Φ times the time to look up the three variables, which is
clearly polynomial.
I should have asked about the language {(Φ,x): Φ(x) is true},
which is also in P by the above argument. (In fact it is in L and
even in FO, given a suitable format for the input.) As I wrote the problem,
there is an even easier way to justify a TRUE answer. Since Φ is fixed
for the problem, so is the length of the interesting part of x. (I didn't
rule out x defining other variables besides those in Φ.) This interesting
part is thus of O(1) size and can be checked against a lookup table of
O(1) size.
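For concreteness, here is a small Python sketch of the clause-by-clause
check described above. The encoding (a formula as a list of clauses, a
clause as a list of (variable, is-positive) literals, and x as a dictionary
of truth values) is an assumption made for illustration.

    # A clause is satisfied if at least one of its literals is true under x.
    def satisfies(phi, x):
        return all(any(x[v] == positive for (v, positive) in clause)
                   for clause in phi)

    # Example: (a or not b or c) and (not a or b or c)
    phi = [[("a", True), ("b", False), ("c", True)],
           [("a", False), ("b", True), ("c", True)]]
    print(satisfies(phi, {"a": True, "b": True, "c": False}))   # True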
Last modified 16 May 2003