This page currently contains interesting questions I have received from students in CMPSCI 601, together with my answers. These are questions related to the final exam -- links to previous questions and answers are at the bottom of this page. Questions are in black, answers in blue. Most recently answered questions are listed first.
Will you be putting up a sample answer to question 4 on homework 8?
I'll sketch one here. The basic idea in each direction is fairly simple,
but it turns out there are a lot of details. In Neil's book Descriptive
Complexity there's a complete proof that FO, CRAM-time O(1), and uniform
AC^0 are all the same class.
Simulating an AC^i circuit with a CRAM in O(log^i n) time:
Assign a processor to each gate of the circuit. On step
0, have each processor assigned to an input gate look up
its assigned literal and mark itself "0" or "1" accordingly.
Now we'll have O(log^i n) phases of O(1) steps each.
After phase j we want every processor at height j or less to be
marked. To help us with this we assign another processor to
each edge of the circuit. On the first step of the phase each
edge processor reads the location for its source node, and sees
"0", "1", or "unassigned". On the second step of the phase any
edge processor that has "unassigned" writes "unassigned" to the
location belonging to its destination node. On the third step
each edge processor with a "0" going to an AND node or a "1" going
to an OR node writes that character to its destination node's location.
Now the destination node can figure out its new mark. If any of its
in-edges said "unassigned", it is still unassigned. If it got a "0"
(if it is AND) or a "1" (if it is OR) it takes that as its new value,
otherwise it takes the other value.
In O(log^i n) total time we will assign a mark to the output node.
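Here is a small sequential sketch of that marking procedure, in case the bookkeeping helps. The gate/edge representation below is made up for illustration, and on the real CRAM each phase is O(1) parallel steps with one processor per gate and per edge; negations are assumed to be pushed onto the input literals, so every internal gate is an AND or an OR.

    def evaluate_by_phases(gates, edges, input_bits):
        """gates: dict gate_id -> "INPUT", "AND", or "OR";
           edges: list of (src, dst) pairs;
           input_bits: dict giving each input gate the value of its literal."""
        mark = {}                                   # gate_id -> 0 or 1 once decided
        for g, kind in gates.items():               # step 0: input-gate processors
            if kind == "INPUT":
                mark[g] = input_bits[g]
        changed = True
        while changed:                              # one iteration = one phase
            changed = False
            for src, dst in edges:                  # the edge processors
                if dst in mark or src not in mark:
                    continue
                v = mark[src]
                # a 0 flowing into an AND, or a 1 into an OR, decides the destination
                if (gates[dst] == "AND" and v == 0) or (gates[dst] == "OR" and v == 1):
                    mark[dst] = v
                    changed = True
            for g, kind in gates.items():           # gates whose in-edges are all decided
                if g in mark or kind == "INPUT":
                    continue
                sources = [src for src, dst in edges if dst == g]
                if all(s in mark for s in sources):
                    vals = [mark[s] for s in sources]
                    mark[g] = int(all(vals)) if kind == "AND" else int(any(vals))
                    changed = True
        return mark

    # Example: the circuit (x1 AND x2) OR x3 on the input 1, 0, 1
    gates = {"i1": "INPUT", "i2": "INPUT", "i3": "INPUT", "g1": "AND", "g2": "OR"}
    edges = [("i1", "g1"), ("i2", "g1"), ("g1", "g2"), ("i3", "g2")]
    print(evaluate_by_phases(gates, edges, {"i1": 1, "i2": 0, "i3": 1})["g2"])   # prints 1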
Simulating a time O(log^i n) CRAM in AC^i:
We'll have a gate for each of the following predicates:
If w is restricted to be O(log n) bits then there are only polynomially
many of these. Each of the time-t predicates can be given by an FO
formula (and hence an AC^0 circuit) in terms of the time-(t-1) predicates.
(I'll omit the details of each of these as they will involve the exact
instruction set of the CRAM.) This makes the whole circuit, as t ranges
from 0 to O(log^i n), an AC^i circuit.
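Just to make the depth and size accounting explicit (the notation P_t below is mine, not from the homework): write P_t for the collection of all time-t predicates, so the construction stacks one constant-depth block per simulated step,

    \[ P_t = F(P_{t-1}), \qquad 1 \le t \le c\,\log^i n, \]

where F is FO-definable and hence computable by an AC^0 block of some constant depth d and size n^{O(1)}. Stacking the blocks gives

    \[ \mathrm{depth} = d \cdot O(\log^i n) = O(\log^i n), \qquad
       \mathrm{size} = O(\log^i n) \cdot n^{O(1)} = n^{O(1)}, \]

which is exactly what it means to be an AC^i circuit family.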
On Neil's exam from 2001, question 1, he asks what class S1 is in:
S1 = { M | M is a deterministic TM that takes at most 3|w|+3 steps on each input w }
I can't figure out why this isn't in co-NP. It would seem that I could nondeterministically create a certificate w, and then simulate M(w) for 3|w|+3 steps in polynomial time. If it fails to halt in that time, then M is in S1-bar.
The given answer is that it is co-re complete. Can you explain where I'm getting confused? Thanks.
When you're wondering whether something is polynomial,
always ask yourself "polynomial in what"? The input to
this problem is a TM. The certificate w could be extremely
long, too long to guess in time polynomial in the length of
the string denoting M.
I think you're implicitly thinking of the language
{(M,w): M is a DTM that takes at most 3|w|+3 steps on w}
which is in P because you could just run it. If it were:
{(M,w): M is an NTM that always takes at most 3|w|+3 steps on w,
whatever guesses it makes}
then this is in co-NP, and I'm pretty sure it is co-NP-complete.
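To see why the first (deterministic) version is in P, here is a sketch of "just run it". The TM encoding is made up for illustration: delta maps (state, symbol) pairs to (state, symbol, move) triples with move in {-1, +1}, and a missing entry means the machine halts.

    def takes_at_most_bound_steps(delta, start_state, w, blank="_"):
        bound = 3 * len(w) + 3
        tape = dict(enumerate(w))              # sparse tape
        state, head = start_state, 0
        for _ in range(bound):
            key = (state, tape.get(head, blank))
            if key not in delta:               # already halted within the bound
                return True
            state, symbol, move = delta[key]   # execute one step
            tape[head] = symbol
            head += move
        # after executing `bound` steps, accept only if it has halted by now
        return (state, tape.get(head, blank)) not in delta

    # Example: a machine that just scans right halts after |w| steps, well within 3|w|+3
    delta = {("q0", "0"): ("q0", "0", +1), ("q0", "1"): ("q0", "1", +1)}
    print(takes_at_most_bound_steps(delta, "q0", "0110"))   # prints True

The running time is polynomial in the lengths of M and w, which is the sense in which the pair version is in P; for S1 itself, the quantifier "on each input w" is what pushes the problem out of reach.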
I'm not sure that I fully understand the solution to question 5. While doing other homeworks on an NP-complete problem, we first show it is in NP, then we show that some known NP-complete problem is logspace-reducible to it.
Isn't it the same thing here: since B is L-reducible to A and B is NP-complete then A is NP-complete?
You have overlooked the other part of the definition
of NP-complete, that an NP-complete language must be
in NP. In our NP-completeness proofs in lecture, we
did not only show a reduction from an NP-complete problem.
We also had to show that the new language was in NP, though
this was usually pretty obvious.
It's perfectly possible for the reduction from B to A to
exist, but for A to be much harder than NP.
On slide 22.10 you say that we can make a 1-gate circuit which decides if the length of an input n is in K, without actually looking at the input. Can you explain what this circuit is doing, and how it determines membership without looking at the input? Thanks.
The circuit has one gate, either a constant 0-gate if the answer
is no or a constant 1-gate if yes. (I suppose we never defined
constant gates, but you can make them by having an AND or OR
with no inputs, or you can just take "x_1 AND NOT x_1" for the 0-gate
and "x_1 OR NOT x_1" for the 1-gate. If you do the latter, the size is
four or something instead of one, and you do look at one bit of the
input instead of none, but the idea is the same.)
As to how it answers without looking at the input, remember how
circuit families work. The circuit Cn is chosen based on the input
length, but not on which string of that length is the input. But
for this silly language U_K, once you know the input length you know
whether the input is in the language or not! So your circuit needs
no further information from the input to make its output.
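A tiny sketch of that circuit family, with made-up names, just to stress that C_n is chosen from the input length alone:

    def circuit_for_length(n, K):
        """Return C_n for the language U_K = {x : |x| is in K}."""
        if n in K:
            return lambda bits: 1      # the constant 1-gate: accept every length-n input
        else:
            return lambda bits: 0      # the constant 0-gate: reject every length-n input

    K = {2, 3, 5, 7}                   # K could be any set of lengths, even an undecidable one
    C5 = circuit_for_length(5, K)
    print(C5([0, 1, 1, 0, 1]))         # prints 1 without ever looking at the bits

(When K is undecidable there is no algorithm computing circuit_for_length, but a non-uniform circuit family does not need one in order to exist.)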
Sometimes the ATM and its explanation in terms of a game played between White and Black gets confusing to me. Consider the example of finding the minimal circuit: circuit C is optimal if no circuit D with fewer gates computes exactly the same function as C (assume that they have the same number of inputs). Formally we can define C as belonging to OPTIMAL iff for every D, there exists some x such that (|D| ≥ |C|) or (C(x) != D(x)).
If White and Black play the game, the input of the game is circuit C, Black provides circuit D, White provides some string x, and White wins if either |D| ≥ |C| or C(x) != D(x). The first-order formula is true just when White wins under optimal play.
This is correct.
I am not convinced by this proof: if in the game Black chooses circuit D and loses the game, that doesn't mean C is optimal universally. The alternating machine isn't searching through the whole space of circuits to make sure that C is the optimal circuit and no smaller circuit is possible. Even if Black is clever, by just making one choice of circuit D we cannot make sure that C is universally optimal.
For the game to correctly represent the language (in this case,
MIN-CIRCUIT = L(M)), you need to show two things: (1) if the input is in
the language then White can win no matter how Black plays, and (2) if the
input is not in the language then Black can win no matter how White plays.
In this example, (1) is true because if C really is the minimal circuit,
any D that Black comes up with will either be at least as big as C or
have a different language. If it has a different language then White
can find a string x on which C and D differ.
(2) is true because if C is not in the language, then some smaller
equivalent circuit D exists and Black can pick it. Then whatever x
White picks, White will lose because C and D will give the same answer
on x.
Even if (2) is true, there are circuits Black could pick that would cause
him to lose. But we assume that our players _always make the best move_
for their respective goals, and only _then_ does White win iff the input
is in the language.
On the other hand, the explanation in terms of the alternating machine looks convincing: it says "for all D and for some x", and the alternating machine in parallel tries to find out whether C is OPTIMAL, which is good because we are searching through the whole universe of circuits...
There's a second-order formula:
∀ D: ∃ x: [|D| ≥ |C|] or [C(x) != D(x)]
that defines the language. When in the game semantics you say
"Black chooses a D", you could also say "for any choice of D that
the machine makes"
On question 2 on the practice exam, I understand how to do the proof by building a DSPACE[log n] TM and using induction, but would it have been sufficient to say that since FO = AC^0 and AC^0 is contained in L, FO is in L? I'm not sure whether that would count as quoting too many results.
This would be quoting too many results, unless you explained substantial details of the two results you quote. If I were convinced from what you wrote that you understood the two results you quoted, that would be fine.
I'm a bit confused regarding the sketch of the Barrington-Immerman-Straubing Theorem on slide 6 of lecture 23. I don't see how the circuit there is of constant depth. It seems to me that as you increase the number of variables in phi, you increase the depth of the circuit.
As you increase the number of quantifiers in phi, you
certainly increase the depth of the circuit, because there
is one level of gates for each quantifier.
The point is that you don't increase the number of quantifiers
in φ.
A more precise statement of the fact "FO is contained in AC^0"
is:
"If φ is any FO formula, then L(φ) is decidable by a family of
circuits with n^O(1) size, unbounded fan-in, and O(1) depth."
The "O(1)" in the depth means that as the _input size_ increases, the
depth stays the same. The formula phi does not change as the input
size increases, since we are always deciding L(phi). So the depth,
which is roughly the number of quantifiers in phi, stays the same.
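For a concrete (made-up) example, take the fixed formula φ = ∃x ∀y R(x,y). Its circuit for input size n is an OR of fan-in n over ANDs of fan-in n, so the depth is 2 for every n while the size grows like n^2:

    def eval_phi(R):
        # R is the input relation, given as an n-by-n table of 0/1 entries
        n = len(R)
        return any(                  # one level of OR gates: the "exists x"
            all(R[x][y]              # one level of AND gates: the "forall y"
                for y in range(n))
            for x in range(n)
        )

    print(eval_phi([[1, 1], [0, 1]]))   # prints True: the row x = 0 is all ones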
Answers to Questions during HW#8 (through 13 May 2003)
Answers to Questions during HW#7 (through 30 Apr 2003)
Answers to Questions during HW#6 (through 23 Apr 2003)
Answers to Questions during HW#5 (through 9 Apr 2003)
Answers to Questions on the Midterm
Answers to Questions during HW#4 (through 26 Mar 2003)
Answers to Questions during HW#3 (through 17 Mar 2003)
Answers to Questions during HW#2 (through 3 Mar 2003)
Answers to Questions during HW#1 (through 12 Feb 2003)
Last modified 17 May 2003