CMPSCI 601: Theory of Computation

David Mix Barrington

Spring, 2004

This page is for student questions and my answers related to HW#5. Questions are in black, answers in blue.

Question 5.12, 14 April 2004

So the intuition behind #5 is that if B's decider runs in polynomial time, and you have a converter f that takes an input for A and converts it to an input for B (such that we can use B to decide A) in no more than linear time, then A can be decided in polynomial time. It seems really obvious that the time to decide A is the time to run f plus the time to decide B, which is O(t_B) (because f is linear time, B is polynomial time, and so B's time dominates).

What am I missing? Is it just that I have to show how f can take an input for A and convert it to something B can work with, in no more than linear time?

For this problem you can't just say "polynomial", because you care about the degree of the polynomial (it is O(n^k) for some k, but what k?). Also, the input to the B-decider is not of size n, so you have to compute the B-decider's running time based on the maximum possible size of that input.
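As a toy illustration of why the composition stays polynomial (the languages and the map f here are my own examples, not the homework's): if |f(w)| <= c|w| and the B-decider runs in time O(m^k) on inputs of size m, then the composed decider runs in time O(n) + O((cn)^k) = O(n^k).

```python
# Toy example (hypothetical languages, not the homework's):
#   A = { w : w contains an even number of '1' characters }
#   B = { v : |v| is even }
# f keeps only the '1's of w, so |f(w)| <= |w|, f runs in linear time,
# and w is in A iff f(w) is in B.

def f(w):
    """Linear-time reduction from A to B."""
    return "".join(ch for ch in w if ch == "1")

def decide_B(v):
    """Decider for B; pretend its running time is O(m^k) in m = |v|."""
    return len(v) % 2 == 0

def decide_A(w):
    """Total time: time(f) + time(decide_B on f(w)) = O(n) + O((cn)^k) = O(n^k)."""
    return decide_B(f(w))
```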

Question 5.11, 14 April 2004

On Lecture 11, slide 10, how is the function NEXT defined? What is the use of the parameter n in the NEXT function?

NEXT has three arguments. The first is a TM number n, and the other two are both configurations (or instantaneous descriptions) of a computation of M_n. NEXT(n,c,d) is true iff d is the configuration that is obtained from c by the TM M_n running for one step.

A configuration is a sequence whose items are either tape letters or states of M_n. For NEXT(n,c,d) to be true, c and d have to be mostly the same, differing only near the head position in each, where they differ according to the transition rules of the TM M_n.

This is essentially what we did with the η predicate in the proof of Fagin's Theorem in Lecture 19 today.
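A minimal sketch of NEXT as a one-step simulator, using a toy encoding of my own (state names start with 'q' and the state token sits just before the scanned cell; this is illustrative, not the lecture's exact coding):

```python
def step(delta, config):
    """One move of the TM described by transition table delta, applied to config.
    config is a list of tape symbols with the state token before the scanned cell;
    delta maps (state, symbol) to (new_state, new_symbol, move in {'L','R'})."""
    i = next(j for j, x in enumerate(config) if x.startswith('q'))  # head marker
    state, sym = config[i], config[i + 1]
    new_state, new_sym, move = delta[(state, sym)]
    c = config[:i] + [new_sym] + config[i + 2:]   # drop state marker, rewrite cell
    pos = i + 1 if move == 'R' else i - 1         # index of the newly scanned cell
    if pos == len(c):
        c.append('_')                             # ran off the right end: add a blank
    if pos < 0:
        c.insert(0, '_')                          # ran off the left end: add a blank
        pos = 0
    c.insert(pos, new_state)
    return c

def NEXT(delta, c, d):
    """True iff configuration d follows from c in one step of the machine."""
    return step(delta, c) == d
```

So NEXT just computes the successor of c and compares it to d, which is exactly the "mostly the same except near the head" condition.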

Question 5.10, 14 April 2004

After reading the Q&A today, I revised/expanded the bounded formula-->Floop-computable direction of my proof for #2. I argue that for the \exists half of the induction, I get a formula of the form $\exists var [ prop(var) ]$, which can be computed in Floop as follows:

      var = zero();
      while (prop(var) == zero()) {
        var = succ(var);
      }
      return one(); // true

Do I need to worry about the fact that if $\exists var [ prop(var) ]$ is false, the while loop will never terminate? My instinct is that I don't, because we're only arguing that functions which /can/ be represented as bounded formulas can be computed by Floop, and we've already observed in class that bounded sentences are not closed under negation. But at this point I've psyched myself out so badly I need confirmation...

A Floop program may fail to terminate because of a while loop. So a Floop program in general represents a partial function.

A forall-bounded formula phi(x_1,...,x_k,y) may or may not represent a partial function (depending on whether there are two different numbers y_1 and y_2 where phi(x_1,...,x_k,y_1) and phi(x_1,...,x_k,y_2) are both true for some values of the x_i's). If it does represent a partial function, this may or may not be a total function, depending on whether a y exists for _every_ value of the x's. As you say, whether the function is total is in general a statement with unbounded foralls and thus in general not expressible by a bounded formula.

You're asked to show that a _partial_ function is Floop-computable iff it is represented by a bounded formula. For a value of x's that causes the Floop function to fail to return a value, there should be no y such that phi(x's, y) is true. And vice versa -- if there is no y, the Floop program can and should fail to find one.
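The possible divergence the question worries about is exactly the unbounded-search ("mu") pattern; a minimal Python sketch (not Floop syntax):

```python
def mu(pred):
    """Least y with pred(y) true; loops forever if no such y exists,
    so this computes a partial function in general."""
    y = 0
    while not pred(y):
        y += 1
    return y
```

Called with pred = (lambda y: phi(xs, y)), this returns a witness when one exists and diverges otherwise, which is precisely the matching behavior the iff requires.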

Question 5.9, 13 April 2004

On problem 2, we're going to show that Floop-computable iff ∀-bounded. We've also learned that FOL can't express "connectedness" (pg 112 [P] and Corollary 16.3, pg 6 lec 16) in graphs, even though strong connectedness of graphs can clearly be decided by a TM and is in P (pg 553 of CLRS). Floop is equivalent to general recursive, which is equivalent to partial recursive, and partial recursive functions are those computed by TMs.

So, if TMs can do "connectedness", and Floop can do TMs, why can't FOL do "connectedness"? Does bounding the universal quantifiers make FOL more powerful? That is counterintuitive to me.

I'm clearly missing something but can't figure out what it is. Am I making a type mistake?

A bounded formula of number theory can talk about huge numbers using arithmetic operators. A first-order formula of graph theory can only refer to the numbers of the vertices in the graphs. We saw in Lecture 19 that a single FO formula on a structure can only express log-space computable properties.

Question 5.8, 13 April 2004

In the definition of reduction, S <= T iff there exists a FUNCTION f in F(L) such that for any w in M, w in S iff f(w) in T.

In the solution of midterm question 6, K <= {M: ...}. w=n, and f(n)= an input of M? Is that a FUNCTION? I am not clear on the definition. If we do not consider the constraint on the function f, we could say S<=T and T<=S at the same time. In the solution of question 6, I would rather call it {M: ...}<=K than K<={M: ...}. How do you explain the FUNCTION requirement?

What might be a little confusing is that I defined the language Z as {M: exists w such that M halts...} rather than {n: exists w such that M_n halts...}.

The function takes a number n of a TM and gives a number f(n) of another TM, except that the way I defined Z I have to say that f(n) _is_ the TM rather than being just the number of the TM. So what I did in the solution was to name the machine "M_{f(n)}".

I need to prove:
  (n in K)   iff   M_{f(n)} in Z

so that

  M_n(n) = 1   iff   exists w: M_{f(n)}(w0) and M_{f(n)}(w1)
                     both halt with the same output

I don't see what problem you have with this f being a function, in this case from the set of numbers to the set of TM's. For every number n, M_{f(n)} is a TM, so I have a function from numbers to TM's.

Question 5.7, 13 April 2004

I have two small questions:

1. In NT, do we know \sigma(0)=1? This is not given in any of the NT axioms. However, Papadimitriou uses it to prove "{NT} proves \forall x(\sigma(x)=x+1)". I wonder where that comes from....

"1" in number theory is just a name for "sigma(0)" so yes, we do know this. So [P]'s lemma is really "forall: x sigma(x) = x + sigma(0)" and the base case is trivial.

2. In lecture notes 16, for the definition of bounded formula, do we not care whether the \exists quantifier is bounded?

That's right, a "bounded" formula means that forall's are bounded but exists's might not be.

Question 5.6, 13 April 2004

What's the symbol "∃≤12x" on [BE] page 551? I can't find it in their index.

It means "there exist at most twelve x such that" and is defined on pages 370-1.

Question 5.5, 13 April 2004

On Problem 3 in the current homework, can we appeal to the primitive recursive proof of COMP? It seems like that, combined with the answer to Problem 2, solves this problem. Is that good enough?

Someone observed below that the answer to Q3 helps you with Q2, but you're asking the other way round. Yes, there is a simple argument that takes the result of Q2 and then solves Q3 with it. Just don't try doing the referral both ways, as you need to do the real work someplace...

Question 5.4, 12 April 2004

In your pseudo-Java code on slide 10 of Lecture #18, shouldn't the two break statements be continue statements? You want to continue the loop with the next value of z, not break out of the loop entirely.

Yes, you're quite right, it shows that it's been two years since I last taught Java...
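For anyone unsure of the distinction, a minimal Python sketch (the slide's actual loop body is not reproduced here; bad is a hypothetical test): continue abandons only the current z, while break abandons the whole loop.

```python
def survivors_with_continue(zs, bad):
    """Collect every z that passes the test; a failing z is merely skipped."""
    hits = []
    for z in zs:
        if bad(z):
            continue   # go on to the next z
        hits.append(z)
    return hits

def survivors_with_break(zs, bad):
    """Stop at the first failing z; everything after it is never examined."""
    hits = []
    for z in zs:
        if bad(z):
            break      # abandon the loop entirely
        hits.append(z)
    return hits
```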

Question 5.3, 11 April 2004

So, I'm trying to go about proving Thm. 16.11 by translating into bounded formulas all the p.r. functions in Lecture 11 leading up to the proof that COMP(n, x, c, y) is p.r. -- e.g., PrimeF(x,y) = "y is the x-th prime number". (This is so that I can exhibit the bounded formula for COMP(n, x, c, y).)

1. Is this the right general strategy?


2. Assuming it is, is it legal to have a recursive bounded formula? I.e., my bounded formula for PrimeF(x,y) makes reference to PrimeF(z,w) inside a \forall z < x clause. (It also has a "base case".)

No, unless you can show that you're only going to recurse O(1) times, because a formula must be a fixed formula independent of any numerical arguments.

Question 5.2, 11 April 2004

I'm a little confused about the Floop-computable --> ∀-bounded function direction of the proof in question 2. In the past we've shown that the values of all variables at the end of a (B/F)loop block were (primitive/general) recursive functions of the values of the variables at the beginning of the block. But I don't quite see how to extend that approach to bounded-forall formulas. Do we do something like defining a predicate ResultB(i_1,...,i_n, o) which indicates that o is the result of a block B with inputs/beginning values i_1 through i_n? Do we have access to +, x, pow and such when giving the formula for ResultB?

A forall-bounded formula can use arithmetic operations when it defines the bound on the foralls' variables -- look at the example on slide 13 of lecture 16.

You _may_ find it easier to use what we know about sequence-coding to prove this direction. You have three different characterizations of the general-recursive functions to work with: Floop, the original definition in terms of the unbounded mu-operator, and Turing machines (since we proved that general recursive and partial recursive functions are the same). The last might actually be the easiest to use...
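Since this answer leans on sequence-coding, here is a sketch of one standard scheme -- prime-power coding (the course's scheme, e.g. Gödel's β-function, differs in detail; nth_prime is my own helper):

```python
def nth_prime(i):
    """The i-th prime, 0-indexed: nth_prime(0) == 2, nth_prime(1) == 3, ..."""
    count, n = 0, 1
    while count <= i:
        n += 1
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return n

def encode(seq):
    """Code a sequence as a product of prime powers; the exponent is entry + 1."""
    code = 1
    for i, x in enumerate(seq):
        code *= nth_prime(i) ** (x + 1)   # +1 so zero entries are recoverable
    return code

def decode(code, i):
    """Recover the i-th entry: the exponent of the i-th prime in code, minus 1."""
    p, e = nth_prime(i), 0
    while code % p == 0:
        code //= p
        e += 1
    return e - 1
```

The point for the proof is that "c codes the sequence of configurations of the computation" becomes an arithmetic statement about one (huge) number, which bounded formulas can talk about.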

Hrm. It seems like this direction more or less follows from Theorem 16.11 (which I have to prove for question 3). Is this so? Can I simply say so (slightly more formally), and then just do the real work in question 3?

Sure, as long as you're not somehow using the result of question 2 in your proof of question 3 as well. And the "more or less follows" does require something of an argument, though I agree that 16.11 takes care of most of the hard stuff. The graders will be happy to grade a shorter, simpler correct proof...

Question 5.1, 10 April 2004

In question 2, I'm trying to show for one direction of the proof that I can represent a ∀-bounded formula with a Floop program. My basic idea is to take the "non-quantifier" body of the formula and represent it using Floop, then wrap that up in while loops for each unbounded ∃ and for loops for each bounded ∀. However, the definition of bounded formula doesn't say anything about the "non-quantifier" part of the formula, just that all quantifiers are in front. In order to do the proof, I believe that I need to be able to say that the "non-quantifier" part of the formula is Floop-computable. Since we're talking about number theory in Lecture 16, can I implicitly assume that the non-quantifier part of any bounded formula is a statement of number theory that doesn't include quantifiers?

Yes, in the base case of your induction it is a formula of number theory with no quantifiers, meaning that it's a boolean combination of statements about equality and order of terms made from variables and constants.
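As a toy instance of that base case (the formula phi is my own example, not from the homework): a quantifier-free formula is a boolean combination of equalities and inequalities between terms, so evaluating it needs no loops at all.

```python
# Hypothetical quantifier-free formula of number theory:
#   phi(x, y)  :=  (x + y = 2*x)  or  (y < x)
def phi(x, y):
    """A boolean combination of equality and order statements about terms."""
    return (x + y == 2 * x) or (y < x)
```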

In the inductive case you need some statement about these formulas that you can prove to be true for formulas with one more quantifier...

Last modified 14 April 2004