CMPSCI 601: Theory of Computation

David Mix Barrington

Spring, 2004

This page is for student questions and my answers related to HW#4. Questions are in black, answers in blue.

Question 4.12, 25 March 2004

Is it necessary to use the approach outlined in the hint (showing Ackermann's function is total recursive, and yet grows faster than any primitive recursive function)? Or can we use one of the other arguments (diagonalization, for example)?

You may diagonalize directly. I'm not sure which is the better way to do it, though I would try Ackermann first. (The complication in the diagonalization is that you have to say something about why you can figure out in Floop what the i'th Bloop function is...)

Question 4.11, 25 March 2004

Do we appeal to Tarski's inductive definition of truth in problems 18.11 and 18.12? Those problems have us look at alternative definitions for truth assignments, but it seems like they won't get off the ground unless we employ Tarski's definition at some point.

When talking about truth inside a structure (the statement "M models S") we use the Tarski definition. But 18.11 and 18.12 also deal with truth assignments to statements that do not come from a structure, and for these we don't use the Tarski definition.

Also, is it correct that in Boolean logic, there is no difference between saying that a truth assignment h assigns true to some expression A (h(A) = TRUE) and saying that h models A (h double-turnstile A)? However, now in FOL we *are* distinguishing the two, and problems 18.11 and 18.12 are asking us to compare them. The inductive definition for the double-turnstile in FOL is given by Tarski, but where is there a definition for h?

In _propositional logic_ we use the double-turnstile to mean "semantic consequence" or "satisfaction", yes. Page 468 of [BE] defines a "truth assignment" in the context of first-order logic, as an assignment of a truth value to each _atomic_ sentence.

Question 4.10, 25 March 2004

So I have a followup to question 4.6.

In the proof that primitive recursive = Bloop, we showed that every p.r. function can be implemented in Bloop, which is straightforward.

The other direction was that the output of any bloop program is p.r. In question 4.6, the analogous task is showing that the variables at the end of any Floop block are ___-recursive. And the only case that is different from the Bloop proof is the case for the while loop. (Am I right so far?)

Yes.

So we would need to show that any variable defined at the beginning of the while loop is ___-recursive at the end of the while loop. But here's where I'm confused. In the mu-operator, is the function f a primitive recursive function? So in the while loop, the code block *within* the loop is primitive recursive, and it is the fact that the x in the while(x) might always be positive that makes this general recursive (yes?). So for the proof, do we show that the block of code inside the loop is primitive recursive, but that the loop itself is potentially unbounded?

This is a misinterpretation -- in the definition the argument of the mu-operator need not be p.r., although there is a theorem that says that you can define any g.r. function by applying the mu-operator once to a p.r. function.

I'm having trouble seeing the relationship between the while loop and the mu-operator (beyond just showing how to code the mu-operator using the while loop), and I'm having trouble seeing the relationship between bloop and the mu-operator.

Your problem for the while loop case is to show that if all the results of the loop body are g.r. functions of the values when the loop body starts, then the results of the entire loop are g.r. functions of the values before the whole loop starts. So you have these functions which by the IH are g.r., and you want to show that these new functions are g.r., probably by forming them out of the loop-body functions using the mu-operator.
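If it helps to see the shape of that argument, here is a rough Python sketch (hypothetical names, nothing from the notes): the number of iterations the loop performs is itself a mu-search over the loop-body functions, and the final values are those functions iterated that many times.

 def iterate(body, state, n):
     # apply the loop-body function n times to the tuple of variable values
     for _ in range(n):
         state = body(state)
     return state

 def while_loop(test, body, state):
     # mu-search: the least n such that the while-condition fails after
     # n iterations of the body (divergent, i.e. undefined, if there is none)
     n = 0
     while test(iterate(body, state, n)):
         n += 1
     return iterate(body, state, n)

Recomputing the iterates from scratch is wasteful as a program, but it mirrors how the new functions are built from the loop-body functions with one application of the mu-operator.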

Question 4.9, 25 March 2004

In the previous year's HW3.1c, you suggested using BLUEDIAG as an alternative to the Ackermann function. I assume we may use BLUEDIAG for HW4.1.

Yes, I'm not sure which proof is easier. The Ackermann proof involves less mumbling about Church's thesis, I think. But any clear, correct, and convincing argument is fine.

Question 4.8, 25 March 2004

When the mu operator is applied to function f with respect to x, does x have some initial value already, or is it merely a "pointer" into which mu puts the value it finds (if it ever finds one)?

The mu operator acts on a function of arity k+1 and returns a function of arity k, much as a quantifier acts on a formula with k+1 variables and creates a formula with k free variables.

That is, if I were implementing mu in a real programming language (say, Python), would I call it like this:


 def f(x, args):
     # implementation of f here

 args = [y1, y2, ..., yk]
 x = mu(f, args)  # f is a function object, args is a list of the
                  # arguments y1 to yk; mu returns integer x (maybe)

This is good, except in this context I think I would have mu return a function object to be closer to the spirit of the original definition. But the function you're calling "mu" here would be the function returned by my "mu". So I'd set x to the same value as you do but by saying "g = mu(f); x = g(args);".
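In case it helps, here is a minimal Python sketch of that higher-order version, using the same calling convention as the snippet above (a sketch only, not code from the notes):

 def mu(f):
     # Unbounded mu-operator: from f of arity k+1, build g of arity k.
     def g(args):
         # Search for the least x with f(x, y1, ..., yk) = 0; this loop
         # runs forever when no such x exists, i.e. g is undefined there.
         x = 0
         while f(x, args) != 0:
             x += 1
         return x
     return g

Then "g = mu(f); x = g(args)" sets x exactly as above, and g itself is the arity-k function that the mu-operator produces.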

or like this:


 def f(x, args):
     # implementation of f here

 args = [y1, y2, ..., yk]
 x = 3
 ret = mu(x, f, args)  # mu returns the smallest satisfying value greater
                       # than or equal to 3, if it finds one

This is more an implementation of the _bounded_ mu-operator -- your x here is acting as the "y" in the definition of that.
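For contrast, a minimal sketch of the bounded version under the same convention (conventions vary between texts; this one searches below the bound y and returns y itself when there is no witness, so the resulting function is total):

 def bounded_mu(y, f, args):
     # Bounded mu-operator: the least x < y with f(x, args) = 0, or y
     # if there is no such x. The search always terminates, which is
     # why bounded search keeps us inside the primitive recursive world.
     for x in range(y):
         if f(x, args) == 0:
             return x
     return y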

Question 4.7, 25 March 2004

Can we cite results from the notes without proving or paraphrasing them? As in, "We have shown in Lecture 11, slides 14 and 15 that any variable defined at the end of a program block is a primitive recursive function of those variables defined at the beginning."

Or do we need to restate those proofs?

If the whole proof is in the notes or on a previous hw, as in this case, you may quote the result as you do here.

Question 4.6, 24 March 2004

I'm having a little trouble understanding the difference between primitive recursive and general recursive in formal terms (informally, I understand just fine, I think). My trouble comes from the fact that in the lecture notes, general recursive is defined using the "initial functions" and closure under composition and the unbounded mu-operator. But the class of primitive recursive functions isn't defined analogously (i.e., by simply bounding the mu-operator): instead of the bounded mu-operator, it's defined using closure under primitive recursion.

Since there isn't parity between the two definitions, I'm unsure how to extend the proof of Primitive Recursive = Bloop to General Recursive = Floop. It seems to me that if the definition of general recursive were closer to the definition of primitive recursive -- replacing primitive recursion with unbounded recursion, or something of the sort -- then adapting the proof would be straightforward.

Is there an easy way to translate between primitive recursion and the bounded mu-operator, or between un-primitive recursion (what would that be? no guaranteed base clause?) and the unbounded mu-operator, in order to get the two definitions in parity, or should I be thinking about this in a different way?

I don't see that the difference should cause you too much trouble. The definition of g.r. comes in when you're simulating g.r. in Floop. It means that your proof of the inductive case for the mu-operator isn't so much like the Bloop inductive case for primitive recursion, but the argument should be straightforward -- all you have to do is explain how to implement the mu-operator in Floop.

I _think_ if you replace primitive recursion with the bounded mu-operator then you get exactly p.r., but don't quote me yet...

Question 4.5, 24 March 2004

For the proof in Question 2, can we reuse the results that every primitive recursive function can be implemented in Bloop and that the output of any Bloop program is primitive recursive? That is, can we argue that in order to prove "general recursive iff Floop" we need only add a case to the induction steps of the previous proofs: one for the unbounded mu-operator (going from left to right) and one for while loops (going from right to left)? Or do we have to write out the whole proof again?

You may absolutely use the fact that p.r. = Bloop.

In general you may use _any_ previously proved fact unless this would somehow trivialize the problem. "As we proved on HW2, ..." is a valid justification even if you only read the proof in the solutions.

Question 4.4, 24 March 2004

The primitive recursive functions are f(x) = 0, the successor function, the projection functions, and any function that can be made by combining these elements with function composition or primitive recursion.

Is it necessary that every primitive recursive function have each of the elements f(x) = 0, successor and projection? Or only that you can make a function using one or more of those elements, and the allowable ways of combining them?

There is no rule saying that you must use all the base cases. For example, the "+2" function is p.r. using composition on two copies of the successor function, and this derivation does not use the other two base cases. One could use them pointlessly in a different derivation, I suppose, but there is no rule requiring you to.
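To make this concrete, here is the "+2" derivation written as a Python sketch (illustration only; the formal derivation works with function schemes rather than programs):

 def zero(x):
     # base function: the constant zero function
     return 0

 def succ(x):
     # base function: successor
     return x + 1

 def plus_two(x):
     # "+2" by composing two copies of succ; this derivation never
     # needs zero or the projection functions
     return succ(succ(x))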

Question 4.3, 24 March 2004

I'm very confused by [BE] Problem 18.11 and 18.12 -- I don't understand the relationship between the truth assignment and the model satisfying a statement. How, for example, could the statement "b=b" possibly be false?

The point of these two questions is to focus on exactly this relationship, so you're confused about the right thing. The normal situation is to first have a model M, then define the truth values of all formulas according to the Tarski definition. Here in 18.11 they create a truth assignment in a different way, and in 18.12 they create a model out of the truth assignment.

There is an unfortunate typo in 18.12, where they refer to a model M where they mean Mh.

You are right that in any first-order structure with a constant or variable b, "b=b" must be true. This is because the Tarski truth definition for "b=b" is whether "b" and "b" refer to the same object in the domain, which of course they do. But in 18.12 we are starting with an arbitrary truth assignment to the atomic formulas of the language. So each formula of the form R(a,b) gets a truth value, and so do "a=b" and "b=b". Since the assignment is arbitrary, there is no reason "b=b" has to be true.

What you are looking at here is the extent to which we can create a structure Mh to match h. We can take care of the atomic formulas, except for equality, by the given definition. This forces certain other sentences to have the property given in part 1, as you'll show. But we can't handle an arbitrary truth assignment for sentences with the identity symbol or quantifiers, and parts 2 and 3 just give examples showing that this is the case.

In Lecture 15 we'll see the trouble we have to go through to make a structure that models exactly an arbitrary FO-consistent set of sentences. (FO-consistent means that you can't use the Fitch rules to get a contradiction from the set.) But you shouldn't need that in order to solve these problems.

Question 4.2, 24 March 2004

I'm trying to solve Question 1 by proving, for each p.r. function f, that f doesn't compute the Ackermann function. I'm having trouble with some of the inductive steps...

Remember that with the busy-beaver function, we proved not just that every total recursive function was unequal to busy-beaver, but that every such function is eventually exceeded by busy-beaver. The same idea is useful here. You want to prove by induction on all p.r. functions that those functions are eventually exceeded by the Ackermann function. Here's a hint -- Ackermann is often defined in terms of a binary function A(i,j), where the non-p.r. function of n is then A(n,n). If you can prove that for every p.r. function f there exists an i such that f is eventually exceeded by A(i,n), this is good enough, and it might be easier.
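For reference, here is one standard variant of the binary Ackermann function as Python (definitions differ slightly from text to text, so check against the one in the notes):

 def A(i, j):
     # A standard two-argument Ackermann function; the unary function
     # that eventually exceeds every p.r. function is then A(n, n).
     if i == 0:
         return j + 1
     if j == 0:
         return A(i - 1, 1)
     return A(i - 1, A(i, j - 1))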

Question 4.1, 24 March 2004

I'm confused by the definition of the general μ-operator on slide 16 of Lecture 11. Where does the variable y come from?

You are right to be confused, because the y is a mistake. Delete the part about "x<y". The unbounded μ-operator gives a function g(y1,...,yk) that just looks for the first x such that f(x,y1,...,yk)=0 and is undefined if there is no such x.

Last modified 25 March 2004