This page is for student questions and my answers related to HW#3. Questions are in black, answers in blue.
Following up on Question 3.10 below...
I think I'm getting there. The only problem that I have left now is how to show that f is Bloop computable. I mean, do I need to show that I can simulate each of the Turing machine's actions by a Bloop program, and count its steps? For example, I know that I can perform the successor function with a Turing machine in O(n) steps. How can I show that this number of steps is indeed Bloop computable? Do I have to show exactly what the Turing machine is doing, and that I could do the same algorithm using Bloop operations, and count the steps needed to do that? Or is there some simpler way to show this? Or is the notion O(n) already sufficient, since f(n) = n in this case, and this function can of course be computed by the Bloop program "return n", independently of the TM? This would mean that the time bound for the successor function is Bloop computable, even though finding this time bound might not be.
I'm certainly willing to believe that the function "n" is a Bloop-computable function of n.
In other words, do I have to show that a given operation can be computed by a TM in O(g(n)) steps, and that the function g(n) can generally be computed by a Bloop program independently of where g(n) comes from, or do I have to show that there is actually a Bloop program counting the steps the TM needs, in other words, that some Bloop program could come up with g(n) itself without me reasoning about the TM first?
What you need to do is to prove that for every program block, the
TM and the Bloop program for the time bound both exist. Certainly
the TM and the time bound program will depend on the block you're
simulating.
So if the block is the statement "x++", and you can argue that if x
has value n before this block then your TM will simulate the block
in n steps, then because "n" is a Bloop-computable function you are done.
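For concreteness, here is that same point as a few lines of Python (the function name is made up and purely illustrative): the time bound for the "x++" block is just the identity function, about as Bloop-computable as a function can get.

    def time_bound_xpp(n):
        """Bound on the TM steps needed to simulate "x++" when x has
        value n before the block.  "return n" is itself a one-line
        Bloop program, so this bound is Bloop-computable."""
        return n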
Or, to put it yet another way (it's really hard for me to explain what my problem is, I guess): must there be a Bloop program that finds an upper bound on the TM's computation steps, or can I find an upper bound myself using whatever reasoning I want, and then just show a Bloop program that computes that upper bound using n as an input?
You need only show that the time-bound Bloop program exists for any
original program block that computes an upper bound on the time of your
TM simulating the block. Any valid argument that the time bound is
correct is ok -- the time-bound program's code need not be derived in
any particular way from the simulated program's code.
But the natural way to construct the time-bound program is by
induction on the structure of the simulated block. If you do it
this way there will of course be some similarity between the structures
of the time-bound program and the block, in where they make function
calls and so forth.
In the slides for Lecture 11, p. 8, you mention that the key to the last problem in HW 3 lies in coding sequences of numbers as single numbers. I am not sure how to use this in answering the question. Currently my approach is to use induction on small Bloop programs: compute them using Turing machines, demonstrate how they can be computed within f(n) steps, and then show that larger programs are built from smaller ones. Am I missing something?
No, you are not missing anything, and my statement in lecture was at best misleading. I did say "a key step" rather than "the key step", you must admit, but I'm still wrong. What I was referring to is that we need coding of sequences to show that the COMP predicate is p.r. or Bloop computable. But this is relating p.r. functions to TM's in a different way than is done in HW#3 problem 6. The approach you describe above is the correct one for that problem.
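In case the sequence-coding trick is useful elsewhere, here is a minimal Python sketch of one standard version of it, Godel's prime-power encoding (the helper names are made up, and trial division is used only because this is a sketch):

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division."""
        n, found = 2, []
        while True:
            if all(n % p for p in found):
                found.append(n)
                yield n
            n += 1

    def encode(seq):
        """Code (a0, ..., ak) as 2^(a0+1) * 3^(a1+1) * ...
        The +1 in the exponents keeps trailing zeros from being lost."""
        code, gen = 1, primes()
        for a in seq:
            code *= next(gen) ** (a + 1)
        return code

    def decode(code):
        """Recover the sequence by counting how often each prime divides it."""
        seq, gen = [], primes()
        while code > 1:
            p, e = next(gen), 0
            while code % p == 0:
                code, e = code // p, e + 1
            seq.append(e - 1)
        return seq

    assert decode(encode([3, 0, 2])) == [3, 0, 2]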
I have one question concerning question 6 of the homework. Do I understand correctly that the time bound f(n) depends on the input value we feed into P? Or, in other words, is the n in P(n) the same as the one in f(n)?
The two n's are the same. You must show that for every program P, there exists a TM M and a Bloop-computable function f such that for all input numbers n, M on input n outputs the number P(n) after at most f(n) steps.
My idea was to show inductively that each component of any Bloop program can
be computed by a TM in polytime, therefore the complete program can be
computed in polytime. f would therefore be some polynomial, and any
polynomial can be computed in Bloop since we have addition and multiplication
and such (or can implement it).
This is false, as P might be computing a number that is too large
for any TM to _store_ in polynomial space, hence too large to compute
in polynomial time.
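To see why, consider this hedged sketch of a Bloop-style program (written here in Python, name made up) whose output is far too big for polynomial time:

    def repeated_squaring(n):
        """Square n times starting from 2: a bounded loop, hence
        expressible in Bloop.  The result is 2^(2^n), a number with
        about 2^n bits, so merely writing the output down already
        takes time exponential in n."""
        x = 2
        for _ in range(n):
            x = x * x
        return x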
The induction is the right idea (using as a model the proof in lecture
11 that all Bloop-computable functions are p.r.). I think that once you
understand what you need to prove at each inductive step, it's
straightforward. For example, for the concatenation step you can assume
that you already have two TM's that compute the functions implemented
by each block, and a Bloop-computable time bound for each of these TM's.
You do have to be careful of the fact that the input to the second TM
is not n, but the output of the first TM...
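To make the bookkeeping in the concatenation case explicit, here is a hedged Python sketch (all names are made up). It assumes you also carry along, for each block, a Bloop-computable bound on the block's output, which is what lets you bound the second machine's running time:

    def concat_time_bound(t1, t2, s1):
        """t1(n): bound on TM 1's steps on input n
        s1(n): bound on TM 1's output value on input n (assumed monotone)
        t2(m): bound on TM 2's steps on input m (assumed monotone)
        Running block 1 then block 2 on input n takes at most
        t1(n) + t2(s1(n)) steps, since TM 2 runs on TM 1's output."""
        return lambda n: t1(n) + t2(s1(n))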
I am confused about how the example on slide 16 of Lecture 8 is a GJNDTM. In the questions and answers you mention that GJNDTM -> ONDTM is essentially done in the lecture. Could you please provide a little more insight into where I can find that information in the lecture?
That example is an ONTM, because its actions are given by its state table. But there are presumed to be a whole lot of other states that are only reachable from state q. If this is true, and the other states are all deterministic, then look at what the NTM does. It writes a string on the tape nondeterministically and then starts operating on that string deterministically. This is what a GJNTM does, except that the GJNTM also has an input tape that the deterministic part operates on as well as on the guess tape.
A question about the level of detail required in 601 HW3.1: can I simply assert the following in my proof that A = L(GJNTM), without proving it?
"We can simulate any arbitrary ONTM with a binary ONTM which may choose between no more than two possible transitions at each of its finite state transition table entries, as illustrated on slide 18 of lecture 8."
...or do I need to prove this to be so, by describing how to translate an arbitrary ONTM into a binary ONTM?
You should give some kind of justification of this, because as defined an ONTM might have a large constant number of possible transitions (any combination of new state, letter written, and action might be allowed).
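One way to justify it: a k-way choice can be replaced by a tree of binary choices of depth at most ceil(log2 k), which multiplies the running time by only a constant, since k is a constant for a fixed machine. A hedged Python sketch of the selection step (names made up):

    def select(options, bits):
        """Pick one of k options using at most ceil(log2 k) binary
        choices: each bit discards about half of the remaining options."""
        while len(options) > 1:
            half = (len(options) + 1) // 2
            options = options[:half] if bits.pop(0) == 0 else options[half:]
        return options[0]

    assert select(list("abcde"), [1, 0]) == "d"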
How can we show that the encoding of an arbitrary TM (to be fed as input to a universal TM) is not larger than some polynomial number of bits? I want to show that in the operation of a universal TM, the task of finding the rule from the input TM's transition function to apply takes only a polynomial number of steps. (I'm assuming this is the case otherwise Question 2 seems impossible.)
This seems hard to do without being able to assume things about the number of states or tape symbols - which we can't do if the TM we're working with is to be arbitrary.
If you are dealing with one TM at a time, its tape alphabet and number of states are _independent_ of its input size, meaning that they are O(1). This should solve your problem.
I had a neat idea for Question 6! Suppose I take my Bloop program B, define a new variable at the beginning, and ++ that variable after every step of B. Voila, I have a "profiler", written in Bloop, that simulates B and counts the number of steps it takes! The number of steps is Bloop-computable!
It is indeed a neat idea. The reason it doesn't completely solve your problem is that you're trying to show the function to be in F(DTIME[f]) for some Bloop-computable f, and running time there is defined in terms of multitape Turing machines. So if you can figure out how many TM steps it takes to implement a given Bloop step, and add that number to your time counter, you can implement your idea.
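A hedged Python sketch of that refinement (the cost function is a made-up stand-in for whatever per-step TM bound your own analysis gives):

    def tm_cost_increment(x):
        """Assumed bound on the multitape-TM steps needed to simulate
        one "x++" step -- a stand-in for whatever bound you derive."""
        return x.bit_length() + 1

    steps = 0                        # the extra "profiler" variable
    x = 5
    steps += tm_cost_increment(x)    # charge the TM cost, not just 1
    x += 1                           # the original Bloop step "x++"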
I'm having trouble getting started on Question 6, about Bloop programs. How do I prove something about an arbitrary Bloop program?
Bloop programs can be defined inductively, allowing you to prove facts about them by induction. In Lecture 11, notes for which are now up, this inductive definition is spelled out and used to prove that all Bloop programs calculate primitive recursive functions. This should be helpful for this problem.
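To make "defined inductively" concrete, here is a hedged Python sketch of such a definition (the constructors are made up and simplified; real Bloop has more statement forms). A proof about an arbitrary program then has one case per constructor:

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Increment:          # base case: a statement like "x++"
        var: str

    @dataclass
    class Concat:             # inductive case: block1 ; block2
        first: "Block"
        second: "Block"

    @dataclass
    class Loop:               # inductive case: "loop at most x times: body"
        bound_var: str
        body: "Block"

    Block = Union[Increment, Concat, Loop]

    def size(b: Block) -> int:
        """Structural induction in miniature: one case per constructor."""
        if isinstance(b, Increment):
            return 1
        if isinstance(b, Concat):
            return size(b.first) + size(b.second)
        return 1 + size(b.body)    # must be a Loop

    assert size(Concat(Increment("x"), Loop("n", Increment("x")))) == 3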
How does the GJNTM of Question 1 of HW#3 relate to the "verifier" in Sipser's discussion of NDTM's?
The discussion of NP in an algorithms course often says something
like "a problem is in NP if there is a guess-and-verify procedure for
it, where you guess a poly-length string and verify it with a poly-time
algorithm". Our GJNTM has two phases, a nondeterministic "guess" phase
where it writes down the string, and a deterministic "verify" phase
where it computes on the guessed string and its original input.
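The classic concrete illustration (a hedged Python sketch, not part of the homework): compositeness is "guess a factor, verify with one division".

    def verify_composite(n, guess):
        """Deterministic poly-time verify phase: is the guessed
        string (here read as a number) a nontrivial factor of n?"""
        return 1 < guess < n and n % guess == 0

    # The nondeterministic guess phase corresponds to an existential:
    # n is composite iff SOME guess makes the verifier accept.
    assert any(verify_composite(91, g) for g in range(2, 91))  # 91 = 7 * 13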
You need to show that if A is the language of some GJNTM it is the
language of some ONTM, and vice versa. One direction is easy and was
essentially done in lecture. The hard part is to simulate an ONTM with
a GJNTM. For this problem there is a back door, in that you can
simulate the ONTM by a DTM. You can solve this problem that way but you'd
be better off solving it the right way because this will help you on the
later problem where you care about time bounds.
It may be useful to note that the "string" guessed in the guess phase
could code some other kind of data.
I had just a small question on slide 15 of Lecture 7, where we prove rec = r.e. intersect co-r.e. I understand the proof, but I was curious as to what you meant when you showed that X_{S-bar}(x) = 1 - X_S(x) is also a recursive function. I understand that it is simply the opposite output... but what is the meaning of the subtraction from 1 in 1 - X_S(x)? Thanks,
Remember that chi_S(x) is the function that outputs the number
1 if x is in S and the number 0 if x is not in S. So if I subtract
the output of chi_S(x) from 1, I get 0 if x is in S and 1 if x is
not in S. This is exactly chi_{S-bar}(x).
(Here "chi" is the Greek letter you write as "X" above.)
Following up on Question 3.1, is it true that this table will contain recursive sets as well (which are r.e. by definition)?
Yes.
(All total recursive functions are partial recursive functions, right? I.e., the upwards-right arrow does not exclude halting with 0?)
It is true that all total rec functions are also partial rec functions
directly from the definition, because partial rec means "computable by
a TM" and total rec functions clearly satisfy this.
What you _mean_ is also sort of true, I think -- the function that proves
set A to be recursive (by returning 1 on x in A and 0 on x not in A) _also_
_indirectly_ proves that A is r.e., since the W-set corresponding to
this machine is {x: machine returns 1 on x}, which is exactly A.
But we _defined_ r.e. sets to have a machine that returns 1 on x in A
and fails to return anything on x not in A. The latter case includes
not halting and halting without a result properly on the tape, but not
returning a value other than 1.
How is it that we are able to construct the infinite two dimensional array of r.e. sets from Lecture 9 Slide 16? It seems that there should be no 0's in the table, because for an r.e. set, we never know when it rejects. It seems like only for recursive sets could we build such a table.
You are right that we cannot construct this table because we could
never be assured of any of the zero entries. The table is meant
as an illustration of what the sets K and K-bar are.
The table, viewed as the set {P(i,j): i in W_j}, is an r.e. set because
you can easily make a TM that on input i and j will return i if i is
in W_j and never halt otherwise.
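A hedged Python sketch of that machine (`simulate` is a hypothetical universal-machine call, stubbed out here because a real one would have to interpret machine j's encoded transition table):

    def simulate(j, i):
        """Hypothetical: run TM number j on input i, returning only if
        that run halts.  Illustrative placeholder only."""
        raise NotImplementedError("stand-in for a universal machine")

    def semi_decide_pair(i, j):
        """Halts (returning i) iff i is in W_j; otherwise runs forever."""
        simulate(j, i)    # may never return -- that is the point
        return i          # reached exactly when i is in W_j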
Is it the case that in principle we know what the elements of an r.e. set are but that we still cannot compute them?
We often talk about sets that we cannot decide. For an r.e. set there is a program that will eventually list every element, but in general no algorithm can decide, for a given number, whether it will ever appear in that list -- so in that sense we "know" the elements without being able to compute membership.
Last modified 5 March 2004