Q1: 10 points Q2: 10 points Q3: 10 points Q4: 10 points Q5: 10 points Q6: 10 points Q7: 20+10 points Q8: 40 points Total: 120+10 points
Question text is in black, solutions in blue.
The set N of natural numbers is {0, 1, 2, 3,...}, not quite as defined in Sipser.
For this exam, the input alphabet of all machines will be Σ = {0, 1}.
A palindrome is a string that is equal to its own reversal. Let PAL = {w: w is a palindrome}.
If C is any class of computers, such as DFA's, CFG's, TM's, strange variant TM's, etc., we define ANYPALC = {(M): M is in C and L(M) contains at least one palindrome}, ONLYPALC = {(M): M is in C and L(M) ⊆ PAL}, and ALLPALC = {(M): M is in C and PAL ⊆ L(M)}.
A language is Turing recognizable (TR) if it is equal to L(M) for some Turing machine M.
A language is Turing decidable (TD) if it is equal to L(M) for some Turing machine M that halts on every input.
A language is co-TR if and only if its complement is TR.
A function f from strings to strings is Turing computable if there exists a Turing machine M such that for any string w, M when started on w halts with f(w) on its tape.
Recall that if A and B are two languages, A is mapping reducible to B, written A ≤m B, if there exists a Turing computable function f: Σ* → Σ* such that for any string w, w ∈ A ↔ f(w) ∈ B.
A Volatile Memory Turing Machine or VMTM is a multitape deterministic Turing machine where Tape 1 is a read-only input tape and the other tapes have the following property: Any cell that has not been written to in the last ten time steps changes its content to the blank symbol, without the intervention of the head.
A Write-Once Turing Machine or WOTM is a deterministic one-tape Turing machine with the property that each cell may have its contents changed only once during a computation. That is, once a new character replaces the original contents of the cell, that new character remains there forever. There are no restrictions on the machine's reading or moving.
TRUE. A TM can, on input (M), run M on all palindromes in parallel (dovetailing the simulations), accepting if M accepts any of them.
Alternatively, we can write "(M) ∈ ANYPALTM" as "∃w: ∃h: PAL(w) ∧ ACH(M, w, h)", where ACH(M, w, h) says that h is an accepting computation history of M on w, showing that ANYPALTM is a Σ1 or TR language.
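The dovetailing needs an effective enumeration of PAL. A minimal Python sketch of that enumeration (the interleaved simulation of M itself is elided):

```python
from itertools import count, islice, product

def palindromes():
    """Yield every binary palindrome, in order of length."""
    for n in count(0):
        # A length-n palindrome is determined by its first ceil(n/2) symbols.
        for half in product("01", repeat=(n + 1) // 2):
            h = "".join(half)
            # Mirror the first half; drop the middle symbol when n is odd.
            yield h + (h[:-1] if n % 2 else h)[::-1]

# The dovetailer would interleave simulations of M on these, in this order:
print(list(islice(palindromes(), 9)))
# -> ['', '0', '1', '00', '11', '000', '010', '101', '111']
```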
FALSE. We will show that ATM-bar ≤m ONLYPALTM, which implies that ONLYPALTM is not TR because we know that ATM-bar is not TR.
Given any TM M and string w, we define f(M, w) to be a TM N such that on input x, N ignores x and runs M on w, accepting if and only if M accepts w. If w ∈ L(M), then L(N) = Σ* and (N) ∉ ONLYPALTM. If w ∉ L(M), then L(N) = ∅ and (N) ∈ ONLYPALTM.
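In Python, with total predicates standing in for Turing machines (a toy model -- the real f manipulates machine descriptions, not closures), the construction looks like:

```python
def f(M, w):
    """Map (M, w) to N, which ignores its own input x and just runs M on w."""
    def N(x):
        return M(w)          # accept x iff M accepts w, whatever x is
    return N

# Toy check with a stand-in "machine":
M = lambda s: s.startswith("1")   # "accepts" strings beginning with 1
N1 = f(M, "10")   # w in L(M), so L(N1) = Sigma*: N1 accepts non-palindromes
N2 = f(M, "01")   # w not in L(M), so L(N2) = empty, a subset of PAL
assert N1("01") and N1("0110")
assert not N2("01") and not N2("")
```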
TRUE. One way to show this is to first show that PAL-bar is a CFL. (It is the language of the grammar S → ΣSΣ, S → 0T1, S → 1T0, T → ΣTΣ, T → Σ, T → ε.) (Here Σ abbreviates "0 or 1".)
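We can sanity-check this grammar mechanically by computing each nonterminal's language up to a length bound with a fixed-point iteration. A brute-force sketch (not part of the proof; the helper names are my own):

```python
MAX = 7   # exhaustively check all binary strings up to this length

def closure(seed, step):
    """Grow a language under a grammar rule until no new string <= MAX appears."""
    lang = set(seed)
    while True:
        new = {w for w in step(lang) if len(w) <= MAX} - lang
        if not new:
            return lang
        lang |= new

# T -> sigma T sigma | sigma | epsilon   (T generates every binary string)
T = closure({"", "0", "1"},
            lambda L: {a + w + b for w in L for a in "01" for b in "01"})

# S -> sigma S sigma | 0 T 1 | 1 T 0
S = closure({"0" + w + "1" for w in T} | {"1" + w + "0" for w in T},
            lambda L: {a + w + a for w in L for a in "01"})
S = {w for w in S if len(w) <= MAX}

# Compare against the non-palindromes directly.
from itertools import product
nonpal = set()
for n in range(MAX + 1):
    for p in product("01", repeat=n):
        w = "".join(p)
        if w != w[::-1]:
            nonpal.add(w)
assert S == nonpal   # the grammar generates exactly the non-palindromes
```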
We can rewrite "(D) ∈ ONLYPALDFA" as "¬∃w: (w ∉ PAL) ∧ (w ∈ L(D))", or "X = ∅" where X is the language PAL-bar ∩ L(D).
Since X is the intersection of the CFL PAL-bar and the regular language L(D), we can reduce the question "(D) ∈ ONLYPALDFA" to the question "(GX) ∈ ECFG", where GX is a grammar for X that we can easily construct from D. Since ECFG is TD, so is ONLYPALDFA.
TRUE. For any grammar G, "(G) ∈ ANYPALCFG" can be rewritten as "∃w: PAL(w) ∧ w ∈ L(G)". The language PAL ∩ L(G) may not be a CFL, but it is certainly TD, so this definition shows that ANYPALCFG is Σ1 and thus TR.
Alternatively, we can simply use a decider for ACFG to test every palindrome (in series or parallel) to see whether any of them are in L(G).
TRUE. It is easy to see that AWOTM ≤m ATM, because an ordinary TM can just simulate a WOTM, stopping if it violates the write-once condition.
The interesting direction is to simulate an arbitrary (ordinary, one-tape, deterministic) TM with a WOTM, thus showing ATM ≤m AWOTM. (The two languages ATM and AWOTM are thus each reduced to the other, so we know that the latter is TR and not TD because the former is.)
The basic idea is to simulate the ordinary TM by constructing its computation history on the tape of the WOTM. We begin with the input w on the tape, as the configuration c0. We then successively construct the successor configurations c1, c2, etc., to the right of c0, without changing the latter, until or unless we detect an accepting configuration. To do this we need to copy each ci with the appropriate local changes around the head position, so that the copy is the correct ci+1.
There is a subtlety involved in this copying, and I only gave full credit to the few people who addressed it. We discussed how to copy a string from one place on the tape to another in lecture. We can only copy one character at a time (well, maybe O(1) characters) so we must make repeated passes from the original to the copy and back. On each return to the original, we must find the last character that we copied so that we can choose the right one to copy next. In our early examples we did this by deleting the characters as we copied them, but that won't work here because we already wrote these characters over a blank and can't change them.
One way to get around this is to leave a blank space between every pair of consecutive letters in a copy. Then we can "mark" a letter we have copied by marking the space after it, without violating the write-once restriction.
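A small sketch of the spacer trick, with a Python tape object enforcing the write-once restriction (the names and cell layout are my own):

```python
BLANK = None

class WriteOnceTape:
    """Each cell's contents may be changed at most once, as on a WOTM."""
    def __init__(self):
        self.cells, self.written = {}, set()
    def read(self, i):
        return self.cells.get(i, BLANK)
    def write(self, i, c):
        if i in self.written:
            raise RuntimeError(f"cell {i} written twice")
        self.written.add(i)
        self.cells[i] = c

def copy_spaced(tape, src, dst, n):
    """Copy an n-symbol string laid out with a spacer after each symbol
    (symbol k at cell src + 2k). Each pass re-finds the first unmarked
    symbol by scanning the spacers, marks that spacer, and writes one
    symbol of the copy -- so no cell is ever changed twice."""
    for _ in range(n):
        k = 0
        while tape.read(src + 2 * k + 1) == "#":
            k += 1                              # skip already-copied symbols
        tape.write(src + 2 * k + 1, "#")        # mark symbol k as copied
        tape.write(dst + 2 * k, tape.read(src + 2 * k))

# Lay out "0110" with spacers at the odd cells, then copy it to cell 100 on.
t = WriteOnceTape()
for k, c in enumerate("0110"):
    t.write(2 * k, c)
copy_spaced(t, 0, 100, 4)
assert "".join(t.read(100 + 2 * k) for k in range(4)) == "0110"
```

The copy is itself spaced, so the same trick works when it in turn must be copied.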
TRUE. There are at least two solutions to this, one simple and one sophisticated. The simple one is to note that any TM that never halts has this property, since the implication "if it halts on w leaving n, then it halts on w leaving n + 1" is true vacuously for any w. (As a deterministic TM, R can't halt on the same input with two different outputs, which implies that any correct R must never halt, or at least never halt with a natural number as output.)
The sophisticated solution recognizes that the language of the problem suggests the Recursion Theorem. If we simply use the Recursion Theorem to try to construct such an R, we succeed. R obtains its own description, runs itself on w, and if the simulation halts with output n, itself halts and outputs n + 1. Such an R does exist, and of course it never outputs a natural for any input.
Let R be an arbitrary regular language, and let D be a DFA such that R = L(D), which exists by Kleene's Theorem. The easiest solution simulates D with a VMTM that ignores its volatile tapes and simply makes one pass left to right over the read-only input tape, using its own state to keep track of the state of D. It accepts if and only if D is in a final state at the end of the input. Another solution is to keep the state of D on one of the volatile tapes -- the volatility does not hurt because we only need to keep the state on the tape long enough to read the next character of input.
Note that if D were not a constant for the machine, this simulation would not work because we might need more than ten tape letters to record the state of D. So we cannot show ADFA to be VMTM-decidable by this method.
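The one-pass simulation is easy to sketch; the transition-table format and the example DFA here are my own:

```python
def run_dfa(delta, start, finals, w):
    """One left-to-right pass over w, tracking D's state -- exactly what
    the VMTM does in its finite control while ignoring its volatile tapes."""
    q = start
    for c in w:
        q = delta[q, c]
    return q in finals

# D: binary strings with an even number of 1s (an arbitrary example)
delta = {("e", "0"): "e", ("e", "1"): "o",
         ("o", "0"): "o", ("o", "1"): "e"}
assert run_dfa(delta, "e", {"e"}, "0110")
assert not run_dfa(delta, "e", {"e"}, "010")
```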
The key point is that any cell of a volatile tape that is more than five spaces away from the head is irrelevant to the rest of the computation: it was last written at least six time steps ago, and the head needs at least six more steps to reach it, so it will be more than ten steps old, and hence blank, before the head can read it. So we can simulate each volatile tape with a window of only eleven cells, recording only those cells within five spaces of the head.
We can thus simulate the entire VMTM with a two-way DFA, using the state of the 2WDFA to represent all the volatile tapes and moving back and forth on the read-only input tape as needed. The language A2WDFA is decidable, because a 2WDFA on an input of size n has only O(n) possible configurations (a state paired with a head position), so we need only simulate it for O(n) steps, until it either halts or is twice in the same state with its head in the same place. Since the machine is deterministic, in the latter case the 2WDFA will never halt.
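The loop-detection argument can be sketched directly: simulate until acceptance, a stuck configuration, or a repeated (state, position) pair. The transition format, endmarkers, and example machines below are my own toy conventions:

```python
def accepts_2wdfa(delta, start, accept, w):
    """Simulate a 2WDFA on w (between endmarkers < and >). A repeated
    (state, position) configuration means the deterministic machine
    will never halt, so we can safely reject."""
    tape = "<" + w + ">"
    q, pos, seen = start, 0, set()
    while q != accept:
        if (q, pos) in seen:              # at most |Q| * (n + 2) configurations
            return False                  # repeat: the machine loops forever
        seen.add((q, pos))
        if (q, tape[pos]) not in delta:
            return False                  # stuck: halt and reject
        q, move = delta[(q, tape[pos])]
        pos = max(0, min(len(tape) - 1, pos + move))
    return True

# A 2WDFA that scans right and accepts at the right endmarker:
scan_right = {("s", c): ("s", +1) for c in "<01"}
scan_right[("s", ">")] = ("acc", 0)

# A 2WDFA that ping-pongs between the left endmarker and the first 0:
looping = {("s", "<"): ("s", +1), ("s", "0"): ("s", -1)}

assert accepts_2wdfa(scan_right, "s", "acc", "0101")
assert not accepts_2wdfa(looping, "s", "acc", "0")
```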
As we showed above, we can simulate any fixed VMTM with a 2WDFA. We asserted but did not prove in class that the language of any 2WDFA is regular. It is not enough to observe that the simulating machine uses only O(1) memory, because an ordinary DFA can only move in one direction on its input and this machine can move in both directions.
Given an arbitrary TM M, we need f(M) to be a TM M' such that L(M) = Σ* if and only if PAL ⊆ L(M'). Thus we need there to be a string w with w ∉ L(M) if and only if there is a palindrome p such that p ∉ L(M'). Many of you wanted to have the function f take a string w to the palindrome wwR, but this confuses the types of the objects involved. We need f to input a Turing machine and output another Turing machine.
One solution is to have M', on input x, simulate M on the first half of x, accepting if and only if M accepts. Then if w ∉ L(M), the palindrome wwR is not in L(M') because on input wwR, M' simulates M on w. Note that we don't care what M' does on non-palindromes.
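A sketch of this construction in the same toy model, with a Python predicate standing in for M (the exact convention for odd-length inputs does not matter for the proof, since the witnesses wwR have even length):

```python
def f(M):
    """Map M to M', which on input x runs M on the first half of x."""
    def M_prime(x):
        return M(x[:(len(x) + 1) // 2])   # first ceil(|x|/2) symbols
    return M_prime

# Toy check with a stand-in machine whose language is not Sigma*:
M = lambda s: s.startswith("1")    # some w, e.g. "0", is not in L(M)
Mp = f(M)
assert not Mp("00")                # so the palindrome wwR = "00" is not in L(M')
assert Mp("1001")                  # palindromes whose first half M accepts are in L(M')
```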
This is an easy one. Our function f can be one that takes any grammar (G) as input and outputs a Turing machine that decides L(G). (For example, f(G), on input w, might run the known decider for ACFG on the input (G, w).) Then L(G) = L(f(G)), and either both these languages are Σ* or neither is.
Following the hint, let f(M, w) be a machine N that ignores its input and runs M on w, just as in Question 2 above. Then if (M, w) ∈ ATM, L(N) = Σ* and (N) ∈ ALLTM. If (M, w) ∉ ATM, then L(N) = ∅ and (N) ∉ ALLTM. So f is the desired reduction.
We know that the relation ≤m is transitive and that the classes TR and co-TR are both closed downward under ≤m.
Since ATM ≤m ALLTM by (c), and ALLTM ≤m ALLPALTM by (a), and ATM is not co-TR, we know that ALLPALTM is not co-TR.
The proof in lecture showed that ATM-bar ≤m ALLCFG. Since in (b) we showed that ALLCFG ≤m ALLTM, and in (a) we showed that ALLTM ≤m ALLPALTM, we know that ATM-bar ≤m ALLPALTM. Since ATM-bar is not TR, neither is ALLPALTM.
Last modified 6 April 2016