Question text is in black, solutions in blue.
Q1: 30 points Q2: 35 points Q3: 60 points Total: 125 points
X is neither an equivalence relation nor a partial order. It is
pretty clearly reflexive and transitive. It need not be symmetric
because if, for example, Ace has all of Biscuit's playthings plus
the tennis ball, X(b, a) is true but X(a, b) is false. It need
not be antisymmetric because two different dogs might have exactly
the same set of playthings.
Y is an equivalence relation but not a partial order. It is
also pretty clearly reflexive and transitive, and it is symmetric
because switching g and g' in the definition leads to an
equivalent statement. But it need not be antisymmetric, again
because two different dogs could have exactly the same set of playthings.
We must prove ∀p: H(c, p) → H(d, p). Let p be an arbitrary Plaything and assume H(c, p). By specification on Y(a, c), we know that H(a, p) ↔ H(c, p) and therefore that H(a, p) is true. By specification on X(a, b), we know that H(a, p) → H(b, p) and therefore that H(b, p) is true. Finally, by specification on Y(b, d), we know that H(b, p) ↔ H(d, p) and thus that H(d, p) is true. We have completed a direct proof of H(c, p) → H(d, p) for an arbitrary p, so we have proved the desired ∀ statement by generalization.
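The chain of specifications above can be checked on a concrete model in which a dog's playthings form a set, X(x, y) holds when every plaything of x is also one of y's (subset), and Y(x, y) holds when the sets are equal. The particular sets below are invented for illustration; only the relational structure matters:

```java
import java.util.Set;

public class DogChain {
    // X(x, y): every plaything of x is also a plaything of y (subset).
    static boolean X(Set<String> x, Set<String> y) { return y.containsAll(x); }

    // Y(x, y): x and y have exactly the same playthings (set equality).
    static boolean Y(Set<String> x, Set<String> y) { return x.equals(y); }

    public static void main(String[] args) {
        Set<String> a = Set.of("ball", "rope");            // Ace
        Set<String> b = Set.of("ball", "rope", "bone");    // Biscuit
        Set<String> c = Set.of("ball", "rope");            // Cardie, same set as Ace
        Set<String> d = Set.of("ball", "rope", "bone");    // Duncan, same set as Biscuit

        // Given Y(a, c), X(a, b), and Y(b, d), every plaything of c is one of d's.
        if (Y(a, c) && X(a, b) && Y(b, d)) {
            System.out.println(X(c, d)); // true
        }
    }
}
```

The proof in the text is the general argument; this model just exhibits one instance where the hypotheses and conclusion can be checked mechanically.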
H(c, r) ⊕ (H(d, s) → H(b, s)). Hardly anyone interpreted the "only if" correctly. "Duncan has it only if Biscuit has it" in English means the same thing as "If Duncan has it, Biscuit also has it". The placement of the comma indicates that the → operation is evaluated before the ⊕ operation.
If Ace has the Tennis Ball or Cardie has the Rope Chew, or both, then Duncan has the Tennis Ball or Ace has the Rope Chew, or both.
Statement I need not be true given the assumptions, since they say
nothing about whether Cardie has the Rope Chew, and thus the
entire statement might be either true or false even though the
implication within it is true.
Statement II can be proved by direct proof and proof by
cases. Assume H(a, t) ∨ H(c, r) for the direct proof. For
the first case, assume H(a, t). Then H(b, t) follows by
specification on X(a, b) and Modus Ponens, and H(d, t) follows by
specification on Y(b, d) and equivalence, so H(d, t) ∨ H(a, r)
follows by joining. For the other case, assume ¬H(a, t),
which means by the assumption that H(c, r) is true. By
specification on Y(a, c) and equivalence, we know that H(a, r) is
true, and thus H(d, t) ∨ H(a, r) follows by joining. We have
proved our conclusion in both cases and have thus completed a
direct proof of the implication.
The transitive closure of the λ-moves contains the four
existing λ-moves plus (1, λ, 5) and (3,
λ, 5).
The move (1, b, 3) yields itself, (1, b, 2), and (1, b, 5)
because 1 can be reached only from itself and 3 can go to
itself, 2 or 5.
The move (4, a, 2) yields itself, (4, a, 5), (1, a, 2), and
(1, a, 5), because 4 can be reached from 1 and 2 can reach 5.
The move (5, a, 5) yields (1, a, 5), (2, a, 5), (3, a, 5),
(4, a, 5), and itself, because 5 can reach only itself but can
be reached from anywhere.
We have created twelve letter moves, which reduce to ten
when we remove duplicates. State 1 becomes a final state
because we can reach a final state from it on λ, so we
have the same five states, the same start state, final state
set {1, 2, 5}, and the ten transitions above.
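The λ-closure step is an ordinary transitive closure computation. A minimal sketch using Warshall's algorithm follows; the four original λ-moves are not listed in this excerpt, so we assume they are (1, λ, 4), (4, λ, 5), (3, λ, 2), and (2, λ, 5), which is consistent with the closure and reachability facts described above:

```java
public class LambdaClosure {
    // Warshall's algorithm: r becomes the transitive closure of itself.
    static boolean[][] closure(boolean[][] r) {
        int n = r.length;
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (r[i][k] && r[k][j]) r[i][j] = true;
        return r;
    }

    public static void main(String[] args) {
        boolean[][] r = new boolean[6][6]; // states 1..5; index 0 unused
        // Assumed original λ-moves (consistent with the text's closure):
        r[1][4] = r[4][5] = r[3][2] = r[2][5] = true;
        closure(r);
        // The closure adds exactly (1, λ, 5) and (3, λ, 5).
        for (int i = 1; i <= 5; i++)
            for (int j = 1; j <= 5; j++)
                if (r[i][j]) System.out.println("(" + i + ", λ, " + j + ")");
    }
}
```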
The start state is {1}, which is final. State {1} on a goes to {2, 5}
and on b goes to {2, 3, 5}. State {2, 5} on a goes to {5} and
on b goes to ∅, the death state (which goes to itself on
both a and b). State {2, 3, 5} also goes to {5} on a and to
the death state on b. State {5} also goes to itself on a and
to the death state on b. We are done, with five total states,
and every state except the death state is final.
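The subset construction just performed can be sketched mechanically. The ten letter moves below are the ones derived in the λ-elimination discussion above; starting from {1}, the worklist discovers exactly the five DFA states named in the text:

```java
import java.util.*;

public class SubsetConstruction {
    // Ten letter moves of the λ-free NFA derived above: (source, letter, target).
    static final int[][] MOVES = {
        {1,'b',3}, {1,'b',2}, {1,'b',5}, {4,'a',2}, {4,'a',5},
        {1,'a',2}, {1,'a',5}, {5,'a',5}, {2,'a',5}, {3,'a',5}
    };

    // Subset construction: all DFA states reachable from the start state {1}.
    static Set<Set<Integer>> dfaStates() {
        Deque<Set<Integer>> todo = new ArrayDeque<>();
        Set<Set<Integer>> seen = new LinkedHashSet<>();
        Set<Integer> start = Set.of(1);
        todo.add(start); seen.add(start);
        while (!todo.isEmpty()) {
            Set<Integer> s = todo.remove();
            for (char c : new char[]{'a', 'b'}) {
                Set<Integer> t = new TreeSet<>();
                for (int[] m : MOVES)
                    if (s.contains(m[0]) && m[1] == c) t.add(m[2]);
                if (seen.add(t)) todo.add(t); // new state discovered
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // Five states in all: {1}, {2, 5}, {2, 3, 5}, {5}, and ∅ (the death state).
        for (Set<Integer> s : dfaStates()) System.out.println(s);
    }
}
```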
If we choose to minimize this DFA, we first divide into
classes F and N, and find that state {1} must be separated out
because it goes to F on both a and b, while the other three
states in F go to F on a and to N on b. Then we find that all
three states in the non-trivial class, {2, 3, 5}, {2, 5}, and
{5}, go to this class on a and to N on b. So our minimized
DFA has three states, {1}, X, and N, with {1} and X final, {1}
the start state, and transitions ({1}, a, X), ({1}, b, X), (X,
a, X), (X, b, N), (N, a, N), and (N, b, N).
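As a sanity check on the minimization, the five-state DFA and the three-state minimized DFA can be compared by brute force; the tables below encode the two machines exactly as described above (state 0 is the start state in each, column 0 is the a-move, column 1 the b-move):

```java
public class MinimizeCheck {
    // Five-state DFA: 0={1}, 1={2,5}, 2={2,3,5}, 3={5}, 4=death.
    static final int[][] BIG = { {1, 2}, {3, 4}, {3, 4}, {3, 4}, {4, 4} };
    static final boolean[] BIG_FINAL = { true, true, true, true, false };

    // Minimized DFA: 0={1}, 1=X, 2=N.
    static final int[][] MIN = { {1, 1}, {1, 2}, {2, 2} };
    static final boolean[] MIN_FINAL = { true, true, false };

    // Run a DFA given as a transition table and final-state vector.
    static boolean run(int[][] delta, boolean[] fin, String s) {
        int q = 0;
        for (char c : s.toCharArray()) q = delta[q][c - 'a'];
        return fin[q];
    }

    // Compare the two machines on every string over {a, b} up to maxLen.
    static boolean agreeUpTo(int maxLen) {
        for (int len = 0; len <= maxLen; len++)
            for (int bits = 0; bits < (1 << len); bits++) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < len; i++)
                    sb.append(((bits >> i) & 1) == 0 ? 'a' : 'b');
                String s = sb.toString();
                if (run(BIG, BIG_FINAL, s) != run(MIN, MIN_FINAL, s)) return false;
            }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(agreeUpTo(8)); // true: the machines accept the same strings
    }
}
```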
Starting from M, we can keep 1 as the start state but we must add a
new final state f and transitions (2, λ, f) and (5, λ,
f). We kill state 3, making the single transition (1, b, 2). We kill
state 4, making the two transitions (1, a, 2) (which merges into (1,
a+b, 2)) and (1, λ, 5). We then kill 2, giving (1, a+b, 5)
(which merges into (1, λ+a+b, 5)) and (1, a+b, f). Finally we
kill 5, giving (1, a + b + (λ+a+b)a*, f) and a final
regular expression of a + b + (λ+a+b)a*.
Starting from the minimized D, we add a new final state f, add two
transitions ({1}, λ, f) and (X, λ, f), kill the death
state making no new transitions, and kill X so that we can read off
the final regular expression of λ + (a + b)a*. This
is easily seen to be equivalent to the regular expression above.
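That equivalence can also be checked by brute force: both expressions denote λ + (a + b)a*, i.e. the empty string or a single letter followed by any number of a's. Here is a small sketch with one hand-written matcher per expression, compared on all short strings over {a, b}:

```java
public class RegexCheck {
    static boolean allA(String s) { return s.chars().allMatch(c -> c == 'a'); }

    // λ + (a+b)a* : empty, or one letter followed by any number of a's.
    static boolean r2(String s) {
        return s.isEmpty()
            || ((s.charAt(0) == 'a' || s.charAt(0) == 'b') && allA(s.substring(1)));
    }

    // a + b + (λ+a+b)a* : a single letter, or a choice of λ/a/b followed by a's.
    static boolean r1(String s) {
        if (s.equals("a") || s.equals("b")) return true;
        if (allA(s)) return true; // λ prefix, then a*
        return !s.isEmpty()
            && (s.charAt(0) == 'a' || s.charAt(0) == 'b')
            && allA(s.substring(1)); // a or b prefix, then a*
    }

    // Compare the two matchers on every string over {a, b} up to maxLen.
    static boolean equivalentUpTo(int maxLen) {
        for (int len = 0; len <= maxLen; len++)
            for (int bits = 0; bits < (1 << len); bits++) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < len; i++)
                    sb.append(((bits >> i) & 1) == 0 ? 'a' : 'b');
                if (r1(sb.toString()) != r2(sb.toString())) return false;
            }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(equivalentUpTo(6)); // true
    }
}
```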
We first make states s and f, and implement (s, aba, f) by making two new states 1 and 2 and transitions (s, a, 1), (1, b, 2), and (2, a, f). To implement (s, (b + a∅b)*ba, f), we first handle the concatenation by making two new states 3 and 4 and transitions (3, b, 4) and (4, a, f). It then remains to implement (s, (b + a∅b)*, 3). To do this we need two new states 5 and 6 and transitions (s, λ, 5), (5, λ, 6), (6, λ, 5), and (6, λ, 3). We then have to implement (5, b + a∅b, 6). We add a transition (5, b, 6), then add two new states 7 and 8 and transitions (5, a, 7) and (8, b, 6). We implement (7, ∅, 8) by doing nothing at all -- ∅ represents the empty set, and the λ-NFA for ∅ just has a start and a final state with no transitions at all.
We define WV(λ) to be 0. If w is any string and a is any letter, we define WV(wa) to be WV(w) + LV(a).
Write a method int wv (string w) to compute WV for any pseudo-Java string, using the base methods isEmpty, last, and allButLast. (Recall that last("DOG") is 'G' and that allButLast("DOG") is "DO". Also remember that these are pseudo-Java string values, where string is a primitive type, not real Java String objects.) Don't worry about input strings that are not over the correct alphabet. You are given a method int lv (char ch) that returns LV(ch) for any character ch in that alphabet.

    int wv (string w) {
        if (isEmpty(w)) return 0;
        return wv(allButLast(w)) + lv(last(w));
    }

I did not take off for using the real-Java syntax w.isEmpty(), etc., or for saying w.length == 0 instead of using isEmpty. I didn't even mind if you mixed real-Java w.last(), etc., with pseudo-Java w == "".
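For reference, the same recursion is directly runnable in real Java. The letter-value table is not specified in this excerpt, so the lv below is a stand-in that (purely as an assumption for illustration) gives every letter value 1:

```java
public class WordValue {
    // Stand-in for the given lv; the real letter values are defined elsewhere,
    // so we assume every letter has value 1 for illustration.
    static int lv(char ch) { return 1; }

    // WV(λ) = 0; WV(wa) = WV(w) + LV(a).
    static int wv(String w) {
        if (w.isEmpty()) return 0;
        return wv(w.substring(0, w.length() - 1)) + lv(w.charAt(w.length() - 1));
    }

    public static void main(String[] args) {
        System.out.println(wv(""));    // 0
        System.out.println(wv("DOG")); // 3 under the all-ones assumption
    }
}
```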
I said "you can build" the DFA, which doesn't excuse you from doing
it! The DFA I had in mind had state set {0, 1, 2, ..., n, n+1},
start state 0, final state set {n}, and δ(i, a) = min(n + 1, i
+ lv(a)) for every state i and letter a. This makes
δ*(0, w) equal to min(n + 1, WV(w)) for any string
w, so that w ∈ L(M) if and only if WV(w) = n.
An alternate argument is that for any fixed n, the language
C_n is finite, and any finite language is regular. To get
full credit for this you had to argue why C_n is finite
(since each letter has an lv of at least 1, there can be at most n
letters), and why every finite language is regular (describe a regular
expression, NFA, or DFA for it).
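The DFA described above is easy to simulate directly: the state records the weight seen so far, capped at the trap value n + 1. The lv here is again the assumed all-ones stand-in, under which the machine accepts exactly the strings of length n:

```java
public class WeightDFA {
    static int lv(char ch) { return 1; } // assumed all-ones letter values

    // δ(i, a) = min(n + 1, i + lv(a)); accept iff δ*(0, w) == n.
    static boolean accepts(String w, int n) {
        int state = 0; // start state 0
        for (char ch : w.toCharArray())
            state = Math.min(n + 1, state + lv(ch));
        return state == n; // final state set is {n}
    }

    public static void main(String[] args) {
        System.out.println(accepts("DOG", 3));  // true: WV = 3
        System.out.println(accepts("DOGS", 3)); // false: overshoots into state n + 1
    }
}
```

State n + 1 is the "dead" state: once the running weight exceeds n it can never come back down, which is why capping at n + 1 loses nothing.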
"11111$DOG".) Is Z a regular language? Prove your answer. Is Z the language of any two-way DFA?
Z is not regular. For any two distinct natural numbers i and j, the
strings 1^i and 1^j are not Z-equivalent, because for example
1^i$O^i is in Z and 1^j$O^i is not. I gave only partial credit for "a
DFA must know the number of 1's when it reaches the $" -- this is a
correct intuition but not a proof.
Many of you argued either that a DFA could test the language
1*$(A+B+...+Z)*, which is the wrong language
because it includes strings where n ≠ WV(w), or that Z is regular
because C_n is regular. The latter misunderstands the
definition of Z -- for any fixed n the set of strings in Z with WV(w)
= n is regular, but Z includes strings for all possible values of n.
Since Z is not regular it has no two-way DFA, by a result quoted in
lecture.
Z is Turing decidable and thus also Turing recognizable. The easiest argument is that a Java program could compute n, compute WV(w), and compare them (after checking that the input is of the correct form), and so a Turing machine could be built to do this. To build an actual Turing machine, we could look at each letter ch of w, then delete it and delete LV(ch) ones from the first part of the string. Like our decider for {a^n b^n}, this TM accepts if and only if it runs out of ones at the same time it runs out of letters. The state table contains the information about the value of each letter -- this is still a fixed number of states.
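The Java-program half of that argument can be sketched concretely. This decider checks the 1^n$w form and compares WV(w) to n; lv is again the assumed all-ones stand-in, not the real letter-value table:

```java
public class ZDecider {
    static int lv(char ch) { return 1; } // assumed all-ones letter values

    // Decide membership in Z = { 1^n $ w : WV(w) = n }.
    static boolean inZ(String input) {
        int dollar = input.indexOf('$');
        if (dollar < 0) return false;               // wrong form: no separator
        String ones = input.substring(0, dollar);
        String w = input.substring(dollar + 1);
        if (!ones.chars().allMatch(c -> c == '1')) return false; // wrong form
        int wv = 0;
        for (char ch : w.toCharArray()) wv += lv(ch);
        return wv == ones.length();                 // compare WV(w) to n
    }

    public static void main(String[] args) {
        System.out.println(inZ("111$DOG"));  // true under the all-ones assumption
        System.out.println(inZ("1111$DOG")); // false: 4 is not WV("DOG")
    }
}
```

Since this program halts on every input, the language it accepts is decidable, which is the point of the argument.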
Remember that you are asked for an induction on k, not an induction on strings. The base case is k = 0, which is easy but still requires a statement involving paths. If w is an arbitrary string with WV(w) = 0, w must be λ, and we know that there is a path of length 0 (and weight 0) from the node for λ to itself. For the inductive case, we assume that the stated path of weight m exists for any string v with WV(v) = m, for any m with m ≤ k (where k < 30, or else we are done by vacuous proof). Then we let w be an arbitrary string with WV(w) = k+1, and we must prove that a path exists from λ to w with weight k+1. Since w cannot be λ, w = va for some string v and letter a, and we know that WV(w) = WV(v) + LV(a). By the definition of G, an edge exists from v to w of weight LV(a). By the strong IH, since WV(v) ≤ k, a path exists from λ to v of weight WV(v). By the definition of paths, then, we have a path from λ to w of weight WV(v) + LV(a) = WV(w). Since w was an arbitrary string of weight k+1, we are done with the inductive step and also done with the proof.
We will search the part of G that contains nodes reachable from u, which are the nodes for all strings uv such that WV(uv) ≤ 30. If this set of strings contains any goal nodes (that is, if any word in D has u as a prefix) we will find such a word with the smallest possible WV value, because this word will be closest to u under the given notion of distance in G.
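The search just described is a uniform-cost search from u, where a node's priority is its WV value. A minimal sketch follows; the alphabet, the dictionary D, and the all-ones lv (under which WV equals length) are all stand-ins invented for illustration:

```java
import java.util.*;

public class PrefixSearch {
    static final char[] ALPHABET = {'D', 'G', 'O'};        // assumed tiny alphabet
    static final Set<String> D = Set.of("DOG", "GOO");     // assumed tiny dictionary

    // Uniform-cost search from prefix u over nodes with WV ≤ 30:
    // returns a dictionary word with prefix u of smallest WV, or null if none.
    static String nearestWord(String u) {
        // Under the all-ones lv assumption, WV(s) is just s.length().
        PriorityQueue<String> frontier =
            new PriorityQueue<>(Comparator.comparingInt(String::length));
        frontier.add(u);
        while (!frontier.isEmpty()) {
            String s = frontier.remove();
            if (D.contains(s)) return s;    // first goal popped has minimal WV
            if (s.length() >= 30) continue; // stay inside G
            for (char ch : ALPHABET) frontier.add(s + ch);
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(nearestWord("DO")); // DOG
    }
}
```

Because the priority queue always pops a node of minimum weight, the first dictionary word found is the one closest to u under the graph's notion of distance, exactly as the text claims.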
Last modified 9 May 2012