This page is for student questions and my answers related to HW#2. Questions are in black, answers in blue.
I'm working on the extra credit for HW3.2. I think I've thought of a way for a TM to do Times in F(DSPACE(log n)), but I don't think it works in F(DTIME(n^2)). Does that matter--i.e., is there any requirement that a function be in both classes /at the same time/, or is it enough to show that one implementation can achieve F(DTIME(n^2)), and another F(DSPACE(log n))?
You will get full extra credit if you are in F(DSPACE(log n)) -- I never said your algorithm must stay in time O(n^2). It will be in poly time because it is in log space, but it might well be a higher-order polynomial. Tradeoffs like this are typical.
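To make the idea concrete, here is one column-by-column sketch in Python (not a TM, and not necessarily the approach you have in mind): the only working storage is a few counters and a carry, each O(log n) bits, because the input bits are re-read in place rather than copied anywhere.

```python
def times_bits(x_bits, y_bits):
    """x_bits, y_bits: little-endian lists of bits.

    Returns the product, low-order bit first. The only state carried
    between output positions is the carry and the loop counters, all
    O(log n) bits; the inputs are only ever re-read, never copied.
    Time is a higher-order polynomial (Theta(n^2) here), which is the
    tradeoff mentioned above.
    """
    n = len(x_bits) + len(y_bits)   # product has at most n bits
    out = []
    carry = 0
    for k in range(n):
        s = carry
        for i in range(k + 1):      # sum the k-th "column" of partial products
            j = k - i
            if i < len(x_bits) and j < len(y_bits):
                s += x_bits[i] * y_bits[j]
        out.append(s % 2)
        carry = s // 2              # carry stays O(n), i.e. O(log n) bits
    return out
```

The point of the sketch is that the carry never exceeds about n, so it fits in O(log n) bits, and no intermediate product is ever written down.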
In how much detail should we describe our Turing machine for question 2 on the homework? A full transition table? Or is an English description, or a composite of simpler TMs, good enough?
As I think I've said, you need to give a convincing argument that
the machine exists. Composition of simpler machines is an effective
tool to do this, as is analogy to machines we've already seen.
Sipser refers to an "implementation-level description" where you refer
to tapes and things but don't go nearly as far as a state table. That's
what I'm looking for here.
On slide 11 of Lecture 8 you say that it is "obvious" that DTIME(t(n)) is a subset of DSPACE(t(n)). Why is it obvious?
What I said in lecture was roughly "because a machine can only
use O(1) new tape cells on each time step". But let's go through
the proof more formally as it's a useful exercise.
First we look up the definitions of DTIME(t(n)) and DSPACE(f(n))
and find that they are sets of languages. This lets us translate
the statement into the following:
"If A is a language, and A is decidable by a multitape TM in O(1+t(n))
steps on any string of length n, then A is also decidable by a TM
with read-only input tape that uses O(1+t(n)) space on any string of
length n".
So let M be the machine that decides A in time O(1+t(n)). We need a
machine N that uses space O(1+t(n)). The natural thing to say would
be "Let N be M itself, since M only has time to use O(1+t(n)) tape
cells." And this would be right, except that we defined things so that
N has to have a read-only input tape but M might not. Rather than go
back and change the definition of multitape TM's to make them all have
read-only input tapes, which would be fine, we'll construct N more
carefully.
Let N be the TM with read-only input tape that first copies its
input onto its second tape, then simulates M on its worktapes until
it halts. N now runs for O(n+t(n)) steps and thus might use O(n+t(n))
space. Do we have a problem?
No, because the conditions at the top of slide 11 include the statement
that t(n) ≥ n, which implies that O(n+t(n)) equals O(t(n)). It's this
very subtlety that requires us to have that condition on the theorem.
Is this "obvious"? I think it really is, in that it follows from
the definitions of the terms in the statement without any clever new
ideas. But to get to the complete proof requires that you are comfortable
with the rules of mathematical proof, such as the idea that you prove a
subset relation by showing that an arbitrary member of the first set is
a member of the second.
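To illustrate the "O(1) new tape cells on each time step" observation concretely, here is a toy single-tape simulator in Python (the example machine is hypothetical, not from the homework): the head moves one cell per step, so after s steps it has visited at most s + 1 cells.

```python
def run(delta, start, accept, tape):
    """delta: (state, symbol) -> (state, symbol, move), move in {-1, +1}.

    Runs the machine and returns (steps taken, cells visited). Since the
    head moves exactly one cell per step, visited <= steps + 1 always
    holds, which is the heart of DTIME(t(n)) being inside DSPACE(t(n)).
    """
    state, head, steps = start, 0, 0
    visited = {0}
    cells = dict(enumerate(tape))
    while state != accept:
        sym = cells.get(head, '_')          # '_' is the blank symbol
        state, out, move = delta[(state, sym)]
        cells[head] = out
        head += move
        visited.add(head)                   # at most one new cell per step
        steps += 1
    return steps, len(visited)

# Hypothetical example machine: scan right over 1s, halt at the first blank.
delta = {('scan', '1'): ('scan', '1', +1),
         ('scan', '_'): ('done', '_', +1)}
steps, space = run(delta, 'scan', 'done', '1111')
assert space <= steps + 1
```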
Could you please explain again why the function Plus is in F(DSPACE(1))? I remember you said you have x and y on 2 tapes by copying y onto the 2nd tape, then sweep from right to left on both tapes and remember the carry bit as you go along. I assumed that when you said it's DSPACE(1) you meant the 2nd tape is actually the output tape, so you will not read from it. But how can you do the addition when you sweep both tapes from right to left without reading the y bits from the 2nd tape? And if you do read it, how can it be DSPACE(1)? Wouldn't that be DSPACE(n) already?
You're quite right, this algorithm uses space Θ(n) and thus does
not put PLUS into F(DSPACE(1)). I don't remember exactly
what I said in lecture, but I think I described that algorithm in order
to show that PLUS is in F(DTIME(n)), which it does. What I should have
said is that PLUS is in F(DSPACE(1)) if the input and output are
formatted the right way.
If the binary input numbers x and y are given to you as "x y", you
are not going to be able to add them with a two-way DFA (which
is what a DSPACE(1) machine is). I won't prove the impossibility here,
but it's because the 2WDFA can't possibly line up corresponding bits in
the two numbers.
But if you get the bits of x and y interleaved (so that "0101" and "1100"
would turn into the single string "01110010" -- we must pad with 0's if
necessary to make x and y the same length) then you can have the 2WDFA
sweep the input right to left and give the output low-order bit first.
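Here is that sweep written out as a Python sketch (the function name and conventions are mine, not from the lecture): the only state carried from one position to the next is a single carry bit, which is the sense in which a 2WDFA can do it.

```python
def add_interleaved(s):
    """s interleaves equal-length x and y, high-order bits first,
    e.g. x = "0101" and y = "1100" give s = "01110010".

    Sweeping right to left and emitting the sum low-order bit first,
    the only memory carried between positions is the carry bit --
    finite state, so DSPACE(1) in spirit.
    """
    carry = 0
    out = []
    for i in range(len(s) - 2, -2, -2):     # pairs (x_i, y_i), rightmost first
        xi, yi = int(s[i]), int(s[i + 1])
        total = xi + yi + carry
        out.append(str(total % 2))
        carry = total // 2
    if carry:
        out.append('1')
    return ''.join(out)                     # low-order bit first

# The example from above: 0101 + 1100 = 5 + 12 = 17 = 10001 in binary.
assert add_interleaved("01110010") == "10001"
```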
In most cases such small details of the input and output encoding don't
matter, because the machine in question is able to translate from one format
to another. But not here -- whether a function is in F(DSPACE(1)) can depend
very crucially on the input and output format.
Following up on 2.1 below:
I think the book, BE, may have just answered that for me... On page 213 in the exercises they say: "For simplicity we have assumed "the unicorn" refers to a specific unicorn named Charlie. This is less than ideal, but the best we can do without quantifiers"
Is this applicable, also, to question 8.4?
Yeah, they're treating "the unicorn has horns" and "Charlie the unicorn has horns" as the same statement, and most importantly they are not analyzing the statement at all. The proof asked for in 8.4 essentially _doesn't notice_ that there is any more similarity between "the unicorn has horns" and "the unicorn is a mammal" than there is between any two arbitrary propositions. Once you start calling the first one "horned(u)" and the second "mammal(u)" you are doing predicate calculus rather than propositional calculus.
In Question #8.4 from BE (p.204), I'm wondering if "The unicorn" is a set of unicorns, u, or if it's an argument... To clarify, the "for all" symbol isn't introduced in the book yet, but what I really want to say is something like:
For all unicorns(u):
horned(u) -> ...
but what I seem to have ended up with is having to argue that 'u' is a unicorn in every sentence, something like:
(unicorn(u) and horned(u)) -> ...
Is it okay to leave out all the unicorn(u)'s and just assume 'u' is a unicorn?
A good question. Since as you say [BE] hasn't yet introduced
quantifiers at this point, we are dealing only with propositional
logic. You have particular atomic boolean statements about which
the premises give you information: "The unicorn is horned", "the
unicorn is a mammal", etc., and have a boolean conclusion that is
only about "the unicorn".
It is true that this proof would be _part_ of an FOL proof of
"for all unicorns u, if u is horned,..." after you initially said
"let u be an arbitrary unicorn". But here we need only do the
propositional part of the argument.
In other words, _yes_, you do not need to have your statements say
anything explicit about u being a unicorn.
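As a sketch of what "not analyzing the statement" means operationally, here is a brute-force truth-table entailment check in Python. The atoms and premises below are placeholders, not the actual formulas of exercise 8.4: each statement about the unicorn is treated as an opaque boolean atom, exactly as a propositional proof treats it.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Each formula is a function from an assignment dict to bool.
    Returns True iff every assignment satisfying all premises also
    satisfies the conclusion -- pure propositional reasoning."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Placeholder example (not the 8.4 premises): from "horned -> magical"
# and "horned", conclude "magical". The atoms carry no internal structure.
atoms = ['horned', 'magical']
premises = [lambda v: (not v['horned']) or v['magical'],
            lambda v: v['horned']]
assert entails(premises, lambda v: v['magical'], atoms)
```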
Last modified 29 February 2004