This page currently contains the interesting questions I received while HW#1 was due, through 12 Feb 2003. (Actually through 19 Feb for off-campus students.)
The main question-and-answer page with the most current questions and answers is here.
Questions are in black, answers in blue. Most recently answered questions are listed first.
You proved in lecture that the language {0^n1^n: n ≥ 0} is not regular. But I have built an NFA that accepts all the strings in this language. [This NFA has two states, a 0-loop on the start state, an epsilon-move to the second state, and a 1-loop on the second state which is final.] Is this a contradiction?
No, because you're mistaken about the definition of "the language of an NFA". To "accept a language", the NFA must accept all the strings of the language and no other strings. So the language of your NFA is 0^*1^*, because it can accept any string that is a block of 0's followed by a block of 1's. This is, of course, a regular language.
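To see this concretely, here is a short sketch (state names A and B are my own labels for the start and final states described above) that simulates the NFA and shows it accepting strings outside {0^n1^n}:

```python
# Simulate the two-state NFA described in the question:
# state A with a 0-loop, an epsilon-move A -> B, and state B
# (the final state) with a 1-loop.

def accepts(w):
    """Return True if the NFA accepts w, tracking the set of
    states reachable on the input read so far."""
    states = {"A", "B"}          # epsilon-closure of the start state
    for c in w:
        nxt = set()
        if c == "0" and "A" in states:
            nxt.add("A")         # 0-loop on A
        if c == "1" and "B" in states:
            nxt.add("B")         # 1-loop on B
        if "A" in nxt:
            nxt.add("B")         # epsilon-move A -> B
        states = nxt
    return "B" in states         # accept iff the final state is reachable

print(accepts("0011"))  # True: this string is in {0^n1^n}
print(accepts("001"))   # also True: the NFA's language is all of 0*1*
print(accepts("10"))    # False: not of the form 0*1*
```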
There's something that I don't think you said at the end of today's 601 lecture that's bothering me a little. Neil's hierarchy diagram, in some ways, depicts unproven hypotheses about complexity classes. I think it's a very useful diagram, but at the same time, it confuses a lot of people. Last year [...] I talked to a grad student who had already taken 601 who didn't believe that P might be equal to PSPACE and tried to use this diagram to prove they were not equal. This is probably obvious to many people in the class, but it might not be to everyone.
A good point that is worth bringing up right away.
The chart represents known containment relationships among classes.
As you say, some of the containments are known to be strict but many
are not, and the chart does not indicate which.
We have proved in lecture that the regular languages are strictly
contained in the context-free languages, and that the context-free languages
are strictly contained in "truly feasible". (The latter is because
any sensible general computing system ought to easily decide the languages
that we proved to be not context-free.) I asserted in class that while all
recursive languages are r.e., there are r.e. languages that are not recursive.
We will prove this soon, but we have not proven it yet.
According to the definition of regular expressions presented in lecture 1, the union of regular expressions should be written as "(a U b)" and the concatenation of regular expressions as "(a o b)". I wanted to confirm that it is acceptable to write things like "a U b U c U d" instead of "(((a U b) U c) U d)" or "ab*a" instead of "((a o b*) o a)" for simplicity and readability.
Yes, you may use any standard notation for regular expressions as long as it is quite clear what you mean. Since the union operation is associative, you may omit parens when three or more things are unioned together. And if you use the "algebraic" notation for concatenation, writing "ab" instead of "a o b", you may also make use of the convention that concatenation has a higher priority than union. In this sense regular expressions are written and read as though union were addition, concatenation were multiplication, and the star operator were exponentiation.
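A side check (not part of the assignment): modern regex engines use exactly these precedence conventions, with "|" for union, so you can confirm that a terse expression and its fully parenthesized form match the same strings:

```python
# "a|bc*" parses as (a)|((b)(c*)): star binds tightest, then
# concatenation, then union -- the same conventions as in lecture.
import re

terse = re.compile(r"a|bc*")
parens = re.compile(r"(a)|((b)((c)*))")

for w in ["a", "b", "bccc", "ac", "cb"]:
    assert bool(terse.fullmatch(w)) == bool(parens.fullmatch(w))
```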
Is the last part of Question 4 on HW#1 ("Can you find a DFA with...") a rhetorical question, a yes/no ("is it possible to find a DFA with..."), or a "give a DFA with ..." question?
I generally don't ask rhetorical questions, so I'm looking for some kind of response. It's phrased as a yes/no question (meaning "does there exist a DFA with..."), but you are expected to give convincing justification for your answer. So if the answer is "yes", the most obvious way to justify it would be to describe the DFA, probably informally.
I want to know if I am right in assuming that the statement (Question 2 in HW) "induction on all regular expressions" is equivalent to "induction on all regular languages"?
To prove something holds for all objects of type X "by induction on type X", you need to have an inductive definition of the objects of type X. For example, we have a definition of the regular expressions that says that certain base expressions are regular expressions, that certain operations applied to regular expressions give you more regular expressions, and that this is the only way to make regular expressions, so we can do "induction on all regular expressions". We did this in lecture 1 to prove that if R is any regular expression, L(R) is the language of some NFA. If by "induction on all regular languages" you mean exactly this, that's fine. If you mean something else, you may be confused.
On homework 1, question 5a, you suggest that in order to prove the complement of NONCFL is context-free we show that it is the union of several context-free languages. Would you like us to prove that context-free languages are closed under union or may we assume that result?
You should give some justification, on the order
of a line or two. It's needed to justify your result, but clearly not
the main point of the problem, so you should explain why you're sure
that it's true but you don't need a detailed proof. The main point of
this problem is to find the right languages to union together and to
argue that each is context-free.
A general rule that might be helpful is to imagine an audience for
your homework (and exam answers) that has not taken this course but has
all the prerequisites. Your audience will cut you some slack if you don't
do all the details of something basic -- for example, you can give induction
arguments rather informally because you and your audience both understand
induction proofs pretty well. But when it's new material central to the
problem, you should be more careful. Kazu will give feedback as to when
you're skipping too much and when you're putting in more than you need to.
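The justification itself can be brief because the construction is so simple: given context-free grammars for the two languages, with their nonterminals renamed apart, a new start symbol deriving either old start symbol generates the union. A minimal sketch (the rule-dictionary representation and helper name are my own, not from the course):

```python
# Standard union construction for CFGs: combine the rules of two
# grammars whose nonterminals are disjoint, and add a new start
# symbol S with the rules S -> S1 | S2.

def union_grammar(g1, s1, g2, s2, new_start="S"):
    """g1, g2 map each nonterminal to a list of right-hand sides
    (each a list of symbols); s1, s2 are their start symbols."""
    g = {}
    g.update(g1)
    g.update(g2)
    g[new_start] = [[s1], [s2]]      # S -> S1 | S2
    return g

# Example: {0^n 1^n} with start "A", and {1^n 0^n} with start "B".
g = union_grammar({"A": [["0", "A", "1"], []]}, "A",
                  {"B": [["1", "B", "0"], []]}, "B")
```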
We're encouraged to work together, right? At what points should we a) pause and cite each other, or b) stop talking to each other entirely? What about using outside materials--homeworks from an undergrad theory course, the web, etc?
The important thing is that the presentation be entirely your own. You may get the idea of how to solve the problem from conversation with another student or one of the other sources you mention. But once you begin writing your solution you should be on your own, with only a few notes about this idea. Often the writeup process will reveal problems in your argument. You may discuss or research those, but then return to writing up your answer alone. I hope that's reasonably clear -- I don't expect many problems in this regard from graduate students.
Also, do you have a set late policy?
I do now, thanks for asking. Because the off-campus students do everything a week later, I won't post solutions until a week after the due date. Thus if you need an extension for reasons beyond your control, such as illness, I can grant up to a week or otherwise excuse you from the assignment. Normally in my courses I don't accept late homework at all, but since we have this one-week window anyway, I'll take it for half credit up to a week late, for on-campus students only.
May I use complementation to make the regular expressions in Problem 1, HW#1?
No. Regular expressions are defined to have only union, concatenation, and Kleene star -- not complement. It is true that the complement of a regular language is regular, so you may use complementation to prove that a language is regular. But here you are asked to construct a regular expression, which has a clearly-defined meaning.
I am having trouble designing a DFA for problem 3a that does not accept the empty string.
Actually, your DFA should accept the empty string because the empty string is an element of the language A6. This is because the empty string denotes the number "zero", and zero is divisible by six.
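If a sanity check helps (assuming, per the problem, that A_6 is the set of binary strings whose value is divisible by six), the mod-6 DFA can be simulated in a few lines; its start state, remainder 0, is accepting, so it accepts the empty string:

```python
# DFA sketch for divisibility by 6: the states are the remainders
# 0..5, the start state is 0 (also the only accepting state), and
# reading a bit updates the remainder of the binary value so far.

def in_A6(w):
    r = 0                            # remainder of the prefix read so far
    for bit in w:
        r = (2 * r + int(bit)) % 6   # appending a bit doubles, then adds
    return r == 0

print(in_A6(""))     # True: the empty string denotes zero
print(in_A6("110"))  # True: binary 110 = 6
print(in_A6("101"))  # False: binary 101 = 5
```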
If we provide a tuple, say D_6, for an answer to HW#1 problem 3a, do we have to prove that Language(D_6) = A_6 ?
Answer: No, if your DFA is correct. I would like everybody to think about how you might prove that a DFA has a particular language, but in this question a correct DFA needs no proof. In Question 1 I said "you are responsible for convincing the grader that your regular expression is correct". By this I don't mean that you're required to prove it correct, but that if the grader can't decide whether it's correct or not and your submission doesn't help him to do so, you lose.
What does the symbol "|" mean in the notes for lecture 2, where the language "{w in {0,1}* | (7 | w)}" is defined?
Answer: In this part of the notes the first
"|" may be read as "such that". I usually use a colon for this but
Neil's notes, from which I'm cribbing heavily, use a vertical bar.
The second "|" means "divides" in the
sense of number theory -- "7 divides w" means that the remainder of
the integer w when divided by 7 is 0, or in C/Java terms "w%7 == 0".
Here w is a string of 0's and 1's that is interpreted as a binary
integer in the usual way. We'll talk more about this in Lecture 2.
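As a concrete check (the helper name is mine, not from the notes):

```python
# "7 | w" means the binary value of w is divisible by 7 --
# in C/Java terms, w % 7 == 0.

def divides7(w):
    """True iff 7 divides the integer that w denotes in binary."""
    return int(w, 2) % 7 == 0 if w else True   # empty string denotes 0

print(divides7("111"))   # True: binary 111 = 7
print(divides7("1110"))  # True: binary 1110 = 14
print(divides7("1000"))  # False: binary 1000 = 8
```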
I was looking at the alternate mechanism for converting a
DFA to a RE from page 18 of the (Lecture 1) slides.
However, I'm having trouble
deducing what this process is from that one slide. Is there somewhere
I can refer to to get a better description? Does this process have a
recognized name (like state elimination from Sipser does)? Thanks.
This method is called "the dynamic programming method" and is
used in Lewis and Papadimitriou. I will say more about it in
Lecture 2 on Monday. The state elimination method is far more
useful in calculating the regular expression for a DFA by hand.
But the dynamic programming method arguably has a simpler proof
of correctness. The dynamic programming method is also similar
to Warshall's Algorithm for transitive closure of a graph.
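Until Monday, here is a rough sketch of the idea (my own reconstruction, not the slide's exact notation): after round k, R[i][j] is a regular expression for the strings taking state i to state j using only intermediate states numbered at most k. Regexes are built as plain strings with "+" for union and "e" for epsilon; nothing is simplified, so the output is correct but verbose.

```python
# Dynamic programming method for converting a DFA to a regular
# expression, in the style of Warshall's transitive-closure algorithm.

def dfa_to_regex(n, delta, start, finals):
    """States are 0..n-1; delta maps (state, symbol) -> state."""
    EMPTY = None                        # stands for the empty language
    # Base case: paths with no intermediate states (single arrows,
    # plus the empty path from each state to itself).
    R = [[EMPTY] * n for _ in range(n)]
    for (i, a), j in delta.items():
        R[i][j] = a if R[i][j] is EMPTY else R[i][j] + "+" + a
    for i in range(n):
        R[i][i] = "e" if R[i][i] is EMPTY else R[i][i] + "+e"
    # Inductive step: also allow state k as an intermediate state:
    # R_new[i][j] = R[i][j] + R[i][k] (R[k][k])* R[k][j].
    for k in range(n):
        R_new = [[EMPTY] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                through_k = EMPTY
                if R[i][k] is not EMPTY and R[k][j] is not EMPTY:
                    through_k = "(%s)(%s)*(%s)" % (R[i][k], R[k][k], R[k][j])
                if R[i][j] is EMPTY:
                    R_new[i][j] = through_k
                elif through_k is EMPTY:
                    R_new[i][j] = R[i][j]
                else:
                    R_new[i][j] = R[i][j] + "+" + through_k
        R = R_new
    # The answer is the union over accepting states f of R[start][f].
    parts = [R[start][f] for f in finals if R[start][f] is not EMPTY]
    return "+".join(parts) if parts else EMPTY

# One-state DFA over {a} that accepts every string:
print(dfa_to_regex(1, {(0, "a"): 0}, 0, [0]))
# prints a+e+(a+e)(a+e)*(a+e)
```

Note how the structure mirrors Warshall's Algorithm: the outer loop adds one allowed intermediate state per round, which is what makes the correctness proof a clean induction.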
Last modified 15 February 2003