CMPSCI 601: Theory of Computation

David Mix Barrington

Spring, 2004

This page is for student questions and my answers related to HW#6. Questions are in black, answers in blue.

Question 6.11, 25 April 2004

Can you explain why there is a 1^t in the definition of MEMBER-NP in Question 2? Would the language still be NP-complete if this were left out?

It turns out that if we had, say, {(M,w,k): M can accept w in time n^k} as our definition, we would define a language that is NP-hard (everything in NP reduces to it) but would not be in NP!

This should make sense once you have the correct proof and look at it carefully. Remember that to be in NP, there must be a single NTM with a single polynomial time bound that works for any input. But an arbitrary language in NP might have an arbitrarily large polynomial bound on its machine's running time.
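To make this concrete, here is a toy sketch (my own illustration, with a made-up encoding of the machine, not an official definition) of the polynomial-time check that puts MEMBER-NP in NP. The point is that the simulation runs for at most t steps, where t is the number of 1's in the third component, so its running time is polynomial in the length of the whole input. If the time bound were given as n^k with k written in binary instead, the number of steps to simulate could be exponentially larger than the input, and no single polynomial bound would cover it.

    # Toy verifier for MEMBER-NP instances (M, w, 1^t).  The encoding of M as a
    # transition table `delta` is hypothetical; `choices` is the NP certificate,
    # one nondeterministic choice per simulated step.
    def verify_member_np(delta, w, ones, choices,
                         start_state="q0", accept_state="qA", blank="_"):
        t = len(ones)                        # time bound read off the unary string
        tape = dict(enumerate(w))            # sparse one-tape TM tape
        state, head = start_state, 0
        for step in range(t):                # at most t <= |input| iterations
            if state == accept_state:
                return True
            options = delta.get((state, tape.get(head, blank)), [])
            if not options:                  # machine halts without accepting
                return False
            c = choices[step] if step < len(choices) else 0
            state, write, move = options[c % len(options)]   # move is -1, 0, or +1
            tape[head] = write
            head += move
        return state == accept_state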

Question 6.10, 25 April 2004

I have a question about problem 1 on the homework. Initially I thought that I could prove it in a way similar to the way that we proved that REACH is complete for NL, but then I realized that this was not a log-space reduction (lecture 17 p 12-14) since CompGraph has a number of nodes on the order of n (for each h <= n). Is this correct? If so, then I am struggling to figure out how I can proceed. Any hints, or am I missing something obvious?

What is the reduction that you think is not log-space? Do you see that the reduction from w to CompGraph(N,w), where N is a logspace NDTM, is a logspace reduction?

Hmm. I guess I'm not really sure what it means to be a log-space reduction. I was interpreting this to mean that f(w) = CompGraph(N,w) may require at least one state for every position h <= n in w, therefore it's polynomial space but not log space. Does it instead mean that a Turing machine that generates f(w) from w uses only O(log n) worktape bits (whereas the number of output bits would reflect the size of f(w))?

Yes, exactly. The output goes on a write-only tape that doesn't count as space. You should go back over all the reductions that are asserted in lecture to be log-space and see that they can be computed with only O(log n) bits of worktape. Usually most of this worktape is used to keep one or more pointers into the input or to maintain the index variables for loops.
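As a rough illustration (my own sketch, not something from lecture), here is the shape such a transducer usually takes when written as ordinary code: the only working storage is a couple of counters, each O(log n) bits, and the output is simply printed (written to the write-only output tape), never stored.

    # Sketch of the reduction w -> CompGraph(N, w) as a log-space-style transducer.
    # `num_configs(n)` and `yields(i, j, w)` are stand-ins for the machinery your
    # proof fixes: the number of configurations of N on inputs of length n, and a
    # one-step "configuration i can move to configuration j" test, both of which
    # themselves need only O(log n) bits of counters plus read access to w.
    def output_compgraph(w, num_configs, yields):
        n = len(w)
        m = num_configs(n)       # polynomial in n, so an index fits in O(log n) bits
        for i in range(m):       # one pointer
            for j in range(m):   # a second pointer
                if yields(i, j, w):
                    print(i, j)  # goes to the output tape, not the worktape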

Actually in this problem you probably don't want to follow the proof of "REACH is NL-complete" too closely, though it's probably possible to do it that way. It's easier to (a) prove that BETWEEN is in NL, and (b) reduce some known NL-complete problem, such as REACH, to BETWEEN. The latter reduction would have to be logspace, of course.

Question 6.9, 24 April 2004

For question 2, what is n when considering whether MEMBER-NP is in NP? In other words, you have w and t, and if t = n, it seems like you could solve the problem in linear time, by running M on w for t time, and returning positive if it works. If w = n, this makes more sense.

Here "n" is the total size of the input . There is no problem with being able to do it in linear time -- in fact MEMBER-NP is an example of a language in NTIME(n) that is complete for NP under _log-space_ or _poly-time_ reductions. It isn't complete under linear-time reductions, for reasons we saw on HW#5.

For question 4: by "distance between every pair of distinct nodes" do you mean weighted distance, or just distance by number of nodes traversed in the path? Or are the edges all of unit length?

No, the graph is complete, each edge has a weight, and by "the distance between a pair of nodes" I meant the weight of the edge between those two nodes.

Also, I'm having trouble figuring out how the decision problem helps determine a polytime algorithm for questions 3, 5, and 6. Am I just missing something from the lectures? Should we actually be able to find these algorithms, or just postulate their existence based on what we're given from the decision knowledge?

This is an algorithm-construction problem. You are given (by the assumption) a poly-time algorithm for the decision problem, and you have to figure out how to use this (up to poly-many times along with poly-many other operations) to solve the other problems.
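A standard example of this pattern, using SAT rather than any of the homework problems so nothing is given away: suppose you are handed a poly-time decider for satisfiability (the `sat_decider` below is hypothetical). You can use it polynomially many times to actually construct a satisfying assignment, by fixing the variables one at a time and asking the decider whether the rest of the formula can still be satisfied.

    # Search-to-decision self-reduction for SAT (a sketch, not part of the assignment).
    # A CNF formula is a list of clauses; each clause is a list of nonzero ints,
    # positive for a variable and negative for its negation.
    def find_assignment(clauses, num_vars, sat_decider):
        if not sat_decider(clauses):
            return None                      # nothing to find
        assignment = {}
        for v in range(1, num_vars + 1):
            # Try v = True: clauses containing v are satisfied and dropped,
            # and the literal -v disappears from the remaining clauses.
            trial = [[l for l in c if l != -v] for c in clauses if v not in c]
            if sat_decider(trial):
                assignment[v] = True
                clauses = trial
            else:                            # must still be satisfiable with v = False
                assignment[v] = False
                clauses = [[l for l in c if l != v] for c in clauses if -v not in c]
        return assignment

Each of the poly-many calls to the decider is itself poly-time by assumption, so the whole construction runs in polynomial time.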

Question 6.8, 24 April 2004

In Homework 6, Question 2, can we refer to the running time of an arbitrary NP language? That is, in performing the reduction of an arbitrary NP language L to MEMBER-NP, can we pass as parameters the NTM that accepts L as well as the time in which that NTM accepts?

If A is in NP, you may assume that a machine _and_ a polynomial exist such that the machine can accept an input within the time bound iff the input is in A. If you define the reduction function in terms of the machine and the time bound, then it also exists, and this is what you need to show that the reduction exists.
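As a minimal sketch, assuming instances of MEMBER-NP are triples (machine, input, unary time bound) and that the machine is passed as a fixed string encoding, the reduction function itself is almost trivial to compute:

    # Sketch of the reduction from an arbitrary A in NP to MEMBER-NP.
    # `machine_encoding` is the (fixed) string encoding the NTM for A, and `p` is
    # its polynomial time bound, e.g. p = lambda n: n**3 + 3; both exist by the
    # assumption that A is in NP.
    def reduce_to_member_np(w, machine_encoding, p):
        return (machine_encoding, w, "1" * p(len(w)))    # in MEMBER-NP iff w is in A

Note that the machine and the polynomial are constants of the reduction (they depend on A, not on w), which is why the reduction is so easy to compute.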

Is the TSP decision problem mentioned in problems 5 and 6 the same as the one taking G and length k as arguments as in problem 4?

Yes.

Question 6.6, 23 April 2004

A followup to QA 6.5:

Can I say A ≤ B if A is in F(NP) and B is NP-complete?

It's munging the difference between F(C) and C for a complexity class C, but I feel like it's a fairly obvious thing...we can translate any F(C) function to a C language without changing the complexity class, can't we?

I think you should be careful to only use the ≤ operator and the property NP-complete on languages. If a function is in F(NP), its bitgraph language is in NP, but the function itself isn't in NP. If what you're planning to do is legitimate, there is a legitimate way to phrase it.

Question 6.5, 23 April 2004

Fact 21.5 discusses Hamilton circuits for UNdirected graphs; HW6.3 talks about HAMCIRCUIT for directed graphs. I wouldn't expect this to make a difference, as undirected is a special case of directed, but I wanted to make sure I could utilize Fact 21.5 with impunity in my proof of question 3...

For the homework problem you actually only care that directed-HC is in NP, which is pretty obvious. You may take that as given. Once you have that, as you say, the NP-completeness of directed-HC follows immediately from that for undirected, as you can reduce undirected-HC to directed-HC by what amounts to a type-cast function (replace each undirected edge by two directed edges).
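Spelled out, that type-cast reduction is just the following (a sketch, with graphs given as edge sets; any reasonable encoding works the same way):

    # Reduce undirected-HC to directed-HC: replace each undirected edge {u, v}
    # by the two directed edges (u, v) and (v, u).  The result has a directed
    # Hamilton circuit iff the original graph has an undirected one.
    def undirected_to_directed(undirected_edges):
        directed = set()
        for u, v in undirected_edges:
            directed.add((u, v))
            directed.add((v, u))
        return directed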

Question 6.4, 13 April 2004

I'm sure I missed this point in class, so pardon me for asking this again: in some other material, I read about NP-complete problems and they talk about reductions in poly-time, but we always talked about log-space. It's still not clear to me how these two are correlated.

In 611 and in most other algorithms courses, NP-completeness is defined in terms of poly-time reductions. In this course it is defined in terms of log-space reductions. The problems proved to be NP-complete with either definition are also NP-complete under the other, because in all the cases we study the reductions are the same function and only log-space is needed to compute them.

It is likely that there are languages that are 611-NP-complete but not 601-NP-complete, but certainly this could only happen if P were different from L, which we believe but can't prove. If there is a "one-way function" h that is computable in P but whose inverse is not computable in L, then h(SAT) is 611-NP-complete but not 601-NP-complete.

Question 6.3, April 2004

I have a couple of questions about hw 6.

The solutions to 3-6 seem too easy and that is making me uncomfortable.

Questions 3 and 6 seem to have the same simple solution of [redacted]. This seems too easy, especially since it answers two questions.

Hey, you know, sometimes I have trouble coming up with original questions. You want to explain this pretty thoroughly, but that's really the only idea, yes.

4 obeys the triangle inequality so we know there is a 1/2 approx from lecture.

Hmm... I didn't see that, but actually the argument I had in mind was even simpler.

5 is just [redacted]

Yeah.

Is there something I am missing?

'Fraid not, I'll try to make #7 more challenging. I'm sure 611 asked harder questions on this material.

Question 6.2, 21 April 2004

In the homework, question 2 may have a typo in it, where you say Member-MP instead of Member-NP... unless there's a problem called Member-MP, too?

That was a typo that I've now fixed, thanks.

Question 6.1, 21 April 2004

Question 4 on the homework says that we need to show that under the provided assumptions, TSP has a 1-approximation algorithm (based on the definition of an n-approximation in lecture 21). I'm assuming that the definition in lecture 21 is the same as the one given in the spring lecture on approximation (i.e., abs(c(M(x)) - OPT(x)) / max(OPT(x), c(M(x))) <= n).

In the spring lecture it says that this is only valid for values less than one, so a 1-approximation doesn't fit the definition. Do we just ignore that constraint in the definition? Assuming we do, saying that we have a 1-approximation really isn't saying anything by this definition. For example, if an approximation algorithm returns a solution of value A and the optimal solution has value B, and assuming both are positive integers, then we know that abs(A-B)/max(A,B) < 1 in all cases. It seems that with the provided assumptions we can easily provide a 1/2-approximation. Maybe the definition in lecture 21 is different from the spring version, and maybe I'm completely misunderstanding it. Please explain, clarify, or verify that this is in fact what you are looking for.

Yes, I screwed up, I meant a 1/2-approximation using this definition. I've fixed the assignment page and I mentioned this in class on 22 April. As I also said then, different definitions of the approximation parameter ε are sometimes used -- in a given paper you may have to read carefully to find out which is meant.
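Concretely, with the definition quoted above and the parameter set to 1/2, a 1/2-approximation algorithm M for TSP satisfies abs(c(M(x)) - OPT(x)) / max(OPT(x), c(M(x))) <= 1/2 for every instance x. Since TSP is a minimization problem we have c(M(x)) >= OPT(x), so this is the same as saying c(M(x)) <= 2·OPT(x): the tour M finds is at most twice the length of the optimal tour.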

Last modified 25 April 2004