# Questions and Answers on Homework Assignment #3

#### HW#3 is due on paper in discussion class, Friday 26 February 2010, or to the CMPSCI main office by 4:00 p.m. that day.

Question text is in black, my answers in blue.

• Question 3.1, posted 23 February:

For Exercise 3.5, do they mean a computable function that isn't time constructible?

That would certainly be a more interesting question, but since they didn't say that, you may provide a non-computable one if you want. Do make sure, though, that your function satisfies the requirement that T(n) ≥ n -- making it non-time-constructible merely by violating that requirement is just too silly.

• Question 3.2, posted 23 February: The hint for Exercise 3.6 (a) on page 532 is very helpful. In fact, I'm wondering what else there is to say that isn't said there.

You're asked to prove that there is a poly-time algorithm to compute H(n) given n, and the hint doesn't do that -- it just lays out the steps you're going to need to prove that. So describe the poly-time algorithm in some detail (no tapes or heads, but in some detail) and use the hint to see where you have to justify its polynomial time bound and how to do that.

• Question 3.3, posted 25 February: For 2.18, I know I need to take an arbitrary directed graph and make an undirected graph that has a Hamilton path iff the directed graph does. But I am worried that there could be paths in my undirected graph in addition to the ones that take the edges in the direction that I intend for them.

That is indeed a wise thing to be worried about. A useful trick in designing your undirected graph is to ensure that any Hamilton path in it must obey certain constraints. For example, if an undirected graph has exactly two vertices of degree 1, say x and y, any Hamilton path in the graph must go from x to y or from y to x. So you can design the graph so that you control the start and end of any path. If a vertex has degree 2, any Hamilton path must enter that vertex on one edge and leave on the other.
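These degree constraints are exactly what the standard vertex-splitting construction exploits: replace each directed vertex v by a path v_in – v_mid – v_out, and turn each directed edge (u, v) into the undirected edge {u_out, v_in}. Because every v_mid has degree 2, any Hamilton path is forced through v_in, v_mid, v_out in order, simulating a visit to v that respects edge directions. Here is a sketch of that construction (this is one standard reduction; whether it is the one the exercise intends, and the correctness proof, are still yours to supply):

```python
def split_vertices(vertices, edges):
    """Build an undirected graph with a Hamilton path iff the given
    directed graph (vertices, edges) has one.

    Each vertex v becomes three vertices (v,'in'), (v,'mid'), (v,'out')
    joined in a path; each directed edge (u, v) becomes the undirected
    edge {(u,'out'), (v,'in')}.  Undirected edges are represented as
    frozensets so {a, b} == {b, a}."""
    new_vertices = set()
    new_edges = set()
    for v in vertices:
        new_vertices.update({(v, 'in'), (v, 'mid'), (v, 'out')})
        # The degree-2 vertex (v,'mid') forces any Hamilton path to
        # traverse in -> mid -> out (or the reverse) consecutively.
        new_edges.add(frozenset({(v, 'in'), (v, 'mid')}))
        new_edges.add(frozenset({(v, 'mid'), (v, 'out')}))
    for (u, v) in edges:
        # A directed edge is usable only from an 'out' end to an 'in'
        # end, which encodes its direction in the undirected graph.
        new_edges.add(frozenset({(u, 'out'), (v, 'in')}))
    return new_vertices, new_edges
```

For instance, the directed path a → b → c yields nine vertices and eight undirected edges, and the only Hamilton path in the result walks a_in, a_mid, a_out, b_in, b_mid, b_out, c_in, c_mid, c_out. Note the construction is computable in time polynomial (in fact linear) in the size of the input graph, which is what the reduction requires.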

• Question 3.4, posted 25 February: For 3.4, I think the definition of "every large enough n" should be interpreted as:

∃M1: ∀M2: ∃n0(M2): ∀n: (n≥n0(M2)) → (blah blah blah).

But it seems possible to read the statement as:

∃M1: ∃n0: ∀M2: ∀n: (n≥n0) → (blah blah blah).

In the latter case I don't see how to prove that anything is superior to anything else. Is my first interpretation correct? That is, can the n0 in the "large enough n" depend on M2?

Yes, your first interpretation is correct -- the n0 may depend on M2. Under the second interpretation, if the class C2 containing the M2's is defined in terms of big-O, there's nothing to stop the behavior of an M2 from being arbitrary on a very large initial segment of the possible inputs, thus, as you say, preventing any class from being superior to that class.