# CMPSCI 611: Theory of Algorithms

### Questions and Answers on Homework #3

Questions are in black, answers in blue.

• Question 3.15, 7 November: What's the "-1" referred to in the notes about the Miller-Rabin algorithm? Aren't all the numbers in Z_p in the range from 0 through p-1?

Yes, but we refer to p-1 as "-1" because it is the additive inverse of the multiplicative identity and thus shares all the interesting properties of -1, particularly that it and 1 are the only square roots of 1 if p is prime.
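To see this concretely, here is a quick Python check (with p = 13 as an arbitrary prime, chosen only for illustration) that p-1 squares to 1 and that 1 and p-1 are the only square roots of 1 in Z_p:

```python
# Sanity check of the claim: in Z_p with p prime, the element p-1 acts
# as "-1" -- it squares to 1, and together with 1 it accounts for every
# square root of 1.

p = 13  # any prime works here

# (p-1)^2 = p^2 - 2p + 1 ≡ 1 (mod p)
assert (p - 1) ** 2 % p == 1

# Exhaustively find all square roots of 1 in Z_p.
roots = [x for x in range(p) if x * x % p == 1]
print(roots)  # [1, 12] -- exactly 1 and "-1"
```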

• Question 3.14, 7 November: In Problem 3.5, may I assume that it's impossible for two wells to have the same x-coordinate?

No, this could happen, but (a) you need a separate pipe from each well to the main pipeline, and (b) these pipes may go along the same course (perhaps stacked vertically?) and you are charged for the length of each of them.

• Question 3.13, 7 November: In Problem 3.4b, the time should be polynomial in n, right, not simultaneously polynomial in n, k, and ε?

Right, the point is that if k and ε are constants, the time is polynomial in n. The dependence on k and ε need not be polynomial.

• Question 3.12, 7 November: In Problem 3.3c, I'm not sure how to apply the result from lecture, because in lecture s·2^r was n-1, and here it is φ(n).

I admit that this is a little unclear, but look at my formulation in Question 3.9 of the Q&A below. What the result I quote needs about s·2^r is that it is even, and that a^(s·2^r) is guaranteed to be 1 -- this holds both in the lecture situation and in 3.3c.

• Question 3.11, 7 November: In Question 3.2 of the Q&A you said that in Problem 3.4a we could replace the "fewer than" with "at most" to make it easier to apply the Chernoff bound. Does the same hold for Problem 3.4b?

Yes, if it's easier you may change the "at least" to "more than" in 3.4b. But I don't think you need this -- the probability of success is "at least 1 - ε" iff the probability of failure is "at most ε".

• Question 3.10, 7 November: In 3.3c, do we have to factor the number n completely, i.e., find all its prime factors? I see how to use φ(n) to get a pair of nontrivial factors, but unless you then give me φ of those numbers I don't see how to continue.

Yes, this is a mistake on my part. The result that I meant to refer to was that if you have an algorithm that gives you φ(k) for any k, you can use the various parts of this problem to factor any n completely. But 3.3c should just ask you to find two nontrivial factors.

• Question 3.9, 4 November: I'm confused by the number theory in Problem 3.3, particularly part (c).

The point of this problem is to see that you understand enough of the number theory to get the sketch proof I gave in Lecture 14, that the Miller-Rabin primality test has at most a 1/2 probability of failure. In that argument I quoted without proof a particular fact about the numbers a^s, a^(2s), ..., a^(2^r·s) when a is chosen uniformly at random and a^(2^r·s) is guaranteed to be 1. You'll want to quote this fact and apply it to solve (c).
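For reference, here is a sketch of one round of the Miller-Rabin test as it is commonly implemented (the function name and interface are mine, not the lecture's). The loop walks exactly the sequence a^s, a^(2s), ..., a^(2^r·s) discussed above:

```python
import random

def miller_rabin_round(n: int) -> bool:
    """One round of Miller-Rabin on an odd n > 3: returns False only if
    n is certainly composite; returns True ("probably prime") otherwise."""
    # Write n - 1 = s * 2^r with s odd.
    s, r = n - 1, 0
    while s % 2 == 0:
        s //= 2
        r += 1
    a = random.randrange(2, n - 1)  # witness chosen uniformly at random
    x = pow(a, s, n)                # a^s mod n
    if x == 1 or x == n - 1:
        return True
    for _ in range(r - 1):          # square up through a^(s*2^(r-1))
        x = x * x % n
        if x == n - 1:              # found the square root "-1" of 1
            return True
    return False                    # nontrivial square root of 1, or a^(n-1) != 1
```

Repeating the round k times drives the failure probability down to at most (1/2)^k, given the lecture's bound of 1/2 per round.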

• Question 3.8, 4 November: I'm confused by the role of the variables in Problem 3.4, even after the addendum that ε does not depend on n.

You should treat ε as an input to the problem. In (a) you must show that given n and ε, you can find a c that works. In (b) you are given k, n, and ε and you have to find f(n). You say ε is positive -- may I assume that it is at most 1? If ε > 1, you are guaranteed that no more than ε·2^n strings of length n have any property you like, because there are only 2^n strings. Where does the variable δ from the Chernoff bounds come in?

First off, note that the problem statement doesn't even mention probability, so if a random process is to be involved in the problem, you are going to be the one to introduce it. So it's up to you what kind of process gives rise to a binomial random variable for which the Chernoff bounds are relevant. Once you have such a variable, the Chernoff bounds are true for any δ in the given ranges. This means that you should pick a value for δ so that the bounds will give you the result you need. Thus your δ will probably depend on the other input variables of the problem.
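As an illustration of picking a δ (the particular numbers here are just for the example, not part of the problem): for X ~ Bin(n, p) with mean μ = np, the upper-tail Chernoff bound Pr[X ≥ (1+δ)μ] ≤ exp(-μδ²/3) holds for every 0 < δ ≤ 1, so you are free to choose whatever δ makes the bound strong enough. The snippet below checks one such choice against the exact binomial tail:

```python
import math

n, p = 200, 0.5
mu = n * p
delta = 0.3  # our choice; any value in (0, 1] is legal

# Exact tail Pr[X >= (1+delta)*mu] for X ~ Bin(n, p), summed directly.
threshold = math.ceil((1 + delta) * mu)
tail = sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(threshold, n + 1))

bound = math.exp(-mu * delta**2 / 3)  # Chernoff upper-tail bound
print(tail, "<=", bound)
assert tail <= bound
```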

• Question 3.7, 4 November: In Problem 3.5, I have succeeded in answering the problem in terms of another problem that is thoroughly covered in both CLRS and your lecture notes. Must I describe the algorithm and analysis for that problem in my answer, or may I cite these authorities?

You may cite either of those authorities. The point of this problem is to explain exactly how the solution to this other problem may be used to place the pipeline. This is just like quoting the running time of depth-first search, for example. There is certainly no point in making you copy out portions of the text in your homework!

• Question 3.6, 4 November: In Problem 3.2b, I think I can evaluate the root node of the tree in the allotted time. But you say that evaluating the tree means assigning a value to every internal node. Since there are 3^(h-1) internal nodes just on the first level, and I have to look at two or three leaves to assign each, doesn't this mean that I will need to check at least 2·3^(h-1) leaves?

This is not what I meant, but I admit that what I wrote was ambiguous and I've now clarified it. The value of the root is defined in terms of the values of all the leaves, but it need not depend on all of them. By "evaluate each internal node", I meant that you could do this in order to define the value of the root -- I did not mean to imply that reporting results for all the internal nodes was a necessary part of evaluating the tree.

• Question 3.5, 4 November: Are you sure you mean "polynomial time" in Problem 3.1? I think that the regular expressions are going to get really big, too big to manipulate in polynomial time. But it's possible that these big expressions are always equivalent to smaller expressions -- should I work harder to find a general way to simplify them?

Whoops, this is another mistake on my part which I've now corrected. Your algorithm only needs to use a polynomial number of regular expression operations. The simplification you mention may not be possible in general -- in any case we are concerned with the expressions that this algorithm actually creates rather than some hypothetical simplified ones.

• Question 3.4, 2 November: I don't understand how, in Problem 3.2, I could guarantee that the deterministic algorithm must visit all the leaves. What if the first two it sees are siblings and are the same? Then it doesn't have to look at the third one.

I added the words "in the worst case" to the problem text to describe what I mean. The point is that for any possible deterministic algorithm, there is some possible input that will cause that algorithm to look at all the leaves. The way to show that this exists is an adversary argument. You show that if you are allowed to pick the answers for the leaf values with the intent of forcing the algorithm to look at all the leaves, you can do so. (So if the first thing it did was to ask about two siblings, you would give it two different answers so it would have to look eventually at the third sibling.) Because there is a consistent setting of the inputs causing it to either look at all the leaves or give a wrong answer, it must look at all the leaves in the worst case. Your job is to describe how to pick the leaf answers to make sure that this is true.
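To make the adversary concrete for a single sibling triple -- assuming, as the question's premise suggests, that an internal node takes the value shared by at least two of its three children -- here is a toy sketch (the interface is made up purely for illustration):

```python
# Toy adversary for one sibling triple {0, 1, 2} under a 2-of-3
# (majority) rule: answer probes so the first two answers differ,
# leaving the majority undecided until the third leaf is probed.

def adversary_answer(answered: dict, leaf: int) -> int:
    """Record and return the adversary's answer for a probe of `leaf`."""
    if not answered:
        answered[leaf] = 0   # first probe: answer 0
    elif len(answered) == 1:
        answered[leaf] = 1   # second probe: answer the opposite
    else:
        answered[leaf] = 0   # third probe: only now is the majority fixed
    return answered[leaf]

# After any two probes the answers are {0, 1}; both completions of the
# unprobed leaf are consistent, so a correct algorithm must probe it too.
answered = {}
print(adversary_answer(answered, 0), adversary_answer(answered, 2))  # 0 1
```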

• Question 3.3, 2 November: I'm confused about the lengths of regular expressions in Problem 3.1 -- shouldn't the length of R+S be longer than |R| + |S| + 1, for example, because you need parentheses? If not, why isn't the length of RS just |R|+|S|?

I'm defining the length of regular expressions in the problem -- I agree that my definition does not correspond to the number of characters in the string denoting the expression. I just want a simple definition that will correspond roughly to the string length -- my definition gives the number of atomic symbols and operators in the expression. I added the word "defined" to the problem to help clarify this.
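For concreteness, here is a small sketch of that definition on regular expressions represented as nested tuples (this representation is mine, just for illustration):

```python
# Length by the problem's definition as described above: one for each
# atomic symbol and one for each operator; parentheses are not counted.
# Expressions: ('+', R, S) for union, ('.', R, S) for concatenation,
# ('*', R) for star, or a bare atom.

def length(e) -> int:
    if isinstance(e, tuple):
        op, *args = e
        return 1 + sum(length(a) for a in args)  # one operator + subexpressions
    return 1                                      # an atomic symbol

a, b = 'a', 'b'
print(length(('+', a, b)))         # |a+b| = 3: two atoms and one operator
print(length(('.', a, b)))         # |ab| = 3 too: concatenation is an operator
print(length(('*', ('+', a, b))))  # |(a+b)*| = 4: parentheses add nothing
```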

• Question 3.2, 2 November: In Problem 3.4, I could find a c making fewer than ε·2^n strings have that property, but because the Chernoff bounds are in terms of ≤ instead of <, it would be much easier to prove that "at most" that many strings have the property. Would this be all right?

Yes, the "fewer" makes the problem more complicated without being more interesting, so I've changed the problem as you suggest.

• Question 3.1, 28 October: In problem 3.3, your definition of φ(n) does not match the definition of "Euler's phi-function" that I've found elsewhere. In fact you say that φ(p^2), for example, is the number of elements of Z_{p^2}*, and give this number as p^2 - 1. But there are 6, not 8, members of Z_9 that are relatively prime to 9, the set {1,2,4,5,7,8} -- 6 is p^2 - p, not p^2 - 1.

Is the definition wrong? Part (a) is easy for the case a=b=1, where the two definitions coincide, but it's not so obvious in the other cases.

Yes, I messed up the definition -- you are quite right. I've added a new part (a) that has you show that if you are given n and the correctly-defined φ(n), then you can nontrivially factor n whenever some prime occurs more than once in its factorization. The old part (a) is now part (b), restricted to the case n = pq. As you say, this is pretty easy. Part (c) has been downgraded to 10 points to compensate.
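For the n = pq case, the recovery of the factors from φ(n) can be sketched as follows (a hedged illustration, not the required writeup): since φ(n) = (p-1)(q-1) = n - (p+q) + 1, the sum p+q is determined, and p and q are then the roots of a quadratic with known coefficients.

```python
import math

def factor_from_phi(n: int, phi: int) -> tuple:
    """Recover p and q from n = p*q and phi = (p-1)*(q-1).
    Since phi = n - (p+q) + 1, the sum s = p+q is known, and p, q are
    the roots of x^2 - s*x + n = 0."""
    s = n - phi + 1
    d = math.isqrt(s * s - 4 * n)  # the discriminant is a perfect square
    p, q = (s - d) // 2, (s + d) // 2
    assert p * q == n
    return p, q

print(factor_from_phi(77, 60))  # (7, 11)
```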