Questions in black, solutions in blue.
Directions: Each of the first three questions is a statement that may be true or false. Please state whether it is true or false -- you will get five points for a correct boolean answer and there is no penalty for a wrong guess. Then justify your answer -- the remaining points will depend on the quality and validity of your justification.
FALSE. A simple path in G may use edges that are not part of the DFS forest. (The question is a little vague; it should have said "the DFS forest contains the path".) For example, suppose G has nodes {A,B,C,D} and edges (A,B), (A,C), (B,C), and (B,D). A DFS starting at A and looking at edges in alphabetical order will take (A,B), (B,C), and (B,D) as the tree edges, giving the DFS tree depth two. But there is a simple path of length three from A, going A to C to B to D, using the non-tree edge (A,C).
FALSE. If m is the length of u, brute force takes Θ(mn) = Θ(n), since m is a constant. Boyer-Moore takes Ω(n) in the worst case, which is asymptotically no better, though it may be better by a constant factor.
TRUE. By the repeated squaring method we can compute A^n using O(log n) matrix multiplications. And multiplying two 3 by 3 matrices by the brute-force method takes O(3^3) = O(1) integer operations. So the total time is O(log n) multiplications times O(1) operations per multiplication, or O(log n).
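A minimal sketch of the repeated squaring method for 3 by 3 integer matrices (the function names are illustrative, not from the question):

```python
def mat_mult(X, Y):
    """Multiply two 3x3 integer matrices by the brute-force method:
    3^3 = O(1) scalar multiplications."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(A, n):
    """Compute A^n using O(log n) matrix multiplications by repeated squaring:
    square A at each step, multiplying it into the result when the
    corresponding bit of n is set."""
    result = [[1 if i == j else 0 for j in range(3)] for i in range(3)]  # identity
    while n > 0:
        if n & 1:
            result = mat_mult(result, A)
        A = mat_mult(A, A)
        n >>= 1
    return result
```

Each pass through the loop halves n, so the loop runs O(log n) times, each doing at most two O(1) multiplications.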
Recursively find the minimum element in each of the two subheaps (whose roots
are the children of the original heap's root). Compare these two elements
and return the smaller. (There are some base cases -- if the heap is size
one return the root's value; if it is size two return the value of the single
leaf, which is the left child of the root.) This is clearly correct because
the minimum value is in one of these two subheaps unless both are empty.
To find the time we use a recurrence, where T(n) is the time of this
algorithm on a heap with n nodes. We have T(1) = O(1) and T(2) = O(1) as
base cases. Then if a is the number of nodes in the left subheap we have
T(n) = T(a) + T(n-a-1) + O(1), which solves to T(n) = O(n). (For the inductive
step of the proof, if T(a) ≤ ca, T(n-a-1) ≤ c(n-a-1), and the O(1) term
is ≤ c, then the total is ≤ cn as desired.)
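The recursion above can be sketched as follows, assuming the standard array layout for a max-heap (root at index 0, children of node i at indices 2i+1 and 2i+2):

```python
def heap_min(heap, i=0):
    """Minimum element of a max-heap stored in an array with root at index 0.
    Recursively takes the smaller of the minima of the two subheaps."""
    left, right = 2 * i + 1, 2 * i + 2
    if left >= len(heap):       # base case: subheap of size one, return the root
        return heap[i]
    if right >= len(heap):      # base case: size two, the min is the single leaf
        return heap[left]
    # The minimum is in one of the two subheaps; compare their minima.
    return min(heap_min(heap, left), heap_min(heap, right))
```

Every node is visited once with O(1) work, matching the T(n) = O(n) bound from the recurrence.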
For each triple of distinct nodes a, b, and c, check whether the three given edges exist and return true if they all do. If you complete the loop without returning, return false. This is O(n^3) because we can have three nested loops ranging over all a, all b, and all c respectively, and testing for distinctness and for the existence of the three edges is O(1).
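A sketch of the brute-force loop, assuming the graph is given as an adjacency matrix so that each edge test is O(1):

```python
from itertools import combinations

def has_triangle(adj):
    """O(n^3) brute-force triangle test.  adj[a][b] is truthy iff edge (a,b)
    exists; combinations yields each triple of distinct nodes once."""
    n = len(adj)
    for a, b, c in combinations(range(n), 3):
        if adj[a][b] and adj[b][c] and adj[a][c]:
            return True
    return False
```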
If A is the adjacency matrix of G, we know that for any i and j, the (i,j) entry of A^3 is the number of walks of length exactly three from i to j. Thus the graph has a triangle iff for some i, the (i,i) entry of A^3 is positive. We can calculate A^3 using two matrix multiplications on n by n matrices, which can each be done in o(n^3) time by Strassen's algorithm. Then checking for a positive entry on the diagonal takes only O(n) time, which is also o(n^3).
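The diagonal check can be sketched as below; for clarity this uses ordinary cubic-time matrix multiplication, where Strassen's algorithm would be substituted to get below O(n^3):

```python
def has_triangle_matrix(adj):
    """Triangle test via the diagonal of A^3.  adj is a 0/1 adjacency matrix;
    (A^3)[i][i] counts closed walks of length three through node i, which is
    positive exactly when i lies on a triangle."""
    n = len(adj)

    def mult(X, Y):
        # Ordinary O(n^3) multiplication; replace with Strassen for o(n^3).
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    A3 = mult(mult(adj, adj), adj)
    return any(A3[i][i] > 0 for i in range(n))
```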
In O(n+e) time we can create a DFS forest for G, recording the depth of each
node. Then in O(n+e) time we can check each edge to see whether it goes from
a node to its grandparent.
We must prove that there is an edge to a grandparent in the DFS forest
iff G contains a triangle. The forward direction is obvious, as the node,
parent, and grandparent fulfill the condition to be a triangle. For the
other direction, suppose that there is a triangle in G. Let x be the node
in the triangle that is reached first in the DFS. Because all neighbors of
x are searched before leaving x, one of the other two triangle nodes will
be found first -- call this node y. The third node z must be found while
y is being searched, because z is a neighbor of y and all neighbors of y
are searched before leaving y. So x is the parent of y and the grandparent
of z, and the edge (x,z) connects z to its grandparent.
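The two-pass method above can be sketched as follows, assuming the graph is an undirected adjacency-list dict with each edge listed in both directions (a small illustration, not the exam's required pseudocode):

```python
def has_triangle_dfs(adj):
    """Build a DFS forest recording each node's parent, then scan every edge
    to see whether it joins a node to its grandparent.  Both passes are
    O(n+e).  adj maps each node to its list of neighbors."""
    parent = {}

    def dfs(u):
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                dfs(v)

    for root in adj:            # cover every component of the forest
        if root not in parent:
            parent[root] = None
            dfs(root)

    # grand[u] is u's grandparent in the DFS forest, when it exists.
    grand = {u: parent[parent[u]] for u in adj
             if parent[u] is not None and parent[parent[u]] is not None}
    return any(v == grand.get(u) for u in adj for v in adj[u])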
Make one pass through the elements, keeping a variable "candidate". The first time you see an element larger than x, set "candidate" to that element. (If you never see such an element, return that x has no successor.) Later if you see an element smaller than "candidate" but bigger than x, set "candidate" to that element. At the end, return "candidate". You have taken O(n) time because you spent O(1) time with each element. And you have returned the correct successor because when the correct successor is found "candidate" will be set to it, and "candidate" will not be changed after that happens.
If we knew where x was in the array, we would need only O(1) time because we
just look at the element following x. Finding x in the array takes only
O(log n) time by binary search.
Presorting the array would take O(n log n) time. For one successor query
this is clearly slower than doing the search. If we were doing a larger
number of successor queries on the same array, such as Ω(n) of them,
presorting might be worthwhile.
Find the smallest element in O(n) time and make it element 0 of the sorted array. Then for each i from 1 through n-1, find the successor of element i-1 of the sorted array and make it element i of the sorted array. The n successor queries take O(1) time each by assumption, so we have O(n) + O(n) = O(n) total time.
Last modified 6 November 2003