CMPSCI 291b: Reasoning About Uncertainty
Practice Exam for Final Exam
David Mix Barrington
8 May 2008
Directions:
- Answer the problems on the exam pages.
- There are ten problems for 120 total points plus 10 points of extra credit.
Probable scale is A=105, C=70.
- If you need extra space use the back of a page.
- No books, notes, calculators, or collaboration.
- The first six questions are true/false, with five points for the correct
boolean answer and up to five for a correct justification.
- The other questions have numerical answers --
you may give your answer in the form
of an expression using arithmetic operations, powers, falling powers, or the
factorial function. If you give your answer using the "choose" notation, also
give it using only operations on this list.
Q1: 10 points
Q2: 10 points
Q3: 10 points
Q4: 10 points
Q5: 10 points
Q6: 10 points
Q7: 10 points
Q8: 20 points
Q9: 10 points
Q10: 20+10 points
Total: 120 points
Typo in Question 5 corrected 12 May 2008.
- Question 1 (10):
True or false with justification:
If I choose two cards from a standard 52-card deck without replacement,
the probability that both are spades is exactly 1/16.
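A claim like this can be checked exactly rather than with a decimal
approximation. The sketch below (Python, using exact fractions purely as a
check) multiplies the two conditional drawing probabilities and compares the
result to 1/16.

    from fractions import Fraction

    # 13 of the 52 cards are spades; after one spade is drawn, 12 of 51 remain.
    p_both_spades = Fraction(13, 52) * Fraction(12, 51)
    print(p_both_spades, p_both_spades == Fraction(1, 16))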
- Question 2 (10):
True or false with justification:
Suppose we deal one card to each of three players A, B, and C and place a
fourth card face-up on the table. The probability that at least one player
holds a card of the same rank as the card on the table (for example, a player's
card and the table card are both sevens) is strictly less than 3/13.
- Question 3 (10):
True or false with justification:
If I throw three fair, independent, six-sided dice (i.e., I "throw 3D6"),
then the probability that the total of the numbers on the three dice is
11 or more is exactly 1/2.
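With only 6*6*6 = 216 equally likely outcomes, brute-force enumeration settles
a claim like this one; a short sketch:

    from fractions import Fraction
    from itertools import product

    rolls = list(product(range(1, 7), repeat=3))
    p_total_11_up = Fraction(sum(1 for r in rolls if sum(r) >= 11), len(rolls))
    print(p_total_11_up, p_total_11_up == Fraction(1, 2))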
- Question 4 (10):
True or false with justification:
If I throw 3D6 as in Question 3, the probability that two dice have the same
number and the third has a different number is exactly (6*6*6 - 6*5*4 - 6)
divided by 6*6*6. (Here "*" is ordinary multiplication.)
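The same enumeration works here: count the outcomes in which exactly two of
the three dice agree and compare that count to the numerator given in the
question.

    from itertools import product

    rolls = list(product(range(1, 7), repeat=3))
    exactly_one_pair = sum(1 for a, b, c in rolls if len({a, b, c}) == 2)
    print(exactly_one_pair, 6*6*6 - 6*5*4 - 6)   # both counts are out of 6*6*6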
- Question 5 (10):
True or false with justification:
Suppose I am playing a game of ordinary ("American") odds-and-evens, where I
and my opponent choose to put out either one or two fingers, and my payoff is
+1 if the total number of fingers is even and -1 if it is odd. Suppose I know
that my opponent plans to put out one finger with probability
1/3 and two
fingers with probability 2/3. Then my optimal strategy is also to put out one
finger with probability 1/3 and two fingers with probability 2/3.
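Against a fixed opponent mix, my expected payoff from any strategy is a
weighted average of my two pure-strategy payoffs, so computing those two
numbers is enough to judge the claim. A sketch with the stated probabilities:

    from fractions import Fraction

    p_opp_one = Fraction(1, 3)      # opponent shows one finger
    p_opp_two = 1 - p_opp_one       # opponent shows two fingers

    # My payoff is +1 if the finger total is even and -1 if it is odd.
    payoff_if_i_show_one = p_opp_one * (+1) + p_opp_two * (-1)   # 1+1 even, 1+2 odd
    payoff_if_i_show_two = p_opp_one * (-1) + p_opp_two * (+1)   # 2+1 odd, 2+2 even
    print(payoff_if_i_show_one, payoff_if_i_show_two)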
- Question 6 (10):
True or false with justification:
If 80% of all suspects are guilty, and 80% of all guilty people are suspects,
I may conclude that the number of guilty people is equal to the number of
suspects.
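A brute-force search over small populations can test whether the two
percentages force the two counts to be equal; the sketch below looks for
integer counts satisfying both conditions exactly with unequal set sizes.

    # s = number of suspects, g = number of guilty people, both = the overlap.
    # The two conditions say the overlap is 80% of s and also 80% of g.
    counterexamples = [(s, g, both)
                       for s in range(1, 51)
                       for g in range(1, 51)
                       for both in range(0, min(s, g) + 1)
                       if 10 * both == 8 * s and 10 * both == 8 * g and s != g]
    print(counterexamples)   # an empty list means no counterexample in this range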
- Question 7 (10):
When I return from work, there is an 80% chance that my wife or daughter has
already fed the cat. On Tuesday I observe that the cat is meowing when I return.
He always meows when he has not been fed, but he also meows 80% of the time
when he has been fed. What is my best estimate of the probability that he has
been fed on Tuesday, given the new evidence that he is meowing?
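This is a direct application of Bayes' rule; a sketch of the computation with
the stated numbers, kept in exact fractions:

    from fractions import Fraction

    p_fed = Fraction(8, 10)              # prior: the cat has already been fed
    p_meow_given_fed = Fraction(8, 10)
    p_meow_given_unfed = Fraction(1)     # he always meows when not fed

    p_meow = p_fed * p_meow_given_fed + (1 - p_fed) * p_meow_given_unfed
    p_fed_given_meow = p_fed * p_meow_given_fed / p_meow   # Bayes' rule
    print(p_fed_given_meow)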
- Question 8 (20):
In building a Naive Bayes Classifier spam filter, suppose that we have
found ten "bad words" that each occur in 2/3 of spam messages and in 1/3
of non-spam messages, and ten "good words" that each occur in 1/3 of spam
messages and 2/3 of non-spam messages. We assume that the occurrences of
different words in the same message are independent events. Assume that
half my mail is spam. If I get a new message containing exactly 7 of the bad
words and exactly 2 of the good words, what (according to the classifier) is
the probability that this message is spam?
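The arithmetic depends on whether the classifier conditions only on the words
that appear or also on the absence of the remaining bad and good words. The
sketch below takes the second reading (all twenty words observed as present or
absent); dropping the absence factors gives the other reading.

    from fractions import Fraction

    p_bad_in_spam, p_bad_in_ham = Fraction(2, 3), Fraction(1, 3)
    p_good_in_spam, p_good_in_ham = Fraction(1, 3), Fraction(2, 3)

    bad_present, good_present = 7, 2                    # the new message
    bad_absent, good_absent = 10 - bad_present, 10 - good_present

    # Likelihood of the observed pattern of presences and absences per class.
    like_spam = (p_bad_in_spam ** bad_present * (1 - p_bad_in_spam) ** bad_absent *
                 p_good_in_spam ** good_present * (1 - p_good_in_spam) ** good_absent)
    like_ham = (p_bad_in_ham ** bad_present * (1 - p_bad_in_ham) ** bad_absent *
                p_good_in_ham ** good_present * (1 - p_good_in_ham) ** good_absent)

    # Equal priors (half the mail is spam) cancel in Bayes' rule.
    p_spam = like_spam / (like_spam + like_ham)
    print(p_spam)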
- Questions 9 and 10 involve the following Markov decision process. There
are two states, A and B, and two actions, X and Y. In state A, action X yields
either A or B with 1/2 probability each, and action Y always yields state B.
In state B, action X always yields state A and action Y yields A or B with
1/2 probability each. The reward function gives +2 for state A and +4 for state
B.
- Question 9 (10):
Give matrices representing the Markov chain where we always take action X, and
for the chain where we always take action Y. Find the steady state distribution
in the case where we always take action X.
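A sketch of the two transition matrices (rows give the current state, in the
order A, B) together with a power-iteration check of the all-X steady state:

    # Rows = current state (A, B); columns = next state (A, B).
    P_X = [[0.5, 0.5],
           [1.0, 0.0]]      # always take action X
    P_Y = [[0.0, 1.0],
           [0.5, 0.5]]      # always take action Y

    # Steady state of the all-X chain: repeatedly apply pi <- pi * P_X.
    pi = [0.5, 0.5]
    for _ in range(100):
        pi = [pi[0] * P_X[0][0] + pi[1] * P_X[1][0],
              pi[0] * P_X[0][1] + pi[1] * P_X[1][1]]
    print(pi)   # converges to the stationary distribution of P_X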
- Question 10 (20+10):
- (a,10) Which action, X or Y, maximizes our expected reward on the
following turn, for each of the two possible starting states?
What is the expected reward from each state if we take this action?
- (b,10) Which action on the first turn, X or Y, maximizes our
expected reward on the following two turns, for each of the two
possible starting states? (Remember that on the
second turn we will take the action determined in part (a).) What is the
expected reward for those two turns from each state if we take these actions?
- (c,10XC) Which action from each state maximizes the total expected
discounted future reward, with a discount factor γ of 0.5? What is the
expected discounted future reward?
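All three parts are one-step backups of the same form: for each state, take
the action with the largest expected value of (reward of the next state plus
the discounted value of continuing from it), assuming the reward is collected
for the state entered after each action. A sketch that runs the backup for a
one-turn horizon, a two-turn horizon, and the discounted case:

    # Transition probabilities P[(state, action)][next_state] and state rewards.
    P = {('A', 'X'): {'A': 0.5, 'B': 0.5},
         ('A', 'Y'): {'B': 1.0},
         ('B', 'X'): {'A': 1.0},
         ('B', 'Y'): {'A': 0.5, 'B': 0.5}}
    R = {'A': 2.0, 'B': 4.0}

    def backup(values, gamma):
        """One value-iteration step: best expected reward + gamma * future value."""
        best_action, new_values = {}, {}
        for s in 'AB':
            q = {a: sum(p * (R[t] + gamma * values[t]) for t, p in P[(s, a)].items())
                 for a in 'XY'}
            best_action[s] = max(q, key=q.get)
            new_values[s] = q[best_action[s]]
        return new_values, best_action

    # (a) one remaining turn, then (b) two remaining turns, both undiscounted.
    v1, a1 = backup({'A': 0.0, 'B': 0.0}, 1.0)
    v2, a2 = backup(v1, 1.0)
    print(a1, v1)    # best action and expected reward for one turn
    print(a2, v2)    # best first action and expected reward for two turns

    # (c) infinite horizon with discount factor gamma = 0.5.
    v = {'A': 0.0, 'B': 0.0}
    for _ in range(50):
        v, a_inf = backup(v, 0.5)
    print(a_inf, v)  # greedy actions and expected discounted future reward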
Last modified 12 May 2008