CMPSCI 383: Artificial Intelligence

Fall 2014 (archived)

Assignment 11 (and 12)

This assignment is due at 0800 on Thursday, 11 December. That time is also the start of the course final exam in Goessman 20.

Unlike all previous assignments, this assignment is optional. If you choose not to do it, it will not affect your grade.

I will be updating the assignment with questions (and their answers) as they are asked.

Option 1: An assignment mulligan

If there is a previous assignment for which you ran out of time, didn’t sufficiently test, etc., you may give it another attempt. You may refer to posted solutions, but you should fix your own (or rewrite it). Do not copy and submit a posted solution. We will consider it plagiarism if you do.

Submit your fixed and complete assignment as assignment11. Include in your submission a readme.txt. Be very specific about whether you believe your new submission fully completes the assignment or not, and why.

Option 2: Choose your own adventure

Design a new programming assignment appropriate for this course (or one that you find interesting and requires material from this course).

  • Write up the assignment in the style of previous assignments. Format this writeup using GitHub Flavored Markdown. Be sure to include example input and output. For example, here is the markdown source for Assignment 06.
  • Write one or more test cases to test solutions to the assignment.
  • Write a clear solution to the assignment, with associated commentary about the approach (not just in-line code comments).

To be sure you don’t waste time on an assignment that’s too easy or too hard, run your idea by me before you start. Please don’t design an assignment where a Java solution is trivially available on the Internet (for example, a Sudoku CSP).

Some suggestions to get you started:

  • An interesting A* search problem, such as peg solitaire or other solitaire games with perfect information.
  • An optimization problem that’s best solved by local search of some sort, like the TSP.
  • An interesting adversarial search problem, perhaps a game of chance. Make sure that a version of the problem is solvable by minimax (like 3x3 Tic-Tac-Toe), and that a more complicated version requires more advanced techniques.
  • A CSP, such as planning, scheduling, map coloring, or some solitaire games (like 0hh1).
  • An inference technique on Bayes Nets, such as exact inference or likelihood weighting.
  • A supervised learning technique, such as an association rule miner or a neural network, in the style of Assignment 07 or 08. You might also consider a problem that involves categorical (rather than simply binary) variables, or a regression technique that can handle continuous values.
  • A clustering algorithm, such as DBSCAN or OPTICS.
  • An HMM assignment involving filtering or MLE. Consider a problem that involves a less trivial state space than Assignment 10, like a localization problem or the problem of “undoing” (expanding) an initialism given a training corpus.
  • A mix-and-match of previous assignments. For example, suppose you have training data like the voting data from Assignment 07, and a test instance with one or more ? values. You can reconstruct the most likely value using the following (sketch of an) algorithm. For each variable, learn and prune a decision tree based upon the training data. Build a Bayes Net using the 17 learned decision trees; each node should be linked to other nodes on the basis of which variables appear in its tree. The graph is undirected, and is technically a dependency network rather than a Bayes Net. Now use Gibbs sampling (which works on dependency networks, see http://luci.ics.uci.edu/websiteContent/weAreLuci/biographies/faculty/djp3/LocalCopy/p264-heckerman.pdf), treating the known values in the test instance as evidence, to determine the most likely setting of unknown variables (which are treated as query variables). (Thanks to Prof. David Jensen for this idea.)
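The mix-and-match sketch in the last bullet hinges on Gibbs sampling with the test instance's known values clamped as evidence. As a rough illustration of that final step only, here is a minimal sampler over binary variables; the `gibbs_sample` interface and the toy `conditionals` table are invented for this sketch, not part of the assignment (in the actual idea, each variable's conditional distribution would come from its pruned decision tree).

```python
import random
from collections import Counter

def gibbs_sample(variables, conditionals, evidence,
                 n_samples=2000, burn_in=200, seed=0):
    """Approximate the most likely setting of the query (non-evidence)
    variables by Gibbs sampling. Each variable is binary (0/1), and
    conditionals[v] maps the current full assignment to P(v = 1 | rest)."""
    rng = random.Random(seed)
    state = dict(evidence)                     # evidence stays clamped
    query = [v for v in variables if v not in evidence]
    for v in query:
        state[v] = rng.randint(0, 1)           # random initial assignment

    counts = {v: Counter() for v in query}
    for step in range(burn_in + n_samples):
        for v in query:                        # resample one variable at a time
            state[v] = 1 if rng.random() < conditionals[v](state) else 0
        if step >= burn_in:                    # discard early, unmixed samples
            for v in query:
                counts[v][state[v]] += 1

    # Report the most frequently sampled value of each query variable
    return {v: counts[v].most_common(1)[0][0] for v in query}

# Toy dependency network: b depends on the evidence variable a
conditionals = {"b": lambda s: 0.9 if s["a"] == 1 else 0.2}
result = gibbs_sample(["a", "b"], conditionals, evidence={"a": 1})
```

With `a = 1` observed, `b` is resampled from P(b = 1 | a = 1) = 0.9 on each sweep, so the returned marginal MAP estimate for `b` is 1. Note that reading off each variable's most frequent value independently is an approximation to the jointly most likely assignment; for the assignment itself you would want to discuss that distinction.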

Grading

You may submit up to twice for this assignment, choosing either option each time. If you choose to submit twice, submit the second time as assignment12.

We will regrade whichever assignment(s) you choose to repeat for Option 1. We will replace your original grade for the assignment with (0.3 * original_grade) + (0.7 * new_grade). Again, do not copy and submit a posted solution; fix or rewrite yours. (And no, you may not resubmit the same assignment twice for Option 1.)
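As a quick sanity check of the blending formula, here is how it plays out on hypothetical numbers (the function name is just for illustration):

```python
def combined_grade(original, new):
    # Option 1 regrade: 30% of the original grade plus 70% of the new grade
    return 0.3 * original + 0.7 * new

blended = combined_grade(60, 90)  # 0.3*60 + 0.7*90, which is 81 (up to float rounding)
```

So a 60 redone to a 90 becomes an 81; note that a new grade lower than the original would pull your grade down.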

For submissions for Option 2, we will consider each of the following criteria and produce an overall score. The questions listed after each criterion are examples of how we’ll evaluate, but are not an exhaustive list.

  • Assignment description clarity: Does it provide an overview and reasonable specifics? Are there enough examples or precise enough text to describe inputs and outputs unambiguously?
  • Formatting and organization: Is the writeup in Markdown format? Is it well-organized?
  • Interest and difficulty: Does the assignment have some interesting point related to AI? Is it non-trivial?
  • Test cases: Do the test cases cover different possible failure states? Do they cover all reasonable failure states? Are there test cases that exercise the entire system? Are there test cases that cover corner cases?
  • Solution: Is the solution correct? Is the code well-written?

We will replace your lowest assignment grade with the grade you earn for your Option 2 submission(s). (But let’s be honest here: if you’re choosing this option, you probably don’t need a grade replaced in the first place. Here’s your chance to show me how 1337 your AI skills are.)

Questions and answers

I had a question about the weightings for the assignments: are all the assignments weighted equally, including 00? I missed the very first one and was wondering whether it was worth redoing for Assignment 11/12.

Yes, all assignments, including Assignment 00, are weighted equally.

It should be simple to re-do Assignment 00 if you so desire, and we will adjust your grade for it as specified in this assignment if you choose to do so.