# Administrivia

- Exam 1 is next Thursday (25 Sep) at 1900 in HAS 134.
- HW3 is due next *Wednesday* (extended from Friday).

# Today

- defining and representing constraint satisfaction problems (CSPs)
- CSPs as search
- being smart about search in CSPs
  - variable and value ordering
  - propagating information through constraints
  - intelligent backtracking
- CSPs as local search

This is a potpourri of approaches, and the best approach will vary by problem: you gotta think about it!

# What is constraint satisfaction?

A family of methods for solving problems in which the internal structure of a solution must satisfy a set of specified constraints.

Big ideas:

- Variable and value ordering: choose the next states wisely
- Propagate information: provide good information to your heuristics
- Intelligent backtracking: backtrack in a smart way

# You've already done constraint satisfaction

- your class schedule, both by semester and overall
- logic puzzles
- sudoku

# AI Jeopardy

- These entities are one part of the definition of a CSP, and they define candidate solutions: Variables
- These entities define the possible set of values that variables can take on: Domains
- These entities are the other part of the definition of a CSP, and they define limitations on possible assignments: Constraints

# CSPs are a subset of general search

In general search (BFS, A*, etc.), states are essentially opaque, and serve only as input to the transition function and the evaluation function.

In a CSP, states are defined by assignments of values to a set of variables X1...Xn. Each variable Xi has a domain Di of possible values. States are evaluated by their consistency with a set of constraints C1...Cm over the values of the variables.

A goal state is a complete assignment to all the variables that satisfies all the constraints.
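The definition above (variables, domains, constraints, and a goal test over complete assignments) maps directly onto a small data structure. Here is a minimal sketch in Python; the class and method names are my own, not from the lecture:

```python
# A CSP bundles variables X1..Xn, domains Di, and constraints C1..Cm.
# Constraints are (scope, predicate) pairs over subsets of the variables.

class CSP:
    def __init__(self, variables, domains, constraints):
        self.variables = variables        # e.g. ["WA", "NT", ...]
        self.domains = domains            # {Xi: set of candidate values Di}
        self.constraints = constraints    # [((Xi, Xj, ...), predicate), ...]

    def consistent(self, assignment):
        """A (possibly partial) assignment is consistent if no constraint
        whose scope is fully assigned is violated."""
        return all(pred(*(assignment[v] for v in scope))
                   for scope, pred in self.constraints
                   if all(v in assignment for v in scope))

    def is_goal(self, assignment):
        """Goal state: a complete, consistent assignment."""
        return len(assignment) == len(self.variables) and self.consistent(assignment)

# Tiny example: two adjacent regions that must differ in color.
toy = CSP(["A", "B"],
          {"A": {"R", "G"}, "B": {"R", "G"}},
          [(("A", "B"), lambda a, b: a != b)])
```

Note that the same `CSP` object works for any of the examples below; only the variables, domains, and constraint predicates change.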
# Example: Map coloring

aka the god damn map of Australia, also in ppt:

```
      NT --- Q
     /  \   / \
   WA --- SA --- NSW
            \   /
             \ /
              V
                     T
```

Variables: WA, NT, Q, SA, NSW, V, T
Domains: Di = {R, G, B}
Constraints: adjacent regions must be different colors
- e.g. WA != NT
- or (WA, NT) in {(R,G), (R,B), ... }
- or (WA, NT) not in {(R,R), (G,G), (B,B)}

# Example: solution

WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = any color (Tasmania is unconstrained)

# Constraint graph

Nodes are variables; a link connects any two nodes that participate in a constraint.

Q1: Draw the constraint graph for Australia:

```
      NT --- Q
     /  \   / \
   WA --- SA --- NSW
            \   /
             \ /
              V
                     T
```

# Cryptarithmetic

```
    T W O
  + T W O
  -------
  F O U R
```

Q2a. What are the variables?
Q2b. What are the domains? Note the hidden variables (the carry digits).
Q2c. What are the constraints? Unlike the map, some involve more than 2 variables.
- AI Jeopardy? unary, binary, and n-ary constraints

# n-ary constraints

For n-ary constraints, we use a hyper-constraint graph (otherwise the binary decomposition can produce a clique in the worst case).

n-ary constraints are more concise than their binary decomposition, and they enable special-purpose inference.

# Two in-class exercises

- clapping for time constraints
- numbers for search under constraint

# We're bound for South Australia

Is coloring the map of Australia a search problem?
- What is a state?
- What is the successor function (i.e. action and transition)?
- What is the goal test?
- What is the branching factor?
- What search method would you apply?
- What heuristics?

# CSP solutions as search

- The formulation is identical for all CSPs.
- A solution is found at depth n (for n variables); the path is irrelevant.
- Depth-first search is a good fit (with backtracking!).
- The branching factor b at the top level is nd; b = (n - l)d at depth l. Hence how many leaves? n! * d^n, even though there are only d^n complete assignments.

# Example: DFS on Australia

see ppt

# Simple backtracking search

- Depth-first search
- Choose values for one variable at a time
- Backtrack when a variable has no legal values left to assign
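The simple backtracking procedure above fits in a few lines for the Australia map. This is a minimal illustration with a fixed variable order and no heuristics yet; the function and variable names are mine, not from the slides:

```python
# Simple backtracking (depth-first) search for map coloring.
# Tries values for one variable at a time; backtracks when a
# variable has no legal value left.

NEIGHBORS = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"}, "Q": {"NT", "SA", "NSW"},
    "SA": {"WA", "NT", "Q", "NSW", "V"}, "NSW": {"Q", "SA", "V"},
    "V": {"SA", "NSW"}, "T": set(),
}
COLORS = ["R", "G", "B"]

def backtrack(assignment, variables):
    if not variables:                         # complete assignment: done
        return assignment
    var, rest = variables[0], variables[1:]
    for color in COLORS:                      # try each value in the domain
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):
            assignment[var] = color
            result = backtrack(assignment, rest)
            if result is not None:
                return result
            del assignment[var]               # dead end: backtrack
    return None                               # no legal value: fail upward

solution = backtrack({}, list(NEIGHBORS))
```

With the fixed left-to-right variable order this is exactly uninformed DFS over partial assignments; the heuristics below change only which variable and which value get tried first.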
If search is uninformed, general performance is relatively poor. How can we do better?

# Improving backtracking efficiency

Basic question: what next step should our search procedure take?

Approaches:
- Minimum remaining values (MRV) heuristic
- Degree heuristic
- Least-constraining value heuristic

see ppt for illustration

# What about special algorithms for constraint satisfaction?

We can propagate information through the constraints, and substantially prune the search tree as we go (in essence, improving the heuristics).

# Providing better information to heuristics

Forward checking:
- precomputes information needed by MRV anyway
- enables early stopping

Constraint propagation:
- enforces constraints locally on the constraint graph
- arc consistency (2-consistency)
- n-consistency

# More ppt for examples

# n-consistency

Arc consistency does not detect all inconsistencies: the partial assignment {WA = red, NSW = red} is inconsistent (NT, Q, and SA form a triangle, and each would be restricted to the two remaining colors).

Stronger forms of propagation can be defined using the notion of n-consistency.

A CSP is k-consistent if, for any set of k-1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable.

e.g.:
- 1-consistency, or node consistency
- 2-consistency, or arc consistency
- 3-consistency, or path consistency (in binary constraint graphs)

# Other techniques

Check global constraints:
- AllDiff can be checked many ways. One simple way is the pigeonhole principle: with m variables and n values, if m > n, no satisfying assignment is possible.
- AtMost() and other integer-domain constraints are handled by bounds propagation (propagate intervals rather than assignments).

Intelligent backtracking:
- The standard is chronological backtracking, i.e. try a different value for the preceding / most recently assigned variable.
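Arc consistency as described above is typically enforced with an AC-3-style queue of arcs: repeatedly remove domain values that have no support across some arc, and re-check affected arcs. A minimal sketch for binary not-equal (coloring) constraints; the function names are mine, and the lecture gives no code:

```python
# AC-3-style arc consistency for binary "not equal" constraints.
# Prunes any domain value of xi that has no supporting value in xj.

from collections import deque

def revise(domains, xi, xj):
    """Remove values of xi with no support in xj under the constraint xi != xj."""
    removed = {x for x in domains[xi]
               if all(x == y for y in domains[xj])}
    domains[xi] -= removed
    return bool(removed)

def ac3(domains, neighbors):
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj):
            if not domains[xi]:
                return False                  # empty domain: inconsistency detected
            for xk in neighbors[xi] - {xj}:
                queue.append((xk, xi))        # re-check arcs into xi
    return True
```

On a triangle with WA pinned to red, propagation alone forces the other two variables, which is exactly the pruning forward checking only gets one step of.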
Another option:
- determine the conflict set (the set of assignments that caused the conflict at the dead end)
- backjump to the most recent element of the conflict set
- forward checking computes the conflict set (and in fact obviates backjumping if run after every assignment)

# Is this amenable to local search?

- Do we need the path to the solution, or only the solution itself?
- Is memory a constraint?

Can we apply local search methods?
- Hill climbing
- Simulated annealing
- Genetic algorithms

What are good heuristics for choosing moves in local search?

# Min-conflicts heuristic

To enable local search:
- allow states with unsatisfied constraints
- operators reassign variable values

Variable selection: randomly select any conflicted variable.

Value selection by the min-conflicts heuristic:
- choose the value that violates the fewest constraints
- then gradient descent with h(n) = total number of violated constraints

# Example: n-queens

- States: 4 queens in 4 columns (4^4 = 256 states)
- Actions: move a queen within its column
- Goal test: no attacks; h(n) = 0
- Evaluation: h(n) = number of attacks

```
. . . .      . Q . .      . Q . .
. Q . Q  ->  . . . Q  ->  . . . Q
Q . Q .      Q . Q .      Q . . .
. . . .      . . . .      . . Q .
```

(h = 5, then h = 2, then h = 0)

Min-conflicts can solve random n-queens problems in almost constant time for arbitrary n with high probability. (This works because the state space is dense with possible solutions; a sparse state space requires more advanced techniques.)
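The min-conflicts procedure above is only a few lines of code for n-queens with one queen per column. A sketch with my own naming; note that plain min-conflicts can plateau, so a real implementation might add restarts, which are omitted here:

```python
# Min-conflicts local search for n-queens: start from a complete
# (conflicted) assignment, repeatedly pick a random conflicted queen
# and move it to the row that violates the fewest constraints.

import random

def conflicts(cols, col, row):
    """Number of other queens attacking a queen placed at (col, row)."""
    return sum(1 for c, r in cols.items() if c != col and
               (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=100_000):
    cols = {c: random.randrange(n) for c in range(n)}    # one queen per column
    for _ in range(max_steps):
        conflicted = [c for c in cols if conflicts(cols, c, cols[c]) > 0]
        if not conflicted:
            return cols                                  # h(n) = 0: solution
        c = random.choice(conflicted)                    # random conflicted variable
        # min-conflicts value selection (ties broken by min's ordering)
        cols[c] = min(range(n), key=lambda r: conflicts(cols, c, r))
    return None                                          # gave up (plateau)
```

For 8-queens this typically converges in well under a hundred steps, which is the "almost constant time" behavior noted above.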