# Administrivia

- finals schedule?
- late adds?
- A00 graded
- A01 status

# Today
- Informed search methods
- Heuristic search methods
    - Greedy best-first search
    - A* search
- Heuristic functions

**bold claim**: Many problems faced by intelligent agents, when properly formulated, can be solved by a single family of generic approaches: **heuristic search**.

# More AI Jeopardy

- this is the set of all states reachable from the initial state
    - state space
- this expression provides a value for each node in the search tree. 
    - evaluation function (f)
- this method selects the successor which has the highest value of the evaluation function
    - greedy best-first search
- this expression *estimates* the cost of the lowest-cost path from a node to the goal state
    - heuristic function (h)

# Recall our CS → Goessman example

see ppt

# Review: uninformed search strategies

- BFS
- DFS
- IDDFS

# Things to think about

- state space vs generated search tree
- loopiness

to the board + ppt

- things to show:
    - loopiness in BFS/DFS
    - IDDFS doesn't need to track visited states (less memory); the depth limit bounds any loops
    - sub optimality of DFS
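For the board demo of loopiness, a minimal sketch (graph and names are mine): a DFS over a cyclic graph that terminates only because it keeps an explored set; delete the set and the A→B→C cycle makes it run forever.

```python
# Hypothetical cyclic graph: A -> B -> C -> A forms a loop; D is the goal.
GRAPH = {"A": ["B"], "B": ["C"], "C": ["A", "D"], "D": []}

def dfs(start, goal):
    """Depth-first search with an explored set.
    Without `explored`, the A-B-C cycle would loop forever."""
    frontier, explored = [start], set()
    while frontier:
        node = frontier.pop()      # LIFO stack -> depth-first order
        if node == goal:
            return True
        if node in explored:
            continue
        explored.add(node)
        frontier.extend(GRAPH[node])
    return False
```

Swapping the stack for a FIFO queue gives BFS on the same graph, which needs the same loop protection.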

# Recall how we judge search

- completeness
- optimality
- time
- space

# How could we make DFS optimal?

DFS just dives into the tree; can we make it pick the most promising avenue instead?

The "next state" to expand should be the one with the smallest expected cost. 

We can estimate this cost with h, a heuristic.

  - "proceeding to a solution by trial and error or by rules that are only loosely defined." (Oxford American Dictionary)
  - "a technique designed to solve a problem that ignores whether the solution can be proven to be correct, but which usually produces a good solution or solves a simpler problem that contains or intersects with the solution of the more complex problem." (Wikipedia)

# Greedy best-first search

Expand the node that *appears* closest to the goal first: 

  - f(n) = h(n)

What is a reasonable heuristic for our robot?

Is GBFS optimal and/or complete?

What types of errors will GBFS make?

Q1. Construct a minimal graph (V+E) such that GBFS finds a sub-optimal path from the start state to the goal state.
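A minimal GBFS sketch (graph and heuristic values are assumptions of mine, not necessarily the minimal answer to Q1). Because f(n) = h(n) ignores edge costs entirely, this example returns the path S→A→G (cost 6) even though S→B→G costs 3:

```python
import heapq

# Hypothetical weighted graph: {node: [(neighbor, edge_cost), ...]}
GRAPH = {"S": [("A", 1), ("B", 2)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
H = {"S": 3, "A": 1, "B": 2, "G": 0}   # assumed heuristic values; h(goal) = 0

def gbfs(start, goal):
    """Greedy best-first search: always expand the frontier node with the
    smallest h(n). Edge costs are ignored, which is why the result can be
    sub-optimal."""
    frontier = [(H[start], start, [start])]
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr, _cost in GRAPH[node]:
            if nbr not in explored:
                heapq.heappush(frontier, (H[nbr], nbr, path + [nbr]))
    return None
```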

# A note on errors, generally

bias/variance illustration

# A* search

Tries to pick the node on the minimum cost path to goal

  - f(n) = g(n) + h(n)

g(n) is path cost so far

A* can be both complete and optimal, depending upon h(n)

What must be true for A* to be complete/optimal? (never overestimate, obey triangle inequality)
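An A* sketch on the same hypothetical graph as before, with an admissible and consistent h (my assumed values). Unlike GBFS, sorting the frontier by f(n) = g(n) + h(n) makes it prefer S→B→G (cost 3); setting all H values to zero recovers uniform-cost search.

```python
import heapq

# Hypothetical weighted graph and an admissible, consistent heuristic.
GRAPH = {"S": [("A", 1), ("B", 2)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
H = {"S": 2, "A": 1, "B": 1, "G": 0}

def astar(start, goal):
    """A*: expand the frontier node with the smallest f(n) = g(n) + h(n).
    The best_g map is the 'bookkeeping': it lets us re-open a node if a
    cheaper path to it is found later."""
    frontier = [(H[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for nbr, cost in GRAPH[node]:
            g2 = g + cost
            heapq.heappush(frontier, (g2 + H[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")
```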

# AI Jeopardy

- this condition holds for a heuristic if it never overestimates the distance to the goal state
    - admissibility
- if this condition holds, then the values of the evaluation function along any path are non-decreasing
    - consistency (monotonicity)
    - without it (admissible only), graph search can still find optimal paths, but requires additional bookkeeping (re-opening nodes)

# properties of A*
- complete (unless infinitely many nodes have f(n) ≤ the optimal path cost)
- optimal if h(n) is admissible (tree search) and consistent (graph search, or do the bookkeeping)
- A* is optimally efficient **for a given heuristic**
    - A* expands only nodes necessary to prove it's found the shortest path, and no others (except for tie values of f(n))

# back to the ppt

- contour lines show values of the evaluation function
- subsuming contours represent higher costs, for admissible heuristics
- if h(n)=0, then circles around the start state (just path cost)
- if h(n)>0, then stretch toward the goal state
- with admissible heuristics, states that are never on the path are *pruned*

# more about heuristics

tile puzzles (ppt)

# examples

on board: show example random state, and goal state

# eight-puzzle as search problem

can we define this as a search problem?

 - states
 - initial state
 - action / transition
 - goal
 - path cost

Q2a. What is the size of the state space for the 8-puzzle?

Q2b. What is the branching factor of the 8-puzzle?

Q2c. Is the 8-puzzle state-space loopy?

Q3a. What is (a bound on) the size of the state space for the n x m Rubik Toroid?

Q3b. What is (a bound on) the branching factor of the n x m Rubik Toroid?

Q3c. Is the n x m Rubik Toroid state-space loopy?

# nodes in search tree vs state space

a state minimally represents game state

a node represents some of:
  - game state
  - path to state from initial state (often just predecessor)
  - depth
  - path cost (g)
  - estimated cost to goal state (h)
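The bullets above can be sketched as a small data structure (names are mine); note that several distinct nodes may wrap the same underlying state, which is exactly the tree-vs-state-space distinction:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A search-tree node: a state plus search bookkeeping."""
    state: tuple                      # minimal game state
    parent: Optional["Node"] = None   # predecessor (implicitly, the path)
    depth: int = 0
    g: float = 0.0                    # path cost so far
    h: float = 0.0                    # estimated cost to goal

    @property
    def f(self):
        return self.g + self.h

    def path(self):
        """Recover the path by walking parent pointers back to the root."""
        node, out = self, []
        while node:
            out.append(node.state)
            node = node.parent
        return list(reversed(out))
```

Storing only the predecessor (rather than the whole path) keeps nodes small; the full path is reconstructed on demand.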

# good heuristics for 8-puzzle?

goal: never over-estimate remaining distance

- number of misplaced tiles 
    - why admissible?
- sum of manhattan distances for each tile from current space to end space
    - why admissible?
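Both heuristics in a few lines, with states as 9-tuples and 0 for the blank (the 0..8 goal layout is my assumption; swap in the course's goal ordering as needed):

```python
# 8-puzzle state: tuple of 9 entries read row by row, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced(state, goal=GOAL):
    """h1: number of misplaced tiles (blank excluded).
    Admissible: every misplaced tile needs at least one move."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """h2: sum of Manhattan distances from each tile to its goal square.
    Admissible: each tile needs at least that many moves, even if it
    could slide through other tiles."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

Since every misplaced tile is at Manhattan distance ≥ 1, h2 ≥ h1 on every state: h2 dominates h1, so A* with h2 never expands more nodes.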

# how to generate heuristics

heuristics are often the exact answer to a *relaxed* version of the problem (a relaxation adds edges to the state-space graph, never removes them)

- e.g., if tiles could swap with any other tiles, misplaced tiles is exact
- e.g., if tiles could move through other tiles, manhattan is exact

# heuristics from subproblems

consider a subproblem, where we move only 0-1-2-3-4 into position and "don't care" about 5-6-7-8

on board:

  - random init
  - goal with * for 5-6-7-8

How many start states are there for this subproblem?

# What do heuristics get us

Heuristics reduce the number of states explored.

In essence, they reduce the *branching factor* b: super important, since many time (and BFS space) bounds grow as b^d, and even DFS space is a multiple of b.

IDS for 8-puzzle: b ~= 2.78
A* for 8-puzzle (manhattan): b ~= 1.25

2.78^10 ~= 27000
1.25^10 ~= 9
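Those b values are *effective branching factors*: the b* such that a uniform tree of the solution depth with branching b* would contain as many nodes as the search actually generated. A sketch of how such figures are computed (function name is mine), solving N + 1 = 1 + b* + b*^2 + … + b*^d by bisection:

```python
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b, given that a search
    generated n_nodes nodes finding a solution at the given depth."""
    def total(b):
        return sum(b ** i for i in range(depth + 1))
    lo, hi = 1.0, float(n_nodes)   # b* lies between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < n_nodes + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```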

# What about other heuristics?

h(n) =

- 0: uniform cost search
- < actual cost: optimal; the lower h is, the closer to UCS (more nodes expanded)
- == actual cost: optimal, and optimally efficient: only expands nodes along path to goal (and nodes of tied cost)
- greater than actual cost: can expand fewer nodes, but may find a suboptimal path (error bounded by the overestimate)
- h >> g: becomes GBFS

