# overview

- problem-solving as search
- uninformed search methods
- problem abstraction

**bold claim**: Many problems faced by intelligent agents, when properly formulated, can be solved by a single family of generic approaches. **Search**

# R+N

> ...an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.

Have you done any search today? (Movement through the world? Planning your workout? etc.)

# definitions are boring but essential so let's play jeopardy

- This process looks for sequences of actions that are solutions to some problem
  - search
- This state occurs at the start of a solution
  - initial state
- This is the set of states that are all reachable from the initial state
  - state space
- This state is the end result of a search process
  - goal state

# search basics

How do we formulate a search problem?

- initial state
- actions
- transition model
- goal test
- path cost

What is a solution?

- an action sequence (aka path) that leads from the initial state to a goal state
- quality is measured by path cost
- the optimal path has the lowest possible path cost

# example: campus navigation (see ppt)

how can we formulate this as a search problem? (ppt)

what has been abstracted away?

# assumptions of simple search abstraction

- static: the world does not change on its own, and our actions don’t change it.
- discrete: a finite number of individual states exist rather than a continuous space of options
- observable: states can be determined by observations
- deterministic: actions have certain (fixed) outcomes

# Q: a short history of sexism, classism, and racism

jealous husbands, servants and masters, missionaries and cannibals

> Three missionaries and three cannibals are on one side of a river, with a boat that can carry one or two people. How can they all cross the river, but never leave a group of missionaries outnumbered by a group of cannibals?

Q1. Formulate this problem. That is, what is the minimal amount of information (i.e. best abstraction) needed to represent a state?
Q2. What is the start state?
Q3. What is the goal state?
Q4. What are the actions/transition models that are possible given a state?
Q5. Enumerate the entire state space.

# more jeopardy

- This data structure is defined by the initial state and the successor function
  - search tree
- These nodes have no successors in the search tree
  - leaf nodes
- This operation adds nodes to a leaf node using the successor function
  - node expansion
- This general approach defines which states to expand in the search tree
  - search strategy

# uninformed search

- have no information about which nodes are on promising paths to a solution; aka: blind search
- Q: What would have to be true for our agent to need uninformed search?
  - no knowledge of goal location (can only test current location); or
  - no knowledge of current location or direction (e.g., no GPS, inertial navigation, or compass)

# how do you evaluate search strategies?

- completeness: does it always find a solution if one exists?
- optimality: does it find the best solution?
- time complexity
- space complexity

# uninformed search strategies

- BFS (var. uniform cost)
- DFS (var. depth-limited)
- Iterative deepening DFS

# BFS demo on board

- complete? yes (if b is finite)
- optimal? yes, if cost = 1 per step (not in general though)
- time? 1 + b + b^2 + b^3 + ... + b^d = O(b^d)
- space? O(b^d)

O(b^d) sucks, see ppt
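To make the BFS bookkeeping concrete, here is a minimal sketch, assuming a problem is given as an initial state, a goal test, and a successor function that returns (action, next_state) pairs. The toy `campus` graph is a made-up stand-in for the ppt example, not the actual map.

```python
from collections import deque

def bfs(initial_state, goal_test, successors):
    """Breadth-first search: expand the shallowest unexpanded node first.

    successors(state) returns an iterable of (action, next_state) pairs,
    i.e. the transition model. Returns a list of actions, or None if no
    solution is found.
    """
    frontier = deque([(initial_state, [])])   # FIFO queue of (state, path)
    explored = {initial_state}                # graph-search variant: skip repeats
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

# hypothetical toy "campus" graph, purely for illustration
campus = {
    "dorm":    [("walk", "quad")],
    "quad":    [("walk", "library"), ("walk", "gym")],
    "library": [("walk", "lab")],
    "gym":     [],
    "lab":     [],
}

if __name__ == "__main__":
    plan = bfs("dorm", lambda s: s == "lab", lambda s: campus[s])
    print(plan)   # ['walk', 'walk', 'walk'], three steps from dorm to lab
```

The FIFO frontier is what makes this breadth-first, and it is also where the O(b^d) space cost lives; swapping the queue for a stack turns the same skeleton into depth-first search.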
# DFS demo on board

- complete? yes (in finite-depth spaces)
- optimal? no
- time? O(b^m) (m is max node depth)
- space? O(bm)

# Iterative deepening demo on board

- complete? yes (if b is finite)
- optimal? yes (if costs are uniform)
- time? O(b^d) (d is the depth of the shallowest solution)
- space? O(bd)

(a minimal sketch of iterative deepening appears at the end of these notes)

# Q: on computable states

> Consider a state space where the start state is 1 and each state k has two successors: 2k and 2k+1

Q6. Draw the state space for 1..15
Q7. If the goal is 11, list the nodes in order for BFS
Q8. Ditto, for IDS.
Q9. Is search the optimal strategy? If not, briefly state the better algorithm.

# other items of interest

- Bi-directional search (on board)

# today's big ideas

- Our first AI abstraction: Search
- Search specifies: initial state, actions, transitions, goal test, and path cost
- Four ways to evaluate search: completeness, optimality, space complexity, and time complexity
- Different methods have different characteristics; choice depends upon problem
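# appendix: a sketch of iterative deepening

Since the iterative-deepening demo is on the board, here is a minimal sketch under the same assumptions as the BFS sketch above (a successor function returning (action, next_state) pairs); the function names and the `max_depth` cap are illustrative, not part of the demo.

```python
def depth_limited_dfs(state, goal_test, successors, limit, path=None):
    """Depth-limited DFS: recurse no deeper than `limit` edges from `state`."""
    path = [] if path is None else path
    if goal_test(state):
        return path
    if limit == 0:
        return None   # cutoff reached without finding a goal
    for action, nxt in successors(state):
        result = depth_limited_dfs(nxt, goal_test, successors,
                                   limit - 1, path + [action])
        if result is not None:
            return result
    return None

def iterative_deepening(initial_state, goal_test, successors, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until a solution appears.

    Shallow nodes get re-expanded on every pass, but only the current path is
    kept in memory, which is why the space cost is O(bd) instead of BFS's O(b^d).
    """
    for limit in range(max_depth + 1):
        result = depth_limited_dfs(initial_state, goal_test, successors, limit)
        if result is not None:
            return result
    return None
```

On the toy campus graph from the BFS sketch, `iterative_deepening("dorm", lambda s: s == "lab", lambda s: campus[s])` returns the same three-step plan as BFS: with uniform step costs the shallowest solution is also the optimal one.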