To: social@cs.umass.edu
Subject: Tea in 5 minutes 
Date: Thu, 15 Feb 2001 15:55:13 -0500
From: Marc Pickett I 

Dear Reinforcement Learner,

Suppose you receive a stimulus (tea email), perform an action (going
downstairs to tea), and receive a reward (tea and cookies).  Suppose,
in another trial, you receive the stimulus again, perform a different
action (not going downstairs to tea), and receive a different reward
(neither tea nor cookies).

After several trials, a proper reinforcement learning agent will
eventually attach a value to [receiving the tea email] that is
positively correlated with the reward of [receiving tea and cookies],
and the agent will further learn that the optimal action upon this
stimulus is going downstairs to tea.
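For the doubters, here is a sketch of the kind of update such an agent
might run (a minimal tabular value estimate; the actions, rewards, and
learning rate below are invented for illustration):

```python
ALPHA = 0.5  # learning rate (illustrative value)

# Value estimates for each action upon receiving the tea email
q = {"go downstairs": 0.0, "stay at desk": 0.0}

def update(action, reward):
    # Incremental update: nudge the estimate toward the observed reward.
    q[action] += ALPHA * (reward - q[action])

# Several trials: going downstairs yields tea and cookies (+1),
# staying at your desk yields neither (0).
for _ in range(10):
    update("go downstairs", 1.0)
    update("stay at desk", 0.0)

best = max(q, key=q.get)
print(best)  # after enough trials: go downstairs
```

After a handful of trials the estimate for going downstairs approaches
the full tea-and-cookies reward, and the greedy policy does the rest.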

At least that's how some of the people in Andy Barto's Adaptive
Networks Lab (which is sponsoring this week's tea) see it.

By now, you've been conditioned.  So come to tea.  (The sooner, the
better, since the reward is multiplied by a discount factor at every
time step.)
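In case the discounting argument needs numbers (a one-line sketch; the
discount factor of 0.9 is invented for illustration):

```python
GAMMA = 0.9  # discount factor per time step (illustrative value)

def discounted_reward(reward, steps_delayed):
    # Each time step of delay multiplies the reward by GAMMA.
    return reward * GAMMA ** steps_delayed

print(discounted_reward(1.0, 0))  # arrive immediately: 1.0
print(discounted_reward(1.0, 5))  # dawdle five steps: 0.59049
```

The dawdler's cookies are worth barely more than half the prompt
arriver's. You have been warned.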

    We are, 
           ``The Tea Totallers''
    Pippin Wolfe   and*   Marc Pickett I

____
* Author order is strictly alphabetical, but using the Futhark this
time, where W comes before P.  (That would make it "futharkical"
order?)
---------------------------------------------------
---------------------------------------------------
CONTRIBUTIONS: Mail to social@cs.umass.edu
UNSUBSCRIBE: Send "unsubscribe social"  to majordomo@cs.umass.edu
PROBLEMS: Report to owner-social@cs.umass.edu
TO SUBSCRIBE: Send "subscribe social" to majordomo@cs.umass.edu