MLFL Wiki
Getting More for Less: Optimized Crowdsourcing with Dynamic Tasks and Goals

Abstract: In crowdsourcing systems, the interests of contributing participants and system designers are often at odds. Participants seek to perform easy tasks, which offer them instant gratification, while system designers want them to complete more difficult tasks, which bring higher value to the crowdsourced application. Here we present techniques that optimize the crowdsourcing process for both parties by jointly maximizing worker longevity in the system and the true value the system derives from worker participation. We first present models that predict the “survival probability” of a worker at any given moment, that is, the probability that she will proceed to the next task offered by the system. We then leverage this survival model to dynamically decide which task to assign and which motivating goals to present to the worker. This allows us to jointly optimize for the short term (getting a difficult task done) and the long term (keeping workers engaged for longer periods of time). We show that dynamically assigning tasks significantly increases the value of a crowdsourcing system. In an extensive empirical evaluation, we observed that our task allocation strategy increases the amount of information collected by up to 117.8%. We also explored the utility of motivating workers with goals. We demonstrate that setting specific, static goals can be highly detrimental to long-term worker participation, as the completion of a goal (e.g., earning a badge) is also a common drop-off point for many workers. We show that setting goals dynamically, in conjunction with judicious allocation of tasks, increases the amount of information collected by the crowdsourcing system by up to 249% compared to existing baselines that use fixed objectives.

Bio: Ari is a computer science M.S./Ph.D. student at UMass Amherst, where he is advised by Dr. Andrew McCallum and is a member of the Information Extraction and Synthesis Laboratory. Ari works at the intersection of machine learning and crowdsourcing and is primarily interested in methods for constructing and maintaining domain-specific knowledge bases. In his research, Ari often works on different varieties of coreference resolution and data integration, with and without humans in the loop. Ari completed his B.S. in computer science at Tufts University in 2010. Before enrolling in graduate school, Ari worked for two years at MIT Lincoln Laboratory on decision-support technology for United States intelligence analysts.
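The core trade-off described in the abstract — weighing a task's immediate value against the worker's predicted survival probability — can be illustrated with a small sketch. This is not the authors' implementation; the function names, the greedy one-step policy, and the table-lookup survival model are all illustrative assumptions.

```python
def pick_task(tasks, survival_prob, horizon_value):
    """Greedy dynamic task assignment (illustrative sketch).

    tasks: list of (value, difficulty) pairs on offer.
    survival_prob: function mapping a difficulty label to the predicted
        probability that the worker proceeds to the next task after it.
    horizon_value: estimated value of retaining the worker, i.e. the
        expected value of the tasks she would complete in the future.
    Returns the task maximizing immediate value plus survival-weighted
    future value.
    """
    def expected_value(task):
        value, difficulty = task
        return value + survival_prob(difficulty) * horizon_value

    return max(tasks, key=expected_value)

# Toy example: a hard task worth 5 retains the worker 40% of the time;
# an easy task worth 1 retains her 90% of the time. If future
# engagement is worth 10, the easy task wins (1 + 0.9*10 = 10 beats
# 5 + 0.4*10 = 9); with a low horizon_value the hard task would win.
tasks = [(5.0, "hard"), (1.0, "easy")]
survival = {"hard": 0.4, "easy": 0.9}.get
print(pick_task(tasks, survival, horizon_value=10.0))  # (1.0, 'easy')
```

The sketch makes the paper's qualitative point concrete: when the survival model says a worker is likely to leave, the system can protect long-term value by serving easier tasks, and push difficult tasks when retention is safe.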