People can often perform tasks such as natural language understanding, vision, and motion planning with greater accuracy and speed than the best algorithms available to us. Computers excel at repetitive, mechanical tasks, but many AI-style tasks remain elusive. By combining humans and computers, crowdsourcing has the potential to create a new class of applications that combine the best qualities of both.
However, unlike traditional computer programs, working with people introduces a number of complications:
• People don’t work for free. How much should you pay them?
• Compared to programs, people are slow. How should you write your program to minimize latency?
• People make mistakes. Between spammers and well-intentioned but mistaken workers, how do you know that your answers are correct?
We developed the AutoMan system with these concerns in mind. AutoMan abstracts away the issues of payment, scheduling, and quality control so that programmers can focus on the purpose of their applications. Formerly difficult crowdsourcing tasks become simple, declarative programs.
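To give a taste, here is a minimal, self-contained sketch of what a declarative question might look like. The `radio` and `choice` helpers, the question text, and the options are all illustrative stand-ins, stubbed out locally so the sketch compiles without a crowdsourcing backend; they are not the library's actual API.

```scala
// Illustrative sketch only: these helpers are local stubs standing in for
// a crowdsourcing DSL, so the example is runnable without any backend.
object Sketch {
  case class Choice(id: String, text: String)
  case class RadioQuestion(budget: Double, text: String, options: List[Choice])

  def choice(id: String, text: String): Choice = Choice(id, text)
  def radio(budget: Double, text: String, options: List[Choice]): RadioQuestion =
    RadioQuestion(budget, text, options)

  // A declarative question definition: what to ask, what the choices are,
  // and how much we are willing to spend. Everything else (posting, wages,
  // quality control) would be the runtime's job.
  def whichOne(): RadioQuestion = radio(
    budget  = 5.00,
    text    = "Which one of these does not belong?",
    options = List(
      choice("a", "Option A"),
      choice("b", "Option B"),
      choice("c", "Option C"),
      choice("d", "Option D")
    )
  )
}
```

The point is that the programmer states the question and a budget; scheduling, payment, and quality control stay out of the program text.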
AutoMan allows programmers to combine off-the-shelf code written for the Java Virtual Machine with quality-controlled, high-performance human subroutines. We have focused our research primarily on the Mechanical Turk platform, but the system is designed to be platform-agnostic, requiring implementers only to provide a backend driver.
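One way to picture such a backend driver is as a small interface that each crowdsourcing platform implements. The trait and method names below are invented for illustration and are not AutoMan's actual driver API; the in-memory implementation shows how program logic could be exercised offline.

```scala
// Hypothetical sketch of a pluggable backend-driver interface.
// Names are illustrative, not the real API.
trait CrowdBackend {
  def post(question: String, options: List[String], rewardCents: Int): String // returns a task ID
  def answers(taskId: String): List[String]
  def cancel(taskId: String): Unit
}

// An in-memory stand-in: accepts tasks but has no humans attached,
// useful for testing the surrounding program logic offline.
class FakeBackend extends CrowdBackend {
  private var tasks = Map.empty[String, String]
  private var nextId = 0
  def post(question: String, options: List[String], rewardCents: Int): String = {
    nextId += 1
    val id = s"task-$nextId"
    tasks += id -> question
    id
  }
  def answers(taskId: String): List[String] = Nil // no workers in this stub
  def cancel(taskId: String): Unit = { tasks -= taskId }
}
```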
AutoMan delivers question answers with a statistical confidence guarantee. There is often a direct trade-off between the quality the programmer requires and the cost of a task. Task wages are determined dynamically, freeing the programmer from having to determine a fair wage.
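To give a flavor of how a confidence guarantee can work, here is a back-of-the-envelope calculation, a deliberate simplification and not the system's actual statistical test: if n workers all pick the same one of c options, the probability that purely random guessers would agree like that by chance is c · (1/c)^n. Collecting votes until that probability drops below 1 − confidence bounds the chance of accepting a spurious answer.

```scala
// Simplified model (an assumption, not AutoMan's real algorithm):
// workers who know nothing pick uniformly at random among c options.
object Confidence {
  // Probability that n random guessers unanimously agree on some option.
  def chanceAgreement(options: Int, votes: Int): Double =
    options * math.pow(1.0 / options, votes)

  // Smallest number of unanimous votes that pushes chance agreement
  // below 1 - confidence.
  def votesNeeded(options: Int, confidence: Double): Int =
    Iterator.from(1)
      .find(n => chanceAgreement(options, n) < 1.0 - confidence)
      .get
}
```

For a four-option radio question at 95% confidence, this toy model asks for four unanimous votes, since 4 · (1/4)^4 ≈ 0.016 < 0.05; a higher confidence target or fewer options means more votes, which is exactly the quality-versus-cost trade-off mentioned above.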
Because these concerns are handled in the language's runtime, programs that would otherwise rely on ad hoc quality control, or on a trusted supervisor periodically watching over them, are now completely automatic. Freed from the need for constant supervision, programmers can integrate human judgment into large-scale, real-world applications.
Using AutoMan, we’ve explored a variety of tasks, ranging from image recognition and categorization to complex, real-world tasks like automatic license plate identification. We’re continuing to explore what is possible with AutoMan while enhancing the simplicity, reliability, and performance of the system.
AutoMan is available on our GitHub page. Give it a try and tell us what you think!
For more information, see our OOPSLA 2012 paper: AutoMan: A Platform for Integrating Human-Based and Digital Computation.