The Noisy Channel

 

Holding Back the Rise of the Machines?

February 20th, 2010 · 6 Comments · General

Amazon’s Mechanical Turk is one of my favorite examples of leveraging the internet for innovation:

Amazon Mechanical Turk is a marketplace for work that requires human intelligence. The Mechanical Turk web service enables companies to programmatically access this marketplace and a diverse, on-demand workforce. Developers can leverage this service to build human intelligence directly into their applications.
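To make "programmatically access this marketplace" concrete, here is a sketch of defining a HIT (human intelligence task) in code. This uses Python's boto3 MTurk client, which postdates this post; all titles, values, and the question XML are made-up placeholders for illustration:

```python
# Sketch: parameters for creating a HIT programmatically.
# The boto3 MTurk client is the modern interface (this post predates it);
# every value below is illustrative, not a real task.
hit_params = {
    "Title": "Categorize a product image",
    "Description": "Pick the best category for the image shown.",
    "Reward": "0.05",                 # payment per assignment, in USD
    "MaxAssignments": 3,              # redundancy: 3 workers per task
    "LifetimeInSeconds": 24 * 60 * 60,
    "AssignmentDurationInSeconds": 5 * 60,
    "Question": "<QuestionForm>...</QuestionForm>",  # QuestionForm/ExternalQuestion XML
}

# With AWS credentials configured, this would submit the HIT:
# import boto3
# client = boto3.client("mturk", region_name="us-east-1")
# hit = client.create_hit(**hit_params)
```

The `MaxAssignments` parameter is the hook for the redundancy-based quality assurance discussed below.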

But, in my view, Mechanical Turk does not take its vision far enough. In the conditions of use, Amazon makes it clear that only humans need apply: “you will not use robots, scripts or other automated methods to complete the Services”.

On one hand, I can understand that Amazon’s vision for Mechanical Turk, like Luis Von Ahn‘s “games with a purpose“, explicitly aims to apply human intelligence to tasks where automated methods seem inadequate. On the other hand, what are automated methods but encapsulations of human methods? It seems odd for Amazon to be so particular about the human / machine distinction, especially given that the terms of service impose practically no other constraints on execution (beyond the obvious legal ones). Moreover, Mechanical Turk offers developers a variety of ways to assure quality (redundancy, qualification tests, etc.).

Granted, there are some important concerns that would have to be addressed if Amazon were to relax the “humans-only” constraint. For example, a developer today can reasonably assume that two different human “Providers” execute tasks independently. With automated participation, there’s a far greater risk of dependence–e.g., from multiple programmers applying the same algorithms. This possibility would have to be taken into account in quality assurance.
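Redundancy-based quality assurance amounts to collecting several independent answers per task and keeping the consensus. A minimal sketch (the `majority_vote` helper is hypothetical, not part of any MTurk API):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer and the fraction of providers
    who agreed with it. Hypothetical helper, for illustration only."""
    if not answers:
        raise ValueError("no answers to aggregate")
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(answers)

# Three redundant assignments for one task:
# majority_vote(["cat", "cat", "dog"]) -> ("cat", 0.666...)
```

Note that this consensus measure assumes independent errors: if several automated providers run the same algorithm, their agreement overstates confidence, which is exactly the dependence risk just described.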

Still, the benefits of allowing automated participants would seem to far outweigh the risks. At pennies a task, Mechanical Turk has limited appeal to the human labor force–indeed, research by Panos Ipeirotis suggests that Amazon’s revenue from the service may be so low that it doesn’t even cover the costs of a single dedicated developer!

In contrast, there’s evidence that programmers would take an interest in participation, were it an option. Marketplaces like TopCoder and competitions like the Netflix Prize suggest that computer scientists take an interest in proving their mettle in many of the kinds of tasks for which organizations already use Mechanical Turk.

So, why not give algorithms a chance? Surely we’re not that afraid of Skynet or the “technological singularity“. Let’s give machines–and their programmers–a chance to show off the best of both worlds!

6 responses so far ↓

  • 1 Jim Moran // Feb 20, 2010 at 5:17 pm

    I am a big fan of the MTurk platform and the power of being able to hire 1,000 people for 10 minutes of work. And I agree with the benefit of Turkers enabling automated solutions.

    I imagine Amazon’s reasoning is to prevent fraud (e.g., someone builds a script that automatically submits thousands of bad results, hoping the HITs will be approved). Certainly requestors could build ways not to approve such results; however, I imagine Amazon is requestor-focused, especially when it comes to easing their quality concerns. Some requestors I know approve all HITs as policy and simply avoid fraudulent workers in future tasks.

    Humans also occasionally submit bad results, but I think the overall level of bad results may increase if automated submissions were facilitated.

    There should definitely be MTurk like platforms that are optimized for developers (ideally also on the Amazon system), for those interested in finding out ways to automate tasks. That way requestors could pick and choose.

  • 2 davidc // Feb 20, 2010 at 6:29 pm

    I find the mechanical turk fascinating as well. But I am cynical about how it is actually being used. About 40% of the tasks seem to be to create spam of some kind. Another 20% seem related to porn.

    I think you have a good idea though. How about an online prize site? You would need measurable tests on each task and give the reward to the best algorithm. Robin Hanson has a good paper on the value of prizes for science here:
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.67.9839&rep=rep1&type=pdf

  • 3 Daniel Tunkelang // Feb 22, 2010 at 10:01 am

    Jim, I understand the concern with fraud, though they could do something as simple as only allowing automated submissions as an opt-in feature for Developers (i.e., task providers). Then, by default, nothing changes.

    I’m not convinced that automation would decrease quality on a per-submission basis, but it certainly introduces thorny dependency issues. But I think that’s an acceptable price to pay in order to potentially expand the marketplace by several orders of magnitude.

    Davidc, perhaps I’m naive to take Mechanical Turk at face value. Though even spam creation is ripe for automation. Anyway, I do like Hanson’s attitude (this link worked better for me). But my goal here is the advancement of commerce rather than science. That the latter would benefit is merely a benign side effect. :-)

  • 4 renaissance chambara | Ged Carroll - Links of the day // Mar 1, 2010 at 8:02 pm

    [...] Holding Back the Rise of the Machines? [...]

  • 5 Weekly Search & Social News: 03/02/2010 | Search Engine Journal // Mar 2, 2010 at 10:38 am

    [...] Holding Back the Rise of the Machines? – Noisy Channel [...]

  • 6 Weekly Search & Social News: 03/02/2010 | YouAreLookingFor Info // Mar 3, 2010 at 12:33 am

    [...] Holding Back the Rise of the Machines? – Noisy Channel [...]
