Matt Lease: Recent Adventures in Crowdsourcing and Human Computation

Today we (specifically, my colleague Daria Sorokina) had the pleasure of hosting UT-Austin professor Matt Lease at LinkedIn to give a talk on his “Recent Adventures in Crowdsourcing and Human Computation”. It was a great talk, and the slides above are full of references to research that he and his colleagues have done in this area. A great resource for people interested in the theory and practice of crowdsourcing!

If you are interested in learning more about crowdsourcing, then sign up for an upcoming LinkedIn tech talk by NYU professor Panos Ipeirotis on “Crowdsourcing: Achieving Data Quality with Imperfect Humans”.

And if you’re already an expert, then perhaps you’d like to work on crowdsourcing at LinkedIn!

By Daniel Tunkelang

High-Class Consultant.

2 replies on “Matt Lease: Recent Adventures in Crowdsourcing and Human Computation”

This presentation was pretty interesting. I’ve not looked at crowdsourcing before. My comments below are related to one mentioned application: using crowdsourcing to gather information on website usability.

In the limited time I had to review the slides, I didn’t see any statistics quantifying the benefits actually derived from using crowdsourcing to evaluate (and improve) the usability of websites. That is, after receiving the results, did substantive improvements to a website’s UX actually follow? (Is it proving helpful?)

I mention this because if crowdsourcers are simply rating/evaluating the ‘what is’ but are not given the tools to suggest ‘what should be’, then it seems that only incremental improvements can be achieved.

I’m guessing that for website UX evaluation, the real goals/motivation for using crowdsourcing are: 1) QA, 2) identifying the areas (a heat map) that need attention from internal UX personnel, and 3) obtaining external pre-validation of new UX changes before they are released to production.

Also, for those who lead website usability improvement efforts at their employers: I still feel that internal UX specialists should be able to instinctively detect areas where the UI/UX needs improvement (by actually using it themselves while assuming different user perspectives), efficiently fix the problems without requiring crowdsourced input, and innovate new UX concepts as needed. (That is, if a company has the right UX talent, then crowdsourcing should not really be necessary for website UX improvement, except perhaps for external pre-production validation of new changes.)

This brings to mind a recent case in which a popular social networking site released a UX design where post titles were composited on top of user-supplied graphics. Because the text foreground color often lacked sufficient contrast against the varying backgrounds, the titles were frequently difficult to read. In this case, the internal UX team should not have released the new design, and pre-validation using crowdsourcing would have helped avoid this release.

This last paragraph also brings to mind the very effective release process Sun Microsystems employed in the ’80s and ’90s to ‘get it right’ with respect to its OS and other products: it essentially crowdsourced the review/dogfooding (I hate this word) of its products to internal users at successively larger scopes within the company.

