The second day of the SIGIR 2010 conference kicked off with a keynote by TREC pioneer Donna Harman entitled “Is the Cranfield Paradigm Outdated?”. If you are at all familiar with Donna’s work on TREC, you’ll hardly be surprised that her answer was a resounding “NO!”.
But of course she did a lot more than defend Cranfield. She offered a comprehensive and fascinating history of the Cranfield paradigm, starting with the Cranfield 1 experiments in the late 1950s, which evaluated manual indexing systems.
Most importantly, she defined the Cranfield paradigm as defining a metric that reflects a real user model and building the test collection before the experiments to prevent human bias and enable reusability. As she noted, this definition says nothing about only returning a ranked list of ten blue links, which is what most people (myself included) associate with the Cranfield model. Indeed, she urged us to think outside this mindset.
I loved the presentation and found the history enlightening (though Stephen Robertson corrected a few minor details). Still, I wondered whether she was defining the Cranfield paradigm so broadly as to co-opt all of its critics. To me, the clear dividing line between Cranfield and non-Cranfield is whether user effects are something to avoid or something to embrace. I perceive the success of Cranfield as coming in large part from its reduction of user effects, but I think that much of the HCIR community sees user effects as precisely what we need to be evaluating for information seeking support systems.
In any case, it was a great keynote, and Donna promised me she will make the slides available. Of course I’ll post them here. In the meantime, check out Jeff Dalton’s notes on his great blog and the tweets at #sigir2010.