The Noisy Channel


Nick Belkin at ECIR ’08

April 6th, 2008 · 11 Comments · General

Last week, I had the pleasure of attending the 30th European Conference on Information Retrieval, chaired by Iadh Ounis at the University of Glasgow. The conference was outstanding in several respects, not least of which was a keynote address by Nick Belkin, one of the world’s leading researchers on interactive information retrieval.

Nick’s keynote, entitled “Some(what) Grand Challenges for Information Retrieval”, was a full frontal attack on the Cranfield evaluation paradigm that has dominated IR research for the past half century. I am hoping to see his keynote published and posted online, but in the meantime here is a choice excerpt:

in accepting the [Gerard Salton] award at the 1997 SIGIR meeting, Tefko Saracevic stressed the significance of integrating research in information seeking behavior with research in IR system models and algorithms, saying: “if we consider that unlike art IR is not there for its own sake, that is, IR systems are researched and built to be used, then IR is far, far more than a branch of computer science, concerned primarily with issues of algorithms, computers, and computing.”

Nevertheless, we can still see the dominance of the TREC (i.e. Cranfield) evaluation paradigm in most IR research, the inability of this paradigm to accommodate study of people in interaction with information systems (cf. the death of the TREC Interactive Track), and a dearth of research which integrates study of users’ goals, tasks and behaviors with research on models and methods which respond to results of such studies and supports those goals, tasks and behaviors.

This situation is especially striking for several reasons. First, it is clearly the case that IR as practiced is inherently interactive; secondly, it is clearly the case that the new models and associated representation and ranking techniques lead to only incremental (if that) improvement in performance over previous models and techniques, which is generally not statistically significant; and thirdly, that such improvement, as determined in TREC-style evaluation, rarely, if ever, leads to improved performance by human searchers in interactive IR systems.

Nick has long been critical of the IR community’s neglect of users and interaction. But this keynote was significant for two reasons. First, the ECIR program committee’s decision to invite a keynote speaker from the information science community acknowledges the need for collaboration between these two communities. Second, Nick reciprocated this overture by calling for interdisciplinary efforts to bridge the gap between the formal study of information retrieval and the practical understanding of information behavior. As an avid proponent of HCIR, I am heartily encouraged by steps like these.

11 responses so far ↓

  • 1 Jon Elsas // Apr 9, 2008 at 7:18 am

    Daniel — thanks for posting this. Sounds like a fascinating talk.

  • 2 FD // Apr 9, 2008 at 8:10 am

    From the perspective of a relatively green IR researcher, the IR community started as a combination of computer science and information/library science researchers. Nick’s work—past and present—and the work of information and library scientists is extremely relevant to and, in my opinion, overlooked by the computer science IR community. I’d give my left toe for SIGIR to drop five papers which claim marginal DCG improvement—or better yet every “computational advertising” paper—for an equal number of information science papers.

  • 3 Daniel Tunkelang // Apr 10, 2008 at 4:35 am

    In fairness to the SIGIR community, the divergence in methodology between the information retrieval and information science communities has made it very hard for the two to collaborate. IR researchers want repeatable experiments, while information and library scientists emphasize user studies that are inherently not repeatable. It is, as Nick said, a grand challenge.

  • 4 Jon Elsas // Apr 10, 2008 at 6:37 am

    Daniel — I think the divergence in the two fields is quite a bit more complicated than simply a focus on repeatable experiments vs. user studies. There’s baggage included with IS’s close ties to LS, there’s lots of appeal for CS students to manipulate formulae to eke out incremental improvements, the nuts-and-bolts software engineering expertise doesn’t really exist in many IS departments, and the list goes on. Even though I came through an IS department on my way to CS, I haven’t completely wrapped my head around all the reasons *why* this divide exists.

    It is great to see venues like SIGIR and ECIR really elevating the role and visibility of good IS research in the IR community — the most recent best papers at both conferences are perfect examples.

    (and, as a side note, IMO if the conclusions of your user study aren’t repeatable, then you’re missing something in your study design or analysis… but that’s probably another discussion)

  • 5 Daniel Tunkelang // Apr 11, 2008 at 12:16 pm

    Jon, you’re right–I am guilty of oversimplification. I think the crux of the problem, which is hardly unique to IR, is that it’s easier–at least in the academic world–to propose solutions that incrementally improve on a well-accepted problem statement than it is to propose changing the problem statement. This conservative attitude has some merit: it certainly filters out a lot of cranks. But it also discourages radical innovation.

  • 6 fd // Apr 12, 2008 at 12:21 pm

    Accepting nominations for SIGIR “Unfiltered Crank” award.

  • 7 Daniel Tunkelang // May 30, 2008 at 4:36 am

    Thanks to Jeff for posting a link to Nick’s talk, recently published in the SIGIR Forum.

  • 8 SIGIR 2009: Day 2, Interactive Search Session | The Noisy Channel // Jul 26, 2009 at 9:13 pm

    […] search sessions were the most interesting, and this one was no exception. Ironically, even though many of us (myself included) feel that interaction is marginalized within the SIGIR conference and even the […]

  • 9 Norbert Fuhr’s Probability Ranking Principle for Interactive Information Retrieval | The Noisy Channel // Aug 10, 2009 at 9:15 am

    […] even heard of this paper, despite Nick Belkin citing it in the ECIR 2008 keynote that inspired my first blog post! In my defense, the citation is a single sentence that offers more of a tease than an explanation […]

  • 10 Saracevic on Relevance and Interaction | The Noisy Channel // Aug 11, 2009 at 1:43 pm

    […] but the unmet challenge, as Ellen Voorhees makes clear, is evaluation. We need to address Nick Belkin’s grand challenge and establish a paradigm suitable for evaluation of interactive IR […]

  • 11 Human-Computer Information Retrieval in Layman’s Terms | The Noisy Channel // Sep 27, 2009 at 10:12 pm

    […] the occasional ideas I am fortunate enough to conceive–are worthy of broader consideration. I started blogging in order to bring greater visibility to HCIR–to convince people that the choice between human […]
