Announcing HCIR 2011!

As regular readers know, I’ve been co-organizing annual workshops on Human-Computer Interaction and Information Retrieval since creating the first HCIR workshop in 2007. These have been a huge success, not only bridging the gap between IR and HCI, but also bringing together researchers and practitioners to address concerns shared by both communities. Past keynote speakers have included such information science luminaries as Susan Dumais, Ben Shneiderman, and Dan Russell.

Every workshop has improved on the previous year’s, and HCIR 2011, which will take place on Thursday, October 20, will be no exception.

Our venue will be Google’s headquarters in Mountain View, California. We could hardly imagine a more appropriate venue: Google has done more than any other company to contribute to everyday information access. Google has been extremely generous as a host and sponsor (other sponsors include Endeca and Microsoft Research), and its location in the heart of Silicon Valley is ideal for attracting researchers and practitioners building the future of HCIR.

Our keynote speaker will be Gary Marchionini, Dean of the School of Information and Library Science at the University of North Carolina at Chapel Hill. Gary coined the phrase “human-computer information retrieval” in a lecture entitled “Toward Human-Computer Information Retrieval,” in which he asserted that “HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy.” We are honored to have Gary deliver this year’s keynote.

But of course the main attraction is the contribution of participants. This year we invite three types of papers: position papers, research papers and challenge reports. Possible topics for discussion and presentation at the workshop include, but are not limited to:

  • Novel interaction techniques for information retrieval.
  • Modeling and evaluation of interactive information retrieval.
  • Exploratory search and information discovery.
  • Information visualization and visual analytics.
  • Applications of HCI techniques to information retrieval needs in specific domains.
  • Ethnography and user studies relevant to information retrieval and access.
  • Scale and efficiency considerations for interactive information retrieval systems.
  • Relevance feedback and active learning approaches for information retrieval.

Demonstrations of systems and prototypes are particularly welcome.

Building on the success of last year’s HCIR Challenge, which addressed historical exploration of a news archive, this year’s HCIR Challenge will focus on the problem of information availability. The corpus for the Challenge will be the CiteSeer digital library of scientific literature.

For more information about the workshop, including how to submit papers or participate in the challenge, please visit the HCIR 2011 website.

Here are the key dates for submitting position and research papers:

  • Submission deadline (position and research papers): July 31
  • Notification of acceptance decision: September 8
  • Presentations and poster session at workshop: October 20

Key dates for Challenge participants:

  • Deadline to request access to the corpus (contact me): June 19
  • Freeze system and submit brief description: September 25
  • Submit videos or screenshots demonstrating systems on example tasks: October 9
  • Live demonstrations at workshop: October 20

I’m looking forward to this year’s submissions, and to a great workshop in October. I hope to see many of you there!

By Daniel Tunkelang

High-Class Consultant.

8 replies on “Announcing HCIR 2011!”

At the risk of sounding a little curmudgeonly, don’t you see the irony in hosting HCIR at Google?

Sure, it’s true as you say: “Google has done more than any other company to contribute to everyday information access.”

But look again at Marchionini’s definition of HCIR (which also struck me the first time I read it; I wholeheartedly agree):

HCIR aims to empower people to explore large-scale information bases but demands that people also take responsibility for this control by expending cognitive and physical energy.

See in particular that last bit: people taking responsibility and expending more cognitive energy. The HCI can’t just do it for them. It has to be designed to let users express that effort themselves.

I’m fully on board with that goal. But I can’t think of a company more against that goal than Google. Most of what Google does is algorithms and interfaces geared toward reducing the need, or even the ability, for users to improve their information seeking by taking responsibility and expending cognitive energy.

For example, “Google Suggest” as an HCI interaction mechanism quickly fills in the most popular, successful queries, steering you away from your own unique expressions. And Google Universal Search takes away your ability (by default) to express what type of information you are looking for, e.g. a video, a book, etc. Instead, it just blends its best guess of result type into a single universal interface.

Those are all very nice tools, don’t get me wrong. But they hardly seem in line with Marchionini’s vision, in that they’re all designed to lower the cognitive effort of the user rather than give the user an interface in which to more powerfully express his or her cognitive information-seeking efforts. I’ve been waiting for over a decade now, for example, for explicit relevance feedback: not an explicit +1, which doesn’t affect my current information-seeking task and therefore is not a true HCI expression of my cognitive efforts to find information, but real relevance feedback.
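To make the contrast concrete, classic explicit relevance feedback in the Rocchio sense folds the documents you mark relevant (or not) back into the current query, so your judgments immediately reshape the result list. A minimal sketch, assuming simple bag-of-words term-weight vectors (everything here is illustrative, not any particular engine’s API):

    from collections import defaultdict

    def rocchio(query_vec, relevant_docs, nonrelevant_docs,
                alpha=1.0, beta=0.75, gamma=0.15):
        """Return a reweighted query vector that moves toward documents the
        user marked relevant and away from those marked non-relevant."""
        new_query = defaultdict(float)
        for term, weight in query_vec.items():
            new_query[term] += alpha * weight
        for doc in relevant_docs:
            for term, weight in doc.items():
                new_query[term] += beta * weight / max(len(relevant_docs), 1)
        for doc in nonrelevant_docs:
            for term, weight in doc.items():
                new_query[term] -= gamma * weight / max(len(nonrelevant_docs), 1)
        # Negative weights are usually dropped before re-running the search.
        return {t: w for t, w in new_query.items() if w > 0}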

[/usualrant]

I bounced over to the HCIR website and became intrigued by the “IP&M Special Issue”. I went to the IP&M site and tried to figure out how to get a subscription… they don’t seem terribly keen to increase readership. HCIR fail, but I’m going to buy it anyway 🙂

Wordle word cloud FTW? Telltale grey line down the right – I’ve created many a screenshot with the same watermark.

Jeremy: you would disappoint me if you weren’t at least a little bit curmudgeonly. 🙂 I’m certainly aware of the irony — I’ve done my share of positioning HCIR relative to the expectations set by Google. That makes me all the more proud of enlisting Google as a sponsor last year (not to mention having Dan Russell as a keynote) and as a host this year. Google may have differences with the HCIR vision, but it’s certainly taking that vision seriously.

molten_tofu: yup, it’s a Wordle. I thought of editing it, but I decided to keep it as is — I borrowed it from Tony Russell-Rose. As for IP&M, I believe that Elsevier sells individual articles and subscriptions through ScienceDirect. You can always ask them.

Hey, thanks! I’m now a new subscriber to Tony’s blog, and I’ll be sure to check out ScienceDirect for that subscription.

Cheers,
mt

The challenge task sounds pretty fun! At first I was thinking of a clustering approach, but I suspect you might have better luck with random walks of the citation graph. On the example task, something like taking the search results for [Latent Semantic Indexing Deerwester], walking the papers cited by those results and the papers that cite them, then filtering for 1988 or before. After all, there’s no real need to do the relevance matching on topic when the authors of the papers have probably already done that for you in their citations. You’d probably need a bit of tuning to make sure you’re sticking to documents that aren’t cited by everyone (everyone cites the 1983 Information Retrieval textbook, for example, so Deerwester citing it doesn’t indicate anything particularly useful), but I suspect that could get pretty good results pretty easily.

I wonder what kind of UI you’d want on top to make it easy for people to walk through the documents and find relevant ones. Beyond just finding the documents, you really need to highlight relevant snippets from each paper or otherwise help focus attention, but it’s not trivial to know what a relevant snippet is given that these papers don’t actually use LSI or the terms from its definition. That part could be a bit tougher. It seems like you could easily go down a rat’s nest of work only to find that simply surfacing the abstract and conclusion helps people filter better than all the effort you put into highlighting snippets deeper in the paper. Well, anyway, sounds like a fun challenge!
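A rough sketch of that neighbor-expansion idea, just to make it concrete. The corpus interface here (search, cites, cited_by, year, citation_count) is entirely hypothetical rather than any real CiteSeer API, and a proper random walk would iterate and weight hops instead of stopping at one:

    from collections import Counter

    def expand_candidates(corpus, query, max_year=1988, max_global_citations=500):
        """Seed with a keyword search, walk one hop along the citation graph in
        both directions, then keep early, not-ubiquitously-cited papers ranked
        by how many seed papers they connect to."""
        seeds = corpus.search(query)          # e.g. "Latent Semantic Indexing Deerwester"
        votes = Counter()
        for paper in seeds:
            for neighbor in corpus.cites(paper) + corpus.cited_by(paper):
                votes[neighbor] += 1          # papers reached from many seeds score higher
        ranked = []
        for paper, score in votes.most_common():
            if corpus.year(paper) > max_year:
                continue                      # the example task asks for 1988 or before
            if corpus.citation_count(paper) > max_global_citations:
                continue                      # skip textbooks that everyone cites
            ranked.append((paper, score))
        return ranked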

Absolutely, you have done more than your share of HCIR vs Google, in capacities both inside and out.

I just wonder sometimes what Google (the personified entity) thinks about all this stuff.

Hi, have the authors already been notified regarding the acceptance of their papers? Our team has received no response.

Comments are closed.