The Noisy Channel


HCIR 2010: Bigger and Better than Ever!

August 27th, 2010 · 6 Comments · General

Last Sunday was HCIR 2010, the Fourth Annual Workshop on Human-Computer Interaction and Information Retrieval, held at Rutgers University in New Brunswick, collocated with the Information Interaction in Context Symposium (IIiX 2010).

With 70 registered attendees, it was the biggest HCIR workshop we have held. Rutgers was a gracious host, providing space not only for the all-day workshop but also for a welcome reception the night before.

And, based on an informal survey of participants, I can say with some semblance of objectivity that this was the best HCIR workshop to date.

The opening “poster boaster” session was particularly energetic. There was no award for best boaster, but Cathal Hoare won an ovation by delivering his boaster as a poem:

If a picture is worth a thousand words
Surely to query formulation a photo affords
The ability to ask ‘what is that’ in ways that are many
But for years we have asked how can-we
Narrow the search space so that in reasonable time
We can use images to answer questions that are yours and mine
In my humble poster I will describe
How recent technology and users prescribe
A solution that allows me to point and click
And get answers so that I don’t feel so thick
About my location and my environment
And to my touristic explorations bring some enjoyment
Now if after all that you feel rather dazed
Please come by my poster and see if you are amazed….

As in past years, we enlisted a rock-star keynote speaker–this time, Google UX researcher Dan Russell. His slides hardly do justice to his talk–especially without the audio and video–but I’ve embedded them here so that you can get a flavor of his presentation on how we need to do more to improve the searcher.

We accepted six papers for the presentation sessions–sadly, one of the presenters could not make it because of visa issues. The five presentations covered a variety of topics relating to tools, models, and evaluation for HCIR. The most intriguing of these (to me, at least) was a presentation by Max Wilson about “casual-leisure searching”–which he argues breaks our current models of exploratory search. Check out the slides below, as well as Erica Naone’s article in Technology Review on “Searching for Fun“.

As always, the poster session was the most interactive. Part of the energy came from HCIR Challenge participants showing off their systems in advance of the final session that would decide which of them would win. In any case, I felt like a heel having to walk through the hall of posters three times in order to herd people back to their seats.

Which brings us to the Challenge. When I first suggested the idea of a competition or challenge to my co-organizers back in February, I wasn’t sure we could pull it off. Indeed, even after we managed to obtain the use of the New York Times Annotated Corpus (thank you, LDC!) and a volunteer to set up a baseline system in Solr (thank you, Tommy!), I still worried that we’d have a party and no one would come. So I was delighted to see six very credible entries competing for the “people’s choice” award.

All of the participants offered interesting ideas: custom facets, visualization of the associations between relevant terms, multi-document summarization to catch up on a topic, and combining topic modeling with sentiment analysis to analyze competing perspectives on a controversial issue. The winning entry, presented by Michael Matthews of Yahoo! Labs Barcelona, was the Time Explorer. As its name suggests, it allows users to see the evolution of a topic over time. A cool feature is that it parses absolute and relative dates from article text–in some cases references to past or future times outside the publication span of the collection. Moreover, the temporal visualization of topics allows users to discover unexpected relationships between entities at particular points in time, e.g., between Slobodan Milosevic and Saddam Hussein. You can read more about it in Tom Simonite’s Technology Review article, “A Search Service that Can Peer into the Future“.

In short, HCIR 2010 will be a tough act to follow. But we’re already working on it. Watch this space…

6 responses so far ↓

  • 1 Greg Linden // Aug 30, 2010 at 3:12 pm

    Hey, Daniel, I was wondering what you thought of Dan Russell’s claim in his talk that we can teach searchers.

    On the one hand, I agree that searchers get better when they learn more about what is available. That’s why I’m a big fan of search as a dialog, expecting the searcher to iterate, and personalizing search results depending on what a searcher has done in the last few searches.

    On the other hand, searchers don’t use advanced search features, tend to do very short queries, and don’t bother spelling things correctly, suggesting that they want to do things with minimal effort.

    In the end, I don’t think a search engine will do well trying to force searchers to do anything. I think the best bet is getting a good answer back quickly and, failing that, help the searcher iterate quickly. But I don’t think we’re likely to get searchers to do longer queries, use advanced operators, or anything that requires effort on their part.

    What do you think?

  • 2 Daniel Tunkelang // Aug 30, 2010 at 3:43 pm

    I don’t think that teaching searchers and engaging in dialog are mutually exclusive. And interfaces teach users all the time, e.g., through the size of the search box, text around it, etc. Marti Hearst documents some of the research in her book.

    That said, I grant that there’s a limit to how much we can or even want to teach searchers. For example, even if we can teach searchers to use Boolean syntax, we probably won’t teach them to use it well–even professionals struggle with it!

    But I do think that the ideal information seeking experience should leave the user smarter. For example, rather than simply auto-phrasing a query (and risking misinterpreting the user’s intent), the system can try to teach the user about quotation marks. Same with spelling correction. Maybe I’m naive/idealistic, but I like the idea that search engines make people smarter and more engaged rather than dumber and more passive. There’s a point at which catering to minimal effort actually results in a worse user experience.

    But perhaps Dan agrees with you more than you think. Google Quick Scroll seems tailor-made for users who don’t know (and don’t want to learn) about control-F.

    It’s a balancing act. But I think Dan’s experiments suggest that we should at least put some effort into improving the user. Maybe in grade school.

  • 3 FXPAL Blog » Blog Archive » Searching deeper // Sep 1, 2010 at 10:47 am

    […] The effectiveness of modern search engines may have lulled people into the implicit belief that if Bing (or Google or Yahoo!) can’t find a document in the top few results, then that document doesn’t exist. This assumption may be true for really common information (such as what Britney Spears is up to at any given moment), but many kinds of information are not readily findable using a typical search engine. Sometimes, as Daniel Russell likes to document on his blog, it’s a matter of phrasing the query appropriately; but often you have to think more broadly about the collections you search. (For more on this, see Daniel Russell’s HCIR 2010 Keynote slides.) […]

  • 4 Suzan Verberne // Sep 7, 2010 at 9:39 am

    Thanks for sharing these presentations! Very interesting.

  • 5 Why is search easy and hard? « The Findability blog // Sep 16, 2010 at 3:39 am

    […] I found most interesting this year. (Daniel Tunkelang who was one of the organizers also posted a good overview of the event on his […]

  • 6 Why is Search Easy and Hard? // Oct 16, 2012 at 4:53 am

    […] I found most interesting this year. (Daniel Tunkelang who was one of the organizers also posted a good overview of the event on his […]
