Last Sunday was HCIR 2010, the Fourth Annual Workshop on Human-Computer Interaction and Information Retrieval, held at Rutgers University in New Brunswick, collocated with the Information Interaction in Context Symposium (IIiX 2010).
With 70 registered attendees, it was the biggest HCIR workshop we have held. Rutgers was a gracious host, providing space not only for the all-day workshop but also for a welcome reception the night before.
And, based on an informal survey of participants, I can say with some semblance of objectivity that this was the best HCIR workshop to date.
The opening “poster boaster” session was particularly energetic. There was no award for best boaster, but Cathal Hoare won an ovation by delivering his boaster as a poem:
If a picture is worth a thousand words
Surely to query formulation a photo affords
The ability to ask ‘what is that’ in ways that are many
But for years we have asked how can-we
Narrow the search space so that in reasonable time
We can use images to answer questions that are yours and mine
In my humble poster I will describe
How recent technology and users prescribe
A solution that allows me to point and click
And get answers so that I don’t feel so thick
About my location and my environment
And to my touristic explorations bring some enjoyment
Now if after all that you feel rather dazed
Please come by my poster and see if you are amazed….
As in past years, we enlisted a rock-star keynote speaker–this time, Google UX researcher Dan Russell. His slides hardly do justice to his talk–especially without the audio and video–but I’ve embedded them here so that you can get a flavor of his presentation on how we need to do more to improve the searcher, not just the search engine.
We accepted six papers for the presentation sessions–sadly, one of the presenters could not make it because of visa issues. The five presentations covered a variety of topics relating to tools, models, and evaluation for HCIR. The most intriguing of these (to me, at least) was a presentation by Max Wilson about “casual-leisure searching”–which he argues breaks our current models of exploratory search. Check out the slides below, as well as Erica Naone’s article in Technology Review on “Searching for Fun”.
All of the challenge participants offered interesting ideas: custom facets, visualization of the associations between relevant terms, multi-document summarization to catch up on a topic, and combining topic modeling with sentiment analysis to analyze competing perspectives on a controversial issue. The winning entry, presented by Michael Matthews of Yahoo! Labs Barcelona, was the Time Explorer. As its name suggests, it allows users to see the evolution of a topic over time. A cool feature is that it parses absolute and relative dates from article text–in some cases resolving references to past or future times outside the publication span of the collection. Moreover, the temporal visualization of topics allows users to discover unexpected relationships between entities at particular points in time, e.g., between Slobodan Milosevic and Saddam Hussein. You can read more about it in Tom Simonite’s Technology Review article, “A Search Service that Can Peer into the Future”.
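To illustrate the general idea behind that date handling (this is just my own sketch, not the Time Explorer code–the function, patterns, and example article are invented for illustration): absolute date mentions can be resolved directly, while relative ones are resolved against the article’s publication date, so both kinds can be placed on a timeline, even outside the collection’s publication span.

```python
# Illustrative sketch only: resolve date mentions in article text against the
# article's publication date, so absolute and relative references can both be
# plotted on a timeline. Names and patterns here are hypothetical examples.
import re
from datetime import date

MONTHS = {m: i + 1 for i, m in enumerate(
    ["january", "february", "march", "april", "may", "june",
     "july", "august", "september", "october", "november", "december"])}

# Absolute mentions like "May 27, 1999"
ABSOLUTE = re.compile(r"\b(" + "|".join(MONTHS) + r")\s+(\d{1,2}),\s*(\d{4})\b", re.I)
# A toy relative pattern: "last year" / "next year"
RELATIVE = re.compile(r"\b(last|next)\s+year\b", re.I)

def date_mentions(text, published):
    """Return (mention, resolved date) pairs found in the text."""
    mentions = []
    for m in ABSOLUTE.finditer(text):
        month, day, year = m.groups()
        mentions.append((m.group(0), date(int(year), MONTHS[month.lower()], int(day))))
    for m in RELATIVE.finditer(text):
        offset = -1 if m.group(1).lower() == "last" else 1
        mentions.append((m.group(0), published.replace(year=published.year + offset)))
    return mentions

# Example: an article from 1999 that points to a date beyond the collection's span.
print(date_mentions(
    "The trial may conclude next year, long after the indictment of May 27, 1999.",
    date(1999, 5, 30)))
```

A real system would of course use a much richer temporal tagger, but even this toy version shows why anchoring relative expressions to the publication date lets a timeline extend into the future of the collection.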
In short, HCIR 2010 will be a tough act to follow. But we’re already working on it. Watch this space…
