One of the great benefits of practicing, as Daniel Lemire calls it, open scholarship is that I have many opportunities to see how ideas translate across the research / practice divide. In particular, I obtain invaluable feedback on the accuracy and effectiveness of that translation process.
A few days ago, I was exchanging email with serial entrepreneur Chris Dixon about human-computer information retrieval (HCIR). He’d just looked through the accepted submissions list for HCIR 2009 and said, if I may paraphrase: this is great stuff, but it needs to be better communicated for broader consumption. I quickly shot back a reaction that I’ll excerpt here (when in doubt, make it public!):
At some level it’s blindingly obvious: to err is human, to really screw up takes a computer. The HealthBase fiasco isn’t a shocker: lots of people are skeptical of pure AI approaches.
What people don’t get is that you can work to optimize the division of labor. I’m evangelizing it in places like Technology Review–a bit more mainstream than my blog. But ultimately the message has to resonate with entrepreneurs and investors who will make that vision a reality. Endeca is all about HCIR. Bing is a step in the right direction for the open web. But there’s a long way to go.
His response: that’s a lot more consumable than any other description of HCIR he’d seen to date (and he’s a regular reader here!). Having just finished reading Steve Blank’s Four Steps to the Epiphany, I appreciate his point: in a new market, the most critical priority is educating the potential customers.
As a number of us prepare for the HCIR 2009 workshop, that’s something to keep in mind. There’s a natural tension between rigorous scholarship and mass communication, but some of the greatest scholars (e.g., Richard Feynman and Linus Pauling) have shown the way for us mere mortals. Indeed, in a field as cross-disciplinary as HCIR, we would do well to make our work and vision as broadly consumable as possible, albeit without oversimplifying it to the point that it is vapid or even misleading.
Generally speaking, I blog in order to convince people that some of the esoteric ideas I encounter–and the occasional ideas I am fortunate enough to conceive–are worthy of broader consideration. I started blogging in order to bring greater visibility to HCIR–to convince people that the choice between human and machine responsibility is a false dichotomy in almost every aspect of the information seeking process.
In grade school, I learned that division of labor is the cornerstone of civilization–and perhaps our adaptive process of allocating effort is our greatest achievement as a species. As machines play an increasingly important role in our lives–and serve as the lenses through which we seek and consume almost all information–it is key that we not forget our roots. Let us be neither Luddites nor passive participants, but rather let us help computers help us.
6 replies on “Human-Computer Information Retrieval in Layman’s Terms”
Thanks for the explanation, Daniel. But why is it just Human-Computer Information Retrieval? What about working on the larger problem of how best to divide general AI tasks between humans and computers? We didn’t have a name for it, but I feel like that’s a lot of what my colleagues and I thought about at my last 2 companies. We would always ask – what are computers good at, and what are humans good at – and try to divide the work up accordingly.
This topic is sometimes referred to as “Mixed Initiative” where people and the computer are free to initiate interaction on their terms. The system can make a suggestion, the person can ignore or act on it; the person can tell the system what needs to be done (but the system should not ignore that — it’s not completely symmetrical).
The two can work independently but in concert. AI is not necessary for this kind of HCI, but it can make system behavior more interesting than just generating occasional alerts.
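To make the idea concrete, here is a minimal sketch of a mixed-initiative turn, as described above: the system may volunteer a suggestion the person is free to ignore, while the person’s explicit commands must be honored. All names and behavior here are hypothetical, invented purely for illustration.

```python
def system_suggestion(query):
    """System takes the initiative: offer a refinement the user may ignore."""
    # Hypothetical suggestion table, standing in for a real suggestion model.
    suggestions = {"jaguar": "Did you mean the animal or the car?"}
    return suggestions.get(query)

def handle_turn(query, user_command=None):
    # The asymmetry: an explicit user command is never ignored.
    if user_command is not None:
        return f"Executing user command: {user_command}"
    # Otherwise the system may volunteer a suggestion on its own initiative.
    suggestion = system_suggestion(query)
    if suggestion:
        return f"Suggestion: {suggestion}"
    return f"Results for: {query}"

print(handle_turn("jaguar"))                     # system initiates
print(handle_turn("jaguar", "filter: animals"))  # user initiates; system obeys
```

The point of the asymmetry in the code matches the comment above: either party can initiate, but only the person can overrule.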
Chris, you’re right that there’s a broader concern here than HCIR. Tasks like identifying sites with malware (which at least covers one of your recent companies) probably fall outside the traditional domain of information retrieval, and the term “AI” has its own baggage. But you express the sentiment elegantly: ask what computers are good at and what humans are good at–and divide the work up accordingly.
Gene, I’ve only seen the term “mixed initiative” used in the context of natural language dialog systems. Is it used more generally than that?
Of course, both Pauling and Feynman are extreme examples. And maybe not very good ones.
I was too lazy to seek out better ones. Einstein, perhaps? My point is that some of the best scholars are those who recognize–and step up to–the need to communicate beyond their inner circle of peers. I think that point stands, even if I haven’t supplied the best evidence to defend it.
For illustration purposes, here’s an example of a human-computer information retrieval system.
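In that spirit, here is a toy sketch of the kind of division of labor HCIR is about: the computer counts and surfaces facets that expose an ambiguity, and the human picks the facet that resolves it. The corpus, facet names, and interface below are all made up for illustration.

```python
from collections import Counter

# Tiny hypothetical corpus with a deliberately ambiguous term ("jaguar").
DOCS = [
    {"title": "Jaguar XK review", "topic": "cars"},
    {"title": "Jaguar habitat study", "topic": "animals"},
    {"title": "Jaguar conservation", "topic": "animals"},
]

def search(term, facet=None):
    # Computer's job: match documents and tally facet counts.
    hits = [d for d in DOCS if term.lower() in d["title"].lower()]
    if facet:
        hits = [d for d in hits if d["topic"] == facet]
    facets = Counter(d["topic"] for d in hits)
    return hits, facets

# Turn 1: the machine surfaces the ambiguity via facet counts.
hits, facets = search("jaguar")
print(facets)

# Turn 2: the human resolves it by choosing a facet.
hits, _ = search("jaguar", facet="animals")
print([d["title"] for d in hits])
```

Neither party does the whole job alone: the machine never guesses which “jaguar” was meant, and the human never scans the full corpus.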