Categories
General

Small is Beautiful

Today’s New York Times has an article by John Markoff called On a Small Screen, Just the Salient Stuff. It argues that the design constraints of the iPhone (and of mobile devices in general) lead to an improved user experience, since site designers do a better job of focusing on the information that users will find relevant.

Of course, on a blog entitled The Noisy Channel, I can’t help praising approaches that strive to improve the signal-to-noise ratio in information seeking applications. And I’m glad to see them quoting Ben Shneiderman, a colleague of mine at the University of Maryland who has spent much of his career focusing on HCIR issues.

Still, I think they could have taken the idea much further. Their discussion of more efficient or ergonomic use of real estate boils down to stripping extraneous content (a good idea, but hardly novel), and making sites vertically oriented (i.e., no horizontal scrolling). They don’t consider the question of what information is best to present in the limited space–which, in my mind, is the most important question to consider as we optimize interaction. Indeed, many of the questions raised by small screens also apply to other interfaces, such as voice.

Perhaps I am asking too much to expect them to call out the extreme inefficiency of ranked lists, compared to summarization-oriented approaches. Certainly the mobile space opens great opportunities for someone to get this right on the web.

Categories
General

Psychology of Intelligence Analysis

In the course of working with some of Endeca’s more interesting clients, I started reading up on how the intelligence agencies address the challenges of making decisions, especially in the face of incomplete and contradictory evidence. I ran into a book called Psychology of Intelligence Analysis by former CIA analyst Richards Heuer. The entire book is available online, or you can hunt down a hard copy of the out-of-print book from your favorite used book seller.

Given the mixed record of the intelligence agencies over the past few decades, you might be wondering if the CIA is the best source for learning how to analyze intelligence. But this book is a gem. Even if the agencies don’t always practice what they preach (and the book makes a good case as to why), the book is an excellent tour through the literature on judgment and decision making.

If you’re already familiar with work by Herb Simon, Danny Kahneman, and Amos Tversky, then a lot of the ground he covers will be familiar–especially the third of the book that enumerates cognitive biases. I’m a big fan of the judgment and decision making literature myself. But I still found some great nuggets, particularly Chapter 8 on Analysis of Competing Hypotheses. Unlike most of the literature that focuses exclusively on demonstrating our systematic departures from rationality, Heuer hopes to offer at least some constructive advice.

As someone who builds tools to help people make decisions using information that may be not only incomplete and contradictory, but also challenging to find in the first place, I’m very sensitive to how people’s cognitive biases affect their ability to use these tools effectively. One of the HCIR ’07 presentations, by Jolie Martin and Michael Norton (who have worked with Max Bazerman), showed how the manner in which information was partitioned on retail web sites drove decisions, i.e., re-organizing the same information affected consumers’ decision processes.

It may be tempting for us on the software side to wash our hands of our users’ cognitive biases. But such an approach would be short-sighted. As Heuer shows in his well-researched book, people not only have cognitive biases, but are unable to counter those biases simply by being made aware of them. Hence, if software tools are to help people make effective decisions, it is the job of us tool builders to build with those biases in mind, and to support processes like Analysis of Competing Hypotheses that try to compensate for human bias.
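
For readers who want a concrete picture, here is a minimal sketch (in Python, with invented hypotheses, evidence, and ratings) of the bookkeeping behind Analysis of Competing Hypotheses: rate each piece of evidence against each hypothesis, and prefer the hypothesis with the least evidence against it, rather than the one with the most evidence for it.

```python
# A minimal sketch of the bookkeeping behind Analysis of Competing Hypotheses.
# The hypotheses, evidence, and ratings below are invented for illustration.
# ACH scores each hypothesis by how much evidence argues *against* it; the
# analyst prefers the hypothesis with the least inconsistent evidence.

weights = {"consistent": 0.0, "neutral": 0.0, "inconsistent": 1.0}

evidence_matrix = {
    "H1: source is reliable": {
        "report matches known facts": "consistent",
        "report contradicts imagery": "inconsistent",
        "source has prior access": "consistent",
    },
    "H2: source is fabricating": {
        "report matches known facts": "neutral",
        "report contradicts imagery": "consistent",
        "source has prior access": "neutral",
    },
}

def inconsistency_score(ratings):
    """Sum the weight of evidence rated inconsistent with the hypothesis."""
    return sum(weights[rating] for rating in ratings.values())

for hypothesis, ratings in evidence_matrix.items():
    print(hypothesis, "->", inconsistency_score(ratings))
```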

Categories
General

Librarian 2.0

Many of the words that mark milestones in the history of technology, such as calculator and word processor, originally corresponded to people. Calculating had at least two lives as a technology breakthrough–first as a process, and then as an automatic means for executing that process. Thanks to inventions like calculators and computers, human beings have moved up the value chain to become scientists and engineers who take low-level details for granted.

Similarly, the advances in information science and retrieval have dramatically changed the role of a reference librarian.

Hopefully some of you are old enough to remember card catalogs. They were certainly functional if you knew the exact title or author you were looking for, assuming the title wasn’t too generic or the author too prolific. Where card catalogs fell short was in supporting exploratory search. In many cases, your best bet was to quite literally explore the stacks and hope that locality within the Dewey Decimal system sufficed to support your information seeking needs. Alternatively, you could follow citation paths–the dead-tree precursor of surfing a hypertext collection.

For exploratory tasks, library patrons would turn to reference librarians, who would clarify the patrons’ needs through a process called the reference interview. According to Wikipedia:

A reference interview is composed of two segments:

1. An initial segment in which the librarian encourages the user to fully discuss the request.
2. A final segment in which the librarian asks questions to relate the request to the materials available in the library.

A reference interview is structured (ideally) according to the following series of steps. First the library user states a question or describes a problem. The librarian then clarifies the user’s information need, sometimes leading him or her back from a request for a specific resource (which may not be the best one for the problem at hand) to the actual information need as it manifests in the library user’s life. Following that, the librarian suggests information resources that address the user’s information need, explaining the nature and scope of information they contain and soliciting feedback. The reference interview closes when the librarian has provided the appropriate information or a referral to an outside resource where it can be found, and the user confirms that he or she has received the information needed.

Fast forward to the present day. Thanks to modern search engines, title and author search are no longer tedious processes. Moreover, search engines are somewhat forgiving of users, offering spelling correction and inexact query matching. Libraries are still catching up with advances in technology, but the evolution is clearly under way.

However, search engines have not obviated the need for a reference interview. Excepting the simple cases of known item search, the typical information seeker needs help translating an information need into one or more search queries. And that information need may change as the seeker learns from the process.

But it should come as no surprise that information seeking support systems need to be more than search engines. The ideal information seeking support system emulates a reference librarian, stepping users through a structured process of clarification. Indeed, this is exactly what my colleagues and I at Endeca are trying to do in our work with libraries and more broadly in pursuing a vision of human computer information retrieval.
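
To illustrate the idea (and only the idea; the function names below are hypothetical placeholders, not Endeca’s actual interface), a reference-interview-style system might loop between searching and clarifying until it is confident it understands the need:

```python
# A sketch of a reference-interview-style clarification loop. The functions
# passed in (search, is_ambiguous, ask_clarifying_question, refine) are
# hypothetical placeholders, not any particular product's API.

def reference_interview(initial_request, search, is_ambiguous,
                        ask_clarifying_question, refine, max_rounds=5):
    """Step the user from an initial request toward their actual need."""
    need = initial_request
    for _ in range(max_rounds):
        results = search(need)
        if not is_ambiguous(need, results):
            # The system is confident it understands the need; stop asking.
            return results
        # Mirror the librarian: clarify before suggesting resources.
        answer = ask_clarifying_question(need, results)
        need = refine(need, answer)
    return search(need)
```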

What then becomes of librarians? Much as calculators and computers did not obviate the need for mathematicians, I don’t see technology obviating the need for information scientists. Library schools have already evolved into information schools, and I have no doubt that their graduates will help establish the next generation of information seeking technology that makes today’s search engines seem as quaint as card catalogs.

Categories
General

Resolving the Battle Royale between Information Retrieval and Information Science

The following is the position paper I submitted to the NSF Information Seeking Support Systems Workshop last month. The workshop report is still being assembled, but I wanted to share my own contribution to the discussion, since it is particularly appropriate to the themes of The Noisy Channel.

Resolving the Battle Royale between Information Retrieval and Information Science

Daniel Tunkelang
Endeca

ABSTRACT

We propose an approach to help resolve the “battle royale” between the information retrieval and information science communities. The information retrieval side favors the Cranfield paradigm of batch evaluation, criticized by the information science side for its neglect of the user. The information science side favors user studies, criticized by the information retrieval side for their scale and repeatability challenges. Our approach aims to satisfy the primary concerns of both sides.

Categories and Subject Descriptors
H.1.2 [Human Factors]: Human information processing
H.3.3 [Information Systems]: Information Search and Retrieval – Information Filtering, Retrieval Models
H.5.2 [Information Systems]: Information Interfaces and Presentation – User Interfaces

General Terms
Design, Experimentation, Human Factors

Keywords
Information science, information retrieval, information seeking, evaluation, user studies

INTRODUCTION

Over the past few decades, a growing community of researchers has called for the information retrieval community to think outside the Cranfield box. Perhaps the most vocal advocate is Nick Belkin, whose “grand challenges” in his keynote at the 2008 European Conference on Information Retrieval [1] all pertained to the interactive nature of information seeking he claims the Cranfield approach neglects. Belkin cited similar calls to action going back as far as Karen Spärck Jones, in her 1988 acceptance speech for the Gerard Salton award [2], and again from Tefko Saracevic, when he received the same award in 1997 [3]. More recently, we have the Information Seeking and Retrieval research program proposed by Peter Ingwersen and Kalervo Järvelin in The Turn, published in 2005 [4].

IMPASSE BETWEEN IR AND IS

Given the advocacy of Belkin and others, why hasn’t there been more progress? As Ellen Voorhees noted in defense of Cranfield at the 2006 Workshop on Adaptive Information Retrieval, “changing the abstraction slightly to include just a bit more characterization of the user will result in a dramatic loss of power or increase in cost of retrieval experiments” [5]. Despite user studies that have sought to challenge the Cranfield emphasis on batch information retrieval measures like mean average precision—such as those of Andrew Turpin and Bill Hersh [6]—the information retrieval community, on the whole, remains unconvinced by these experiments because they are smaller in scale and less repeatable than the TREC evaluations.

As Tefko Saracevic has said, there is a “battle royale” between the information retrieval community, which favors the Cranfield paradigm of batch evaluation despite its neglect of the user, and the information science community, which favors user studies despite their scale and repeatability challenges [7]. How do we move forward?

PRIMARY CONCERNS OF IR AND IS

Both sides have compelling arguments. If an evaluation procedure is not repeatable and cost-effective, it has little practical value. Nonetheless, it is essential that an evaluation procedure measure the interactive nature of information seeking.

If we are to find common ground to resolve this dispute, we need to satisfy the primary concerns of both sides:

• Real information seeking tasks are interactive, so the results of the evaluation procedure must be meaningful in an interactive context.
• The evaluation procedure must be repeatable and cost-effective.

In order to move beyond the battle royale and resolve the impasse between the IR and IS communities, we need to address both of these concerns.

PROPOSED APPROACH

    A key point of contention in the battle royale is whether we should evaluate systems by studying individual users or measuring system performance against test collections.

    The short answer is that we need to do both. In order to ground the results of evaluation in realistic contexts, we need to conduct user studies that relate proposed measures to success in interactive information seeking tasks. Otherwise, we optimize under the artificial constraint that a task involves only a single user query.

    Such an approach presumes that we have a characterization of information seeking tasks. This characterization is an open problem that is beyond the scope of this position paper but has been addressed by other information seeking researchers, including Ingwersen and Järvelin [4]. We presume access to a set of tasks that, if not exhaustive, at least applies to a valuable subset of real information seeking problems.

Consider, as a concrete example, the task of a researcher who, given a comprehensive digital library of technical publications, wants to determine with confidence whether his or her idea is novel. In other words, the researcher wants either to discover prior art that anticipates the idea, or to state with confidence that there is no such art. Patent inventors and lawyers performing e-discovery face analogous tasks. We can measure task performance objectively as a combination of accuracy and efficiency, and we can also consider subjective measures like user confidence and satisfaction. Let us assume that we are able to quantify a task success measure that incorporates these factors.
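
For concreteness, here is one way such a task success measure might be quantified; the weights, time budget, and normalization are illustrative assumptions, not a claim about the right formula.

```python
# A sketch of one possible success measure for the prior art task.
# The weights, the time budget, and the normalization are illustrative only.

def task_success(correct, minutes_spent, confidence, satisfaction,
                 time_budget=30.0, w_acc=0.5, w_eff=0.2, w_conf=0.2, w_sat=0.1):
    """Combine objective and subjective factors into a single score in [0, 1].

    correct       -- 1.0 if the user's novelty judgment was right, else 0.0
    minutes_spent -- time taken to reach the judgment
    confidence    -- self-reported confidence in [0, 1]
    satisfaction  -- self-reported satisfaction in [0, 1]
    """
    efficiency = max(0.0, 1.0 - minutes_spent / time_budget)
    return (w_acc * correct + w_eff * efficiency +
            w_conf * confidence + w_sat * satisfaction)

# Example: a correct judgment reached in 12 minutes with high confidence.
print(task_success(correct=1.0, minutes_spent=12, confidence=0.9, satisfaction=0.8))
```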

    Given this task and success measure, we would like to know how well an information retrieval system supports the user performing it. As the information scientists correctly argue, user studies are indispensable. But, as we employ user studies to determine which systems are most helpful to users, we need to go a step further and correlate user success to one or more system measures. We can then evaluate these system measures in a repeatable, cost-effective process that does not require user involvement.

    For example, let us hypothesize that mean average precision (MAP) on a given TREC collection is such a measure. We hypothesize that users pursuing the prior art search task are more successful using a system with higher MAP than those using a system with lower MAP. In order to test this hypothesis, we can present users with a family of systems that, insofar as possible, vary only in MAP, and see how well user success correlates to the system’s MAP. If the correlation is strong, then we validate the utility of MAP as a system measure and invest in evaluating systems using MAP against the specified collection in order to predict their utility for the prior art task.
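
A minimal sketch of that validation step, with invented relevance judgments, MAP values, and success scores: compute MAP per system, then check how strongly it correlates with observed user success.

```python
# A sketch of validating MAP as a predictor of user success on the task.
# Relevance judgments, MAP values, and success scores below are invented.

def average_precision(ranked_relevance):
    """ranked_relevance: list of 0/1 relevance flags, one per ranked result."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(per_query_relevance):
    """Average of per-query average precision for one system."""
    return sum(average_precision(q) for q in per_query_relevance) / len(per_query_relevance)

def pearson(xs, ys):
    """Plain Pearson correlation, to avoid any external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print("MAP:", mean_average_precision([[1, 0, 1, 0, 0], [0, 1, 0, 0, 1]]))

# Hypothetical family of systems that vary (mostly) in MAP, each paired with
# the mean task-success score of the users who performed the prior art task.
system_map   = [0.18, 0.25, 0.31, 0.42]
user_success = [0.40, 0.52, 0.55, 0.71]
print("correlation:", pearson(system_map, user_success))
# A strong correlation would justify using MAP against this collection as a
# repeatable, user-free proxy for utility on the prior art task.
```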

The principle here is a general one: it can be used not only to compare different algorithms, but also to evaluate more sophisticated interfaces, such as document clustering [8] or faceted search [9]. The only requirement is that we hypothesize and validate system measures that correlate to user success.

    WEAKNESSES OF APPROACH

    Our proposed approach has two major weaknesses.

    The first weakness is that, in a realistic interactive information retrieval context, distinct queries are not independent. Rather, a typical user executes a sequence of queries in pursuit of an information need, each query informed by the results of the previous ones.

    In a batch test, we must decide the query sequence in advance, and cannot model how the user’s queries depend on system response. Hence, we are limited to computing measures that can be evaluated for each query independently. Nonetheless, we can choose measures which correlate to effectiveness in realistic settings. Hopefully these measures are still meaningful, even when we remove the test queries from their realistic context.

The second weakness is that we do not envision a way to compare different interfaces in a batch setting. It seems that testing the relative merits of different interfaces requires real—or at least simulated—users.

If, however, we hold the interface constant, then we can define performance measures that apply to those interfaces. For example, we can develop standardized versions of well-studied interfaces, such as faceted search and clustering. We can then compare the performance of different systems that use these interfaces, e.g., different clustering algorithms.

    AN ALTERNATIVE APPROACH

    An alternative way to tackle the evaluation problem leverages the “human computation” approach championed by Luis Von Ahn [10]. This approach uses “games with a purpose” to motivate people to perform information-related tasks, such as image tagging and optical character recognition (OCR).

A particularly interesting “game” in our present context is Phetch, in which one or more “Seekers” compete to find an image based on a text description provided by a “Describer” [11]. The Describer’s goal is to help the Seekers succeed, while the Seekers compete with one another to find the target image within a fixed time limit, using a search engine that has indexed the images based on tagging results from the ESP Game. In order to discourage a shotgun approach, the game penalizes Seekers for wrong guesses.
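
Here is a rough sketch of a Phetch-style round as I understand it from the paper; the point values and time limit are placeholders rather than the game’s actual parameters.

```python
# A rough sketch of a Phetch-style round; the point values and time limit are
# placeholders, not the game's actual parameters. The describer and seekers
# are callables so that either humans or machines could fill the roles.
import time

def play_round(describer, seekers, target_image, search_engine,
               time_limit=120, find_points=100, wrong_guess_penalty=25):
    description = describer(target_image)
    scores = {seeker: 0 for seeker in seekers}
    deadline = time.time() + time_limit
    while time.time() < deadline:
        for seeker in seekers:
            # The seeker queries a search engine built on ESP Game tags and
            # guesses an image from the results (or passes by returning None).
            guess = seeker(description, search_engine)
            if guess == target_image:
                scores[seeker] += find_points
                return scores          # the round ends on the first correct find
            if guess is not None:
                # Wrong guesses cost points, discouraging a shotgun approach.
                scores[seeker] -= wrong_guess_penalty
    return scores
```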

This game goes quite far in capturing the essence of interactive information retrieval. If we put aside the competition among the Seekers, then we see that an individual Seeker, aided by the human Describer and the algorithmic (but human-indexed) search engine, is pursuing an information retrieval task. Moreover, the Seeker has an incentive to be both effective and efficient.

    How can we leverage this framework for information retrieval evaluation? Even though the game envisions both Describers and Seekers to be human beings, there is no reason we cannot allow computers to play too–in either or both roles. Granted, the game, as currently designed, focuses on image retrieval without giving the human players direct access to the image tags, but we could imagine a framework that is more amenable to machine participation, e.g., providing a machine player with a set of tags derived from those in the index when that player is presented with an image. Alternatively, there may be a domain more suited than image retrieval to incorporating computer players.

    The main appeal of the game framework is that it allows all participants to be judged based on an objective criterion that reflects the effectiveness and efficiency of the interactive information retrieval process. A good Describer should, on average, outscore a bad Describer over the long term; likewise, a good Seeker should outscore a bad one. We can even vary the search engine available to Seekers, in order to compare competing search engine algorithms or interfaces.

    CONCLUSION

Our goal is ambitious: we aspire towards an evaluation framework that satisfies information scientists as relevant to real-world information seeking, but nonetheless offers the practicality of the Cranfield paradigm that dominates information retrieval. The near absence of collaboration between the information science and information retrieval communities has been a greatly missed opportunity, not only for both research communities but also for everyone else who could benefit from practical advances in our understanding of information seeking. We hope that the approach we propose takes at least a small step towards resolving this battle royale.

    REFERENCES

[1] Belkin, N. J. 2008. Some(What) Grand Challenges for Information Retrieval. ACM SIGIR Forum 42, 1 (June 2008), 47-54.

[2] Spärck Jones, K. 1988. A look back and a look forward. In Proceedings of the 11th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 13-29.

[3] Saracevic, T. 1997. Users lost: reflections of the past, future and limits of information science. ACM SIGIR Forum 31, 2 (July 1997), 16-27.

[4] Ingwersen, P. and Järvelin, K. 2005. The Turn: Integration of Information Seeking and Retrieval in Context. Springer.

[5] Voorhees, E. 2006. Building Test Collections for Adaptive Information Retrieval: What to Abstract for What Cost? In First International Workshop on Adaptive Information Retrieval (AIR).

[6] Turpin, A. and Scholer, F. 2006. User performance versus precision measures for simple search tasks. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 11-18.

[7] Saracevic, T. 2007. Relevance: A review of the literature and a framework for thinking on the notion in information science. Part II: nature and manifestations of relevance. Journal of the American Society for Information Science and Technology 58(3), 1915-1933.

[8] Cutting, D., Karger, D., Pedersen, J., and Tukey, J. 1992. Scatter/Gather: A Cluster-based Approach to Browsing Large Document Collections. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 318-329.

[9] Workshop on Faceted Search. 2006. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.

[10] Von Ahn, L. 2006. Games with a Purpose. IEEE Computer 39, 6 (June 2006), 92-94.

[11] Von Ahn, L., Ginosar, S., Kedia, M., Liu, R., and Blum, M. 2006. Improving accessibility of the web with a computer game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 79-82.

    Categories
    General

    Clarification before Refinement on Amazon

    I just noticed today that a search on Amazon (e.g., this search for algorithms) does not provide the options to sort the results or to refine by anything other than category. Once you do select a category (e.g., books), you are given additional refinement options, as well as the ability to sort.

While I find this interface less than ideal (e.g., even if all of your search results are in a single category, it still makes you select that category explicitly), I do commend them for recognizing the need to have users clarify before they refine. The implication–one we’ve been pursuing at Endeca–is that it is incumbent on the system to detect when its understanding of the user’s intent is ambiguous enough to require a clarification dialogue.
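
As a toy illustration of what such detection might look like (the category-share heuristic and the threshold are my own assumptions, not Amazon’s or Endeca’s actual logic): if no single category dominates the matching results, trigger a clarification step before offering refinements.

```python
# A toy heuristic for deciding when to clarify before refining. The threshold
# and the category-share heuristic are illustrative assumptions only.
from collections import Counter

def needs_clarification(result_categories, dominance_threshold=0.8):
    """result_categories: the category label of each matching result."""
    if not result_categories:
        return False
    counts = Counter(result_categories)
    top_share = counts.most_common(1)[0][1] / len(result_categories)
    # If one category accounts for most of the results, refinement can proceed;
    # otherwise the query is ambiguous enough to warrant clarification first.
    return top_share < dominance_threshold

print(needs_clarification(["Books"] * 95 + ["DVDs"] * 5))       # False
print(needs_clarification(["Books"] * 50 + ["Software"] * 50))  # True
```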

    Categories
    General

    Back from ISSS Workshop

    My apologies for the sparsity of posts lately; it’s been a busy week!

    I just came back from the Information Seeking Support Systems Workshop, which was sponsored by the National Science Foundation and hosted at the University of North Carolina – Chapel Hill. An excerpt from the workshop home page nicely summarizes its purpose:

    The general goal of the workshop will be to coalesce a research agenda that stimulates progress toward better systems that support information seeking. More specifically, the workshop will aim to identify the most promising research directions for three aspects of information seeking: theory, development, and evaluation.

    We are still working on writing up a report that summarizes the workshop’s findings, so I don’t want to steal its thunder. But what I can say is that participants shared a common goal of identifying driving problems and solution frameworks that would rally information seeking researchers much the way that TREC has rallied the information retrieval community.

    One of the assignments we received at the workshop was to pick a problem we would “go to the mat” for. I’d like to share mine here to get some early feedback:

    We need to raise the status of evaluation procedures where recall trumps precision as a success metric. Specifically, we need to consider scenarios where the information being sought is existential in nature, i.e., the information seeker wants to know if an information object exists. In such cases, the measures should combine correctness of the outcome, user confidence in the outcome, and efficiency.

    I’ll let folks know as more information is released from the workshop.

    Categories
    General

    What is (not) Exploratory Search?

    One of the recurring topics at The Noisy Channel is exploratory search. Indeed, one of our readers recently took the initiative to upgrade the Wikipedia entry on exploratory search.

In the information retrieval literature, exploratory search comes across as a niche topic consigned to specialty workshops. A cursory reading of papers from the major information retrieval conferences would lead one to believe that most search problems boil down to improving relevance ranking, albeit with different techniques for different problems (e.g., expert search vs. document search) or domains (e.g., blogs vs. news).

    But it’s not just the research community that has neglected exploratory search. When most non-academics think of search, they think of Google with its search box and ranked list of results. The interaction design of web search is anything but exploratory. To the extent that people engage in exploratory search on the web, they tend to do so in spite of, rather than because of, the tools at their disposal.

    Should we conclude then that exploratory search is, in fact, a fringe use case?

    According to Ryen White, Gary Marchionini, and Gheorghe Muresan:

    Exploratory search can be used to describe an information-seeking problem context that is open-ended, persistent, and multi-faceted; and to describe information-seeking processes that are opportunistic, iterative, and multi-tactical. In the first sense, exploratory search is commonly used in scientific discovery, learning, and decision making contexts. In the second sense, exploratory tactics are used in all manner of information seeking and reflect seeker preferences and experience as much as the goal (Marchionini, 2006).

    If we accept this dichotomy, then the first sense of exploratory search is a niche use case, while the second sense characterizes almost everything we call search. Perhaps it is more useful to ask what is not exploratory search.

    Let me offer the following characterization of non-exploratory search:

    • You know exactly what you want.
    • You know exactly how to ask for it.
    • You expect a search query to yield one of two responses:
      – Success: you are presented with the object of your search.
      – Failure: you learn that the object of your search is unavailable.

    If any of these assumptions fails to hold, then the search problem is, to some extent, exploratory.
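
Stated as a deliberately simplistic predicate (the parameter names are just my shorthand for the three assumptions above):

```python
# A deliberately simplistic predicate encoding the three assumptions above;
# the parameter names are shorthand for the bullets in this post.

def is_exploratory(knows_exact_target, knows_how_to_ask, expects_hit_or_miss):
    """Non-exploratory search requires all three assumptions to hold."""
    return not (knows_exact_target and knows_how_to_ask and expects_hit_or_miss)

# A known-item lookup is not exploratory; browsing for "something good to read" is.
print(is_exploratory(True, True, True))    # False
print(is_exploratory(False, True, True))   # True
```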

    There are real non-exploratory search needs, such as navigational queries on the web and title searches in digital libraries. But these are, for most purposes, solved problems. Most of the open problems in information retrieval, at least in my view, apply to exploratory search scenarios. It would be nice to see more solutions that explicitly support the process of exploration.

    Categories
    General

    Enterprise Search Done Right

    A recent study from AIIM (the Association for Information and Image Management, also known as the Enterprise Content Management Association) reports that enterprise search frustrates and disappoints users. Specifically, 49% of survey respondents “agreed” or “strongly agreed” that it is a difficult and time consuming process to find the information they need to do their job.

Given that I work for a leading enterprise search provider, you might think I’d find these results disconcerting, even if the report places the blame on clients rather than vendors:

    But fault does not lie with technology solution providers. Most organizations have failed to take a strategic approach to enterprise search. 49% of respondents have “No Formal Goal” for enterprise Findability within their organizations, and a large subset of the overall research population state that when it comes to the “Criticality of Findability to their Organization’s Business Goals and Success”, 38% have no idea (“Don’t Know”) what the importance of Findability is in comparison to a mere 10% who claim Findability is “Imperative” to their organization.

As I’ve blogged here before, there is no free lunch, and organizations can’t expect to simply plug a search engine into their architectures as if it were an air freshener. But that doesn’t let Endeca or anyone else off the hook. It is incumbent on enterprise search providers, including Endeca, both to set the expectation that enterprise workers will need to help shape the solution by supplying their proprietary knowledge and information needs, and to make that process as painless as possible.

    Enterprise search, done right, is a serious investment. But it is also an investment that can offer extraordinary returns in productivity and general happiness. Enterprises need to better appreciate the value, but enterprise search providers need to better communicate the process of creating it.

    Categories
    General

    A Game to Evaluate Browsing Interfaces?

I’ve mused a fair amount about how to apply the concept of the Phetch human computation game to evaluate browsing-based information retrieval interfaces. I’d love to be able to better evaluate faceted navigation and clustering approaches, relative to conventional search as well as relative to one another.

    Here is the sort of co-operative game I have in mind. It uses shopping as a scenario, and has two roles: the Shopper and the Shopping Assistant.

As a Shopper, you are presented with a shopping list and a browsing interface (i.e., you can click on links but you cannot type free text into a search box). Your goal is to find as many of the items on your shopping list as possible within a fixed time limit. In a variation of this game, not all of the items on the list are findable.

    As a Shopping Assistant, you know the complete inventory, but not what the Shopper is looking for. Your goal is to help the Shopper find as many of the items on his or her shopping list as possible within a fixed time limit. On each round of interaction, you present the Shopper with information and links within the constraints of a fixed-size page. The links may include options to select items (the Shopper’s ultimate goal), as well as options that show more items or modify the query.

    Either role could be played by a human or a machine, and, like Phetch, the game could be made competitive by having multiple players in the same role. I’d think the interesting way to implement such a game would be with human Shoppers and algorithmic Shopping Assistants.
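
Here is a sketch of the interaction protocol I have in mind; the role interfaces, the round limit (standing in for a time limit), the page size, and the scoring are all placeholders.

```python
# A sketch of the Shopper / Shopping Assistant game loop. Both roles are
# callables so that either can be played by a human or a machine; the round
# limit, page size, and scoring are placeholders.

def play_game(shopping_list, inventory, shopper, assistant,
              max_rounds=20, page_size=10):
    """shopper(page) -> (kind, payload); assistant(inventory, history) -> page."""
    found = set()
    history = []  # the assistant sees only the shopper's actions, never the list itself
    for _ in range(max_rounds):
        page = assistant(inventory, history)[:page_size]
        kind, payload = shopper(page)   # e.g. ("select", item) or ("click", link)
        history.append((kind, payload))
        if kind == "select" and payload in shopping_list:
            found.add(payload)
            if len(found) == len(shopping_list):
                break
    # Score rewards coverage of the list; fewer rounds means greater efficiency.
    return len(found) / len(shopping_list), len(history)
```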

    Is anyone aware of research along these lines? I’m hardly wed to the shopping list metaphor–it could be some other task that seems suitable for browsing-oriented interfaces.

    Categories
    General

    How Google Measures Search Quality

    Thanks to Jon Elsas for calling my attention to a great post at Datawocky today on how Google measures search quality, written by Anand Rajaraman based on his conversation with Google Director of Research Peter Norvig.

    The executive summary: rather than relying on click-through data to judge quality, Google employs armies of raters who manually rate search results for randomly selected queries using different ranking algorithms. These manual ratings drive the evaluation and evolution of Google’s ranking algorithms.
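
To make the aggregation concrete, here is a bare-bones sketch with invented queries, ratings, and scale; Google’s actual measures are not public.

```python
# A bare-bones sketch of rater-based evaluation: raters score the results each
# ranking algorithm returns for a sample of queries, and algorithms are compared
# on their mean rating. The queries, ratings, and 1-5 scale are invented.

def mean_rater_score(ratings_by_query):
    """ratings_by_query: {query: [rater scores for that algorithm's results]}"""
    per_query = [sum(scores) / len(scores) for scores in ratings_by_query.values()]
    return sum(per_query) / len(per_query)

algorithm_a = {"jaguar": [4, 5, 3], "casablanca reviews": [5, 4, 4]}
algorithm_b = {"jaguar": [3, 3, 4], "casablanca reviews": [4, 4, 3]}

print("algorithm A:", mean_rater_score(algorithm_a))
print("algorithm B:", mean_rater_score(algorithm_b))
```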

    I’m intrigued that Google seems to wholeheartedly embrace the Cranfield paradigm. Of course, they don’t publicize their evaluation measures, so perhaps they’re optimizing something more interesting than mean average precision.

    More questions for Amit. 🙂