
Guided Exploration = Faceted Search, Backwards

Information Scent

In the early 1990s, PARC researchers Peter Pirolli and Stuart Card developed the theory of information scent (more generally, information foraging) to evaluate user interfaces in terms of how well users can predict which paths will lead them to useful information. Like many HCIR researchers and practitioners, I’ve found this model to be a useful way to think about interactive information seeking systems.

Specifically, faceted search is an exemplary application of the theory of information scent. Faceted search allows users to express an information need as a keyword search, providing them with a series of opportunities to improve the precision of the initial result set by restricting it to results associated with particular facet values.

For example, if I’m looking for folks to hire for my team, I can start my search on LinkedIn with the keywords [information retrieval], restrict my results to Location: San Francisco Bay Area, and then further restrict to School: CMU.
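
To make that narrowing flow concrete, here is a minimal sketch in Python, using hypothetical profile records and field names (LinkedIn’s actual data model is certainly richer):

```python
# A toy corpus of profiles; names and fields are hypothetical.
profiles = [
    {"name": "Ann",   "location": "San Francisco Bay Area", "school": "CMU"},
    {"name": "Bob",   "location": "San Francisco Bay Area", "school": "MIT"},
    {"name": "Carol", "location": "New York City Area",     "school": "CMU"},
]

# Imagine all of these matched the keyword search [information retrieval].
results = profiles

# Each facet selection filters the previous result set, increasing precision.
results = [p for p in results if p["location"] == "San Francisco Bay Area"]
results = [p for p in results if p["school"] == "CMU"]

print([p["name"] for p in results])  # ['Ann']
```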

Precision / Recall Asymmetry

Faceted search is a great tool for information seeking systems. But it offers a flow that is asymmetric with respect to precision and recall.

Let’s invert the flow of faceted search. Rather than starting from a large, imprecise result set and progressively narrowing it, let’s start from a small, precise result set and progressively expand it. Since faceted search is often called “guided navigation” (a term Fritz Knabe and I coined at Endeca), let’s call this approach “guided exploration” (which has a nicer ring than “guided expansion”).

Guided exploration exchanges the roles of precision and recall. Faceted search starts with high recall and helps users increase precision while preserving as much recall as possible. In contrast, guided exploration starts with high precision and helps users increase recall while preserving as much precision as possible.

That sounds great in theory, but how can we implement guided exploration in practice?

Let’s remind ourselves why faceted search works so well. Faceted search offers the user information scent: the facet values help the user identify regions of higher precision relative to his or her information need. By selecting a sequence of facet values, the user arrives at a non-empty set that consists entirely or mostly of relevant results.

How to Expand a Result Set

How do we invert this flow? Just as enlarging an image is more complicated than reducing one, increasing recall is more complicated than increasing precision.

If our initial set is the result of selecting multiple facet values, then we may be able to increase recall by de-selecting facet values (e.g., de-selecting San Francisco Bay Area and CMU in my previous example). If we are using hierarchical facets, then rather than de-selecting a facet value, we may be able to replace it with a parent value (e.g., replacing San Francisco Bay Area with California). We can also remove one or more search keywords to broaden the results (e.g., information or retrieval).
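
Here is a sketch of those relaxations, modeling the query as a hypothetical dict of keyword and facet constraints:

```python
# The current query, modeled as a dict of constraints (hypothetical schema).
query = {
    "keywords": ["information", "retrieval"],
    "facets": {"Location": "San Francisco Bay Area", "School": "CMU"},
}

# De-select a facet value.
deselected = dict(query, facets={"Location": "San Francisco Bay Area"})

# Replace a hierarchical facet value with its parent.
parent = dict(query, facets=dict(query["facets"], Location="California"))

# Remove a search keyword.
fewer_keywords = dict(query, keywords=["information"])
```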

Those are straightforward query relaxations. But there are more interesting ways to expand our results (sketched in code after the list):

  • We can replace a facet value with the union of that value and similar values (e.g., replacing CMU with CMU OR MIT).
  • We can replace the entire query (or any subquery) with the union of that query and the results of selecting a single facet value (e.g., ([information retrieval] AND Location: San Francisco Bay Area AND School: CMU) OR Company: Google).
  • We can replace the entire query (or any subquery) with the union of that query and the results of a keyword search (e.g., ([information retrieval] AND Location: San Francisco Bay Area AND School: CMU) OR [faceted search]).
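
Since a query denotes the set of results it matches, we can sketch these union-style expansions as set unions over record ids (all ids hypothetical):

```python
# Each query is modeled as the set of record ids it matches (hypothetical ids).
core = {1, 5, 9}             # [information retrieval] AND Bay Area AND School: CMU
mit_variant = {2, 5, 7}      # the same query with School: MIT
google = {4, 9, 11}          # Company: Google
faceted_search = {3, 9, 12}  # the keyword search [faceted search]

by_similar_value = core | mit_variant      # CMU -> (CMU OR MIT)
by_facet_selection = core | google         # query OR Company: Google
by_keyword_search = core | faceted_search  # query OR [faceted search]
```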

As we can see, there are many ways to progressively refine a query in a way that expands the result set. The question is how we provide users with options that increase recall while preserving as much precision as possible.

Frequency : Recall :: Similarity : Precision

Developers of faceted search systems don’t necessarily invest much thought in deciding which faceted refinement options to present to users. Some systems simply avoid dead ends, offering users all refinement options that lead to a non-empty result set. This approach breaks down when there are too many options, in which case most systems offer users the most frequent facet values. A chapter in my faceted search book discusses some other options.

Unfortunately, the space of options for guided exploration — at least once we go beyond the basic relaxations — is far too vast for such a naive approach. Unions never lead to dead ends, and we don’t have a simple measure like frequency to rank our options.

Or perhaps we do. A good reason to favor frequent values as faceted refinement options is that they tend to preserve recall. What we need is an analogous measure that tends to preserve precision when we expand a result set.

That measure is set similarity. More specifically, it is the asymmetric similarity between a set and a superset containing it, which we can think of as the former’s representativeness of the latter. If we are working with facets, we can measure this similarity in terms of differences between distributions of the facet values. If the current set has high precision, we should favor supersets that are similar to it in order to preserve precision.

I’ll spare readers the math, but I encourage you to read about Kullback-Leibler divergence and Jensen-Shannon divergence if you are not familiar with measures of similarity between probability distributions. I’m also glossing over key implementation details — such as how to model distributions of facet values as probability distributions, and how to handle smoothing and normalization for set size. I’ll try to cover these in future posts. But for now, let’s assume that we can measure the similarity between a set and a superset.
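
That said, for readers who want something concrete to experiment with, here is a naive sketch of a Jensen-Shannon-based similarity between facet value distributions; the add-alpha smoothing is a deliberately simplistic placeholder for the details I’m glossing over:

```python
from collections import Counter
from math import log2

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) in bits; assumes q[v] > 0
    # wherever p[v] > 0.
    return sum(pv * log2(pv / q[v]) for v, pv in p.items() if pv > 0)

def js(p, q):
    # Jensen-Shannon divergence: a symmetrized variant of KL divergence,
    # bounded in [0, 1] when using log base 2.
    values = set(p) | set(q)
    m = {v: 0.5 * (p.get(v, 0.0) + q.get(v, 0.0)) for v in values}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def facet_distribution(records, facet, alpha=1.0):
    # Model a facet's value counts as a probability distribution, with
    # add-alpha smoothing as a naive stand-in for proper smoothing and
    # normalization for set size.
    counts = Counter(r[facet] for r in records)
    total = sum(counts.values()) + alpha * len(counts)
    return {v: (c + alpha) / total for v, c in counts.items()}
```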

Guided Exploration: A General Framework

We now have the elements to put together a general framework for guided exploration (a code sketch follows the list):

  • Generate a set of candidate expansion options from the current search query using operations such as the following:
    • De-select a facet value.
    • Replace a facet value with its parent.
    • Replace a facet value with the union of it and other values from that facet.
    • Remove a search keyword.
    • Replace a search keyword with the union of it and related keywords.
    • Replace the entire query with the union of it and a related facet value selection.
    • Replace the entire query with the union of it and a related keyword search.
  • Evaluate each expansion option based on the similarity of the resulting set to the current one.
  • Present the most similar sets to the user as expansion options.
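
Here is that loop as a sketch, reusing js and facet_distribution from the earlier sketch; execute and generate_expansions are hypothetical helpers standing in for the search engine and the operations listed above:

```python
def guided_exploration(query, facet="company", top_k=5):
    # `execute` runs a query against the index and returns matching records;
    # `generate_expansions` applies the operations listed above. Both are
    # hypothetical helpers.
    current = execute(query)
    current_dist = facet_distribution(current, facet)

    scored = []
    for candidate in generate_expansions(query):
        expanded = execute(candidate)
        divergence = js(current_dist, facet_distribution(expanded, facet))
        scored.append((divergence, candidate))

    scored.sort(key=lambda pair: pair[0])  # lower divergence = more similar
    return [candidate for _, candidate in scored[:top_k]]
```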

Visualizing Drift

It’s one thing to tell a user that two sets are distributionally similar based on an information-theoretic measure, and another to communicate that similarity in a language the user can understand. Here is an example of visualizing the similarity between [information retrieval] AND School: CMU and [information retrieval] AND School: (CMU OR MIT):

As we can see from even this basic visualization, replacing CMU with (CMU OR MIT) increases the number of results by 70% while keeping a similar distribution of current companies — the notable exception being people who work for their almae matres.
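
Even a bare-bones textual rendering can communicate the drift. For example (all shares below are hypothetical):

```python
# Compare each company's share of the current set vs. the expanded set.
# All numbers are hypothetical.
current  = {"Google": 0.30, "Microsoft": 0.20, "CMU": 0.15, "Other": 0.35}
expanded = {"Google": 0.28, "Microsoft": 0.22, "CMU": 0.08, "MIT": 0.08,
            "Other": 0.34}

for company in sorted(set(current) | set(expanded)):
    print(f"{company:<10} {current.get(company, 0):>4.0%}"
          f" -> {expanded.get(company, 0):>4.0%}")
```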

Conclusion

Faceted search offers some of the most convincing evidence in favor of Gary Marchionini’s advocacy that we “empower people to explore large-scale information bases but demand that people also take responsibility for this control”. Guided exploration aims to generalize the value proposition of faceted search by inverting the roles of precision and recall. Given the importance of recall, I hope to see progress in this direction. If this is a topic that interests you, give me a shout. Especially if you’re a student looking for an internship this summer!

By Daniel Tunkelang

High-Class Consultant.

11 replies on “Guided Exploration = Faceted Search, Backwards”

Interesting post! I was wondering, though, whether you would run into problems because of the cognitive load placed on the user.
In the faceted search scenario, the original facet “CMU” might result in several subfacets (CMU IR Lab, CMU Mechanical Engineering, etc.). It is immediately clear how CMU was further partitioned.
In the guided exploration scenario however, the user will need to think about why MIT was chosen together with CMU. Maybe all the presented choices remain unclear to the user? Did you do any user experiments? Would that be an issue?

Claudia, thanks for commenting! I haven’t done any user experiments — I should have made clear that this is a vision rather than an actual implementation. Regardless, cognitive load is a concern that such an approach will have to address. Refinements that reduce the result set are probably more intuitive than those that expand it. The presentation would have to make clear that the refinements are being selected based on their retrieval of results similar to the current ones.

Well, I managed to find 30 minutes to prototype an idea your posting brought to mind.

The URL below contains a screenshot of my (old) OO Query Editor in which I mocked up three different variations on your query above with illustrative similarity metric comparison values shown between each of the query output result sets.

Would such a tool (plugged into LinkedIn’s graph store) be of use for evaluating and evolving candidate frameworks such as the framework in your posting, above?

Would an app similar to this be helpful to recruiters trying to formulate a candidate search?

(The actual GUI app uses reflection-derived metadata to guide the user through filling out each decision tree node with popups w/available attributes/predicates.)

Thanks again for writing the article.

Rick

PS: Here’s the URL: http://goo.gl/gZvcj

Rick, thanks for the comments and link! I think there are several key challenges to making this vision a reality, but the two main ones are rapidly generating and evaluating candidate expansions, and presenting those expansions in a way that creates a satisfying and productive user experience. It’s a great combination of algorithm and design challenges!

Daniel, thanks for your follow-up.

First, I wanted to place my screenshot on your blog, not post it as a hidden link on my website. I seem to recall in the past your explaining how to insert an image into a comment on TheNoisyChannel.com, but I couldn’t find your instructions in my e-mail, by searching your blog site, or via Google queries targeting your site’s content. Would you mind repeating those steps if this can be done?

Second, I really like the simplicity of checking or un-checking a check box to change or remove a criterion from a query. The older OO Query app displayed in my screenshot does not have this level of simplicity… something to think about if I ever decide to update and adapt that framework to work with distributed graphs, finish it, and make it available on the market.

Also, I initially was thinking that a similar decision tree-oriented tool might be helpful to you, the algorithm designer, in experimenting with your backwards faceted search framework – not end users (recruiters). My tool was designed for people such as FAA airspace monitoring personnel and it offered a lot of power and flexibility, albeit at the cost of about a one-hour learning curve. But this is off topic…

Thanks again, and if you would like some informal help on this project, I would be happy to offer some of my time. My thinking has evolved quite a bit since 2003, when I developed the OO Query tool, and I might be able to offer some ideas in both the UX and computational speed/storage areas.

Thanks for the very interesting post Daniel.

It’s really fascinating how different two implementations of the same idea can be when developed in isolation from each other.

We incorporated the same guided expansion concept, but looked at it from the opposite angle: bottom-up from the record, rather than testing alternate filter parameters. The algorithm that I came up with is (at a very high level):

1) Compute nearest-neighbors for every record in the full corpus. The distance (semblance) metrics are project specific. A huge number of weights can be taken into account. Index pointers from every record to the top X records above a fixed threshold on the computed distance value. Let’s call these the semblance indexes. This is an expensive but off-line computation.

2) When a user search is executed, create the result set as usual. Also create the faceted navigation results as usual. Let’s call these the core results.

3) Create a second set of results which is the union of the core result set, and all the records that are in the semblance indexes to the records in the result set.

4) Compute the faceted navigation results for the expanded set.

5) You can now compare the faceted navigation results from the core result set to the expanded results. There is a multitude of ways we’ve come up with for presenting that.

Step 3 is the expansion set. This was probably brought to mind by reading all of those fuzzy logic books in the 90s.
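
In rough code, that expansion step might look like the following sketch, where semblance_index is the structure built offline in step 1:

```python
# Step 3 sketch: union the core result ids with each core record's
# precomputed nearest neighbors. `semblance_index` maps a record id to
# its top-X neighbor ids above the distance threshold (hypothetical).
def expand(core_ids, semblance_index):
    expanded = set(core_ids)
    for rid in core_ids:
        expanded |= set(semblance_index.get(rid, ()))
    return expanded
```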

I think the net effect would be similar. Ya?

I’m currently building a system for example-based exploration, but it’s a little different. In my case, the problem is finding relevant snippets of text within large text collections without much metadata.

So when you wrote about the difficulty of guided exploration for LinkedIn, I immediately thought: why not relevance feedback? With LinkedIn data, you have so many informative features for each person. You could easily have the recruiter select examples of the right kind of candidate, and have the system use relevance feedback to suggest more. If you did this within the framework of faceted navigation — continuing to show the distributions of companies, universities, and whatnot — the recruiter might themselves learn, “Oh, CMU and MIT seem to be really similar, maybe I can start there next time”.

Carl, great to see you here! What you describe is a lot like what I did with Tower Records data back in the day, using the similarity measure described here. But that experiment never made it to production. Great to see that you’re using a similar approach in production!

Aditi, relevance feedback is also a possible way to guide expansion — basically allowing the user to help define the similarity metric. But I’m inclined to have that feedback be in the form of facet values rather than people, so that the communication is more transparent. “More like this person” doesn’t necessarily communicate what the user likes about a person.
