Blogs I Read: UXmatters

According to Wikipedia, user experience is “the overarching experience a person has as a result of their interactions with a particular product or service, its delivery, and related artifacts, according to their design.” While I’ve never labeled myself a designer, I have always cared deeply about user experience, even back before my information retrieval days, when I was working on graph drawing. Indeed, user experience is the defining problem for HCIR.

One of my favorite resources for learning about user experience is the UXmatters blog. This group blog boasts a set of authors who represent a diverse collection of industry practitioners (and one academic) and who offer concrete case studies and recommendations.

For example, in “Best Practices for Designing Faceted Search Filters”, Greg Nudelman offers a constructive critique of the Office Depot search user interface. Some of his material will be familiar to those who have read my faceted search book (particularly the chapter on front-end concerns), but the focus on a single example makes for a compelling read. I also liked Greg’s most recent post, entitled “Cameras, Music, and Mattresses: Designing Query Disambiguation Solutions for the Real World”. I was amused that he and I use the same “canonical” example for the need to offer clarification before refinement. 🙂

Here are a few more posts from other authors to give you a taste for the blog:

If you are a user experience professional, in name or in deed, then you should be reading the UXmatters blog — or perhaps even contributing to it. Of course, you’re always welcome to contribute a guest post here too.

By Daniel Tunkelang

High-Class Consultant.

10 replies on “Blogs I Read: UXmatters”

From the Nudelman article: The only way people could improve their search results was by typing more keywords into the search box, which takes both thought and work—two things any busy, distracted Internet user can do without.

I think this is only one of about three or four times that I’ve ever heard this opinion expressed in the search industry.

Frankly, I subscribe to this opinion: it is more work to think up new terms out of thin air than it is to look over a list of 3 or 10 or 20 terms and pick the one that most resonates with you.

But almost everyone else disagrees. Why is that?

If everyone else disagrees, then why is auto-completion for search queries popular to the point of ubiquity? I know it’s only a thin slice of the HCIR pie, but it’s also one that many search engine developers see as low risk, the occasional controversy notwithstanding.

Think about what auto-completion is, and how it arises: Auto-completion works because you’ve already formulated the query that you’re going to use, and have started typing it. Google saves you from having to finish typing the whole thing. That’s all.

For example, I can start typing “exploratory se…” and it will suggest “exploratory search”. But I have to already know what it is that I am looking for (i.e., what query terms I need to use to describe what I need) in order to get that suggestion in the first place.
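To make that concrete, here is a minimal sketch (toy Python, with a made-up query log standing in for whatever the engine actually mines from its traffic) of what prefix completion amounts to: match what the user has typed so far against queries already seen, and rank the matches by popularity.

from collections import Counter

# Made-up query log; a real engine would mine this from actual search traffic.
query_log = [
    "exploratory search",
    "exploratory search tutorial",
    "exploratory data analysis",
    "exploratory search",
    "faceted search",
]

query_counts = Counter(query_log)

def autocomplete(prefix, k=3):
    # Return up to k logged queries starting with `prefix`, most frequent first.
    matches = [q for q in query_counts if q.startswith(prefix)]
    return sorted(matches, key=lambda q: -query_counts[q])[:k]

print(autocomplete("exploratory se"))
# -> ['exploratory search', 'exploratory search tutorial']

The prefix does all the work: if you haven’t already started typing the right words, there is nothing for the system to complete.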

I argue that auto-complete is not an instance of what I am talking about. It’s not even that it’s a lightweight, low-risk version. I see it as qualitatively different.

Not that I don’t like it. I find auto-complete very helpful. It’s just that by the time auto-complete takes action, all the difficult thought has already been done by the user. Natch?

BTW, get some sleep! 🙂

Sometimes, but not always. Auto-completion sometimes suggests multi-word queries that aren’t fully formed in the searcher’s mind, thus contributing at least a little to the query elaboration process. It’s a small step, for sure. But I think it’s more than just a convenience.

Of course, support for query elaboration can and should be so much more–and it’s not like people aren’t trying. All of the major search engines have some kind of “related searches” functionality. But I think auto-completion is the biggest success story of getting users to select rather than enter query terms.

Auto-completion sometimes suggests multi-word queries that aren’t fully formed in the searcher’s mind, thus contributing at least a little to the query elaboration process.

Grr… you’re making me think and reevaluate my views. Darn you! 😉 OK, I’ll concede that, even though autosuggest requires you to already be well on your way in thinking up and choosing the query terms you want to use, for some queries, some small percentage of the time, auto-complete probably does help you find vocabulary that you wouldn’t otherwise have thought to use on your own.

But even if that does happen, from time to time, I think it is incidental to the main purpose of the tool, which is.. well.. auto-complete. Meaning that you’ve already completed the thought (the formulation of the query) in your head, and you’re just using the tool to save some typing. Even spelling correction in MS Word sometimes helps me learn new vocabulary terms. But that is by chance, not by design.

The point is, most everyone does agree that the way to get better search is to type more words into a box. Even Norvig says so. I think this comment sums up the attitude of most of the industry:

The Unreasonable Effectiveness of Data

The only way to get into the tail is to think hard and type more terms into the search box.

I thought Nudelman was offering a different perspective. I thought he was saying that, through better UX design, you can get people into the tail without forcing them to type more terms, whether those terms are auto-completed or not.

And I think it’s still true that most of the industry agrees with Norvig.

I’ll concede that auto-completion was probably designed for recovery rather than discovery. But it is a happy accident, and I suspect that the trend towards longer queries at least partly reflects people taking advantage of auto-completion. Granted, some of that is laziness, i.e., people entering longer queries that they had in mind but might not have been willing to type out in full. But I suspect a fair amount of it comes from people selecting pre-elaborated queries they would not have thought to enter themselves.

As for Norvig’s comment, I don’t see how it’s incompatible with users selecting terms vs. typing them in. I see his point as being that, once the tail is too thinly spread, the user has to supply more information to access it. I think there’s some truth to that, even when the system does its best to offer guidance. Ideally a progressive refinement interface (like faceted search) can find any sliver of the corpus, but the refinement options themselves need to be prioritized by their expected utility to the user (since interface real estate and user attention are finite). That means that finding a thin slice is likely to involve many refinements–which also means many opportunities to introduce noise into the process.
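To make the prioritization point concrete, here is a toy sketch with invented numbers and a deliberately simplistic utility model (click likelihood times how much the refinement narrows the result set); a real system would estimate both quantities from behavioral data.

from math import log2

# Invented candidates: facet value -> (estimated click probability, fraction of results kept).
candidates = {
    "brand: Acme":      (0.30, 0.40),
    "price: under $50": (0.25, 0.35),
    "color: red":       (0.10, 0.15),
    "rating: 4+ stars": (0.20, 0.50),
    "in stock only":    (0.15, 0.80),
}

def expected_utility(p_click, kept_fraction):
    # Expected information gained: chance of the click times how much it narrows the set.
    return p_click * -log2(kept_fraction)

ranked = sorted(candidates, key=lambda v: -expected_utility(*candidates[v]))
print(ranked[:3])  # only three slots of interface real estate to spend

Every slot spent on a low-utility refinement is a slot that cannot guide the user toward the sliver they actually want.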

I do think this is doable on an e-commerce site–or anywhere that the facets are reasonably rich and trustworthy. It’s a bit harder on the open web. So I’d cut the search engines a bit of slack for expecting more initial signal from users who are asking higher-information queries.

Indeed, moving from Nudelman to Needleman (with apologies to Buffon), there’s something to be said for optimizing for the short snout.

I had started to write a whole big ol’ point-by-point response, but thought better of it.

Let me boil it down to this: There is a difference between the underlying technology (probabilistic n-grams) and the applications of that technology (auto-complete vs. exploration).

I’m all over using n-grams in search. Did my dissertation on the topic of determining the similarity between two pieces of music based on note and chord n-gram completion probabilities. Think of it as massively parallel auto-complete, on a pairwise song basis. If that makes sense.
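A toy version of the idea (nothing like the actual dissertation models, just bigrams over invented note sequences): estimate completion probabilities from one song’s note transitions and see how well they predict the other song’s.

from collections import Counter

def bigram_model(notes):
    # P(next note | current note), estimated from one song's transitions.
    pairs = Counter(zip(notes, notes[1:]))
    firsts = Counter(notes[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in pairs.items()}

def cross_likelihood(model, notes):
    # Average probability the model assigns to the other song's transitions;
    # unseen transitions get a tiny smoothing value instead of zero.
    pairs = list(zip(notes, notes[1:]))
    return sum(model.get(p, 1e-6) for p in pairs) / len(pairs)

# Invented note sequences standing in for real melodies.
song_a = ["C", "E", "G", "C", "E", "G", "A", "G"]
song_b = ["C", "E", "G", "C", "F", "A", "G", "C"]

print(round(cross_likelihood(bigram_model(song_a), song_b), 3))

Same machinery as auto-complete (predict the next symbol from the preceding context), applied to a completely different end.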

But there is a difference between the technology (n-gram probabilities) and the application of that technology (song similarity / music information retrieval).

Same with autocomplete. You can use the same underlying technology to do navigation, or you can use it to do exploration. Viegas and Wattenberg are thinking exploratorily, though some navigation is bound to happen. Google is thinking navigationally, though some exploration is also bound to happen. You said something similar, above. To me, that’s the bottom line.

I probably should clarify a bit more on the Norvig thing, but I’m tired. Perhaps a good chat during SSM?

My suspicion – which is wholly intuitive and not backed up with user research – is that autocomplete is probably not as useful as it seems. I suspect that most users know what they are searching for and just type their 2.x keywords. I further suspect that users who can type with all fingers would not stop midway to make an autocomplete selection.

There is a minority of cases where an autocomplete suggestion for keywords helps. Most times, I type what I want and then use the ‘did you mean’ facility to correct the keywords if one or more were typed incorrectly.

Btw, very interesting set of posts.

Sadly, I’m not aware of any published research on the effectiveness (or ineffectiveness) of auto-completion for improving information seeking. But I’m not the only one to speculate that auto-completion is a significant factor in the upward trend in query length.

Comments are closed.