A guiding principle in information technology has been to enable people to perform tasks at the “speed of thought”. The goal is not just to make us more efficient in our use of technology, but to remove the delays and distractions that make us focus on the technology rather than the tasks themselves.
For example, the principal motivation for the faceted search work I did at Endeca was to eliminate hurdles that discourage people from exploring information spaces. Most sites already offered users the ability to perform this exploration through advanced or parametric search interfaces–indeed, I recall some critics of faceted search objecting that it was nothing new. But there’s a reason that most of today’s consumer-facing sites place faceted search front and center while relegating advanced search interfaces to an obscure page for power users. Faceted search offers users the fluidity and instant feedback that make exploration natural. Once you’re used to it, it’s hard to live without, whether you’re looking for real estate (compare Zillow.com to housing search on craigslist), library books (compare the Triangle Research Libraries Network to the Library of Congress), or art (compare art.com to artnet).
Why is faceted search such a significant improvement over advanced or parametric search interfaces? Because it supports exploration at the speed of thought. If it takes you several seconds–rather than a single click–to refine a query, and if you have to repeatedly back off from pages with no results (aka dead ends), your motivation to explore a document collection fades quickly. But when that experience is fluid, you explore without even thinking about it. That is the promise (admittedly not always fulfilled) of faceted search.
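To make the mechanics concrete, here is a minimal sketch (not any particular product’s implementation–the collection, fields, and values are invented for illustration) of what a faceted refinement does: filter the collection on the selected facet values, then recompute the counts for the remaining facet values, so that every refinement the interface offers is guaranteed to be non-empty:

```python
from collections import Counter

# Toy document collection; fields and values are invented for illustration.
docs = [
    {"type": "condo", "city": "Durham", "beds": 2},
    {"type": "house", "city": "Durham", "beds": 3},
    {"type": "house", "city": "Raleigh", "beds": 3},
    {"type": "condo", "city": "Raleigh", "beds": 1},
]

def refine(docs, **filters):
    """Keep only the documents matching every selected facet value."""
    return [d for d in docs if all(d[k] == v for k, v in filters.items())]

def facet_counts(docs, field):
    """Count the facet values present in the (already refined) results."""
    return Counter(d[field] for d in docs)

# One click: refine on city=Durham, then show counts for the 'type' facet.
results = refine(docs, city="Durham")
print(facet_counts(results, "type"))  # Counter({'condo': 1, 'house': 1})
```

Because the counts are computed over the refined results, the interface never offers a refinement that leads to a dead end–which is exactly the fluidity the paragraph above describes.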
Microsoft Live Labs director Gary Flake offered a similar message in his SIGIR 2010 keynote. He argued that we need to replace our current discrete interactions with search engines with a mode of continuous, fluid interaction in which the whole of the data is greater than the sum of its parts. While he offered Microsoft’s Pivot client as an example of this vision, he could also have invoked the title of a book that Bill Gates wrote in 1999: Business @ the Speed of Thought. Indeed, anyone who has ever worked on data analysis understands that you ask fewer questions when you know you’ll have to wait for answers. Speed changes the way you interact with information.
And at Google, speed has been an obsession since day one. It makes the top 3 on the “Ten things we know to be true” list:
3. Fast is better than slow.
We know your time is valuable, so when you’re seeking an answer on the web you want it right away – and we aim to please. We may be the only people in the world who can say our goal is to have people leave our website as quickly as possible. By shaving excess bits and bytes from our pages and increasing the efficiency of our serving environment, we’ve broken our own speed records many times over, so that the average response time on a search result is a fraction of a second. We keep speed in mind with each new product we release, whether it’s a mobile application or Google Chrome, a browser designed to be fast enough for the modern web. And we continue to work on making it all go even faster.
People have made much of Google VP Marissa Mayer’s estimate that Google Instant will save 350 million hours of users’ time per year by shaving two to five seconds per search. That’s an impressive number, but I personally think it understates the impact of this interface change. Rather, I’m inclined to focus on a phrase I’ve seen repeatedly associated with Google Instant: “search at the speed of thought”.
What does that mean in practice? I see two major wins from Google Instant:
1) Typing speed and spelling accuracy don’t get in the way. For example, by the time you’ve typed [m n], you see results for M. Night Shyamalan, a name whose length and spelling might frustrate even his fans. A search for [marc z] offers results for Facebook CEO Mark Zuckerberg. Admittedly, the pre-Instant type-ahead suggestions already got us most of the way there, but the feedback of actual results offers not just guidance but certainty.
2) Users spend less time–and hopefully none–in a limbo where they don’t know if the system has understood the information-seeking intent they have expressed as a query. For example, if I’m interested in learning more about the Bob Dylan song “Forever Young”, I might enter [forever young] as a search query–indeed, the suggestion shows up as soon as I’ve typed in “fore”. But a glance at the first few instant results for [forever young] makes it clear that there are lots of songs by this title (including those by Rod Stewart and Alphaville–as well as the recent Jay Z song “Young Forever” that reworks the latter). Realizing that my query is ambiguous, I type the single letter “d” and instantly see results for the Dylan song. Yes, I could have backed out from an unsuccessful query and then tried again, but instant feedback means far less frustration.
Google Instant also makes it a little easier for users to explore the space of queries related to their information need, but exploration through instant suggestions is very limited compared to using related searches or the wonder wheel–let alone what we might be able to do with faceted web search. I’d love to see this sort of exploration become more fluid, but I recognize the imperative to maintain the simplicity of the search box. Good for us HCIR folks to know that there’s still lots of work to do on search interface innovation!
But, in short, speed matters. Instant communication has transformed the way we interact with one another–both personally and professionally. Instant search is more subtle, but I think it will transform the way we interact with information on the web. I am very proud of my colleagues’ collective effort to make it possible.
74 replies on “Search at the Speed of Thought”
And I’ll add that consumers (i.e., Joe Public) know that they can rely on Google’s results, and that it isn’t the results per se that are the focus of Instant but the query formulation.
So Google subtly prompts the user to be more specific, when the shorter query isn’t disambiguating enough? (Not disambiguation in terms of jaguar/jaguar. But disambiguation in terms of typing in “49ers”.. and then clarifying that you meant tickets, schedule, roster, etc.) Is that what you mean by query formulation?
It’s important to keep in mind that “exploratory search”, as Marchionini defines it, is not just about facets. It’s about learning and discovery.
One thing that I would *really* like Google (and Bing and Yahoo) to do is give us a way of turning off various ranking signals, such as popularity. Or “sort by recency”. I want to be able to type “49ers”, but not have a directed goal, such as tickets or roster. I want to see what people have said about the 49ers in years past, versus in this year. And typing “49ers what people are saying” isn’t going to cut it. Instead, adding and removing other, non-textual cues to the query, could very well get me to web pages that I might not have otherwise found, by adding *any* more terms to the formulated query.
That’s exploration. *That* would be valuable to me, if it were made instant.
I am on your side with your points but I’m looking at Instant from Google’s motivation and its impact down the line on other services. For example, it is obvious that if the query logs become finer-grained then it becomes ever easier to train voice recognition systems (across multiple languages). The Instant query logs should also help with Translate.
I don’t quite follow.. how would Instant help Translate? Would users’ decisions about whether to continue typing or to stop, when using Instant, give Google better feedback on the veracity of their n-gram conditional probabilities? Which would then translate into a better Translate? Hmm. I could see that.
To me, this Google blogpost sums up the problem in a nutshell:
Sure, Google is correct when they say that many image formats are outdated, and can be improved upon. But then how do they go about improving them? Do they offer a nice color management system in Chrome? Far from it:
Who cares if the images load a few milliseconds faster.. I want the images to look better.
As usual, it’s speed over quality. And while it’s true that some fancy new compression algorithm might not hurt the relative placement of the pixels in an image (introduce artifacts, etc.), it comes at a cost. All the time spent optimizing size means that the colors won’t be right, because there is no color management built into Chrome. All the engineering talent has been focused elsewhere.
So going back to my point about separate markets vs. two aspects of the same market.. this is yet another example. It’s not enough just to get images to load faster. You also have to get the colors right, once they do load. The overall quality of the image is a function both of pixels/artifacts.. AND color profiles. And to only focus on one aspect, and ignore the other, keeps the overall quality of the image as a whole from improving.
I see the same patterns of lopsided optimization in their web search arena. I know, I’ve said this a dozen times above. I felt the need to share the image blogpost/example, though, to drive the point home.
I switched to Chrome for its speed. On the occasions I use Firefox for sites that require it, it feels painfully slow. And I still notice slow page loads for some image-heavy sites–especially when I’m tethering to a non-broadband connection. I don’t perceive a problem with image quality. Maybe I’m just an impatient philistine. 🙂
Am calling you no such thing, nor am I claiming to be some great artist or intellectual, myself. 🙂 It’s just that when I read that Chrome blog post, I perceived yet another example of the same sort of Google mantra, only this time in an area unrelated to web search. The idea that you want images to load slightly faster (new compression/image format), vs. wanting the images to be more beautiful/accurate/true (color management). It’s the same pattern of behavior, expressed in a different domain.
It’s frustrating, because Google claims that they focus on the user and all else will follow (quote: “Whether we’re designing a new Internet browser or a new tweak to the look of the homepage, we take great care to ensure that they will ultimately serve you, rather than our own internal goal or bottom line.”).
But that always seems to translate out the same way. It focuses on only one kind of user. And leaves the rest of us in the dust. Yes, that was a speed-related pun.
I should have said “could also help with Translate” and was referring to the translation of languages through their voice recognition service.
Google has always focused on ‘fast’ as a core tenet, so no surprise there wrt Instant. This would permeate other products and services, and when a call has to be made between fast image loading vs image display quality, I’d assume fast wins.
A few months back, Apple acquired a tiny UK company that provides high dynamic range (HDR) imaging (http://en.wikipedia.org/wiki/High_dynamic_range_imaging), which appeared in the recent iOS 4.1 release. Speed isn’t Apple’s focus, but aesthetics ranks pretty high.
Doesn’t answer your question but it is very likely a cultural thing.
My point is simply that you can’t have it both ways. You can’t say that your corporate culture is to make everything fast, because fast is categorically better than slow.. while simultaneously saying that you’re going to focus on the user exclusively, and what the user wants, rather than on what Google wants (again: “Whether we’re designing a new Internet browser or a new tweak to the look of the homepage, we take great care to ensure that they will ultimately serve you, rather than our own internal goal or bottom line.”).
It’s their internal goal to make things fast, while it’s my goal as a user to have beautiful (color managed) photos and web search results that let me look deep into the web collection: understand, synthesize, learn, discover.
Looks like those internal goals are winning.
(And please note that I say this not to criticize Google for the sake of criticizing Google.. but because I’d really like to see some of that Google quality engineering applied to *my* user needs.)
Chrome is reported as having achieved 8% market share in its first two years:
That looks to me like serving users’ goals.
Again, I understand that pattern you’re trying to generalize, but I actually think Chrome isn’t a good example to make your case. The importance of speed in web search may be controversial, but browser speed (especially startup time) was a problem for a lot of users, and Chrome has attracted a very passionate following in large part because of how far it has gone to address that problem.
Bah, startup time. That’s why you set firefox to load when your OS loads. It has that option.
And 8% market share? Firefox came out in 2004, right? By its second year, it was at 16% market share: http://techcrunch.com/2006/07/11/firefox-surgest-to-15-market-share-in-us/
So maybe Google isn’t serving the user as well as it should? 😉
But, alright, then let’s not talk Chrome.. let’s again talk about Google’s attempt at yet another image format for the web: WebP. That was the main subject (rather than the Chrome browser) of that Google blogpost that I cited to kick off this non-search subthread:
There, the goal is still speed (x% reduction in image size) rather than quality (embedded color profiles, etc.) In fact, take a look at this other blogpost:
“Thus we come back to the conclusion I’ve made over and over on this blog — the encoder matters more than the video format, and good psy optimizations are more important than anything else for compression. libvpx, a much more powerful encoder than ffmpeg’s jpeg encoder, loses because it tries too hard to optimize for PSNR. These results raise an obvious question — is Google nuts? I could understand the push for “WebP” if it was better than JPEG. And sure, technically as a file format it is, and an encoder could be made for it that’s better than JPEG. But note the word “could”. Why announce it now when libvpx is still such an awful encoder? You’d have to be nuts to try to replace JPEG with this blurry mess as-is. Now, I don’t expect libvpx to be able to compete with x264, the best encoder in the world — but surely it should be able to beat an image format released in 1992? Earth to Google: make the encoder good first, then promote it as better than the alternatives. The reverse doesn’t work quite as well.”
I sense a high degree of irony here. This is the same thing that folks have been saying about Yahoo and Bing web search. You can’t just have a search engine that is as good as Google’s… or even quantitatively 10-20% better, otherwise people aren’t going to switch. You have to have a search engine that is qualitatively different or better.
And with this whole new image format, the blog writer makes the point that what matters, qualitatively, is not the image format itself. It’s also not the entropy characteristics of the encoder. It’s the psychological characteristics of the encoder. It’s that “good” is better than “fast”.
And yet Google still says “fast” is better than “good”.
Whatever example we want to use, am I not correct in saying that there is a problem with categorically declaring these two things at the same time: (a) we serve the user, not our own internal goals, and (b) fast is better than slow? When you’re lucky, “fast” will serve the user best. But there are many scenarios–information seeking, image viewing, and so on–in which the fastest solution is not the one that serves the user best.
Please, disabuse me of this notion if I’m wrong.
BTW, 62 comments. Woohoo. Not that anyone else is reading anymore, though…
This is a crazy list of comments.
@Daniel: I was offline for about 4 weeks. It builds up. 😉
I suspect that making individual precision-oriented queries faster increases exploration for those who have any interest in it, since it lowers the cost. just my 2 cents
Exploration may be affected adversely by the auto-complete feature of Google Instant, as it will try to route searchers to similar previously found results. This may lead the searcher to accept these “popular” results, thereby undermining the exploratory aspects of search. Increasing speed doesn’t kill exploratory search, decreasing diversity does.
Agreed. Catering to lazy users is great when there is a clear-cut best answer, but it is likely to reduce diversity in other cases. But the alternative is to force users to do more work when we could let them do less work and satisfice. It’s an interesting trade-off.
No, this whole idea of forcing users to do more work is a false dichotomy. Because if the user has an “explorational” type of information need (the Gary Marchionini sense of exploratory search, by which is meant things like learning, synthesis, comparison, analysis, discovery, etc.) then the user already has to type in 100 different queries to get what they want, even if auto-complete is turned on. If auto-complete saves the user 0.7 seconds per query, that’s a meager improvement, and does not fundamentally change the overall time (and effort) that it takes to complete the task as a whole.
No, the alternative is for the system to do something more than just return 10 results. The alternative is to provide explicit query modes for automatically synthesizing or comparing different queries. For example, imagine if you had the ability to type in two queries side by side. Two columns, two query boxes. And then what the system could do is give you an intelligent “diff” between the two sets of results.
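As a purely hypothetical sketch of what that “intelligent diff” might compute (no mainstream engine offers this; the queries and URLs below are invented), the simplest version partitions two ranked result lists into what they share and what is specific to each query:

```python
def diff_results(results_a, results_b):
    """Partition two ranked result lists into shared and query-specific URLs,
    preserving each list's original ranking order."""
    set_a, set_b = set(results_a), set(results_b)
    return {
        "both":   [u for u in results_a if u in set_b],
        "only_a": [u for u in results_a if u not in set_b],
        "only_b": [u for u in results_b if u not in set_a],
    }

# Invented top results for two side-by-side queries.
a = ["dylan.com/forever-young", "wikipedia/forever-young", "lyrics.com/fy"]
b = ["wikipedia/forever-young", "rodstewart.com/forever-young"]

d = diff_results(a, b)
print(d["both"])    # ['wikipedia/forever-young']
print(d["only_a"])  # ['dylan.com/forever-young', 'lyrics.com/fy']
```

A real system would presumably diff at the level of concepts or snippets rather than raw URLs, but even this crude set comparison shows the shape of the interaction: one comparative query surfaces what would otherwise take several back-and-forth refinements to discover.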
Just knowing how two concepts were similar or different might save you from having to type in 30 more queries, until you got enough understanding to give you the same level of information. Again, here are the choices and consequences.
(a) user types 100 queries, no auto-complete. Diversity is larger, but effort is great
(b) user types 100 queries, with auto-complete. Diversity is smaller, but effort is 10% less per query, because of auto-complete.
(c) user types 20 queries, but types them as side-by-side comparative queries. user gets as much information as from 100 queries in (a), but has to expend 80% less effort, because only 20 queries are typed.
What I dislike in these discussions is that the argument is always framed as a choice between (a) and (b), when really (b) is a red herring. The choice should be between (a) and (c). And (c) blows both (b) and (a) out of the water.
However, it costs the search engine much more in terms of processing power. And it’s not always clear what sort of ads to show next to a “comparative query”. The user saves immensely, though. Which is why it frustrates me that we never see type (c) queries offered by mainstream web search engines.
Forcing is a strong word. But I think there’s no escaping that, at least in some cases, encouraging users to explore all options vs. satisficing is a trade-off. In other cases, there is no doubt that supporting exploration can save time. That is one of the reasons I love working on systems that support exploratory search!
By not providing a certain type of functionality, I think you are indeed “forcing” the users to use only what is left, what is available.
And if what is available won’t do what they’re trying to do, won’t explicitly support the exploration that they’re engaged in, then yes, I think it is fair to say that the search engine is “forcing” them to do more work.
And it has nothing to do with exploring all options vs. satisficing. Going back to my example, about how it would take 100 queries on a standard search engine (even with auto-complete turned on), vs. 20 queries on an exploratory search engine. Whichever one the user is using, they still have the choice to “explore all” vs. “satisfice”.
Translation (and remember from comment #68 above that search engine (b) is the auto-complete precision-oriented engine, whereas engine (c) is the comparative, exploratory engine):
– “Explore all” using search engine (b) = 100 queries
– “Satisfice” using search engine (b) = 20 queries
– “Explore all” using search engine (c) = 20 queries
– “Satisfice” using search engine (c) = 4 queries
Do you see what I’m getting at? Doesn’t matter if you’re satisficing or going full-tilt. A search engine purpose-built to your type of information need is going to allow you to do less work, overall. And auto-complete is not that type of engine. Even when satisficing and using precision-oriented autocomplete, you have to do more work (nay, are “forced” to do more work!) than you do when satisficing and using an exploratory engine.
I think you see that, Daniel, but I think 97% of the rest of the search community doesn’t. I think that they think that if you add up those 0.7 second savings across 20 satisficing queries, you save more time than if you only have to do 4 satisficing queries.
And frankly, given the amount of time to do a single query, that math just doesn’t add up.
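The arithmetic is easy to check under illustrative assumptions (the 10-second per-query cost below is invented; the 0.7-second savings and the 20-vs-4 query counts come from the discussion above):

```python
SECONDS_PER_QUERY = 10.0    # assumed user effort per query (illustrative)
AUTOCOMPLETE_SAVINGS = 0.7  # claimed per-query savings from auto-complete

# (b) precision-oriented engine with auto-complete: 20 satisficing queries
time_b = 20 * (SECONDS_PER_QUERY - AUTOCOMPLETE_SAVINGS)
# (c) comparative/exploratory engine, no auto-complete: 4 satisficing queries
time_c = 4 * SECONDS_PER_QUERY

print(round(time_b), round(time_c))  # 186 40
```

Under these assumptions, cutting the query count dominates the per-query savings by more than a factor of four; the per-query cost would have to be implausibly small before auto-complete alone closed the gap.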
Well, I no longer work for Google, but I do appreciate the benefits of Google’s auto-complete functionality — including instant results. Of course, I’d also like rich support for exploratory search. But I’ll take what I can get. 🙂
I think we have no choice but to be forced to take what we can get 🙂
But folks like “cute love quotes gal” above in comment #65 (who, again, represent 97% of the community) need to understand why, for an exploratory information need, 0.7 seconds faster on a single query does not necessarily translate to faster overall task completion time. In fact, it might be not only slower overall but also require more work from the user, because you’re having to do 5x the number of queries to get the same information, even if you’re only satisficing.