Google Markets Itself

I still don’t buy that Google is “gripped with fear”, but I agree with Danny Sullivan’s analysis that Google’s new “Explore Google Search” page (with a link in the usually sacrosanct real estate on the home page) is a reaction to Microsoft’s campaign to market Bing. I’d be curious to know what fraction of Google’s users are aware of the features Google enumerates on that page–perhaps users will actually benefit from the education. But more likely Google is simply disgruntled to see Microsoft getting press for an allegedly different approach that, in most cases, looks a lot like what Google (and, as Sullivan points out, Yahoo) already does.

I’d like to see competition over actual innovation, rather than over perceived innovation through marketing. I suppose I can’t blame Google for tooting its own horn. But the timing does make Google look a bit defensive–or at least reactive. I’m sure it at least put smiles on the faces of the Bing team to have drawn out a reaction.

By Daniel Tunkelang

High-Class Consultant.

24 replies on “Google Markets Itself”

But more likely Google is simply disgruntled to see Microsoft getting press for an allegedly different approach that, in most cases, looks a lot like what Google (and, as Sullivan points out, Yahoo) already does.

Here’s what I see as the key difference: Even if Google has lots of the same options as Bing, they don’t get utilized the same way. Google hides the options. You have to look really hard to find a lot of their stuff. With Bing, they’ve taken more of a user-centered approach, and make the information, the query refactorization, the facets, etc. much more integral to the search and the search results themselves.

So that’s it. Google hides what it has. Bing makes what it has an integral part of the process. That’s not a small difference.

I frankly prefer the latter approach. Even now, Google isn’t putting any of the search options on the page itself, or as options in a column on the results page. No. It’s still at least one more click away from what the user is doing, taking them away from their task, and onto a separate page.

That doesn’t feel right. That doesn’t feel good. I don’t want to have to learn (and remember!) some complex, arcane syntax to make my searches work. I want the engine to expose to me, automatically, my exploratory options. Bing does it. Google refuses to. Again, it’s more than an “allegedly” different approach. It *is* a different approach.
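To make “expose my exploratory options” concrete, here is a rough sketch (in Python, with made-up result records and field names, purely for illustration; this is not how either engine is actually built) of computing refinement options from the result set itself and showing them alongside the ranked list, rather than hiding them behind a separate page:

```python
from collections import Counter

# Hypothetical result records; in a real engine these would come from the index.
results = [
    {"title": "Faceted search overview",  "site": "wikipedia.org", "year": 2008},
    {"title": "HCIR workshop report",      "site": "acm.org",       "year": 2007},
    {"title": "Exploratory search primer", "site": "acm.org",       "year": 2006},
]

def facet_counts(results, field):
    """Count how many results fall under each value of the given field."""
    return Counter(r[field] for r in results)

# Show the ranked list and the refinement options side by side,
# instead of one click (or more) away on a separate page.
for r in results:
    print(r["title"])
print("Refine by site:", dict(facet_counts(results, "site")))
print("Refine by year:", dict(facet_counts(results, "year")))
```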


@Daniel L: What is that reason? Because they tested one feature for three weeks on some subset of users, and it didn’t work? And then they tested another feature, and it didn’t work? And then they tested a third feature, and it didn’t work? So they concluded that it’s not good to show any features?

I dunno.. to me that sounds like giving a teenager a wheel, and seeing if he uses it, and taking it away when he doesn’t. Then giving him brake pedals, and taking them away when he doesn’t use those. Then giving him a muffler, and taking that away when it goes unused, too. And then concluding that the teenager really doesn’t want to drive his own car.

These HCIR interfaces work not because of any one feature or option. But because of the combined power of the suite of options. Just like you need the wheels, the brakes, the muffler, the engine, etc. *all* to be there, in order for the car to work. Take a piece away, like the wheels or the brakes, and no one will want to use it.

Or am I completely wrong?


“The best feature is the one you use without even knowing.”

Great accurate quote.

Even HCIR interfaces should just work, one shouldn’t have to think about how they work or why they are better.

A great interface (or feature) helps move along the “narrative” of whatever task one is trying to accomplish.


The best feature is the one you use without even knowing.

Suppose I’m doing a recall-oriented search, for example a literature search for a paper that I am writing.

I want to know when my coverage is adequate, when I’ve found the right amount of information to do a proper literature review. I want to know when I should stop looking further, because there probably isn’t any earlier work, any more relevant references.

What feature am I using on Google, that I don’t even know that I am using? And given that I don’t even know that I am using it, how am I becoming aware that my literature search is complete, or that I’ve just asked a bad query?

And speaking of finding the earliest reference, why doesn’t Google let me “sort by most relevant earliest date”? Google lets me sort by most recent. But not by least recent.

Or is Google somehow magically reading my mind, inferring that I am doing a literature search, and automatically activating that “least recent” sorting algorithm on the backend, without me knowing? Is that what’s happening? I don’t believe it.

And even if that is what is happening, how do I really know that Google has decided, on this round, to give me the “least recent” rather than the “most recent” ordering? Since they’re not explicitly telling me what they’re doing, giving me the feedback to orient myself in the search space, how do I even know if going further down the ranked list is going to give me an earlier, or a later, paper?

I still stand by my earlier frustration:

http://irgupf.com/2009/03/04/ranked-lists-and-the-paradox-of-choice/

Or am I really just not understanding something, here? Are you just talking about Google popping up a “maps” onebox when I search for “restaurants in Montreal”? Are you talking about the extra 0.33 seconds of time that it saves me, to have the maps onebox pop up automatically, instead of me having to first click “maps” and then run my query?

That’s not the sort of feature I’m talking about. I’m talking about relevance-oriented slicing and dicing of your search results, tearing them apart and putting them together again, to know for sure that your literature search is as complete as you can make it.
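As a toy illustration of the missing control (nothing that Google or Bing actually exposes; the data and field names here are invented), here is what a “least recent” ordering and a quick coverage check by year might look like over a handful of scored, dated results:

```python
from collections import Counter

# Hypothetical (title, relevance, year) records for a literature search.
results = [
    ("Early HCIR paper",         0.72, 1992),
    ("Survey of faceted search", 0.91, 2006),
    ("Recent follow-up study",   0.85, 2008),
]

def by_date(results, earliest_first=True):
    """Order results by year; earliest_first=True is the missing 'least recent' sort."""
    return sorted(results, key=lambda r: r[2], reverse=not earliest_first)

for title, score, year in by_date(results, earliest_first=True):
    print(year, title, score)

# A per-year breakdown makes gaps in coverage visible at a glance.
print("Coverage by year:", dict(Counter(year for _, _, year in results)))
```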


Wow, it’s nice to have a good debate here again! Let me see where I can chime in.

No one seems to be arguing that Bing offers substantially new functionality relative to Google and Yahoo. So I’ll treat that point as conceded, at least for the sake of argument.

Instead, the point of contention is whether Bing exposes HCIR features to users more prominently than Google and Yahoo–and whether this is notable.

For related searches, I’ll concede that Bing does present them in the results more prominently than Google (though no more so than Yahoo). Though I wonder how much Google makes up for this in their type-ahead suggestions. But are there any other examples of Bing being more aggressive with HCIR than Google?

Also, while I agree with Jeremy’s complaints about Google (e.g., not letting me sort by least recent), I don’t see Bing addressing any of them. In fact, Bing offers less advanced search functionality than Google (e.g., Bing doesn’t even offer recency filtering).

Kosmix, Cuil, Duck Duck Go–all of these guys have shown that you can offer a search experience that is clearly different than Google. They also haven’t made a dent in the search market. But I point at them to show that being different isn’t just about marketing.


Also, while I agree with Jeremy’s complaints about Google (e.g., not letting me sort by least recent), I don’t see Bing addressing any of them.

When I made that particular point, that was not an argument about Google vs. Bing. It was an argument against the more general claim that the “best feature is one that you use without even knowing”. While I agree with the point that it’s nice, even helpful, when the search engine does very minor things, like automatically pop up a map when you do the [restaurants in Montreal] query, it becomes problematic when the search engine is trying to do other things for you, without you even knowing, such as time-based sorting. If I search for a topic without specifying a time-ordering preference, and the search engine decides what time-ordering is “best” for me, without giving me a way of turning the engine’s decision off, then that is not only not helpful, but it is harmful.

Why is it harmful? Because when I enter query after query after query looking for the earliest literature to cite in my paper, and Google (or whatever search engine) automatically, without my even knowing, turns on its “sort by recent” filter, then I might come to believe that there are no papers on the topic older than 2003 — when in fact there might be a whole body of literature from the early 90s. My lack of awareness is an extreme liability, aggravated by a feature that the search engine is using without my even knowing it. That’s my only point.


..because if a search engine is turning some feature on or off, making its decision to sort the results this way or that way, and not actually telling me what decisions it is making, I will never know if looking at “just one more” page will get me to the information that I seek.

That’s where the paradox of choice comes in. I’m constantly having to decide whether or not to look at the next, and the next, and the next, result, because the right information might be always just around the next corner.

So I strongly dislike features that get used without my awareness. Rule #1 is don’t make the user think too hard.. and I’m constantly having to think too hard, because so much is kept from my awareness, hidden from my view.

It’s time for explanatory search, too: http://irgupf.com/2009/03/09/exploration-and-explanation/


No secret that I’m a big fan of transparency and user control. I don’t mind a system pro-actively suggesting something clever (e.g., interpreting a search for “weather” as a request to get a weather forecast in my default or current location) as long as it tells me what it’s doing and lets me control the experience.


No secret that I’m a big fan of transparency and user control.

Me too.

Ultimately, the search (or decision) engine should be an extension of my own brain just like my car is an extension of my body.

As I often say, I want tools that make me smarter, I don’t want smart tools.

We are not there yet, and I am thankful for that. I don’t have to worry about being left in the cold without any purpose, not in my lifetime.


I don’t mind a system pro-actively suggesting something clever (e.g., interpreting a search for “weather” as a request to get a weather forecast in my default or current location) as long as it tells me what it’s doing and lets me control the experience.

Yes, I think those two points are both necessary: (1) Explanation, and (2) Control / Reconfiguration. One without the other leads to a crippled system, and one that is harder to use, and requires more effort.
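A rough sketch of what that pairing could look like in code (the names, the rewrite, and the data are all invented for illustration, not anyone’s actual implementation): the engine records what it did, says why, and accepts a switch to turn the behavior off.

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    query: str           # what the user typed
    rewritten_as: str    # what the engine actually ran
    explanation: str     # (1) Explanation: tell the user what was done and why
    enabled: bool        # (2) Control: the rewrite can be switched off

def interpret(query, allow_rewrites=True):
    if allow_rewrites and query == "weather":
        return Interpretation(
            query=query,
            rewritten_as="weather forecast near current location",
            explanation="Interpreted 'weather' as a forecast request for your location.",
            enabled=True,
        )
    return Interpretation(query, query, "No rewrite applied.", False)

print(interpret("weather"))                        # transparent about the rewrite
print(interpret("weather", allow_rewrites=False))  # and the user can opt out
```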


As I often say, I want tools that make me smarter, I don’t want smart tools.

But.. Daniel L.. you also say that the best feature (tool) is one that you aren’t even aware that you’re using. How is it making you smarter, if you don’t even know what is happening.. if all you get is the end result?

At the risk of getting too hokey, it’s like the old folk saying of giving a man a fish (and he’ll eat for a day), versus teaching the man to fish (and he’ll eat for a lifetime). A feature that gets used without you even knowing it is like giving you a fish. It’s nice, and you get your fish, but the next day, you need to be given another fish or else you’ll go hungry again.

Or do you see it a different way?


How is it making you smarter, if you don’t even know what is happening.

I don’t know how the neurons in my brain form thoughts. If each time I had to form a thought, I had to order my neurons around, I would never get any work done.

(Not that I get any work done these days anyhow.)


Uh oh, we’re starting to converge–got to keep arguing, or the ratings will plummet!

But yes, I also like “conversation”, though I worry that some people still associate the word with natural-language interfaces. Still, it’s a more mainstream term than HCIR!


I also like “conversation”, though I worry that some people still associate the word with natural-language interfaces.

Yes, I just got back from the Semantic Technology conference (http://www.semantic-conference.com/). And in a search panel session (which I plan to write up in a blogpost later today), “conversation” was indeed interpreted as NLP, rather than interaction. So that “false” association is indeed problematic.


I don’t know how the neurons in my brain form thoughts. If each time I had to form a thought, I had to order my neurons around, I would never get any work done.

Let’s draw a distinction between how the neurons work, vs. transparently knowing what it is that the neurons are telling you.

You might not be aware of every single firing synapse, but you are at least aware of when those neurons are telling you different things, such as “eat three chocolate ice cream bars!” versus “only eat one, because of your expanding waistline”. Your neurons are at least making you aware of different ways you can navigate the world, and you can choose between your options. Then, with a little observation, you can see which decision produced the best results.


“eat three chocolate ice cream bars, because it tastes really good!” versus “only eat one, because of your expanding waistline”

And let me point out: In each of those cases/options/facets, you are receiving explanatory (transparent) feedback from your neurons as to why each option was presented. I.e. in case one, the explanation is “tasted good”, and in case two, the explanation is “gonna get fat!”

🙂

