Categories
General

Why Publishers Don’t See Google As A Friend

The raging battle between publishers–particularly the newspapers–and Google has been so overplayed lately that I’m tempted to stop blogging about it until something actually happens beyond the war of words. Still, I recently read two paragraphs that, in my view, neatly summarize the terms of the conflict, and I felt compelled to share them.

The first is from Nick Carr, someone I rarely agree with but who in this case strikes me as spot-on:

What Google doesn’t mention is that the billions of clicks and the millions of ad dollars are so fragmented among so many thousands of sites that no one site earns enough to have a decent online business. Where the real money ends up is at the one point in the system where traffic is concentrated: the Google search engine. Google’s overriding interest is to (a) maximize the amount and velocity of the traffic flowing through the web and (b) ensure that as large a percentage of that traffic as possible goes through its search engine and is exposed to its ads.

The second is from Scott Karp, who actually cited the above paragraph in his own post:

Those who argue that Google is a friend to content owners because it sends them traffic overlook the basic law of supply and demand. The value of “traffic” is entirely relative. The more content there is on the web, the less value that content has — because of the surfeit of ad inventory and abundance of free alternatives to paid content — and thus the less value “traffic” has.

There is no doubt that Google is sending lots of traffic to publishers. The problem is that Google has also helped devalue publishers’ content, while at the same time taking a plum spot in the value chain. As I’ve said repeatedly, publishers are complicit in their own malaise–in particular, they collectively made the choice to give Google so much leverage over them. Now, of course, they’re trying to renegotiate their relationship, the derision of the blogosphere notwithstanding.

But, as Carr points out (I’m agreeing with him again!):

When it comes to Google and other aggregators, newspapers face a sort of prisoners’ dilemma. If one of them escapes, their competitors will pick up the traffic they lose. But if all of them stay, none of them will ever get enough traffic to make sufficient money. So they all stay in the prison, occasionally yelling insults at their jailer through the bars on the door.

Critics like Jeff Jarvis contend that newspapers are trying to restore an economy of scarcity when they should be embracing an economy of abundance. But attention is a scarce resource, and nothing Google does can change that. Rather, Google has played brilliantly into this scarcity economy and seized consumers’ attention from the publishers, while pitting them against one another by facilitating their commoditization. Google may not be evil, but surely Machiavelli would be proud of this strategy.


What Goes With Wine? Facets, Of Course!

Yesterday’s Wall Street Journal features an article about “What’s Wrong With Wine on the Web“. While it mostly complains about problems with online wine shops, it does feature a few who do it right, and I’m proud that two of the four featured sites use faceted search and are powered by Endeca. Hopefully this helps make up for all the times that my colleagues (especially us vintage Endecans) have had to give the “wine demo“.

Here are screenshots of the two featured sites (yes, I’m into Spanish and South American reds):

Wine.com:

Good Values on South American Reds at Wine.com

K&L Wines:

Spanish Reds on a Budget at K&L Wine Merchants


Reference vs. Referral

I just read an interesting “guessay” by designer Joshua Porter entitled “The Slow Erosion of Google Search“–which in turn cites an insightful essay by Brynn Evans, “Why social search won’t topple Google (anytime soon)“. Brynn’s tweet alerted me to the Porter essay, which just shows that social media can have directed cycles!

But what really caught my attention was the following comment by Mike Susz on Porter’s post:

google vs. social media is the difference between reference, and referral.

google is a reference, and to find accurate info relies on you being able to boil down what you’re looking for to a very small amount of words that hold lots of meaning.

there is no other context – it’s like looking something up in the dictionary. unless you can already succinctly describe what you’re looking for in very few distinctive terms, you won’t find what you need.

social media adds incredibly valuable context to your search – from your ability to elaborate more (within reason) to being able to disambiguate terms.

even your relationships themselves add infinitely valuable context – your contacts know what you do for a living, the techniques or technologies you use, and might already have experience solving the same problems.

What a fantastic and concise explanation of the difference between the way people interact with conventional web search engines and the way they seek information through online communities like Facebook, LinkedIn and Twitter.

Today I even had the opportunity to experience this difference first-hand: I used social search to replace my headphones. I first went to Hunch, which did a solid job of pointing me to a suitable pair. In fact, I ultimately purchased the headphones it recommended as the 3rd of 53 possible choices. After getting recommendations from Hunch, I turned to Twitter, where I received a flurry of advice, as well as requests to share what I learned. Given that advice, I turned to a few ecommerce sites to follow up on a few candidates. And I ordered a pair that will hopefully arrive before my current ones fall apart.

So, is Twitter a search engine after all? I still say no, but it certainly facilitates social search.


Chillin’ with CHI Attendees

I didn’t get to actually attend CHI this year, but I was fortunate to be able to hang out with attendees during the receptions on Wednesday evening. There was a respectable HCIR representation there, plus I was able to meet folks with whom I’d only corresponded on Twitter. In fact, I was even introduced as “that Noisy Channel guy”. I blog, therefore I am.

I also had a fun dinner conversation that I’ll relate here. None of my fellow diners signed a blogging release form, but the topic is fair game, and I’ll try to reconstruct the thread.

Our starting point was a professor who was relocating and debating whether to keep his papers or throw them out. One person–we’ll call him the preservationist–argued in favor of keeping the papers. I–whom we can call the freeloader–advocated throwing the papers out, on the grounds that they’d all be available online, and thus easily replaced on demand.

The question arose of what would happen if there weren’t enough preservationists ensuring that freeloaders could depend on the availability of replacement copies. I argued that my position reflected rational self-interest–but I conceded that this suggests the need to preserve knowledge can become a tragedy of the commons.

I’m an extreme freeloader, in the sense that I prefer to not keep any copies–analog or digital–of information I know I can obtain for free or at a minimal price. Are people like me setting us up for another cataclysmic event like the destruction of the Library of Alexandria? I think that the burden of preservation should be resolved by some kind of distributed peer-to-peer storage, but I concede the practical challenges are non-trivial.

In any case, I enjoyed good food and drink, great company, and entertaining conversation. As always, I entrust its preservation to the cloud.


Google News Cluster Timelines

Just saw a post from Alex Chitu at the Google Operating System blog about a new feature on Google News that lets you see a timeline of articles from a cluster of related articles. Here’s a screenshot, taken from his post:

This strikes me as a cool feature, though it’s frustrating that Google limits you to its predetermined “related articles” clusters. The result is that the entire timeline only spans a couple of days, with most of the posts concentrated in less than a day. It would be nice, at least in my view, to be able to see such timelines across arbitrary date ranges, in order to follow the arc of a long-running story like a high-profile trial.

You can do that, to some degree, in the timeline view for the Google News archive search, but the archive search stops a few months short of the present day’s news.

I wish Google would take a unified approach to offer what strikes me as a very useful feature–at least for anyone trying to understand and contextualize a news story. But I suppose I can’t complain too much about a free service.


Something Jeff Jarvis and I Agree On

Recently I had a bit of a spat with Jeff Jarvis over how he characterizes Google’s transparency. Jarvis has positioned himself as the standard-bearer for all things Googley, and I’ve taken on the un-Googley task of championing exploratory search, so it’s no surprise that we often find ourselves disagreeing.

But, in the same Steve Rubel interview I cited in my previous post, Jarvis said something I agree with completely, and I’d like to take the opportunity to quote it here:

Advertising is failure.

If you have a great product or service customers sell for you and a great relationship with those customers, you don’t need to advertise.

OK, that’s going too far. There is still a need to advertise — because customers don’t know about your product or a change in it or because, in the case of Apple, you want to add a gloss to the product and its customers. But in the book, I suggest that marketers should imagine stopping all advertising and then ask where they would spend their first dollar.

In an age when competition and pricing are opened up online and when your product is your ad, you need to spend your first dollar on the quality of your product or service. If you’re Zappos, you spend the next dollar on customer service and call that marketing. If the next dollar goes to advertising, there has to be a reason — and if the product is good enough, that reason may fade away.

Those are strong words, especially considering that they also appeared in Advertising Age. And they ring true. In fact, they complement my argument that advertising isn’t search. Of course there’s a need to make prospective customers aware that your product or service exists. But you should be investing the lion’s share of money, time, and effort in making the product worth buying, rather than in persuading people to buy it. I realize that’s about as idealistic as “if you build it, they will come“, but that ideal is increasingly achievable in a world where information travels at the speed of Twitter.

Yes, I am well aware of the irony that Google’s business depends almost entirely on advertising, and that Jarvis has just made a case that advertising should be much less important. I’ll give him the benefit of the doubt and hope that he is with me in aspiring towards a world–and a Google–where advertising is not the foundation for information access.


Danny Sullivan vs. Newspapers

Danny Sullivan has a delightful rant on Daggle, his personal blog, entitled “Google’s Love For Newspapers & How Little They Appreciate It“. It’s a fun read–though like all rants it could use an editor–and there’s even a fair amount I agree with.

Still, as I noted in a comment, he does steal a few bases. Specifically, he seems to see Google’s engagement with the newspapers as a big favor Google is bestowing on them, when it’s quite clear that Google benefits financially from aggregating brand-name content.

My take: Google and the newspaper industry are in a dysfunctional, co-dependent relationship. The newspaper industry is crying out that the relationship is abusive, but is afraid of breaking up because it no longer knows if it can survive on its own without Google bringing home the traffic.

Where I agree with Sullivan is that the newspapers need to quit whining and take responsibility for their fate. But it would be nice if the blogosphere didn’t mock them en masse when they’re finally showing signs of trying to do so.


Wolfram Talks About Wolfram Alpha

I felt pretty lucky to get an early preview of Wolfram Alpha, but real insiders like Rudy Rucker get personal demos from Wolfram himself. Rucker not only published a post about his conversation but also posted a “slightly condensed” hour-long podcast of the interview.

I listened to the whole hour. My biggest surprise was how much Wolfram emphasizes the natural language interface, which I’d thought from my own preview was less of a focus–and even the system’s weakest link. In fact, when Rucker asks if there will be a manual offering advanced users a list of expressions akin to those in Mathematica, Wolfram says no: users are lazy, ontologies never line up, and the system will figure out what the user means. Of course, I now question my assumption that Wolfram Alpha’s real goal or potential is to act as a service for use in other applications.

At the same time, Wolfram says that he envisions “a new field of knowledge-based computing. Imagine a spreadsheet that can pull in knowledge about the entries.” I heard that and expected him to explain a vision around an API, but he went on to explain that the interface from a spreadsheet would use natural language. I don’t know whether Wolfram has actually thought this through, or whether he can appreciate the perspective of an application developer depending on consistency from a software service. Maybe I underestimate him. Needless to say, I’m a lot closer to my initial skepticism again.

What is interesting from a non-technical perspective is that Wolfram sees Wolfram Alpha as doing for NKS what word processors and databases did for Turing’s theory of computation, i.e., proving the value of his magnum opus through its consumerization. I give him credit for putting his credibility on the line this way, but I think he’s taking a big risk here.

The podcast is a bit rambling, but it’s primary source material–and I found it enlightening enough to devote an hour to it (while writing this blog post). I suggest skimming Rucker’s write-up first, and then deciding whether you have the patience to listen to the whole podcast.


Guy Kawasaki, I’ll Say It

I just saw this post from a week ago by Andrew Goodman on Traffick asking “Is Guy Kawasaki Singlehandedly Ruining Twitter?“. Some context: Guy Kawasaki gave a keynote at the New York Search Engine Strategies conference last week in which he discussed the tactics he uses to “use Twitter as a twool“.

Of course, what galls me, at least if Goodman is reporting his speech accurately, is this:

he castigates people who don’t follow everyone back because they’re arrogant. By not “reciprocating,” non-followers are showing they “don’t care about their followers.”

Well, Kawasaki follows over 100,000 users, so he practices what he preaches. But, as Goodman points out:

The thing about Kawasaki’s follow-back habit is: it’s fake reciprocity. He isn’t actually following. Following everyone back is like the old idea of exchanging links with everyone and anyone, in the hopes of gaming Google. You don’t actually have any hope of really following 100,000 people, so instead, you hide behind TweetDeck and other apps. As Kawasaki points out, he does read all @replies and Direct Messages. But don’t believe that the “purpose of following everyone back is so people can direct message me.” The purpose is to get people used to the idea that a follow should be reciprocated with a follow. That way, folks who go out and follow 200,000 people have a greater chance of being followed by, say, 160,000.

Can you say “attention Ponzi scheme“? I sure can. I may have criticized A-list blogger Loic Le Meur in the past for suggesting that follower count implies authority, but at least he doesn’t play this fake reciprocity game–the 500 people he follows may be a bit more than Dunbar recommends, but are at least within the bounds of plausibility.

According to Goodman, Kawasaki kept trying to ingratiate himself by saying “well someone out there is going to say I’m a dick for saying this, but…”. Well, Guy, I’ll be the blowhard and say it: you’re being a dick. Every Ponzi scheme has its winners, and you’ve clearly cashed in on this one. I don’t begrudge you the attention you’ve accumulated. But please have the decency not to give advice that, as Goodman puts it, would turn Twitter into a “digital trailer park”.


Google Already Knows What You’re Thinking

An unsubstantiated assertion I’ve seen repeatedly over the last few months is that Google needs to acquire Twitter because Twitter knows what is happening (or what we’re thinking about) now, while Google can only look backwards. The latest version I’ve seen of this argument is from Jeff Jarvis’s post today, entitled “Why Google should want Twitter: Currency“:

Google isn’t good at currency. It needs content to ferment; it needs links and clicks to collect so PageRank can determine its value.

I grant that PageRank isn’t good at currency. But Google doesn’t need to perform link analysis to know what people are thinking about in real time. Google can simply look at its logs to determine what people are searching for–and, in particular, which search terms and phrases are appearing with statistically significant frequency. And Google’s search volume is much higher (and more representative of the online population) than Twitter’s update and search traffic combined.
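Google hasn’t published how it mines its query logs, but the kind of analysis I have in mind can be sketched with a simple two-proportion z-test: compare each term’s share of queries in a recent window against its share in a baseline window, and flag terms whose share has spiked significantly. Everything below–the function name, the windows, the count threshold–is my own illustrative choice, not anything Google has described:

```python
from math import sqrt

def trending_terms(baseline_counts, current_counts, min_count=10, z_threshold=3.0):
    """Flag terms whose share of queries in the current window is
    significantly above their share in the baseline window.

    baseline_counts, current_counts: dicts mapping term -> query count.
    Returns (term, z_score) pairs sorted by descending z-score.
    """
    base_total = sum(baseline_counts.values())
    curr_total = sum(current_counts.values())
    spikes = []
    for term, curr in current_counts.items():
        if curr < min_count:  # ignore terms too rare to call significant
            continue
        base = baseline_counts.get(term, 0)
        p1 = base / base_total          # baseline share
        p2 = curr / curr_total          # current share
        # pooled proportion and standard error for a two-proportion z-test
        p = (base + curr) / (base_total + curr_total)
        se = sqrt(p * (1 - p) * (1 / base_total + 1 / curr_total))
        if se > 0 and (p2 - p1) / se > z_threshold:
            spikes.append((term, (p2 - p1) / se))
    return sorted(spikes, key=lambda pair: -pair[1])
```

A real system would also have to smooth for daily and weekly seasonality in query volume, but the spike-detection idea is the same.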

To be clear, you and I can’t perform that analysis using the tools Google makes available to the general public. But Google can–and I don’t see any reason, other than the fear of raising public concerns about privacy, that Google can’t exploit this data itself.

What is different about Twitter is that it *does* make the data available to the general public. Twitter exposes Trends as part of its own offering, but it also enables services like Tweetmeme to perform their own analyses to track the hot stories in near-real time. But Google could do something similar and probably better if it wanted to.

I’ve said this before: Twitter is a community (a social network if you prefer), not a search engine. And, if there’s a good reason for Google to entertain acquiring Twitter, it’s probably that Google has a less than stellar track record when it comes to community. But let’s not delude ourselves into thinking that Google needs Twitter to know what’s on our minds now. They already know.