Categories
Uncategorized

Common Sense 2.0

The Wall Street Journal, reeling from its controversial coverage of Google, Larry Lessig, and net neutrality, published an article about “The Secrets of Marketing in a Web 2.0 World”. Here are the “secrets” they enumerate:

  • Don’t just talk at consumers — work with them throughout the marketing process.
  • Give consumers a reason to participate.
  • Listen to — and join — the conversation outside your site.
  • Resist the temptation to sell, sell, sell.
  • Don’t control, let it go.
  • Find a ‘marketing technopologist.’
  • Embrace experimentation.

I suppose I shouldn’t be so harsh; common sense isn’t always as common as the phrase would imply.  And the “let it go” advice is probably controversial outside of social media circles. Still, it boggles my mind that these principles aren’t just considered part of online marketing 101.

Categories
Uncategorized

Facebook More Trusted Than Google (!)

According to a Ponemon Institute report on the 20 most trusted companies in the United States, Americans trust Facebook more than they trust Google. A look at Ponemon’s press release shows Facebook ranked at #15, while Google fell out of the top 20 from its rank of #10 in 2007.

I don’t know enough about the study to evaluate its methodological soundness, but I found it astounding that “don’t be evil” Google would score lower than the company that tried to bring us Beacon.

But perhaps there’s a method to the popular madness. While Facebook may raise more privacy concerns than Google, it seems to be pursuing a policy of transparency in how it exposes and uses the data it collects. Google, in contrast, tends to be a bit more cagey about its policies. I may be reading too much into this one data point, but I’m tempted to see a lesson that users value honesty and transparency over privacy.

Categories
General

Is SOA Enabling Intelligent Agents?

I recently blogged about software agents, mostly musing about how to reconcile their inherent rationality with our lack thereof as human beings.

But today I noticed an article by John Markoff in the New York Times entitled “A Software Secretary That Takes Charge”, which considers some companies trying to build services based on such agents. The article called my attention to the recent death of AI pioneer Oliver Selfridge, who coined the term “intelligent agents” and devoted much of his career to trying to make them a reality.

Markoff notes that “efforts to build useful computerized assistants have consistently ended in failure”, which raises the question of why any student of history is still investing in this area. Markoff quotes an answer from Rearden Commerce founder / CEO Patrick Grady:

The promise of the Web 2.0 era of the Internet has been the interconnection of Web services. Mr. Grady says he has a far easier task today because the heavy lifting has been done by others.

“This is the connective tissue that sits on top of the Web and brings you more than the sum of the parts,” he said. “I set out to deliver on the longstanding ‘holy grail of user-centric computing,’ a ‘personal Internet assistant.’”

In other words, intelligent agents are possible now because of Web 2.0 and service-oriented architecture. An interesting theory–and I can certainly accept it in theory. But I’m curious how it plays out in practice. It seems to me that there’s a lot of “heavy lifting” still waiting to be done.

Categories
General

Should We Donate Attention To Support Bloggers?

In a post today entitled “The joke of advertising on social media”, Steve Hodson covers some familiar territory regarding the challenges social media companies face in coming up with a viable business model:

  • Social media is built around the idyllic concept that content should be free.
  • Social media companies insist to advertisers that they have a willing flock to make their millions off of.
  • But early adopters of social media behave like a swarm of pissed off wasps at the mere mention of advertising.
  • Case in point: many people (myself included) rejected Google’s Chrome browser because it didn’t support an ad blocking plug-in.

So far his argument is–or should be–uncontroversial. And I agree with his prediction that “Social media in all its goodness will only survive if people like you and me can contribute but know that we can pay our bills at the same time.” There’s no such thing as a free lunch, and bloggers cannot live by page views alone.

But then he continues: “users of social media…have to stop being so greedy with their attention span.” This is a slightly different take on the “no free lunch” argument than I expected. If I understand him correctly, he is suggesting that we click on ads out of a sense of obligation, to make up for the fact that we are receiving content for free.

If my understanding is accurate, then I have to part ways with Hodson. If bloggers want to put out a tip jar and encourage readers to leave tips, that’s fine. And if they want to make it clear that clicking on an ad is, from their perspective, equivalent to leaving a few pennies as a tip, that’s fine too. There’s nothing wrong with asking users to be generous. 

But the whole point of tipping is that it isn’t out of a sense of obligation. Tips are supposed to be on top of paying a fair market value for services rendered. I know that isn’t always the case; in the United States, the minimum wage law calls out the existence of occupations where tips customarily represent a substantial portion of an employee’s income. But that doesn’t make it a good idea, let alone an example for new markets.

I realize that individual bloggers are hardly in a position to unilaterally change the prevailing business model accepted by a culture where people expect information to be free. And so we run through the sequence Hodson describes, and everyone is frustrated: underpaid writers, advertising-inundated readers, and profitless investors. But at least reality is finally sinking in, and I am personally optimistic that the days are numbered for the dominance of the advertising-supported model. Call it an audacity of hope.

Categories
Uncategorized

Micro Economies of Attention

Oscar Berg just alerted me to a nice post on the Connectbeam Social Computing Blog about “Micro Economies of Attention” that, in turn, discusses work by researchers in Hewlett-Packard’s Social Computing Lab evaluating the motivations of employees to participate in organizations’ social software applications.

I’ve been preparing a series of posts about the macroeconomics of information and attention, so these subjects are top of mind. This article addresses the complementary microeconomic side of the discussion.

Categories
Uncategorized

The Attention Arms Race

I was just reading an article in the New York Times about how “Advertisers Face Hurdles on Social Networking Sites” and saw this brilliant quote from SocialMedia Networks co-founder Seth Goldstein:

“Advertisers distract users; users ignore advertisers; advertisers distract better; users ignore better.”

I think that sums up the problem of having advertising as the foundation for almost all online information and entertainment. The arms race cannot escalate indefinitely. Eventually we will have to move beyond the tyranny of free.

Categories
General

Transparency vs. Simplicity

As regular readers know, I am a strong advocate for transparency in any system where people interact with machines. In fact, such transparency is a core HCIR value, since communication depends on the clarity with which a message traverses the noisy channel of human-computer interaction.

So I was a bit taken aback by a recent blog post in which Stephen Arnold seemed to attack the notion that an effective search engine could be transparent. But a more careful reading led me to believe that he’s reacting, perhaps a bit too cynically, to the increased currency that the word “transparency” has in marketing literature.

Let me try to cut through the marketing hype. Transparency is more than a buzzword to sell software. It is a core value that imposes significant constraints on how a system can act. If a system is not bound by transparency, then it is free to respond to user inputs arbitrarily, unconstrained by any requirement to offer users insight into the basis for its response. In contrast, a transparent system must produce user-consumable explanations of its output. A transparent system can’t get away with saying “if I told you, I’d have to kill you.”

In fact, a transparent system might have to reject a possible response to a user because it can’t present an explanation for the response that the user will understand. For this reason, some machine learning purists reject transparency as overly constraining, and prefer approaches that simply optimize an objective function that, in all likelihood, is completely opaque to the user–and possibly even to the system developer.
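To make that constraint concrete, here is a minimal, purely illustrative sketch (all function names and the toy matching logic are my own invention, not a description of any real system) of a retrieval loop that refuses to return any result it cannot explain in terms the user can verify:

```python
# Illustrative sketch: a "transparent" ranker only returns results
# for which it can produce a human-readable explanation.

def explain(query, doc):
    """Return a user-consumable explanation, or None if we can't produce one."""
    matched = [t for t in query.lower().split() if t in doc["text"].lower().split()]
    if matched:
        return "Matched your terms: " + ", ".join(matched)
    return None  # e.g., a match driven by an opaque learned feature

def transparent_search(query, docs):
    results = []
    for doc in docs:
        reason = explain(query, doc)
        if reason is not None:  # reject any response we can't explain
            results.append((doc["id"], reason))
    return results

docs = [
    {"id": 1, "text": "Transparency in search systems"},
    {"id": 2, "text": "Opaque neural ranking"},
]
print(transparent_search("transparency search", docs))
```

The point of the sketch is the `if reason is not None` filter: a black-box ranker optimizing an opaque objective would happily return document 2 if its score were high enough, while a transparent system drops it because it has nothing intelligible to say about why it matched.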

Why is transparency so important in systems that support information seeking, i.e., search and information retrieval systems? Because any system that requires people to interact non-trivially with machines is fraught with communication challenges. Best-effort attempts to extrapolate user intent from a query–often a query comprised of only a couple of words–are beyond AI-hard; they’re ESP-hard. While all systems have to accept that they’ll misread users’ intentions a significant fraction of the time, transparent systems at least offer users the opportunity to work with the system to get back on track.

To be clear, implementing transparency isn’t simple. It’s like Mark Twain said: short letters are often harder to write than long ones. In a related vein, the world’s greatest minds aren’t always the world’s greatest communicators. And what holds true for people holds even more so for machines (or the people who program them): it’s hard to develop algorithms that deliver useful results and provide human-consumable explanations for them.

I understand Arnold’s frustration with vendors. And I won’t claim that Endeca always gets it right, though I think (and have been told) that we do better than many in communicating how our technology works. But there is no question in my mind that information seeking support systems have to become more transparent if we want them to work in the real world.

Categories
Uncategorized

The Noisy Channel: Now Better Than Sex!

Well, I’ll admit the evidence is a bit shaky. But an online survey commissioned by Intel reports that about half of women and a third of men would rather go without sex for two weeks than give up the Internet for that long. I’m not quite sure what to make of the survey, or the premise that the survey sought to prove “how essential the Internet has become to people–even during tough economic times.”

All I can say is that, if you are spending your time on the Internet, I hope you are enjoying The Noisy Channel.

Categories
General

Computational Information Design

Tonight I had the good fortune to attend a talk by Ben Fry on Computational Information Design at the Broad Institute of MIT and Harvard. Ben Fry is one of those rare human beings whose work spans from the heart of academia (he’s worked with Eric Lander on visualizing genetic data) to popular culture (his work appears in Minority Report and The Hulk). And he’s an outstanding speaker.

The content of his talk reflected his dissertation work at the MIT Media Laboratory, his postdoc work at the Broad Institute, and some of his more recent work as a designer and consultant. I can’t do justice to the talk, which unfortunately is not available in any recorded form. But I do suggest you seize the opportunity to hear him speak, should it come your way. He communicates the power of visualization through examples, in a way that conveys both their practical value and their beauty.

The Q&A session was almost as long as the talk, and probably could have gone on indefinitely if the organizers hadn’t finally cut it off. Suspecting that I was one of the few non-academics in the audience, I asked two eminently practical questions: how do you know that a visualization is effective, and how do you guard against a visualization skewing your perception of the data?

Fry’s answers were incisive. He judges the effectiveness of a visualization based on whether people give up their previous tools to use it. And he selects problem areas where he sees a significant opportunity to improve the state of the art. That way, the difference in adoption is so obvious that you don’t need to perform user studies to observe it.

As for the concern about visualization skewing perception of the data, he acknowledges it as a valid concern but points out that we don’t seem to raise the same concern with non-visual (e.g., textual) data presentation. Somehow we are especially suspicious of aesthetic representation, a sort of “don’t hate me because I’m beautiful” bias. He adds that the risk of design skewing our perception is dwarfed by the cost of not designing at all.

Visualization is a tricky subject, and I’ll freely admit that I’m underwhelmed by much of the work I’ve encountered. Perhaps my past work in information visualization makes me a particularly harsh critic. But Fry presents a compelling picture–or rather, a compelling video, since his work is full of motion. My only complaint is that he hasn’t explored the world of search and information retrieval. His work seems to beg for application in that domain. Food for thought.

Categories
Uncategorized

Upgraded to WordPress 2.7

Just wanted to let readers know that I’ve upgraded to the latest version of WordPress, 2.7 (“Coltrane”). Please let me know if you experience any technical difficulties.