The Noisy Channel


Attention vs. Privacy

July 24th, 2011 · 6 Comments · General

A major feature of the recently released Google+ is Circles, which allows you to “share relevant content with the right people, and follow content posted by people you find interesting.”

Most people seem to look at Circles as a privacy feature — and indeed Google’s official description gives the impression that Circles exist to manage privacy based on real-life social contexts. Of course, re-sharing can result in unintended consequences, and Google even offers a warning that:

Unless you disable reshares, anything you share (either publicly or with your circles) can be reshared beyond the original people you shared the content with. This could happen either through reshares or through mentions in comments.

Privacy is a big deal, especially for Google — and particularly in the context of rolling out a new social network. Still, I’m not persuaded that privacy is the only or even the primary concern motivating the concept of social circles.

Sharing content with someone is not just about giving that person permission to see it. Sharing content with someone asserts a claim on that person’s attention. While it may be a privilege for me to have access to your content, it may be even more of a privilege for you that I allocate my scarce attention to consume it.

What if we focus on routing content to the people who would find it most interesting? Such an approach works best if all of the shared content is public with respect to permissions — that is, people post it without any expectation of privacy. Twitter demonstrates that many people are comfortable with such a sharing model. Imagine if they could learn to trust a system that optimizes (or at least attempts to optimize) the allocation of everyone’s attention. This is not an easy problem by any means, nor is it one that is likely to be solved by algorithms alone. It will take a strong dose of HCIR to get it right. But, at least in my view, optimizing the allocation of human attention is the grand challenge that everyone working with information retrieval or social networks should be striving to address.
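
To make the idea concrete, here is a minimal sketch in Python of what attention-aware routing might look like. It is purely illustrative: the names (Reader, interest_score, route), the per-reader attention budget, and the 0.5 threshold are all assumptions of mine, not anything Google+ or any other system actually implements. The sketch scores each post against each reader's declared interests and delivers it only to readers whose score clears the threshold and who still have attention to spare.

    # Illustrative sketch only: a toy "attention router" that delivers a post to
    # the readers most likely to find it interesting, while respecting a crude
    # per-reader attention budget. All names and numbers here are hypothetical.

    from dataclasses import dataclass, field


    @dataclass
    class Reader:
        name: str
        interests: dict[str, float]       # topic -> interest weight in [0, 1]
        attention_budget: int = 5         # max items routed per day (assumed)
        inbox: list[str] = field(default_factory=list)


    def interest_score(reader: Reader, topics: dict[str, float]) -> float:
        """Dot product of the reader's interest weights with the post's topic weights."""
        return sum(reader.interests.get(t, 0.0) * w for t, w in topics.items())


    def route(post: str, topics: dict[str, float], readers: list[Reader],
              threshold: float = 0.5) -> list[str]:
        """Deliver the post to the best-matched readers who still have budget left."""
        scored = sorted(((interest_score(r, topics), r) for r in readers),
                        key=lambda pair: pair[0], reverse=True)
        delivered = []
        for score, reader in scored:
            if score < threshold:
                break                     # remaining readers care even less
            if reader.attention_budget > 0:
                reader.inbox.append(post)
                reader.attention_budget -= 1
                delivered.append(reader.name)
        return delivered


    if __name__ == "__main__":
        readers = [
            Reader("alice", {"information-retrieval": 0.9, "privacy": 0.2}),
            Reader("bob", {"privacy": 0.8}),
            Reader("carol", {"cooking": 1.0}),
        ]
        print(route("HCIR workshop announcement", {"information-retrieval": 1.0}, readers))
        # ['alice'] -- bob and carol keep their attention for content they care about

Even this toy version makes the trade-off visible: the hard part is not the permission check but deciding whose scarce attention a piece of content deserves.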

Privacy is important, and social networks should offer simple, robust privacy controls that users understand. We have all experienced the problem of filter failure. But sharing isn’t just about privacy. Our attention is our most precious cognitive asset, both as individuals and as a society. Moreover, our attention faces ever-increasing demands as our social lives evolve in an online world relatively free of physical constraints. Social network developers would do well to pay attention…to attention.

6 responses so far ↓

  • 1 I. Nicocles // Jul 25, 2011 at 7:52 am

    Daniel,

    What if we focus on routing content to the people who would find it most interesting, and did that in a way that provided the content originator privacy?

    Routing content to the people who would find it most interesting is what people in the marketing business do, and the techniques they use are what most people would call privacy-invasive.

    Leave aside moral and legal questions, which aren’t a part of my thinking here at all.

    Imagine a world in which a company like Google made its targeting methods and data available to everyone on a black-box basis. And imagine further that people sending out content would be identifiable by some pseudonym – so that both broadcast and point-to-point communication were possible.

    So you would have a kind of modified Chatham House Rule communications system – the content would be known, the actual identities of the participants would not be, but a virtual identity of each participant would be known (either to the group, or just to pairs of individuals, since each person could use a pseudonym that’s unique to each individual he/she corresponds with – inefficient, maybe).

    I just got up to take out the garbage (there’s a raccoon that’s remarkably skilled at opening things) and noted your post. It’s too early for me to respond intelligently, but I couldn’t resist the opportunity to stay in touch.

    What would interest me about the communication system I propose (which I doubt is anything novel, just a reaction to your post) is how one would develop performance evaluation tools for comparison with what we have now.

    I. Nicocles

  • 2 Daniel Tunkelang // Jul 25, 2011 at 8:09 am

    I have nothing against pseudonymity, though apparently Google frowns on it. But it’s difficult to make pseudonymity work in practice.

    But the use of “real” names vs. pseudonyms is orthogonal to the issue at hand. We’re talking about privacy of content, not of personal details. There’s no reason personal details have to be public to make this work from a content perspective, e.g., I can subscribe to topics without publicizing that fact, and I can write about a topic without disclosing who my employer is.

    There are trust concerns — I think that people who publish content have a lot more credibility when they disclose enough information to assure readers of their sincerity. Pseudonyms can address this issue, but it takes time for a pseudonym to accumulate a reputation, especially since it cannot generally bootstrap on real-world credibility.

    But your marketing concerns are about consumer privacy — and I see no reason that consumers need to make privacy-invasive disclosures for the optimization I propose. The current state of affairs is mostly a function of economics: if you want products and services for free, then you have to give something of value in return. And that something is typically information that helps the product and service providers monetize your attention via advertising.

  • 3 I. Nicocles // Jul 25, 2011 at 8:51 am

    Daniel, you write:

    “…I can write about a topic without disclosing who my employer is.”

    Probably yes. Consider, for this example, two classes of topic: one where doing a first-rank job requires disclosing information that will enable the reader to identify you as a likely employee of a particular company, and another where that does not obtain.

    Now, I will be far, far more likely to devote some attention to that topic if I understand that you have worked for that employer; that is especially true if the employer has very high standards for hiring.

    To some extent, that’s an underlying assumption behind your present employer’s success.

    I think I got the tone wrong – when I wrote, “Routing content to the people who would find it most interesting is what people in the marketing business do, and the techniques they use are what most people would call privacy-invasive,” I wasn’t intending to slam either people who do marketing or marketing as a business activity. I was really just opening up a speculation about what the world might be like if everyone had access to algorithms, as powerful as those Google has available for its own use, that would identify the people likely to be interested in what they had to say.

    I also wasn’t intending to slam Google at all – if anything, they make a decent subset of their algorithms available at very modest charge and on a performance basis, which, for any interest/activity with an economic basis, strikes me as a huge public benefit.

    I see that I really didn’t make clear what interested me. I wasn’t concerned about consumer privacy – I was more interested in the questions you raise, in particular what information one discloses to assure readers of one’s sincerity.

    I am very interested in the question you raised, which I will quote more fully this time:

    ———
    What if we focus on routing content to the people who would find it most interesting? Such an approach works best if all of the shared content is public with respect to permissions — that is, people post it without any expectation of privacy.
    ———

    What I was wondering and still wonder is whether there is some way of looking at the advantage of one’s being able to bootstrap on real-world credibility vs. the disadvantages that come from being this open.

    I could write a bit on the disadvantages – but generally they fall into the clutter problem: once one goes “open”, one runs the risk of devoting enormous quantities of time and attention to stuff that one may later think was a waste.

    This is much less a problem for people in the academy – but for people in public life or business, well, it can get more complicated.

    I. Nicocles

  • 4 Dinesh Vadhia // Jul 31, 2011 at 11:15 am

    “But, at least in my view, optimizing the allocation of human attention is the grand challenge that everyone working with information retrieval or social networks should be striving to address.”

    Is this personalization or something else?

  • 5 Daniel Tunkelang // Jul 31, 2011 at 2:15 pm

    I think that depends on what you mean by personalization. There are lots of ways to optimize the allocation of human attention, such as developing an effective vocabulary to represent content and interests. My main point is that there is so much concern about the privacy of content but not enough emphasis on getting the right content to the people who would benefit from consuming it.

  • 6 Dinesh Vadhia // Aug 1, 2011 at 1:51 am

    Receiving content of interest and/or value is my meaning of personalization, which is similar to your statement “… getting the right content to the people who would benefit from consuming it”.
