The Noisy Channel

Privacy through Difficulty

May 1st, 2008 · 10 Comments · General

I had lunch today with Harr Chen, a graduate student at MIT, and we were talking about the consequences of information efficiency for privacy.

A nice example comes from the company pages on LinkedIn. No company, to my knowledge, publishes statistics on:

  • the schools their employees attended.
  • the companies where their employees previously worked.
  • the companies where their ex-employees work next.

If a company maintains these statistics, it surely considers them to be sensitive and confidential. Nonetheless, by aggregating information from member profiles, LinkedIn computes best guesses at these statistics and makes them public.
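
To make the mechanics concrete, here is a minimal sketch of that kind of aggregation. The profiles, field names, and companies below are all hypothetical; the point is only that each record is individually public, while the aggregate resembles a confidential internal report.

    from collections import Counter, defaultdict

    # Hypothetical public member profiles: each lists a school and an
    # ordered employment history (earliest job first).
    profiles = [
        {"school": "MIT",      "history": ["Acme", "Initech"]},
        {"school": "Stanford", "history": ["Initech"]},
        {"school": "MIT",      "history": ["Globex", "Initech", "Acme"]},
        {"school": "CMU",      "history": ["Initech", "Hooli"]},
    ]

    schools = defaultdict(Counter)    # company -> schools of people who worked there
    came_from = defaultdict(Counter)  # company -> previous employers of its hires
    went_to = defaultdict(Counter)    # company -> next employers of its ex-employees

    for p in profiles:
        jobs = p["history"]
        for i, company in enumerate(jobs):
            schools[company][p["school"]] += 1
            if i > 0:
                came_from[company][jobs[i - 1]] += 1
                went_to[jobs[i - 1]][company] += 1

    # The three statistics from the list above, reconstructed from
    # public profiles alone.
    print(dict(schools["Initech"]))    # {'MIT': 2, 'Stanford': 1, 'CMU': 1}
    print(dict(came_from["Initech"]))  # {'Acme': 1, 'Globex': 1}
    print(dict(went_to["Initech"]))    # {'Acme': 1, 'Hooli': 1}

No single profile discloses anything its owner did not volunteer; the sensitivity appears only once the counts are pooled across members.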

Arguably, information like this was never truly private, but was simply so difficult to aggregate that nobody bothered. As Harr aptly put it, these companies practiced “privacy through difficulty”: a privacy analog to security through obscurity.

Some people are terrified by the increasing efficiency of the information market and look to legal remedies as a last-ditch attempt to protect their privacy. I am inclined towards the other extreme (see my previous post on privacy and information theory): let’s assume that information flow is efficient and confront the consequences honestly. Then we can have an informed conversation about information privacy.

10 responses so far

  • 1 Daniel Tunkelang // May 4, 2008 at 9:48 am

    It occurred to me that some might see a contradiction between this post and the previous week’s post on Accessibility in Information Retrieval. Here, I’m suggesting that difficult-to-access content shouldn’t be considered secure; there I’m suggesting that difficult-to-access content shouldn’t be considered accessible.

    Of course, these are different use cases. Still, it’s worth keeping in mind that different users have different motives. What prevents a casual user from accessing information won’t stop a sufficiently determined one.

  • 2 Quick Bites: Filter Failure // Sep 23, 2008 at 1:13 pm

    […] What I particularly like in his “filter failure” characterization is that it really exposes the human-computer interaction challenges in managing information flow (in both directions). It also reminds me of Danah Boyd’s Master’s Thesis on managing identity in a digital world, and of some earlier discussion here about privacy through difficulty. […]

  • 3 This Conversation Will Be Recorded | The Noisy Channel // Sep 30, 2008 at 10:20 pm

    […] have finally realized a perfect storm where anything can be published and everything can be found. Privacy through difficulty has given way to unintentional […]

  • 4 Ephemeral Conversation Is Dying | The Noisy Channel // Nov 25, 2008 at 11:30 pm

    […] inspired people to voluntarily live in virtual fishbowls. I’ve blogged about the end of privacy through difficulty, but it seems we’re heading in a direction of no privacy at all. It will be interesting to […]

  • 5 The Internet hive and a new kind of privacy - WinExtra // Nov 28, 2008 at 10:04 pm

    […] aren’t. Daniel referred in another of his posts to something he called the efficiency of the information market, which I think is a key point here: Some people are terrified by the increasing efficiency of the […]

  • 6 No Privacy Through Difficulty | The Noisy Channel // Jan 27, 2009 at 10:41 pm

    […] blogged in the past about the increasing futility of pursuing privacy through difficulty, and generally advocate an approach of “when in doubt, make it […]

  • 7 The Internet hive and a new kind of privacy — Shooting at Bubbles // Jun 5, 2009 at 2:02 am

    […] aren’t. Daniel referred in another of his posts to something he called the efficiency of the information market, which I think is a key point here: Some people are terrified by the increasing efficiency of the […]

  • 8 Gene Golovchinsky // Feb 19, 2010 at 2:01 pm

    The true danger lies in the federation of multiple sources, with hard-to-predict consequences for the consumer. Can we come up with a technological solution that would give users control (before or after the fact) over what data about them can be aggregated? If this aggregation is of value to some, is there a way to monetize that, to have the consumer derive a revenue stream from the reuse of their data?

  • 9 Daniel Tunkelang // Feb 20, 2010 at 11:30 am

    Sure, but I don’t see how anyone can control that. I can’t imagine even a theoretical framework where X and Y are both public, but the aggregation of X and Y is not. I think that we as a society have to start expecting that the information we disclose will be combined, so that those consequences become easier to predict. The recent “please rob me” story highlights that we have a ways to go before we have rational conversations about privacy.
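
    To illustrate with a toy example (the datasets, names, and numbers below are hypothetical, in the spirit of classic linkage attacks): two individually public datasets, joined on shared quasi-identifiers, can reveal a fact that neither publishes on its own.

        # Hypothetical public dataset X: a staff directory
        # (name, zip code, birthdate).
        directory = [
            ("Alice Smith", "02139", "1970-01-01"),
            ("Bob Jones",   "02139", "1980-05-05"),
        ]

        # Hypothetical public dataset Y: "anonymized" salary records
        # that happen to share zip code and birthdate with the directory.
        salaries = [
            ("02139", "1970-01-01", 95000),
            ("02139", "1980-05-05", 60000),
        ]

        # Joining X and Y on the shared quasi-identifiers re-identifies
        # the "anonymous" records: the aggregate discloses what neither
        # dataset does alone.
        salary_by_key = {(z, b): s for z, b, s in salaries}
        for name, z, b in directory:
            if (z, b) in salary_by_key:
                print(name, "earns", salary_by_key[(z, b)])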

  • 10 Jim Adler: The Accidental Chief Privacy Officer // Dec 4, 2011 at 2:50 pm

    […] publishes information about people from databases of public records, eroding a history of “privacy through difficulty”. Impressed with Jim’s talk at Strata, I persuaded him to deliver a similar talk at […]
