The Noisy Channel


How Recommendation Engines Quash Diversity

February 24th, 2009 · 16 Comments · Uncategorized

As regular readers here know, I have strong opinions about how recommendation engines should work. So does Daniel Lemire, a regular reader who specifically argues in favor of diversity in recommender systems. Well, this post is for him and all who share his concern.

In “Does Everything Really Sound Like Coldplay?”, Vegard Sandvold explains:

When a lot of people (who may otherwise have very diverse tastes in music) listen to Coldplay, Coldplay becomes very well connected with a lot of other artists, and also becomes a hub in what is known as a small-world network. Such networks are the basis for social recommendations. Oscar shows that these hubs are indeed the most popular artists, who again get recommended more often than others. That is why all roads lead to Radiohead.

The cited Oscar is Oscar Celma, who recently defended his PhD thesis on “Music Recommendation and Discovery In The Long Tail”. I’ve only had a chance to skim the abstract, but I’m optimistic that people are giving more thought to the limitations of current recommendation systems. Of course, I’d really like it if they embraced transparency and exploration. But emphasizing diversity is certainly a worthy endeavor.

Or maybe everything really does sound like Coldplay…

16 responses so far ↓

  • 1 Gene Golovchinsky // Feb 24, 2009 at 9:28 pm

    This is the sort of thing we were talking about when contrasting recommender systems with collaborative search in our SIGIR paper. I am sure Jeremy will have more to add!

    See Algorithmic Mediation for Collaborative Exploratory Search.

  • 2 jeremy // Feb 25, 2009 at 12:01 am

    This topic really deserves a full blog post. I’ll get on that.

    But until then, Daniel, check out section 5.2.2. We wanted to see how well two explicitly collaborative users could do in terms of finding relevant information that no one else had found. In other words, non-popular but still relevant content. And the short of it is that explicit collaboration really got at relevant information that was not found by anyone else.

    More importantly, this difference was more pronounced for sparse information needs, i.e., queries/topics in which relatively little relevant information was available. It seems like that's a desirable property: that you have a technique you can use when relevant information isn't as plentiful and easy to come by.

  • 3 MarkH // Feb 25, 2009 at 8:27 am

    “Coldplay becomes very well connected “…

    While recommendation engines will always have limitations, I think the "poisoned by popularity" theory outlined above is an unfair diagnosis.

    Any recommendation engine worth its salt would compensate for popularity (the same way a decent search engine will not necessarily attach any statistical significance to the similarly highly-connected word "the"). Identifying significant correlations requires more analysis than simply counting popularity.
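
    To make that concrete, here's a rough sketch (hypothetical Python, with made-up artists and counts) of one simple way to compensate: score pairs by lift, i.e., how much more often two artists co-occur than their popularities alone would predict, instead of by raw co-occurrence counts.

        # Hypothetical sketch: raw co-occurrence vs. popularity-adjusted lift.
        # All names and counts below are invented for illustration.
        n_users = 1000
        listeners = {"coldplay": 700, "radiohead": 400, "low": 50}
        co_listeners = {("coldplay", "low"): 40, ("radiohead", "low"): 35}

        def lift(a, b):
            """How much more often a and b co-occur than chance predicts."""
            p_a = listeners[a] / n_users
            p_b = listeners[b] / n_users
            p_ab = co_listeners[(a, b)] / n_users
            return p_ab / (p_a * p_b)

        # Raw counts favor the hub: (coldplay, low) co-occurs 40 times vs. 35.
        print(round(lift("coldplay", "low"), 2))   # ~1.14
        print(round(lift("radiohead", "low"), 2))  # ~1.75: the less hub-like
                                                   # pairing wins after adjustment.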

  • 4 Daniel Tunkelang // Feb 25, 2009 at 8:41 am

    I am sure they do, and that avoids recommending the #1 song or artist to everyone. But I suspect that, below some threshold, it's hard to establish statistical significance from the data, in which case you have to be in the head to qualify.

    To use your search engine example, no one will push you towards stop words, but the words with the most discriminatory value are the ones with medium idf scores that are still in the head rather than the long tail of the vocabulary.
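
    For illustration, here's a quick sketch (hypothetical Python, with invented document frequencies) of why the extremes are unhelpful: stop words get near-zero idf, very rare words get huge idf but too little data behind them, and the useful discriminators sit in between.

        import math

        # Invented document frequencies for a 1,000,000-document collection.
        n_docs = 1_000_000
        doc_freq = {"the": 990_000, "recommendation": 40_000, "coldplayesque": 3}

        def idf(term):
            """Standard inverse document frequency."""
            return math.log(n_docs / doc_freq[term])

        for term in doc_freq:
            print(term, round(idf(term), 2))
        # the            0.01   (stop word: no discriminatory value)
        # recommendation 3.22   (medium idf: discriminative and well supported)
        # coldplayesque  12.72  (high idf, but too rare to trust statistically)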

  • 5 Daniel Tunkelang // Feb 25, 2009 at 8:43 am

    Gene, Jeremy, I’ll have to read that paper, since I haven’t looked at your collaborative search work in a while.

  • 6 Vegard Sandvold // Feb 25, 2009 at 8:50 am

    @MarkH
    I admit that my explanation of collaborative filtering, which you're referring to, is overly simplistic. For sure, social recommenders must compensate for popularity bias. But I believe that feedback loops reinforcing already popular items are difficult to avoid.

    A particularly interesting nugget of information from Oscar's thesis (from my post): it takes on average 5 links/clicks/jumps to get from the head to the long tail with a social recommender, while it takes just 2 for expert and content-based recommenders.
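
    If you want to play with that number yourself, here's a toy sketch (hypothetical Python, entirely made-up graph) of how such hop counts can be measured: breadth-first search over a recommendation graph, counting clicks from a hub artist to a long-tail one.

        from collections import deque

        # Made-up recommendation graph: artist -> recommended artists.
        recs = {
            "coldplay": ["radiohead", "keane", "travis"],
            "radiohead": ["muse", "portishead"],
            "keane": ["coldplay", "travis"],
            "travis": ["keane", "tindersticks"],
            "muse": ["radiohead"],
            "portishead": ["massive attack"],
            "massive attack": ["tricky"],
        }

        def hops(start, target):
            """Minimum number of recommendation clicks from start to target."""
            seen, queue = {start}, deque([(start, 0)])
            while queue:
                artist, d = queue.popleft()
                if artist == target:
                    return d
                for nxt in recs.get(artist, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, d + 1))
            return None  # unreachable

        print(hops("coldplay", "tricky"))  # -> 4 clicks from head to tail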

  • 7 MarkH // Feb 25, 2009 at 8:58 am

    So not “*all* roads lead to Coldplay”.

    There are a large number of very rarely-travelled roads that may have to resort to Coldplay, but there is a big wedge of reasonably well-travelled roads that can suggest useful detours.

    Seems a reasonable state of affairs to me. I’m not sure how you realistically expect to build a navigation system that works for the very rarely travelled roads.

  • 8 Daniel Lemire // Feb 25, 2009 at 9:01 am

    Thanks for this great post.

    I’m glad I discovered Jeremy’s Algorithmic Mediation for Collaborative Exploratory Search. That is the right general direction, I believe.

  • 9 MarkH // Feb 25, 2009 at 9:03 am

    Actually, thinking further. An engine should know when it has insufficient evidence to make a recommendation and not resort to the Coldplay effect.
    No one should have to suffer Coldplay unnecessarily :)

  • 10 Daniel Tunkelang // Feb 25, 2009 at 9:06 am

    I think that last point is what's key. I wonder if a big problem with both recommendation engines and search engines is their lack of humility / self-awareness: they don't know when to say "I don't know."
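
    As a sketch of what that humility might look like (hypothetical Python, with a made-up evidence threshold): require a minimum amount of supporting evidence before recommending anything, and otherwise return nothing rather than falling back on the global favorite.

        # Made-up threshold; co_counts maps item -> {neighbor: co-occurrence count}.
        MIN_CO_OCCURRENCES = 10

        def recommend(item, co_counts):
            """Return the best-supported neighbor, or None ("I don't know")."""
            supported = [(n, c) for n, c in co_counts.get(item, {}).items()
                         if c >= MIN_CO_OCCURRENCES]
            if not supported:
                return None  # abstain: no Coldplay fallback
            return max(supported, key=lambda nc: nc[1])[0]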

  • 11 jeremy // Feb 25, 2009 at 10:58 am

    See also Greg Linden’s discussion of avoiding the “Harry Potter” problem:

    http://glinden.blogspot.com/2006/03/early-amazon-similarities.html

    Btw, Daniel, I switched my blog domain name. Try irgupf.com.

  • 12 alltoute // Feb 25, 2009 at 12:48 pm

    When you don’t know what to get, you get what everybody gets. :-) From the recommendation engine’s perspective, it’s just a question of transparency (as long as the recommendation engine knows where the tradeoffs are). A best-seller list is a kind of recommendation system too. I think when chances are “equal,” the recommendation provider should promote best-seller stuff way before long-tail stuff if he wants to make money and play it safe.

  • 13 Bob Carpenter // Feb 27, 2009 at 12:16 pm

    If you’re playing by relevance, by which I mean giving the user the recommendation most likely to be sound, then in many cases it makes sense to give them Coldplay (or the Jonas Brothers, or Frank Sinatra, depending on their first few ratings).

    I like the finding-new-items problem. It focuses on recall, which we’ve always argued is important for many kinds of search. (It’s very hard to balance with knowing when to stop, though, which on paging interfaces is up to the user anyway.) I’ve often argued for diversity in rankings. I think Amazon does much better than Netflix at this, for instance.

  • 14 Tuning in to Google Music Search | The Noisy Channel // Oct 29, 2009 at 1:10 pm

    [...] my friend (and Princeton sociologist) Matt Salganik and his former advisor Duncan Watts), but even recommendation engines quash diversity. Google really can’t make things that much [...]

  • 15 Google Follow Finder // Apr 14, 2010 at 8:15 pm

    [...] a bit of an “everything sounds like Coldplay” effect (e.g., @fredwilson shows up in a lot of the searches I tried), but overall I’m [...]

  • 16 Beyond Social Currency // Jul 6, 2010 at 4:52 pm

    [...] quality, my own conformity of musical taste, or skew on the part of the recommendation system (cf. does everything sounds like Coldplay?). Still, I’m quite sure that I’m not favoring music based on prior knowledge of its [...]
