The Noisy Channel

 

Guest Post: Information Retrieval using a Bayesian Model of Learning and Generalization

April 4th, 2010 · 67 Comments · Uncategorized

Dinesh Vadhia, CEO and founder of “item search” company Xyggy, has been an active member of the Noisy Community for at least a year, and it is with pleasure that I publish this guest post by him, University of Cambridge / CMU Professor Zoubin Ghahramani, and University of Cambridge / Gatsby Computational Neuroscience Unit researcher Katherine Heller. I’ve annotated the post with Wikipedia links in the hope of making it more accessible to readers without a background in statistics or machine learning.

People are very good at learning new concepts after observing just a few examples. For instance, a child will confidently point out which animals are “dogs” after having seen only a couple of examples of dogs before in their lives. This ability to learn concepts from examples and to generalize to new items is one of the cornerstones of intelligence. By contrast, search services currently on the internet exhibit little or no learning and generalization.

Bayesian Sets is a new framework for information retrieval based on how humans learn new concepts and generalize.  In this framework a query consists of a set of items which are examples of some concept. Bayesian Sets automatically infers which other items belong to that concept and retrieves them. For example, given a query consisting of the two animated movies “Lilo & Stitch” and “Up”, Bayesian Sets would return other similar animated movies, like “Toy Story”.

How does this work? Human generalization has been intensely studied in cognitive science, and various models have been proposed based on some measure of similarity and feature relevance. Recently, Bayesian methods have emerged both as models of human cognition and as the basis of machine learning systems.

Bayesian Sets – a novel framework for information retrieval

Consider a universe of items, where the items could be web pages, documents, images, ads, social and professional profiles, publications, audio, articles, video, investments, patents, resumes, medical records, or any other class of items we may want to query.

An individual item is represented by a vector of its features.  For example, for text documents, the features could be counts of word occurrences, while for images the features could be the amounts of different color and texture elements.

Given a query consisting of a small set of items (e.g. a few images of buildings), the task is to retrieve other items (e.g. other images) that belong to the concept exemplified by the query.  To achieve this, we need a measure, or score, of how well an available item fits in with the query items.

A concept can be characterized by a statistical model, which defines the generative process for the features of items belonging to the concept.  Parameters control specific statistical properties of the features of items.  For example, a Gaussian distribution has parameters which control the mean and variance of each feature. Generally these parameters are not known, but a prior distribution can represent our beliefs about plausible parameter values.

The score

The score used for ranking the relevance of each item x given the set of query items Q compares the probabilities of two hypotheses. The first hypothesis is that the item x came from the same concept as the query items Q. Under this hypothesis, we compute the probability that the feature vectors representing all the items in Q and the item x were generated from the same model with the same, though unknown, model parameters. The alternative hypothesis is that the item x does not belong to the same concept as the query examples Q. Under this alternative hypothesis, we compute the probability that the features in item x were generated from different model parameters than those that generated the query examples Q. The ratio of the probabilities of these two hypotheses is the Bayesian score at the heart of Bayesian Sets, and it can be computed efficiently for any item x to see how well it “fits into” the set Q.
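In symbols, this is the score as defined in the Bayesian Sets paper: a ratio of marginal likelihoods in which the unknown model parameters θ are integrated out under the prior,

\[
\mathrm{score}(x) = \frac{p(x, Q)}{p(x)\,p(Q)} = \frac{p(x \mid Q)}{p(x)},
\qquad
p(x, Q) = \int p(x \mid \theta)\Big[\prod_{i \in Q} p(x_i \mid \theta)\Big]\,p(\theta)\,d\theta .
\]

A score greater than one means that x is better explained as coming from the same concept as Q than independently of it.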

This approach to scoring items can be used with any probabilistic generative model for the data, making it applicable to any problem domain for which a probabilistic model of data can be defined.  In many instances, items can be represented by a vector of features, where each feature can either be present or absent in the item.  For example, in the case of documents the features may be words in some vocabulary, and a document can be represented by a binary vector x where element j of this vector represents the presence or absence of vocabulary word j in the document.  For such binary data, a multivariate Bernoulli distribution can be used to model the feature vectors of items, where the jth parameter in the distribution represents the frequency of feature j.  Using the beta distribution as the natural conjugate prior, the score can be computed extremely efficiently.
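As a concrete illustration, here is a minimal sketch of this Bernoulli-beta score in Python/NumPy. It follows the closed-form expression in the Bayesian Sets paper; the function name, the toy data, and the default hyperparameter choice (beta parameters scaled by the empirical feature means) are illustrative conventions for this sketch, not the Xyggy implementation.

import numpy as np

def bayesian_sets_scores(X, query_idx, alpha=None, beta=None):
    """Log Bayesian Sets score of every row of X against the query set.

    X is an (n_items, n_features) binary matrix; query_idx indexes the rows
    that form the query Q; alpha and beta are the per-feature beta prior
    hyperparameters.
    """
    X = np.asarray(X, dtype=float)
    if alpha is None:                      # common convention: scale the
        m = X.mean(axis=0)                 # empirical feature means
        alpha, beta = 2.0 * m + 1e-6, 2.0 * (1.0 - m) + 1e-6

    Q = X[query_idx]                       # feature vectors of the query items
    N = Q.shape[0]
    s = Q.sum(axis=0)                      # per-feature counts of 1s in Q
    alpha_t, beta_t = alpha + s, beta + N - s   # posterior pseudo-counts

    # The log score is linear in x: a constant c plus a dot product x . q.
    c = np.sum(np.log(alpha + beta) - np.log(alpha + beta + N)
               + np.log(beta_t) - np.log(beta))
    q = np.log(alpha_t) - np.log(alpha) - np.log(beta_t) + np.log(beta)
    return c + X @ q

# Toy usage: five items with four binary features; the query is items 0 and 1.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 1]])
print(bayesian_sets_scores(X, query_idx=[0, 1]))

Because the log score reduces to a constant plus a dot product with a fixed weight vector, scoring an entire corpus amounts to a single (sparse) matrix-vector multiplication, which is what makes retrieval over large collections fast.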

Automatic learning of feature relevance

An important aspect of Bayesian Sets is that it automatically learns which features are relevant from queries consisting of two or more items. For example, a movie query consisting of “The Terminator” and “Titanic” suggests that the concept of interest is movies directed by James Cameron, and therefore Bayesian Sets is likely to return other movies by Cameron. We feel that the power of queries consisting of multiple example items is unexploited in most search engines. Searching using examples is natural and intuitive for many situations in which the standard text search box is too limited to express the user’s information need, or infeasible for the type of data being queried.

Uses

The Bayesian Sets method has been applied to diverse problem domains including: unlabelled image search using low-level features such as color, texture and visual bag-of-words; movie suggestions using the MovieLens and Netflix ratings data; music suggestions using last.fm play count and user tag data; finding researchers working on similar topics using a conference paper database; searching the UniProt protein database with features that include annotations, sequence and structure information; searching scientific literature for similar papers; and finding similar legal cases, New York Times articles and patents.

Apart from web and document search, Bayesian Sets can also be used for ad retrieval through content matching, for building suggestion systems (“if you liked this you will also like these”, which is about understanding the user’s mindset rather than the traditional “people who liked your choice also liked these”), and for finding similar people based on profiles (e.g. for social networks, online dating, recruitment and security). All these applications illustrate the wide range of problems for which the patent-pending Bayesian Sets framework provides a powerful new approach to finding relevant information. Specific details of engineering features for particular applications can be provided in a separate post (or comments).

Interactive search box

An important aspect of our approach is that the search box accepts text queries as well as items, which can be dragged in and out of the search box.  An implementation using patent data is at http://www.xyggy.com/patent.php.  Enter keywords (e.g., “earthquake sensor”) and items relevant to the keywords are displayed.  Drag an item of interest from the results into the search box and the relevance changes.  When two or more items are added to the search box, the system discovers what they have in common and returns better results.  Items can be toggled in/out of the search by clicking the +/- symbol, and items can be removed completely by dragging them out of the search box.  Each change to an item in the search box automatically retrieves new relevant results.  A future version will allow for explicit relevance feedback.  Certain data sets also lend themselves to a faceted search interface, and we are working on a novel implementation in this area.

In our current implementation, items are dragged into the search box from the results list, but it is easy to see how they could be dragged from anywhere on the web or intranet.  For example, a New York Times reader could drag an article or image of interest into the search box to find other items of relevance. There is a natural affinity between an interactive search box as described and the new generation of touch devices.

Summary

Bayesian Sets demonstrates that intelligent information retrieval is possible, using a Bayesian statistical model of human learning and generalization.  This approach, based on sets of items, encapsulates several novel principles.  First, retrieving items based on a query can be seen as a cognitive learning problem: we have used our understanding of human generalization to design the probabilistic framework.  Second, retrieving items from large corpora requires fast algorithms, and the exact computations for the Bayesian scoring function are extremely fast.  Finally, the example-based paradigm for finding coherent sets of items is a powerful new alternative and complement to traditional query-based search.

Finding relevant information from vast repositories of data has become ubiquitous in modern life.  We believe that our approach, based on cognitive principles and sound Bayesian statistics, will find many uses in business, science and society.

67 responses so far ↓

  • 1 John Tantalo // Apr 5, 2010 at 11:56 am

    Isn’t this the same as Google Sets (2002)?

  • 2 Dinesh Vadhia // Apr 5, 2010 at 2:55 pm

    Google Sets (GS) was part of our inspiration for Bayesian Sets and our Bayesian Sets paper discusses this. GS addresses the same kind of “query by examples” problem, but specifically for retrieval of list items from a limited amount of list data off the web.

Unlike GS, Bayesian Sets is applicable to any kind of data you want to perform retrieval on, and not tied to a particular list item data set. Both the way retrieval is done in GS and the results it returns are therefore very different from Bayesian Sets. Also, unlike GS, as we’ve discussed in the post, Bayesian Sets is modeled after our understanding of human learning and generalization.

  • 3 jeremy // Apr 6, 2010 at 2:07 am

    I like that you’ve got this framework for increased interactivity in the retrieval process, and I like the real-time, interactive updating of the results in your patent search example.

    I wonder, how well would this approach work with the music example that I gave you in London a few years ago, where any given song (or even set of songs) means dozens of different things? For example, if I gave you these three examples…

    http://www.youtube.com/watch?v=HzZ_urpj4As

    http://www.youtube.com/watch?v=0-Q3cp3cp88

    http://www.youtube.com/watch?v=o7aShcmEksw

    You might conclude that I wanted more cheesy 80s mainstream pop music.

    But what if what I was really after were these (types of) songs?

    http://www.youtube.com/watch?v=iZRA-Dwv86E

    http://www.youtube.com/watch?v=4uLrbodN-9A

    http://www.youtube.com/watch?v=DaMYSvy3ths

    There is a concept or theme tying all six of these songs together. But figuring out what that is might be very difficult, as the first three examples are actually consistent with more than one model (i.e. the “Cheesy 80s” model as well as the “Concept X” model). Can Bayesian Sets handle things like this?

  • 4 christopher // Apr 6, 2010 at 2:09 am

Sounds like a mashup of Google Sets and existing vector models. In fact, beyond saying it’s based on a human cognitive model, it sounds very similar to existing vector models. Am I incorrect?

Too bad it’s going to be patented, because in practice that limits how interesting it can actually be.

  • 5 jeremy // Apr 6, 2010 at 10:52 am

    christopher,

    I don’t buy your argument about “being patented” == “limits interestingness”

    PageRank is patented. Did that also limit its interestingness?

    (http://en.wikipedia.org/wiki/PageRank)

  • 6 Daniel Tunkelang // Apr 6, 2010 at 11:24 am

    Jeremy: may I suggest Coldplay? :-) Seriously, I share your concern about black-box similarity models. Curious to hear Dinesh’s response.

Re patents and interestingness: I think most readers know that I’m not thrilled with the state of software patents in the US. My qualms are practical rather than philosophical. Regardless, I agree with Jeremy that an approach being patented (or, as per the post, patent-pending) does not make it less interesting. Latent semantic indexing and RSA were (and still are) both pretty interesting!

  • 7 Zoubin Ghahramani // Apr 6, 2010 at 1:00 pm

    christopher,

    it’s not a mashup of Google Sets and existing vector space models (like tf-idf) at all. You can look at the paper if you’re interested in how it works. The basic idea is that it builds a probabilistic model of the set of query items and scores how well new items fit into that set. We do use vectors to represent items (they are a pretty general representation), and one can relate it to vector space models, but existing vector space models don’t take into account properties of a *set* of query items.

    Re patents and interestingness: we hope people find it interesting and useful — we are trying to be open about what we’ve done.

    jeremy:

that’s an excellent question. It really does depend on the data the algorithm has accessible to it — in other words how the music is represented. We can represent music in many ways: low-level audio features, text in lyrics, tags given by users, attributes from the Music Genome Project, patterns of user-music preferences as used in traditional recommender systems, etc. Given just three items, and the representation of the music, Bayesian Sets will do as well as it can at inferring the underlying concept. It’s obviously a difficult problem and of course it won’t always “guess” what’s in your mind. But one of the nice things about Bayesian Sets is that as you give it more items and more features in the representation, the method will generally work better and better.

  • 8 Revisiting Xyggy « The Intellogist Blog // Apr 6, 2010 at 1:17 pm

    [...] very informative and interesting article about the method behind “item search” over at The Noisy Channel if you wish to dig into the nitty gritty of Xyggy. Feel free to dive into in the comments section [...]

  • 9 Katherine Heller // Apr 6, 2010 at 1:22 pm

    Jeremy:

That’s a great question. It depends in part on the features being used, of course, but in general (i.e. if these concepts are captured by the features), if the query is the first three songs, the algorithm will prefer the songs which are both “cheesy 80s” AND “concept X” (someone else with the exact same query might genuinely be interested in the cheesy 80s aspect, so I think this makes sense). After those songs will likely come a mix of cheesy 80s OR concept X. Once the query includes examples of non-cheesy-80s concept X songs, songs that belong to concept X will then be top ranked, regardless of their cheesy 80s-ness.

    One of the extensions to Bayesian Sets that we’re currently working on is to detect clusters within the retrieved results. For example, if the top ranked results largely fall into either the “cheesy 80s” category, or the “concept X” category, then we should be able to cluster the results into these two categories, based on their features, and possibly use this to prompt the user to determine which category they are interested in.

  • 10 Dinesh Vadhia // Apr 6, 2010 at 1:34 pm

    Daniel:

The Bayesian Sets implementation (code) remains the same between different item-search services operating on different data types (e.g. text documents and unlabelled images). What is different is the feature vector representation of the item type between different services.

    Does this address your concern about “black box similarity models” or is it something else?

  • 11 christopher // Apr 6, 2010 at 1:46 pm

My aversion to “software” patents is also practical… I may be jumping the gun, as I have not looked at the patent in question, and if so I apologize upfront. But there is a difference between a patent that covers the concept of applying vector models to sets of data for “filtering” (NOT a valid patent in my view) and one that covers the specific and novel application of an algorithmic state change (the algorithm making the change can be patented, but NOT the outcome, as there are unlimited ways to do the same).

I’d argue the original RSA and LSI patents fall into the latter category (there are many of the former type of patent now out there), and if this one does too I’m interested; otherwise I am not.

I’ll concede Jeremy’s point, though, that the concept of it being interesting in general was not the right use of language by me.

Again, if the patent covers a specific / novel transform and not the other thing, ignore everything I’ve said, and congratulations on some cool work. :)

  • 12 christopher // Apr 6, 2010 at 1:49 pm

    So this is nothing like applying LDA to a set of data then? Sure sounds like it…

  • 13 Katherine Heller // Apr 6, 2010 at 3:59 pm

    This is nothing like LDA. LDA tries to find latent “topics” in a corpus of documents, where it is assumed that each word is generated from a topic, and each document is generated from a mixture of topics. Bayesian Sets provides a totally different model for addressing a different problem than that which LDA addresses.

  • 14 christopher // Apr 6, 2010 at 4:34 pm

    Hi Katherine,

    At this time I guess I’ll have to take your word for it, but are Latent Topics not essentially equivalent to items (aka similar topics) in a “set”?

If I am describing this correctly, I think you can see why I think using LDA to deal with “topics” can return a similar “set” to the one described in this post, which can then use inputs to query for related “topics” (aka set items) across the input corpus or set.

By leveraging LDA one can (obviously) apply the topic model to different tasks, but maybe I’m missing something key here – and I acknowledge I may well be – I’m always willing to learn, and being proven wrong is sometimes a requirement for that. :)

    Let me know but I will also re-read your paper.

  • 15 Dinesh Vadhia // Apr 6, 2010 at 4:48 pm

    Christopher:

    The patent is about the novel Bayesian Sets algorithm and we also agree that it is pretty cool!

It is a new search tool, and working through it you’ll discover countless new uses that are not possible with text-search.

    It is easy to see how item-search with the interactive search box fits in almost naturally with touch devices such as the iPod and iPad. Imagine Apps from media companies where items such as articles, images, music, movies, sounds and ads can be dragged in and out of the search box to find other relevant items.

With text-search, you are essentially looking for some combination of the query text in a corpus of documents (or web pages). With item-search, the item can be any data type, and a query finds other relevant items based on how similar they are to the query items, delivered in ranked order.

  • 16 christopher // Apr 6, 2010 at 5:23 pm

    Hi Dinesh,

    If it’s about the novel algorithm then that is cool & useful and I congratulate you on creating a “real” patent! :)

So what I seem to be gathering is that I’m not crazy: yes, I can do something similar using LDA in the text realm, but your method moves beyond text to allow the SAME (at the code level) algorithm to work across media types.

    That is definitely highly useful & needed in a lot of verticals! I’m especially excited about how it works on unlabeled images.

    Oh and I’m now interested. ;)

  • 17 renaissance chambara | Ged Carroll - Links of the day // Apr 6, 2010 at 7:04 pm

    [...] Guest Post: Information Retrieval using a Bayesian Model of Learning and Generalization – interesting article on increasing search relevance. I know that Microsoft had done a lot of work on Bayesian mathematics for search [...]

  • 18 Daniel Tunkelang // Apr 6, 2010 at 10:05 pm

    Dinesh, my concern is simply that users be able to understand–and ideally manipulate–the basis for similarity. That would address the problem of an algorithm making the wrong generalization from insufficient input.

  • 19 christopher // Apr 6, 2010 at 10:13 pm

Daniel, your blog no longer seems to have a notify-of-new-comments feature?

  • 20 Daniel Tunkelang // Apr 6, 2010 at 10:18 pm

    Fixed! Thanks for the heads up.

  • 21 christopher // Apr 6, 2010 at 10:21 pm

    Thanks. :)

  • 22 Katherine Heller // Apr 7, 2010 at 5:29 am

    Hi Christopher,

    Bayesian Sets is not the same code or algorithm as LDA. Let me try to explain:

    In LDA “topics” are probability distributions over words (and are not “items” themselves), and the words might be our “features”. Documents, which are then our “items” (they’re the objects we’d like retrieved), belong to multiple “topics”. You could therefore think that one way to create a “set” is to choose a particular topic and then rank all the documents in terms of their membership to that “topic”.

    But where is the query? LDA comes up with a fixed set of topics for the entire corpus, and a single set of assignments of documents to “topics”, which are responsible for explaining all of the words in the document.

Bayesian Sets, on the other hand, takes in a user query, and then finds the set which is defined by this query. In theory there are an unbounded number of queries that could be specified / sets that could be retrieved. Bayesian Sets also doesn’t care about clustering the whole corpus like LDA, only about that which is related to the query. Lastly, the concept exemplified by the query is not responsible for explaining all the features of the items, only those relevant to the query.

  • 23 christopher // Apr 7, 2010 at 8:48 am

    Hi Katherine,

A good overview of Bayesian Sets in a nutshell, thank you for taking the time.

    2 things:

1. By “same code” I was referring to Bayesian Sets using the same Bayesian Sets code across different media types, not that it and LDA are code equivalents.

2. I now understand how you are taking into account the query and how that goes well beyond a basic LDA topic model. I do believe a similar tack can be taken in LDA, but with a bounded set of items (the topics) that can be evaluated, and I get how your method is likely superior in not only coverage but performance.

    In this light Bayesian Sets do sound very useful. :)

  • 24 Dinesh Vadhia // Apr 7, 2010 at 9:12 am

    @christopher

The image search question is a good time to briefly talk about feature vectors and feature engineering wrt Xyggy and Bayesian Sets. Xyggy can operate on all data types ranging from pure text to non-text and everything in between, where each item type is defined by a specific feature vector (schema). A useful aspect is that developers are free to define the most suitable feature vectors for their application. For example, we built two image search prototypes: in the first one, the feature vectors were obtained by processing each image with standard texture (Gabor & Tamura) and color (HSV) filters to produce a feature vector of length 240. The second prototype consists of ~700K flickr images that were pre-processed with color histogram, color auto correlogram, edge direction histogram, wavelet texture, color moments and bag of visual words filters to deliver a feature vector with 1,134 features. In general, a feature vector with standard texture and color features should be sufficient for consumer image search on the web with the option to add additional features if desired. If image textual data such as tags, labels, user comments and so on are available then these can be added to the feature vector too.

    The feature vector of an item type is defined beforehand during the data processing phase and is mostly a matter of common sense. There doesn’t need to be a huge amount of feature engineering as the relevant information simply needs to be clearly present in the feature vectors. For example, if you want to search for movies, it is useful to have information about actors if you expect your system to find movies with the same actor. If you represent images only with texture features, it won’t find images with the same colors and vice-versa. The main advantage is that it doesn’t really hurt to have too many features, if they are at least plausibly relevant to search.

    Depending on the search application some clever and sophisticated features can also be created. For example, feature vectors can be defined for web pages and text documents that include word count occurrences for words as well as for phrases, semantic concepts and relationships, numbers, geo details, urls, patterns, tags and annotations and so on.
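To make the shape of such a feature vector concrete, here is a small illustrative sketch (hypothetical helper functions in Python/NumPy, not the Xyggy code) that combines low-level image features with binary tag indicators into a single vector of the kind the scoring method described in the post can consume:

import numpy as np

def color_histogram(image, bins=16):
    # Toy visual feature: a normalized intensity histogram of the image.
    hist, _ = np.histogram(image, bins=bins, range=(0, 255), density=True)
    return hist

def tag_indicators(tags, tag_vocabulary):
    # Binary presence/absence indicators over a fixed tag vocabulary.
    return np.array([1.0 if t in tags else 0.0 for t in tag_vocabulary])

def image_item_vector(image, tags, tag_vocabulary):
    # Binarize the visual features (crudely, against their mean) and
    # concatenate them with the tag indicators.
    visual = color_histogram(image)
    visual_binary = (visual > visual.mean()).astype(float)
    return np.concatenate([visual_binary, tag_indicators(tags, tag_vocabulary)])

# Toy usage: a random 64x64 "image" with two tags from a three-word vocabulary.
img = np.random.randint(0, 256, size=(64, 64))
x = image_item_vector(img, tags={"building", "sky"},
                      tag_vocabulary=["building", "sky", "dog"])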

  • 25 Dinesh Vadhia // Apr 7, 2010 at 9:28 am

    @ daniel

We believe this is best addressed through the interactive search box where individual items can be turned on and off. Additionally, as Katherine alluded to earlier, we are working on an explicit relevance feedback mechanism so that the user can quickly focus on their concept of interest.

  • 26 Katherine Heller // Apr 7, 2010 at 9:28 am

Ah, yes, you’re right, it’s the same code across media types. Thanks!! :)

  • 27 jeremy // Apr 7, 2010 at 11:06 am

    @Daniel: Yes, you joke about Coldplay, but I did some trawling and was able to find one of their songs that I believe does borderline-fit the “Concept X” from which the other 6 songs are also drawn. Here’s that Coldplay song:

    http://www.youtube.com/watch?v=_N9rH2x5KUw

    Then again, while listening to it, I realized I’d heard that hook before. I grew up listening to Kraftwerk; check out their song “Computer Love”

    http://www.youtube.com/watch?v=EEBPzD3MPWE

    Eh? Eh? Same exact hook as Coldplay! Seriously, give them both a listen!

    The irony here is that not only are both this Coldplay piece AND the Kraftwerk piece drawn from this “Concept X” that I’m trying to find more of, but the Kraftwerk piece was also released in.. you guessed it, 1981. More 80s music.

    So now I’m starting to wonder about precision vs. recall of Bayesian sets. Katherine, your explanation was quite helpful, thank you. But what if I start to get frustrated with all the 80s music, as I’m trying to find more and more Concept X?

Finally, one more question for Dinesh: How is this different from Relevance Feedback? In classic relevance feedback, a (text) document is described in terms of (term and phrase) feature vectors. The user looks at a set of n documents, and picks k < n of those documents as relevant. Then the probability distributions of the feature vectors in those selected documents are compared with those of the feature vectors in the collection as a whole, using things like Kullback-Leibler divergence or Bose-Einstein statistics. Those features that best discriminate the current set from the rest of the collection are weighted higher, while those that are not good discriminators are weighted lower, and then those feature weightings are used to pull more similar documents from the collection.

    Now, there is nothing in relevance feedback that says the document has to be text. You could also have an image "document", and your feature vector for that image could be color histograms, edges, etc. And mechanically, you'd still perform relevance feedback the same way.

    So are Bayesian Sets essentially a new way of doing relevance feedback?

  • 28 Katherine Heller // Apr 7, 2010 at 2:03 pm

    Hi Jeremy,

The algorithm can’t read your mind :) As I said before, the same query could be given by someone who is genuinely interested in the 80s aspect. The algorithm needs more information to be able to distinguish you from them. This information could come in a variety of ways. 1) You could provide examples of non-80s concept X music. 2) You could potentially incorporate other external information about what you like to listen to; this is similar to 1, but perhaps more implicit. 3) Incorporate relevance feedback. 4) Provide clustered results, which I talked about before. These are the alternatives that come to mind straight off…

In terms of relevance feedback, it does seem that it’s a problem which is a natural fit for Bayes Sets to address, and one that we’re working on. The problem of relevance feedback is different than that of querying with examples if you think about it. When a user performs a query with examples, they select a few positive examples only from a very large collection of items. When a user performs (standard) relevance feedback, there has already been some stab at what the user wants, and a limited number of “close” items have been returned. From this small number of items, you’re able to get the user to tell you which ones are positive examples, but not only that, the ones they haven’t selected are also (likely) negative examples. You can use this “close but negative” information to refine your results as well. This would be good, for example, in the case that you’ve been talking about with 80s and concept X songs. The downside is that the items being labelled by the user are from a very small pool which is biased by the original retrieval process, and therefore makes most sense as a refinement technique.

In terms of methodology, the relevance feedback methods I tend to hear a lot about are vector space methods (which I’ve already addressed). I don’t know the details of the ones you’re talking about, but they sound quite different than how one might naturally think of doing relevance feedback in the Bayesian Sets paradigm.

  • 29 Daniel Tunkelang // Apr 7, 2010 at 6:56 pm

    Jeremy: I thought Coldplay only drew hooks from Joe Satriani!

  • 30 Gene Golovchinsky // Apr 7, 2010 at 11:19 pm

This seems to be related to language modeling as well, in the sense that your approach computes the probability that an item is drawn from a particular distribution. That’s sort of what language models do as well, isn’t it? Yes, the data structure might be different, but the basic principle seems related.

  • 31 Katherine Heller // Apr 8, 2010 at 5:56 am

Yes, in that what you describe is generative probabilistic modeling in general. Generative probabilistic models are used for lots of things including language modeling, information retrieval, computer vision, computational biology, etc.

What we compute is the probability that an item and the query items were drawn from the same (versus different) probability distribution, although we don’t know exactly what that distribution is (i.e. the parameters).

  • 32 jeremy // Apr 8, 2010 at 12:04 pm

    @Daniel :-)

    @Katherine: So would you say there is any kind of relationship between what you’ve done, and some of the issues that Warren Greiff has explored?

    “A Theory of Term Weighting for Exploratory Data Analysis”

    http://ciir.cs.umass.edu/pubfiles/ir-122.pdf

In that paper, Greiff also sets up a ratio between generative models, i.e. p(occurrence|relevant)/p(occurrence|nonrelevant). This ratio is calculated on a per-term, per-feature basis. Those features with the highest ratio can then be selected and used to get even more “relevant” documents.

  • 33 jeremy // Apr 8, 2010 at 3:04 pm

BTW, it’s interesting that no one has even asked what “Concept X” is. I’m not making it up. There really is a concept that connects Shania Twain, Kraftwerk, and the Eurythmics. I really would be curious if Bayesian Sets could pick up on it.

  • 34 christopher // Apr 8, 2010 at 4:07 pm

@Jeremy is Concept X Mutt Lange?

  • 35 jeremy // Apr 8, 2010 at 4:16 pm

    I had to look up the reference; I didn’t know who that person was. No, that’s not Concept X.

    I need to correct what I said in comment #33: there is NOT a concept that connects all those artists. There is a concept that connects just those particular songs, in comments #3 and #27, by those artists. But yes, this concept does exist.

  • 36 Biweekly Links – 04-09-2010 « God, Your Book Is Great !! // Apr 9, 2010 at 2:36 am

    [...] Information Retrieval using a Bayesian Model of Learning and Generalization A discussion about Bayesian Sets and using them to find related stuff. The idea looks promising [...]

  • 37 Katherine Heller // Apr 9, 2010 at 7:23 am

    Hi Jeremy,

    So I’ve only looked over that paper briefly, but it seems to me that he’s using a likelihood ratio score (which can be seen as comparing 2 different hypotheses) to do term reweighting from relevance feedback. As far as I can tell, empirical frequencies are always used to compute the likelihoods, which is analogous to saying that there’s a multivariate Bernoulli generative model, and performing maximum likelihood estimation. He runs into some overfitting troubles due to the ML estimation which he tries to compensate for in his discussion on binning.

We also use probabilistic models, in what we’ve presented here also multivariate Bernoullis (though that doesn’t need to be the case), to compare two different hypotheses, also having a flavor of relevant and non-relevant to the query. However, obviously our retrieval situation is different because our query is a bunch of examples, and his model is specifically for performing relevance feedback, so the exact hypotheses that we’re comparing are different. We also don’t perform ML estimation, but take a Bayesian approach instead (you can look up Bayes factors if you’re interested). Bayesian methods avoid the kind of overfitting situation in the binning section, and in fact allow us to compare the hypotheses that we do, since our “non-relevant” hypothesis has more parameters than our “relevant” hypothesis, and would therefore always be found to be more likely using maximum likelihood.

    BTW I can’t see your music links because I’m in a different country: “This video contains content from Vevo, who has blocked it in your country on copyright grounds.”

  • 38 jeremy // Apr 11, 2010 at 7:43 pm

    Ok, I understand the difference between ML estimation and Bayesian methods. But the distinction between “his method is relevance feedback” and “our method is examples” is something I still don’t quite get. Relevance feedback is essentially the idea of getting the user to examine a set of documents, and say “yes” to some, and “no” to others. Relevance feedback *is* example-based. Don’t let the word “relevance” fool you. What’s happening is that the user is doing a by-example query, by saying “yes” to some and “no” to others.

    I found another example song.. too bad you can’t see it:

    http://www.youtube.com/watch?v=kF0IVXXg240

    It’s “If you go away” by Cyndi Lauper, covering a 1959 French song by Jacques Brel. 80s, 50s, cover songs, etc. That’s why I really am curious about how well the concept is detectable.

FWIW, the concept I was after was “songs that are all good to dance a ‘west coast swing’ to.” Good west coast swing songs span various genres, artists, timbres, and eras. Some can be fast and funky, some can be slow and sultry. In other words, good WCS songs are all across the map. So I’m just wondering how well Bayesian Sets can pick up on detecting/finding good WCS songs, even with 10 to 20 examples.

    Here are some examples of various songs and people dancing to them. Hopefully there is no Vevo in these:

    http://www.youtube.com/watch?v=5wAnbmurwtE

    http://www.youtube.com/watch?v=V3ZxiPKmacg

    And for Daniel, another Coldplay:

    http://www.youtube.com/watch?v=43SrQLFiE84

    :-)

  • 39 jeremy // Apr 11, 2010 at 7:48 pm

    By the way, these two versions of “If you go away” are not very good to dance the WCS to:

    http://www.youtube.com/watch?v=y9OJCoTQqbA

    http://www.youtube.com/watch?v=i2wmKcBm4Ik

    So it’s not the song itself.

  • 40 Daniel Tunkelang // Apr 11, 2010 at 8:17 pm

I wonder if even Pandora would be able to generalize from a large set of west coast swing songs which had no other strong commonality. Seems this is at least as much about available metadata (e.g., bpm, playlists) as about choice of algorithm.

  • 41 jeremy // Apr 11, 2010 at 10:15 pm

Well, that’s part of my question, really. Oftentimes, the metadata that you do have are orthogonal to your information need.

    Think about this in terms of regular text search: navigation vs. exploratory search. If the relevant document has the “right” text, and your query has the “right” text, then modulo spam, the problem of finding your relevant document is relatively easy.

But if your query terms don’t match the terms in the relevant document, or if there are multiple relevant documents and some match and some don’t, or if your information need is not easily expressed in query terms (e.g. my ongoing frustration with Google about not being able to sort by *least recent* most topically relevant documents), then you’ll have a problem.

Now, this same issue applies to music and Bayesian sets. Or “examples” of any kind, and Bayesian sets. How well do Bayesian sets do with an “exploratory” information need? How well do they do when no specific set of features really easily describes the information need? (e.g. west coast swing).

    I ask, because to me, that’s the more interesting kind of IR problem. I believe that there needs to be an acknowledgment that the features (whether data or metadata) will never exactly match the information need. The research question is: What does one do about that?

  • 42 Daniel Tunkelang // Apr 11, 2010 at 10:35 pm

    Point taken. I’m not expecting that the songs would have to be tagged “west coast swing”. But I think it reasonable to require that a set have a concise description in terms of the feature space in order to be findable. The challenge is arriving at that concise description. But if no such concise description exists, then I think the best one can hope for is an efficient process that yields a negative result.

  • 43 jeremy // Apr 11, 2010 at 10:55 pm

    But I think it reasonable to require that a set have a concise description in terms of the feature space in order to be findable.

    You mean something like:

    If (tempo > 50 bpm) and (tempo < 90 bpm) and (beatEmphasis == (2 | 4)) and … then song = relevant (i.e. WCS-able)?

    Maybe something so simple does exist. But if it doesn't exist, then I would hope that we could develop methods that learned how to come up with (discover) more complex interactions in the primitives that do concisely describe the information need. The primitive features given to the system might be tempo and beat onsets. But the features necessary for classification might be something like "histogram of note duration ratios". A good IR system should be able to induce some of this more complicated structure as part of the active learning, interactive, HCIR process.

    Do Bayesian sets do that, or do they rather focus more on adjusting weights on existing feature primitive vectors?

  • 44 jeremy // Apr 12, 2010 at 12:09 am

    If you think about it abstractly, “find all songs that are good to dance WCS to” is quite a similar type of information need to “find all the information necessary to describe how subway crime has varied in New York over the past two decades.”

The very fact that crime varies means that the best concise keywords or descriptors for the late 80s might not be the best ones for the late 90s. But the information need as a whole encompasses both. Just like certain timbres might be good for one set of WCS songs and not for another.

  • 45 Katherine Heller // Apr 12, 2010 at 6:24 am

    Hi Jeremy,

    I know what relevance feedback is. The difference is as I described 3 posts ago (I’ll copy here):


In terms of relevance feedback, it does seem that it’s a problem which is a natural fit for Bayes Sets to address, and one that we’re working on. The problem of relevance feedback is different than that of querying with examples if you think about it. When a user performs a query with examples, they select a few positive examples only from a very large collection of items. When a user performs (standard) relevance feedback, there has already been some stab at what the user wants, and a limited number of “close” items have been returned. From this small number of items, you’re able to get the user to tell you which ones are positive examples, but not only that, the ones they haven’t selected are also (likely) negative examples. You can use this “close but negative” information to refine your results as well. This would be good, for example, in the case that you’ve been talking about with 80s and concept X songs. The downside is that the items being labelled by the user are from a very small pool which is biased by the original retrieval process, and therefore makes most sense as a refinement technique.

    The score in the relevance feedback paper you linked to uses the “close negative” examples that the user just labelled in the non-relevant hypothesis, in order to refine the results. When we query with examples, we don’t have these “close negative” examples, nor has the algorithm presented the user with retrieval results to label (because we’re not doing relevance feedback), nor is it attempting to refine an already used retrieval process. Therefore our “non-relevant” hypothesis does not include any “close negative” examples, or “negative” examples at all. Instead the hypothesis says that the item being scored is in a different cluster from the query items, where a cluster is defined by a probabilistic model.

WCS songs: Assuming your features do capture the information you’re trying to retrieve, the complexity you describe will depend to some degree on the probabilistic model you choose to represent a cluster, within the BS framework. The problem is, in general, if you want to have an extremely complicated model, learning is slow, and therefore retrieval is slow. In practice we’ve found that you most often get really, really good results with simple Bernoulli models. And honestly I would be surprised if, given a reasonable number of music features, the concept you want couldn’t be distinguished with this.

However, I think it’s also interesting to think about retrieval systems that perhaps start out doing something fast and then, if the user remains unhappy with the results, offer the option of spending more time and learning something more complicated. My guess right now, though, is that most people’s frustration comes from poor algorithms, features, and user interface design, and not from needing to learn some super-complex model of the concept they’d like retrieved.

  • 46 Antonio // Apr 12, 2010 at 7:22 am

    There is also an interesting IR approach here:

    whatisprymas.wordpress.com/

  • 47 Daniel Tunkelang // Apr 12, 2010 at 8:24 am

    (tempo > 50 bpm) and (tempo < 90 bpm) and (beatEmphasis == (2 | 4)) would be nice, but I could imagine something not quite as elegant–as long as rhythm is somehow represented. Just want to state the obvious that you can’t draw water from a stone: the information necessary to identify the basis for similarity may simply not be accessible. I think it would be a highly desirable system quality to help the user figure that out as quickly as possible.

  • 48 jeremy // Apr 12, 2010 at 11:07 am

    The problem of relevance feedback is different than that of querying with examples if you think about it. When a user performs a query with examples, they select a few positive examples only from a very large collection of items. When a user performs (standard) relevance feedback, there has already been some stab at what the user wants, and a limited number of “close” items have been returned. From this small number of items, you’re able to get the user to tell you which ones are positive examples, but not only that, the ones they haven’t selected are also (likely) negative examples. You can use this “close but negative” information to refine your results as well.

Um, maybe I’m just dense, but I really don’t see how this is any different from relevance feedback. What I mean is, this is an interactive process, correct? It doesn’t matter if you’ve started with a “query by text” or with a “query by example”, if in the 2nd round of interaction, the system produces a list of relevant results mixed with “close but nonrelevant” ones. At that point, as you start then dragging more relevant examples in, and non-relevant examples out, how is that not relevance feedback? Procedurally and conceptually, I mean? I understand that the mathematics that make it possible are different. But if those mathematics produce a set of relevant and close but nonrelevant results after the initial query-by-example, then you’ve got relevance feedback, do you not? As Dinesh writes:

An important aspect of our approach is that the search box accepts text queries as well as items, which can be dragged in and out of the search box. An implementation using patent data is at http://www.xyggy.com/patent.php. Enter keywords (e.g., “earthquake sensor”) and items relevant to the keywords are displayed. Drag an item of interest from the results into the search box and the relevance changes. When two or more items are added to the search box, the system discovers what they have in common and returns better results. Items can be toggled in/out of the search by clicking the +/- symbol, and items can be removed completely by dragging them out of the search box. Each change to an item in the search box automatically retrieves new relevant results.

    So when you first, immediately start interacting with the system, I do see that there is a difference in that standard IR requires query-by-text, whereas you enable query-by-example — it doesn’t have to be a text query. But the results of that query by example produce a ranked list of relevant and close-but-not-relevant items, which you then use more Bayesian sets to tease apart, given that the user starts dragging each example in/out. Right? Or am I fundamentally still not understanding something?

  • 49 jeremy // Apr 12, 2010 at 11:20 am

However, I think it’s also interesting to think about retrieval systems that perhaps start out doing something fast and then, if the user remains unhappy with the results, offer the option of spending more time and learning something more complicated.

    I totally agree.

My guess right now, though, is that most people’s frustration comes from poor algorithms, features, and user interface design, and not from needing to learn some super-complex model of the concept they’d like retrieved.

    I submit that what makes a concept “complex” is that the algorithms and features, in their out-of-the-box configuration, do not align well with the concept that the user is trying to express. For example, my ongoing Google frustration that I am unable to sort by least recent, topically relevant information when doing a literature search.

    Complexity is not always a function of mathematics; it can also be a function of constrained design.

  • 50 Katherine Heller // Apr 12, 2010 at 1:56 pm

    Hi Jeremy,

Ok, I think I understand where this is coming from now. So vanilla Bayesian Sets is not designed for the purpose of doing relevance feedback. It could easily be extended to do so, and I have to add in a caveat here that I’m not totally up on all that has been happening at Xyggy. But, you started out asking me the difference between a particular relevance feedback score and the Bayes Sets score, and I said that there were differences because of Bayesian Sets being Bayesian and comparing different hypotheses, since it wasn’t designed specifically to perform relevance feedback (i.e. include negative examples, etc.). This is still all true.

So I’m glad we agree that the initial query-by-example is different than relevance feedback. That is what the Bayesian Sets score is designed to do: be a retrieval method you can use instead of the standard text query (btw even when you input a text query on the xyggy site, it’s just being used as a shorthand way of getting a query set). Thus the hypotheses are different than those of the relevance feedback work.

Now, you point out that on the Xyggy site there is this interactive querying situation, and isn’t this the same as relevance feedback? It’s doing relevance feedback, I suppose, if that’s what the user is using the interactive process for. However, I believe the score is not different from the initial query (though as I keep saying, relevance feedback in Bayes Sets is an extension being worked on). It’s also possible that the interaction with the user has happened because the user is forming a new query, so the difficulty here in incorporating the negative examples is in determining when the user is refining an old query, versus forming a new one. Right now the algorithm treats every change to the query as a new query. So I’d call what’s going on “query exploration” or something like that. I don’t know if there’s a name for this. The user is using the retrieved items to modify the query, but the scoring process is not changing, and treats every query like the first one. I’d say it’s sort of like a sandbox.

In terms of the complexity discussion, I thought that you were implying complexity in mathematics (complex functions of the features). In terms of other things, like poor algorithms, features, UI etc. – I feel like there are a bunch of issues that are getting conflated here. I certainly agree with Daniel, that if the information isn’t there, there’s not much anyone can do about it.

  • 51 jeremy // Apr 12, 2010 at 7:12 pm

    Yes, the Greiff paper was a question about the mathematics of Bayesian sets, not a question about relevance feedback.

And I still think that the only difference between query by example and relevance feedback is the stage of the process at which they are executed. Conceptually, I mean. Marking a document as relevant is, imho, no different than saying “add this document to the next query that I execute”. But maybe I’m just splitting hairs.

    But I do think our conversation is becoming complex enough, and with enough subtle subthreads, that it’s becoming difficult to continue properly. Perhaps in person? Will you be at SIGIR?

    Oh, and @Daniel: there is a middle ground between taking exactly the features that are given to you on the one hand, and trying to draw water from a stone on the other. Look at some of the old Della Pietra work on feature induction. That’s where you create new features out of existing raw data, in a task-driven manner.

  • 52 Daniel Tunkelang // Apr 12, 2010 at 7:29 pm

    Re feature induction / selection: I hear you. I can imagine applying a technique like streamwise feature selection toward this end. But even there the richness of the raw input matters.

    I probably won’t make it to SIGIR this year, but I hope you guys carry this conversation forward there–and continue it at the HCIR workshop a few weeks later!

  • 53 jeremy // Apr 12, 2010 at 7:32 pm

    But even there the richness of the raw input matters.

    Yes, but if you’ve got the audio of each of the songs listed above, then you should have enough raw input to help with my WCS seeking task. The song itself contains the song itself.

    I’m going to try and make it to IIiX as well as HCIR.

  • 54 Dinesh Vadhia // Apr 13, 2010 at 6:45 am

    @ jeremy & daniel wrt feature engineering

    The following are the data sets that we created demos from at Xyggy:

    – Content-based image search using unlabelled and labelled images with corel and flickr pictures
– last.fm listener playcount and tag data by song to provide a music suggestion service (“if you liked this you will also like these” which is about understanding the user’s mindset instead of the traditional “people who liked your choice also liked these”)
    – Netflix ratings data to provide a movie suggestion service
    – Patents using patent bibliographic data only
    – Legal cases with citations
    – New York Times annotated corpus consisting of 1.8m articles from 1987 to 2007

The feature vectors are defined and created prior to the search indexes. Some of the above data sets are rich (e.g. images, patents and NYT) and others such as last.fm and Netflix would be difficult to classify as rich data. Rich or not, all the above demos worked very well. With flickr images we also built a version that combined the low-level features (e.g. color, texture, etc.) with available text data (labels, tags, user annotations and so on). This mixing of types could also be applied to music/audio data. With the last.fm data two versions were built: one with playcount data only and the other with playcount plus tag data. With text data, you can create new ‘custom’ features based on concepts or semantic relationships. In general, it really doesn’t hurt to have too many features – directly from the raw data and custom ones – if they are at least plausibly relevant to search. As can be seen, there is plenty of flexibility and creativity available for feature engineering using Bayesian Sets.

  • 55 Katherine Heller // Apr 13, 2010 at 6:53 am

    Hi Jeremy,

I’m planning to be in the SF area for most of the summer. If you’re around perhaps we can meet up there.

  • 56 Weekly Search & Social News: 04/13/2010 | Search Engine Journal // Apr 13, 2010 at 10:24 am

    [...] Guest Post: Information Retrieval using a Bayesian Model of Learning and Generalization – Noisy channel [...]

  • 57 jeremy // Apr 13, 2010 at 6:06 pm

    Thank you, Dinesh and Katherine, for all your patient explanations!

    Katherine, let me know when you get here! I’m around.

  • 58 Katherine Heller // Apr 14, 2010 at 6:07 pm

    Great! Dinesh forwarded your email to me. I’ll let you know when I’m in town :)

  • 59 Dan // Apr 23, 2010 at 7:52 pm

    Dinesh,

    I posted a reply on Google’s Research blog that references my concerns about whether this method should be compared to human concept learning until it can better deal with polysemy.

Here are the details of my test. I was after patents related to regression of disease, such as the regression of cancer. The search term was “regression”, and this is what my search looks like after I added four patents that attempt to refine the meaning of regression in this context.

    regression
    5414019 Regression of mammalian carcinomas
5356817 Method for detecting the onset, progression and regression of gynecologic cancers
    4544305 Aiding the regression of neoplastic diseases
    4757056 Method for tumor regression in rats, mice and hamsters

When you do this, you will see that 6 of the top 12 patents returned have the meaning of regression in the statistical sense rather than the disease sense. I would be interested in the comments of the bloggers on this anomaly. In fairness, when I run the search for “regression” on Google, it also gives the more frequent usage related to statistics as the predominant result. I would need to qualify it as “regression cancer disease” before Google would understand what I was after. Yet, humans would easily be able to learn what I was after in my search with just a few added items by just looking at the titles as in this example. The problem appears to be that “statistical regression” was mentioned in the methods section of some of these patents even though the predominant idea in all of them was “regression” in the disease sense.

  • 60 Dan // Apr 24, 2010 at 6:54 am

    Dinesh,

I do not believe that Google’s research blog will publish my last comment, which was an answer to you, because I mentioned the name of your product. All that I said was that a “bag of words” approach will not be powerful enough to deal with polysemy, whereas an approach that can handle interactions would be. In other words, the statistical interaction of frequency counts of “regression” and “tumor” would be a feature that would uniquely separate the two meanings of “regression” in this example, even with a small training sample size. In order to do this, though, one would need massive parallel processing capabilities to compute the very large number of interaction features.

  • 61 Dinesh Vadhia // Apr 25, 2010 at 4:29 am

    Hi Dan

The patent demo uses bibliographic data only, which generally represents about 10% of the available textual information (for non-design patents). We are using a sophisticated bag-of-words method for the feature vectors, but using all available textual data would improve the search results further. With Bayesian Sets you have the flexibility to define the features of the feature vectors to fit your needs. For example, you can use tri-grams instead of bag-of-words and also include concepts and semantic relationships.

    Wrt the demo, experiment by toggling on/off the keyword search line in the search box as well as toggling each item on/off to improve relevance. Also, take a look at the psychological research literature cited in the Bayesian Sets paper.

  • 62 Dan // Apr 25, 2010 at 12:18 pm

    Dinesh,

    I appreciate your response. I did have a look at the Bayesian Sets paper. I do not know enough about this method to know if it would perform better under the circumstances that you list. I am not sure if the trigram would be generally sensitive to the type of interaction that I mention. I am also not sure if adding the complete text would help with this anomaly that I find. These are empirical questions. However, I congratulate you and the rest of the team on a very interesting approach to a very difficult problem.

    I would again though mention that this does not seem to be a model of concept or category learning, such as when a child learns the category of ‘dog’. It seems more to be a model for cued recall of the contents of previously stored categories. It is more like if I ask a professor who is an expert on patents to give me a reference to all patents that concern the concept of ‘dog’ or ‘regression’. If she stumbles on ‘regression’ because of the multiple meanings, you might give a couple of examples and then she suddenly will be able to know that you mean regression related to disease or tumor. Humans can obviously do this type of task with very few examples, but the original learning of the category actually seems to require a much larger training sample size. Please see this paper for example :

    http://www.psych.nyu.edu/rehder/Hoffman_&_Rehder_10.pdf

    This paper presents a couple of experiments with adult humans where category learning requires at least 150-250 training trials (and in some cases it may not have even asymptoted yet).

  • 63 Katherine Heller // Apr 25, 2010 at 1:40 pm

    Hi Dan,

Human category learning is obviously a complex issue. In terms of how quickly adults learn new categories, well, that really depends on the category being learned. See for example Nosofsky, Gluck et al.’s replication of the Shepard, Hovland, and Jenkins data: http://love.psy.utexas.edu/~love/papers/love_etal_2004.pdf – figure 3. Some categories people learn very quickly, while others are much more complicated and it’s not even clear that they ever completely learn them.

There has been a lot of recent work on Bayesian models for human category learning and generalization by Tenenbaum, Griffiths, and others; we had a paper recently: http://www.gatsby.ucl.ac.uk/~heller/prior_knowledge_nips_revision.pdf — we have another paper in submission, in fact, which compares human generalization off of one versus two training examples — and these Bayesian models seem to match up with human data quite well. While much of the discussion in these papers focuses on the prior, the basic way a category is being modeled, or learned, is the same way in which we model a category, or concept, in Bayesian Sets.

  • 64 Dinesh Vadhia // Apr 25, 2010 at 2:04 pm

    @dan

    The congratulations are really appreciated. Hopefully, down the road we or others will generate empirical data that compares the results of using different feature vectors on the same data set. Katherine has jumped in to answer the category learning portion of the question.

  • 65 Dan // Apr 25, 2010 at 3:56 pm

    Katherine,

    Thanks for the references. I agree that human category learning is complex and the actual rate of learning must depend upon individual difference factors as well as the nature of the category, such as the number of features. The same may be said about animal and machine learning of categories. A complex category with many features like ‘dog’ has features that overlap with many other similar categories like ‘cat’ and ‘goat’ etc. If you have an example of where something like this can be learned by a human learner with just a couple of training trials, I would be very interested.

I am just not sure that category learning is happening with one or two examples in Xyggy, as it seems to me that the cued recall of the previously stored category is what is being aided by the one or two examples in the query. That is not to discount Xyggy as being a potentially new and interesting retrieval system that may have benefits beyond LDA and Google Sets. I do not know enough about Bayesian Sets to comment on that. I do know a lot about how many trials it takes a supervised machine learning system to accurately learn a category with many features that overlap with similar categories, as that has been our business for many years and in multitudes of applications. I can assure you that this cannot be accomplished with the typical high dimensional, multicollinear business data that we see daily with just a couple of training examples.

  • 66 Katherine Heller // Apr 26, 2010 at 7:44 am

    Hi Dan,

    The entire query in xyggy is formed from examples, the text word is just a quick way of indexing a few examples.

    In terms of “category learning”, if you give people even one or two examples they will use those examples to learn, or generalize, about the category. They may not always be able to generalize completely correctly, but that doesn’t mean that they aren’t learning. Just because someone is confused about whether a whale is a mammal or a fish doesn’t mean they don’t know anything about mammals or fish.

References for people learning novel categories from one or just a couple of examples: Xu and Tenenbaum (2007), “Word learning as Bayesian inference,” Psychological Review; and Pinker (1999), How the Mind Works.

  • 67 Dan // Apr 26, 2010 at 10:02 am

    Katherine,

I agree with what you said that the process of category learning is happening with one or two examples. The more interesting questions are how long it takes to reach its peak performance, whether you have a model for this process that performs as well as human learners (or better) in reaching that peak, and whether this process actually works very well in a test environment without labeled data. Everything else is theoretical and academic, as I am not sure that we yet have an understanding of ‘How the Mind Works’ and whether Bayesian learning will ultimately prove to be helpful here.
