Categories
Uncategorized

TunkRank scores added to FluidDB

For those keeping track of TunkRank, I encourage you to check out FluidDB, which just added TunkRank scores to its feature set. That lets you do cool things like find out which users I follow have a TunkRank score over 40. You can also read what Jason Adams has to say about it here.

Speaking of Jason, check out the latest improvements he’s made to the TunkRank interface. Pretty slick! To learn more about the TunkRank measure of Twitter influence / authority, check out this post.

Google’s New Look

I wish I could take even a gram of credit for this! I’m really proud of my colleagues for rolling out this new design that encourages and facilitates exploratory search. Go HCIR!

Deadline to Register for HCIR Challenge

If you are interested in participating in the HCIR Challenge, please let me know as soon as possible–and in any case by April 30th. The New York Times and the LDC are graciously providing access to The New York Times Annotated Corpus for free (waiving the usual $300 fee), but we need to let the LDC know who will be participating. Hope to see lots of you presenting your systems at HCIR 2010!

Also, participants building their systems in Solr can take advantage of the scripts that Tommy Chheng has prepared. Of course, you are welcome to use your own system. Commercial software companies are especially encouraged to show off their HCIR wares!

Google Follow Finder

I know there’s lots of interesting stuff coming out at the Chirp Twitter developer conference this week, and I’m still catching up on it all. But I am happy to point folks to a Google Labs application that was announced this morning: Follow Finder.

It’s not the first application to suggest Twitter followers based on analysis of the social graph, but I’ve actually found its suggestions to be quite plausible. For example, it suggests @fredwilson, @cshirky, @mattcutts, @peteskomoroch, and @msftresearch as “tweeps” I should follow, and suggests that the following users have similar followers to mine: @endeca, @lemire, @yahooresearch, @googleresearch, and @mattcutts.

There’s a bit of an “everything sounds like Coldplay” effect (e.g., @fredwilson shows up in a lot of the searches I tried), but overall I’m impressed with the quality, especially compared to the other suggestion tools I’ve tried.

Fernanda Viégas and Martin Wattenberg Start a New Company: Flowing Media

This just in: Fernanda Viégas and Martin Wattenberg, two of the biggest rock stars in the world of data visualization (their long list of accomplishments includes Many Eyes and the Baby Name Voyager), have left IBM to form their own company, Flowing Media, headquartered in Cambridge, MA. As they wrote me, “if you know of anyone who has interesting data and would like help bringing it to life, spread the word.”

I am very excited for them, and am eagerly anticipating the work they will produce as free agents.

Build Your Own NYT Linked Data Application

Regular readers may recall hearing about the New York Times Annotated Corpus (which is the basis for the HCIR Challenge) and the Times’s decision to publish their tags as Linked Open Data. Given that linked data applications are still a bit exotic, NYT semantic technologist and Noisy Community regular Evan Sandhaus published a tutorial and example application to help you build your own. If you’d like to get your feet wet in the semantic web (and can forgive the mixed metaphor), this is an excellent opportunity.

Guest Post: Information Retrieval using a Bayesian Model of Learning and Generalization

Dinesh Vadhia, CEO and founder of “item search” company Xyggy, has been an active member of the Noisy Community for at least a year, and it is with pleasure that I publish this guest post by him, University of Cambridge / CMU Professor Zoubin Ghahramani, and University of Cambridge / Gatsby Computational Neuroscience Unit researcher Katherine Heller. I’ve annotated the post with Wikipedia links in the hope of making it more accessible to readers without a background in statistics or machine learning.

People are very good at learning new concepts after observing just a few examples. For instance, a child will confidently point out which animals are “dogs” after having seen only a couple of examples of dogs before in their lives. This ability to learn concepts from examples and to generalize to new items is one of the cornerstones of intelligence. By contrast, search services currently on the internet exhibit little or no learning and generalization.

Bayesian Sets is a new framework for information retrieval based on how humans learn new concepts and generalize.  In this framework a query consists of a set of items which are examples of some concept. Bayesian Sets automatically infers which other items belong to that concept and retrieves them. For example, given a query consisting of the two animated movies “Lilo & Stitch” and “Up”, Bayesian Sets would return other similar animated movies, like “Toy Story”.

How does this work? Human generalization has been intensely studied in cognitive science and various models have been proposed based on some measure of similarity and feature relevance. Recently, Bayesian methods have emerged as models of both human cognition and as the basis of machine learning systems.

Bayesian Sets – a novel framework for information retrieval

Consider a universe of items, where the items could be web pages, documents, images, ads, social and professional profiles, publications, audio, articles, video, investments, patents, resumes, medical records, or any other class of items we may want to query.

An individual item is represented by a vector of features of that item.  For example, for text documents, the features could be counts of word occurrences, while for images the features could be the amounts of different color and texture elements.

Given a query consisting of a small set of items (e.g. a few images of buildings) the task is to retrieve other items (e.g. other images) that belong to the concept exemplified by the query.  To achieve the task, we need a measure, or score, of how well an available item fits in with the query items.

A concept can be characterized by using a statistical model, which defines the generative process for the features of items belonging to the concept.  Parameters control specific statistical properties of the features of items.  For example, a Gaussian distribution has parameters which control the mean and variance of each feature. Generally these parameters are not known, but a prior distribution can represent our beliefs about plausible parameter values.

The score

The score used for ranking the relevance of each item x given the set of query items Q compares the probabilities of two hypotheses. The first hypothesis is that the item x came from the same concept as the query items Q. For this hypothesis, compute the probability that the feature vectors representing all the items in Q and the item x were generated from the same model with the same, though unknown, model parameters. The alternative hypothesis is that the item x does not belong to the same concept as the query examples Q. Under this alternative hypothesis, compute the probability that the features in item x were generated from different model parameters than those that generated the query examples Q. The ratio of the probabilities of these two hypotheses is the Bayesian score at the heart of Bayesian Sets, and can be computed efficiently for any item x to see how well it “fits into” the set Q.
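In symbols (our notation, consistent with the description above rather than taken verbatim from the post), the comparison of the two hypotheses is a ratio of marginal likelihoods:

```latex
\mathrm{score}(x)
  = \frac{p(x \mid Q)}{p(x)}
  = \frac{p(x, Q)}{p(x)\, p(Q)}
  = \frac{\int p(x \mid \theta)\, p(\theta \mid Q)\, d\theta}
         {\int p(x \mid \theta)\, p(\theta)\, d\theta},
```

where \(\theta\) denotes the unknown model parameters and \(p(\theta)\) the prior over them. The numerator averages over parameters inferred from the query set \(Q\) (the "same concept" hypothesis); the denominator averages over the prior alone (the "different concept" hypothesis).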

This approach to scoring items can be used with any probabilistic generative model for the data, making it applicable to any problem domain for which a probabilistic model of data can be defined.  In many instances, items can be represented by a vector of features, where each feature can either be present or absent in the item.  For example, in the case of documents the features may be words in some vocabulary, and a document can be represented by a binary vector x where element j of this vector represents the presence or absence of vocabulary word j in the document.  For such binary data, a multivariate Bernoulli distribution can be used to model the feature vectors of items, where the jth parameter in the distribution represents the frequency of feature j.  Using the beta distribution as the natural prior the score can be computed extremely efficiently.
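For the Beta-Bernoulli case just described, the log score reduces to a constant plus a weighted sum over the item's features, so every item can be scored with a single dot product. Here is a minimal sketch in Python; the feature-frequency prior and its scale constant are common heuristics and an assumption on our part, not code from the original system:

```python
import math

def bayesian_sets_scores(X, query_idx, prior_scale=2.0):
    """Score every item in a binary feature matrix against a query set.

    X          : list of 0/1 feature lists (one per item).
    query_idx  : indices of the query items in X.
    prior_scale: assumed constant scaling the Beta prior, which is set
                 from empirical feature frequencies (a common heuristic).
    Returns a list of log scores, one per item; higher = better fit to Q.
    """
    n, d = len(X), len(X[0])
    eps = 1e-6  # avoid log(0) for features that are always on or always off
    means = [sum(row[j] for row in X) / n for j in range(d)]
    alpha = [prior_scale * m + eps for m in means]
    beta = [prior_scale * (1 - m) + eps for m in means]

    N = len(query_idx)
    s = [sum(X[i][j] for i in query_idx) for j in range(d)]  # query counts
    alpha_t = [alpha[j] + s[j] for j in range(d)]            # posterior params
    beta_t = [beta[j] + N - s[j] for j in range(d)]

    # Closed form: log score(x) = c + sum_j q_j * x_j
    c = sum(math.log(alpha[j] + beta[j]) - math.log(alpha[j] + beta[j] + N)
            + math.log(beta_t[j]) - math.log(beta[j]) for j in range(d))
    q = [math.log(alpha_t[j]) - math.log(alpha[j])
         - math.log(beta_t[j]) + math.log(beta[j]) for j in range(d)]
    return [c + sum(q[j] * row[j] for j in range(d)) for row in X]
```

Since the weights `q` depend only on the query, ranking a large corpus amounts to one sparse matrix-vector product, which is what makes the exact computation so fast.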

Automatically learns

An important aspect of Bayesian Sets is that it automatically learns which features are relevant from queries consisting of two or more items. For example, a movie query consisting of “The Terminator” and “Titanic” suggests that the concept of interest is movies directed by James Cameron, and therefore Bayesian Sets is likely to return other movies by Cameron. We feel that the power of queries consisting of multiple example items is unexploited in most search engines. Searching using examples is natural and intuitive for many situations in which the standard text search box is too limited to express the user’s information need, or infeasible for the type of data being queried.

Uses

The Bayesian Sets method has been applied to diverse problem domains including: unlabelled image search using low-level features such as color, texture and visual bag-of-words; movie suggestions using the MovieLens and Netflix ratings data; music suggestions using last.fm play count and user tag data; finding researchers working on similar topics using a conference paper database; searching the UniProt protein database with features that include annotations, sequence and structure information; searching scientific literature for similar papers; and finding similar legal cases, New York Times articles and patents.

Apart from web and document search, Bayesian Sets can also be used for ad retrieval through content matching, building suggestion systems (“if you liked this you will also like these”, which is about understanding the user’s mindset, rather than the traditional “people who liked your choice also liked these”) and finding similar people based on profiles (e.g. for social networks, online dating, recruitment and security). All these applications illustrate the vast range of problems for which the patent-pending Bayesian Sets provides a powerful new approach to finding relevant information. Specific details of engineering features for particular applications can be provided in a separate post (or comments).

Interactive search box

An important aspect of our approach is that the search box accepts text queries as well as items, by dragging them in and out of the search box.  An implementation using patent data is at http://www.xyggy.com/patent.php.  Enter keywords (e.g., “earthquake sensor”) and items relevant to the keywords are displayed.  Drag an item of interest from the results into the search box and the relevance changes.  When two or more items are added into the search box, the system discovers what they have in common and returns better results.  Items can be toggled in/out of the search by clicking the +/- symbol and items can be completely removed by dragging them out of the search box.  Each change to an item in the search box automatically retrieves new relevant results.  A future version will allow for explicit relevance feedback.  Certain data sets also lend themselves to a faceted search interface and we are working on a novel implementation in this area.

In our current implementation, items are dragged into the search box from the results list, but it is easy to see how they could be dragged from anywhere on the web or intranet.  For example, a New York Times reader could drag an article or image of interest into the search box to find other items of relevance. There is a natural affinity between an interactive search box as described and the new generation of touch devices.

Summary

Bayesian Sets demonstrates that intelligent information retrieval is possible, using a Bayesian statistical model of human learning and generalization.  This approach, based on sets of items, encapsulates several novel principles.  First, retrieving items based on a query can be seen as a cognitive learning problem, and we have used our understanding of human generalization to design the probabilistic framework.  Second, retrieving items from large corpora requires fast algorithms, and the exact computations for the Bayesian scoring function are extremely fast.  Finally, the example-based paradigm for finding coherent sets of items is a powerful new alternative and complement to traditional query-based search.

Finding relevant information from vast repositories of data has become ubiquitous in modern life.  We believe that our approach, based on cognitive principles and sound Bayesian statistics, will find many uses in business, science and society.

Want a Quora Invite?

I have 10 invites for Quora, a social search site launched earlier this year by a bunch of ex-Facebookers (including former CTO Adam D’Angelo and Charlie Cheever, who previously led Facebook Platform and Facebook Connect), and which was just funded at an $86M valuation. Put your email address in a comment or communicate it to me by some other means if you’d like one. I’m pretty sure all new users get 10 invites, so please share the love if I run out. Also, I believe you need to have a Facebook account in order to register (using Facebook Connect).

I’ll write more about Quora when I’ve had a chance to play with it myself.

Vacation

Just letting readers know that I’ll be on vacation for the next week. If you are starved for reading materials, check out some of the blogs I read.

Blogging SSM 2010 and WSDM 2010

I’m delighted to report that I’ll be blogging about the Search and Social Media Workshop (SSM 2010) and the Web Search and Data Mining Conference (WSDM 2010) for Communications of the ACM.

Of course, I’ll cross-post here. I also encourage folks to follow the live tweet streams at #ssm2010 and #wsdm2010, as well as Gene and Jeremy’s posts at the FXPAL blog.

To those attending: see you all tomorrow through Saturday! To everyone else: I will try my best to communicate the substance and spirit of the conference.