
Why Does Latent Semantic Analysis Work?

Warning to regular readers: this post is a bit more theoretical than average for this blog.

Peter Turney has a nice post today about “SVD, Variance, and Sparsity”. It’s actually a follow-up to a post last year entitled “Why Does SVD Improve Similarity Measurement?” that apparently has remained popular despite its old age in blog years.

For readers unfamiliar with singular value decomposition (SVD), I suggest a brief detour to the Wikipedia entry on latent semantic analysis (also known as latent semantic indexing). In a nutshell, latent semantic analysis is an information retrieval technique that applies SVD to the term-document matrix of a corpus in order to reduce this sparse, high-dimensional matrix to a denser, lower-dimensional matrix whose dimensions correspond to the “latent” topics in the corpus.
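
Here’s a minimal sketch of that reduction step in Python (my own toy example, not from any of the papers): a small term-document count matrix, a truncated SVD via NumPy, and two retained dimensions standing in for the latent topics. Real systems would typically apply tf-idf weighting first and keep many more dimensions.

    import numpy as np

    # Toy term-document count matrix: rows are terms, columns are documents.
    # (A real system would usually apply tf-idf weighting before the SVD.)
    terms = ["car", "automobile", "engine", "flower", "petal"]
    A = np.array([
        [2, 0, 1, 0],   # car
        [0, 2, 1, 0],   # automobile
        [1, 1, 2, 0],   # engine
        [0, 0, 0, 2],   # flower
        [0, 0, 0, 1],   # petal
    ], dtype=float)

    # Factor A = U * diag(s) * Vt, then keep only the top k singular values,
    # which play the role of the k "latent" topics.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k approximation of A

    # Low-dimensional document vectors live in the latent topic space.
    doc_vectors = np.diag(s[:k]) @ Vt[:k, :]      # one column per document
    print(doc_vectors.round(2))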

Back to Peter’s thesis. He’s observed that document similarity is more accurate in the lower-dimensional vector space produced by SVD than in the space defined by the original term-document matrix. This isn’t immediately obvious; after all, the truncated SVD is a lossy approximation of the term-document matrix, so you might expect accuracy to decrease.
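
A toy example (again my own, not Peter’s experiments) makes the puzzle concrete: two documents that share no terms have zero cosine similarity in the raw term space, yet land close together in the reduced space because they each co-occur with the same third document.

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Terms: car, automobile, flower. Documents: d0 = "car", d1 = "automobile",
    # d2 mentions both car and automobile, d3 is about flowers only.
    #             d0  d1  d2  d3
    A = np.array([[1,  0,  1,  0],    # car
                  [0,  1,  1,  0],    # automobile
                  [0,  0,  0,  2]],   # flower
                 dtype=float)

    # In the raw term space, d0 and d1 share no terms, so their cosine is 0.
    print("raw cosine(d0, d1):    ", cosine(A[:, 0], A[:, 1]))

    # Reduce to k = 2 latent dimensions with a truncated SVD.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    docs_k = np.diag(s[:k]) @ Vt[:k, :]   # documents in the latent space

    # Their shared co-occurrence with d2 pulls d0 and d1 together.
    print("reduced cosine(d0, d1):", cosine(docs_k[:, 0], docs_k[:, 1]))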

In his 2007 post, Peter offers three hypotheses for why SVD improves the similarity measure:

  1. High-order co-occurrence: Dimension reduction with SVD is sensitive to high-order co-occurrence information (indirect association) that is ignored by PMI-IR and cosine similarity measures without SVD. This high-order co-occurrence information improves the similarity measure.
  2. Latent meaning: Dimension reduction with SVD creates a (relatively) low-dimensional linear mapping between row space (words) and column space (chunks of text, such as sentences, paragraphs, or documents). This low-dimensional mapping captures the latent (hidden) meaning in the words and the chunks. Limiting the number of latent dimensions forces a greater correspondence between words and chunks. This forced correspondence between words and chunks (the simultaneous equation constraint) improves the similarity measurement.
  3. Noise reduction: Dimension reduction with SVD removes random noise from the matrix (it smooths the matrix). The raw matrix contains a mixture of signal and noise. A low-dimensional linear model captures the signal and removes the noise. (This is like fitting a messy scatterplot with a clean linear regression equation.)

In today’s follow-up post, he drills down on this third hypothesis, noting that noise can come from either variance or sparsity. He then proposes independently adjusting the sparsity-smoothing and variance-smoothing effects of SVD to split this third hypothesis into two sub-hypotheses.

I haven’t done much work with latent semantic analysis. But work that I’ve done with other statistical information retrieval techniques, such as using Kullback-Leibler divergence to measure the signal of a document set, suggests a similar benefit from preprocessing steps that reduce noise. Now I’m curious about the relative benefits of variance vs. sparsity reduction.
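
For the curious, here’s a rough sketch of that kind of signal measure: compare a result set’s term distribution against the background corpus distribution, with add-alpha smoothing so the divergence stays finite. The corpus, the smoothing scheme, and the parameter values below are illustrative assumptions, not the exact setup I used.

    from collections import Counter
    import math

    def term_distribution(docs, vocab, alpha=0.01):
        """Term distribution over vocab with add-alpha smoothing, so the
        KL divergence below is always finite."""
        counts = Counter(word for doc in docs for word in doc.split())
        total = sum(counts[w] for w in vocab) + alpha * len(vocab)
        return {w: (counts[w] + alpha) / total for w in vocab}

    def kl_divergence(p, q):
        """KL(p || q) in bits, summed over the shared vocabulary."""
        return sum(p[w] * math.log2(p[w] / q[w]) for w in p)

    # Hypothetical corpus and retrieved document set, just for illustration.
    corpus = ["the cat sat on the mat", "the dog chased the cat",
              "stock prices fell sharply", "the market rallied today"]
    result_set = ["stock prices fell sharply", "the market rallied today"]

    vocab = sorted({w for doc in corpus for w in doc.split()})
    background = term_distribution(corpus, vocab)
    foreground = term_distribution(result_set, vocab)

    # A focused (high-signal) result set diverges strongly from the background
    # corpus distribution; a diffuse one stays close to it.
    print("signal =", round(kl_divergence(foreground, background), 3))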

By Daniel Tunkelang

High-Class Consultant.

2 replies on “Why Does Latent Semantic Analysis Work?”

There’s a really nice description of the large space of SVD-like factor analyses and the relations among them in Aleks Jakulin and Wray Buntine’s paper Discrete Component Analysis. It talks specifically about how the various SVD-like factorings take variance and sparsity into account.

You can find implementations of SVD for complete matrices in packages like R and Matlab, but the folks who used it for Netflix invented a missing-data variant based on stochastic gradient descent that’s so neat I implemented it in LingPipe (see our singular value decomposition tutorial, which contains lots of references and an implementation of the demos from the original latent semantic indexing papers).
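
Roughly, the trick is to run gradient updates over only the observed cells. Here’s a bare-bones sketch of that general approach in Python rather than Java, with made-up hyperparameters and toy data; it’s the idea, not LingPipe’s actual code.

    import numpy as np

    def sgd_factorize(ratings, n_rows, n_cols, k=2, lr=0.01, reg=0.02, epochs=200):
        """Factor a partially observed matrix into row and column factors by
        stochastic gradient descent over the observed entries only.
        ratings: list of (row, col, value) triples for the known cells."""
        rng = np.random.default_rng(0)
        U = 0.1 * rng.standard_normal((n_rows, k))    # row factors
        V = 0.1 * rng.standard_normal((n_cols, k))    # column factors
        for _ in range(epochs):
            for i, j, r in ratings:
                ui, vj = U[i].copy(), V[j].copy()
                err = r - ui @ vj                     # error on this known cell
                U[i] += lr * (err * vj - reg * ui)    # gradient step with
                V[j] += lr * (err * ui - reg * vj)    # L2 regularization
        return U, V

    # Tiny example: a 3x3 matrix where cells (0, 2) and (2, 0) are unobserved.
    observed = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
                (1, 2, 1.0), (2, 1, 2.0), (2, 2, 1.0)]
    U, V = sgd_factorize(observed, n_rows=3, n_cols=3)

    # Predictions for the missing cells come from the learned factors.
    print("predicted (0, 2):", round(float(U[0] @ V[2]), 2))
    print("predicted (2, 0):", round(float(U[2] @ V[0]), 2))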


Very cool. Will put that on my reading queue.

By the way, does anyone know if the patent on LSA (4,839,853) has expired? According to Wikipedia, “For applications that were pending on and for patents that were still in force on June 8, 1995, the patent term is either 17 years from the issue date or 20 years from the filing date of the earliest US application to which priority is claimed (excluding provisional applications), the longer term applying.” I think that means that the patent expired in September.

