
Free as in Freebase

It’s been a while since I’ve blogged about Freebase, the semantic web database maintained by Metaweb. But I recently had the chance to meet Freebasers Robert Cook and Jamie Taylor and hear them present to the New York Semantic Web Meetup on “Content, Identifiers and Freebase” (slides embedded above).

It was a fun and informative presentation. Perhaps the most surprising revelation about Freebase was that all of their data fits in RAM on a 32G box (yes, some of you caught me live-tweeting that during the presentation). Their biggest challenge is collecting good data that lends itself to the reconciliation needed to make Freebase useful as a data repository. Despite the lack of a near-term revenue model, the Freebasers are bullish about their approach: strong identifiers, strong semantics, open data. On the last point, almost all of Freebase is available under the Creative Commons Attribution License (CC-BY)–which, as far as I can tell, leaves anyone free to develop a mirror of Freebase. Indeed, many people are using this data, including Google and Bing.

You might wonder whether Freebase is a business or a non-profit foundation–and the question did come up. The answer is that Freebase eventually expects to make money by providing services, e.g., helping advertisers. They see their graph store as a competitive advantage–but they freely admit that this advantage will erode over time. Indeed, the surprisingly small size of their graph makes me wonder how much speed and scalability matter, compared to the challenge of data scarcity.

I’d like to see Freebase succeed. I’m particularly a fan of the work David Huynh has done there on interfaces for semantic web browsing. Clearly their investors are true believers–Metaweb has raised a total of $57M in funding. I don’t quite get it, but I’m happy we can all benefit from the results.

By Daniel Tunkelang

High-Class Consultant.

10 replies on “Free as in Freebase”

I just talked to Jamie and Robert recently, and was very impressed that Freebase was addressing the shared semantic problem through integration. Clearly they’ve already built something useful.

As to profitability or even revenue, I was wondering the same thing about Google during a visit back in the late 1990s — awesome search, crazy company (funhouse office, food better than most restaurants, programmers complaining about not getting massages quickly enough), and no revenue stream in sight (at least to me).


Thanks, Daniel, for posting this blog on Freebase. I hope it gains momentum like Wikipedia and manages to overcome the challenge of collecting semantically enhanced data sets.


Bob, I think it’s worth noting that, had Google not been, um, inspired by Goto.com’s revenue model of auctioning sponsored listings, it’s not at all clear that Google would have succeeded. Remember: Google didn’t introduce AdWords until 2000. It’s not at all clear that predicting Google’s success back in the late 1990s would have been rational. Hindsight is 20/20.


Perhaps the most surprising revelation about Freebase was that all of their data fits in RAM on a 32G box (…). Their biggest challenge is collecting good data

Just in case one of your readers decides that we no longer need fancy database indexes or good engineers because RAM is so cheap… Know that it will take over 5 seconds to read the content of the memory—sequentially.

People have been designing RAM-based databases for indexing data such as XML for a while. The performance is not automagically acceptable.
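The 5-second figure is easy to sanity-check with back-of-the-envelope arithmetic. A rough sketch, where the ~6 GB/s sustained sequential bandwidth is my own assumption for a server of that era (actual hardware varies widely):

```python
# Rough estimate of the time to scan 32 GiB of RAM sequentially.
# The 6 GB/s bandwidth figure is an assumption, not a measurement.
ram_bytes = 32 * 2**30          # 32 GiB of data
bandwidth = 6 * 10**9           # assumed sustained bytes/second
seconds = ram_bytes / bandwidth
print(f"{seconds:.1f} s to scan {ram_bytes / 2**30:.0f} GiB")  # ~5.7 s
```

And that is the best case: random access patterns, which graph traversals tend to produce, are far slower per byte than a sequential scan.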


I didn’t mean to imply otherwise. But I do think it’s an eye-opener for folks who are used to measuring open web content in petabytes. Trust me, I know from experience that putting data in memory isn’t a silver bullet when you need to do anything interesting with it.


The uncompressed size of a Wikipedia dump is over one terabyte. There are solid state drives with that capacity—though they are expensive. However, in *compressed* form, Wikipedia can fit within 4GB and thus could fit in my one-year-old laptop’s RAM.

Now that Wikipedia has decided to accept video clips, however, expect the size of the Wikipedia dumps to go up significantly in the next few years.

Reference:
http://download.wikipedia.org/backup-index.html


Comments are closed.