
Wolfram Alpha: Second-Hand Impressions

Regular readers may not be surprised that my commentary on the Wolfram Alpha pre-launch publicity didn’t earn me a hands-on preview, though it did earn me a surprisingly positive email from their PR department. But fortunately I have my sources, and one of them was kind enough to share his reactions to a demo of the system.

His impressions in brief:

  • He’s impressed with the technology, though dubious that they have a business model.
  • Their knowledge base incorporates 10 trillion “facts” (RDF triples) derived from curated sources.
  • They focus on factual and numerically oriented queries, as opposed to fuzzier semantic ones.
  • Their engine is based on the approach described in NKS.
  • Their presentation interface reminds him of Wikipedia’s infoboxes.

My reaction: still intrigued, still skeptical. It sounds like a great toy, but a toy nonetheless. But I’ll try to keep an open mind until I get to play with it myself.

By Daniel Tunkelang

High-Class Consultant.

5 replies on “Wolfram Alpha: Second-Hand Impressions”

Are the derived facts stored in the knowledge base? How many facts in those 10 trillion are basic (could not be derived from the others) and how many are derived?

It seems like they could balloon that 10 trillion number by just adding more and more derived facts without changing the actual answers that the system could give.
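The commenter’s point can be made concrete with a toy sketch (hypothetical data and a hypothetical derivation rule, not anything Wolfram Alpha actually does): materializing triples that follow from a base set grows the “fact” count without letting the store answer any query it couldn’t already answer by deriving on demand.

```python
# Toy knowledge base: three base "locatedIn" triples (hypothetical data).
base = {
    ("Paris", "locatedIn", "France"),
    ("France", "locatedIn", "Europe"),
    ("Europe", "locatedIn", "Earth"),
}

def derive_transitive(triples, predicate="locatedIn"):
    """Materialize the transitive closure of one predicate.

    Every derived triple is implied by the base set, so storing it
    adds to the fact count without adding answering power.
    """
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p, b) in list(facts):
            for (c, q, d) in list(facts):
                if p == q == predicate and b == c and (a, p, d) not in facts:
                    facts.add((a, p, d))
                    changed = True
    return facts

all_facts = derive_transitive(base)
print(len(base), len(all_facts))  # 3 base triples become 6 stored triples
```

Here three base triples balloon to six, and a longer chain would grow quadratically, which is the commenter’s concern about headline fact counts.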


This is slight heresy but I have been wondering for some time if RDF is the best method for storing large scale knowledge bases.

Now, before I get flamed: I am a fan of RDF. I use it and OWL in my application, and I think a lot of data is best stored and queried as RDF. I’m just not convinced that large-scale general knowledge can be effectively managed as triples.

As a comparison, I’d really love to hear from anyone using Topic Maps or some other semantic structure for large-scale knowledge bases.

We shall see; I hope my misgivings are wrong.

