Over the past week, there’s been lots of commentary about “The Unreasonable Effectiveness of Data“, an article by Googlers Alon Halevy, Peter Norvig, and Fernando Pereira in the most recent issue of IEEE Intelligent Systems.
Here are a few posts that have been appearing in my RSS reader:
- Geeking with Greg: Semantic interpretation and the effectiveness of big data
- Jeff’s Search Engine Caffe: Statistical Learning of Semantics from Web Data
- Matthew Hurst: Strings are not Meanings
- Stefano’s Linotype: Unreasonable Hypocrisy
I’m intrigued by the amount of attention this paper has attracted, especially the vitriol in Stefano’s post:
What upset me about that paper is not how they say “oh sure, structure is great, but look over here: there is a goldmine in all the sand” (which is something I fully resonate with) but that they phrased it as a fight, deterministic vs. statistical, trying to convince people that adding structure is not the way to go, that it’s basically a global waste of research resources.
And yet, without the <a> tag (that is: machine-readable imposed structure), they wouldn’t be where they are, nor would they be able to speak from such a tall soapbox.
I’m actually sympathetic to the view that it’s usually better to have more data than heavier theoretical machinery. But I’ve seen this view taken to an extreme so absurd as to be worthy of an April Fool’s joke: Chris Anderson’s Wired article about “The End of Theory“. Moreover, that same article quotes Peter Norvig as saying that “All models are wrong, and increasingly you can succeed without them.” Note: Peter Norvig explains here that he was misquoted.
So perhaps Stefano is right to react so harshly.