One of the points he makes echoes the “computers aren’t mind readers” theme I’ve been hammering at for a while:
If the user has not phrased her search clearly enough for another person to understand what she’s trying to find, then it’s not reasonable to expect that a comparatively “dumb” machine could do better. In a Turing test, the response to a question incomprehensible even to humans would prove nothing, because it wouldn’t provide any distinction between person and machine.
While I’m not convinced that search engine designers should be aspiring to pass the Turing test, I agree wholeheartedly with the vision John puts forward:
It describes an ideal form of human-computer interaction in which people express their information needs in their own words, and the system understands and responds to their requests as another human being would. During my usability test, it became clear that this was the very standard to which my test participants held search engines.
It’s not about the search engine convincing the user that another human being is producing the answers, but rather about engaging users in a conversation that helps them articulate and elaborate their information needs. Or, as we like to call it around here, HCIR.