Is SOA Enabling Intelligent Agents?

I recently blogged about software agents, mostly musing about how to reconcile their inherent rationality with our lack thereof as human beings.

But today I noticed an article by John Markoff in the New York Times entitled “A Software Secretary That Takes Charge,” which considers some companies trying to build services based on such agents. The article called my attention to the recent death of AI pioneer Oliver Selfridge, who coined the term “intelligent agents” and devoted much of his career to trying to make them a reality.

Markoff notes that “efforts to build useful computerized assistants have consistently ended in failure”, which raises the question of why any student of history is still investing in this area. Markoff quotes an answer from Rearden Commerce founder/CEO Patrick Grady:

The promise of the Web 2.0 era of the Internet has been the interconnection of Web services. Mr. Grady says he has a far easier task today because the heavy lifting has been done by others.

“This is the connective tissue that sits on top of the Web and brings you more than the sum of the parts,” he said. “I set out to deliver on the longstanding ‘holy grail of user-centric computing,’ a ‘personal Internet assistant.’”

In other words, intelligent agents are possible now because of Web 2.0 and service-oriented architecture. An interesting theory, and one I can certainly accept in theory. But I’m curious how it plays out in practice. It seems to me that there’s a lot of “heavy lifting” still waiting to be done.

By Daniel Tunkelang

High-Class Consultant.

9 replies on “Is SOA Enabling Intelligent Agents?”

I find it amusing that Grady says the task is “far easier today” because of the work done by others. My company uses Rearden Commerce. If their task is so easy, why can’t they develop a system that doesn’t crash? We’re switching to Concur as soon as our contract with Rearden expires.

I don’t see how Web 2.0 technologies inform the reasoning required to have truly intelligent assistants. Pipe dream + SOA = pipe dream.

Gene, I agree that we aren’t there yet, but a part of me does wonder whether improving the “connective tissue” is the critical step that enables techniques which only work at large scale.

For example, the link analysis techniques that support web search rely in turn on the mass adoption of hyperlinks. If you’ll accept this sort of transitive reasoning, then the quasi-intelligent behavior exhibited by web search engines (imagine demoing one in the 80s) owes much of its success to the developers of HTTP and related protocols.
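
To make that scale dependence concrete, here is a toy sketch of the kind of link analysis I mean: a PageRank-style power iteration over a tiny, made-up link graph. The graph, damping factor, and iteration count are all illustrative assumptions, not how any real engine is configured.

```python
# Toy PageRank-style power iteration over a made-up link graph.
# Everything here (graph, damping factor, iteration count) is illustrative.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["a", "c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a baseline share of rank...
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        # ...and passes the rest along its outgoing links.
        for page, outlinks in links.items():
            if not outlinks:
                continue  # dangling pages distribute nothing in this toy version
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

print(pagerank(links))
```

The point is only that the signal comes entirely from the links themselves: without mass adoption of hyperlinking, there is nothing here to iterate over.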

I think we all agree that infrastructure is necessary for these large leaps. But I agree with you that the current infrastructure for connecting information sources isn’t sufficient for the reasoning to follow. This is still one of the grand challenges of classical AI.

Funny, we appear to be drawn to the exact same blog-worthy articles… I would argue that the ‘connective tissue’ necessary for intelligent agents to thrive is professional services, a much more literal human component. The heavy lifting may have been done, but the AI component may never be able to do 100% of the job. We still need business processes that understand the technology, how it fits in, and under which conditions the agent’s preparation and execution truly let us be hands-off.

Oh man, SOA and intelligent agents! Two of my absolute favorite things, in a single blog post! 😉

Without getting too flamey (I see SOA as a total red herring in this debate), one barrier I see to intelligent agents is that the vast majority of data on the Net is unstructured, and we are far from the point where intelligent agents can “understand” English or any other natural-language text.

They require more structure. To your point, the only handle they have to grab right now is the linking structure of the Net itself (and, say, markup tags that hint at content priority, etc.). Sadly for agents, much of the structured information (exactly the kind of information you’d really want to provide to them) is walled off, for two good reasons that I can see.

The first is that data that is valuable enough for someone to spend the time structuring it is, for most organizations, a competitive advantage. Why share it?

The second is that, except in a few rare cases, people haven’t figured out how to monetize access to a database.

This is the core argument that TBL makes for the Semantic Web: present all information in a way that computers can actually make sense of, and you pave the way for intelligent agents! For the reasons listed above (technical feasibility issues aside), I don’t believe there is sufficient incentive to make this anywhere near a reality.
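
To be concrete about what “making sense of it” means, here is a minimal sketch, in plain Python, of the same fact expressed as free text versus as subject-predicate-object triples of the sort the Semantic Web has in mind. The entities and property names are made up for illustration.

```python
# The same fact as free text vs. machine-readable triples.
# All URIs and property names below are invented for illustration.
unstructured = "Acme Corp is headquartered in Springfield, and Jane Doe is its CEO."

triples = [
    ("ex:AcmeCorp", "ex:headquarteredIn", "ex:Springfield"),
    ("ex:AcmeCorp", "ex:hasCEO", "ex:JaneDoe"),
    ("ex:JaneDoe", "ex:name", "Jane Doe"),
]

def objects_of(triples, subject, predicate):
    """Return all objects asserted for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Trivial for an agent to answer; extracting it from the sentence above is not.
print(objects_of(triples, "ex:AcmeCorp", "ex:hasCEO"))  # ['ex:JaneDoe']
```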

Rob, I knew that if I put out enough buzzword bait you’d show up. 🙂

I agree that the inputs available today don’t seem to be at a high enough level to be useful to inference engines, and that it’s hard to see where the labor will come from to change that, at least outside the enterprise. Even within the enterprise, where there’s the lure of monetizing the return, it’s not clear that the expected return justifies the investment.

Perhaps what we need are more structured environments for *creating* information. The example that comes to mind is LinkedIn. They didn’t rely on information extraction techniques to parse resumes; instead, they persuaded users to enter this data in structured form. I wonder how much we can generalize from that example.
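
Purely as an illustration of that design choice (the fields and values below are made up, not LinkedIn’s actual schema): once users enter the data in structured form, there is no extraction step at all; the fielded data is directly queryable.

```python
# Made-up example of a profile entered in structured form; not a real schema.
profile = {
    "name": "Jane Doe",
    "headline": "Software Engineer",
    "positions": [
        {"company": "Example Corp", "title": "Engineer", "start": 2005, "end": 2008},
        {"company": "Another Co", "title": "Senior Engineer", "start": 2008, "end": None},
    ],
}

# No resume parsing needed: the current employer is a one-line lookup.
current = [p["company"] for p in profile["positions"] if p["end"] is None]
print(current)  # ['Another Co']
```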

Daniel-

“The example that comes to mind is LinkedIn. They didn’t rely on information extraction techniques to parse resumes; instead, they persuaded users to enter this data in structured form. I wonder how much we can generalize from that example.”

Freebase is another attempt at this, and one with a more ambitious endgame, but it’s hard to know what the tipping point is in terms of the volume of information contained in its database. At some point it’s probably valuable enough to encourage people to build tools that take advantage of the data, which in turn might push other sites to use something like RDFa so that mashups are simpler. But I’m a bit skeptical, since Wikipedia really seems “good enough” and it’s hard to imagine the larger masses moving their info into Freebase instead.

Also, regarding LinkedIn: its data is the core of its value, so they are unlikely to open up the kind of interface that an agent would be able to readily take advantage of! LinkedIn did release an API for app developers, much as Facebook did, but not a generalizable query API of the kind that agents would need.

I’ve commented about Freebase here before (https://thenoisychannel.com/?s=freebase) and am highly skeptical despite my excitement about David Huynh’s interface work there. As you point out, there needs to be an incentive to assemble and publish usefully structured data.

And yes, I can see why LinkedIn isn’t eager to open its data to the world and lose its key asset, though I think it’s only a matter of time before much of that data becomes public.
