Back when I was an undergraduate (yes, a long long time ago), there was a lot of excitement about software agents, also called intelligent agents. The general idea was that a software agent would be able to pursue goal-directed behavior on a person’s behalf. Of course, what that meant ran the gamut from the mundane (e.g., autodialers) to science fiction (e.g., Brainiac in the Superman comics).
With the increasing role that the web plays in our interactions, I wonder about the role of software agents on the web. We already see comment spammers and prankster instant messaging bots, as well as more benign shopbots.
But a question that plagues me is how to reconcile the inherent rationality of software agents with the systematic irrationality of the human beings they represent. Herb Simon argued that humans exercise bounded rationality, but the research from prospect theory suggests that the situation is even worse: not only are we bounded by our limited mental resources, but we don’t even make the most rational use of the resources we have.
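To make that "systematic irrationality" concrete, here is a minimal sketch of the prospect-theory value function in Python. The functional form is Tversky and Kahneman's; the parameter values are their commonly cited 1992 estimates, used here purely for illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point.

    Gains are valued concavely (risk-averse), losses convexly and more
    steeply (loss-averse) -- the asymmetry at the heart of prospect theory.
    """
    if x >= 0:
        return x ** alpha           # gains: diminishing sensitivity
    return -lam * ((-x) ** beta)    # losses: loom larger than equal gains

# A $100 loss hurts more than a $100 gain pleases:
gain = prospect_value(100)    # roughly 57.5
loss = prospect_value(-100)   # roughly -129.5
```

The point for agent design: an agent maximizing expected monetary value would treat the gain and the loss symmetrically, while its human principal would not.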
So, if software agents start making decisions on our behalf, I wonder how happy we’ll be with those decisions. Will software agents have to simulate our deviations from rationality? Or will we have to learn to be more rational?
Finally, I should note that machine agents are not restricted to the web or even to software. Just pick up the New York Times, and you can read about attempts to make Terminators a reality. Those efforts raise concerns not only about rationality, but about ethics and accountability.