An Open Letter to the USPTO

Following the Supreme Court’s decision in Bilski v. Kappos, the United States Patent and Trademark Office (USPTO) plans to release new guidance as to which patent applications will be accepted, and which will not. As part of this process, they are seeking input from the public about how that guidance should be structured. The following is an open letter that I have sent to the USPTO at Bilski_Guidance@uspto.gov. More information is available at http://en.swpat.org/wiki/USPTO_2010_consultation_-_deadline_27_sept and http://www.fsf.org/news/uspto-bilski-guidance. As with all of my posts, the following represents my personal opinion and is not the opinion or policy of my employer.

To whom it may concern at the United States Patent Office:

Since completing my undergraduate studies in mathematics and computer science at the Massachusetts Institute of Technology (MIT) and my doctorate in computer science at Carnegie Mellon University (CMU), I have spent my entire professional life in software research and development. I have worked at large software companies, such as IBM, AT&T, and Google, and I also was a founding employee at Endeca, an enterprise software company where I served as Chief Scientist. I am a named inventor on eight United States patents, as well as on eighteen pending United States patent applications. I played an active role in drafting and prosecuting most of these patents. I have also been involved in defensive patent litigation, which in one case resulted in the re-examination of a patent and a final rejection of most of its claims.

As such, I believe my experience gives me a balanced perspective on the pros and cons of software patents.

As someone who has developed innovative technology, I appreciate the desire of innovators to reap the benefits of their investments. As a founding employee of a venture-backed startup, I understand how venture capitalists and other investors value companies whose innovations are hard to copy. And I recognize how, in theory, software patents address both of these concerns.

But I have also seen how, in practice, software patents are at best a nuisance and innovation tax and at worst a threat to the survival of early-stage companies. In particular, I have witnessed the proliferation of software patents of dubious validity that has given rise to a “vulture capitalist” industry of non-practicing entities (NPEs), colloquially known as patent trolls, who aggressively enforce these patents in order to obtain extortionary settlements. Meanwhile, the software companies where I have worked follow a practice of accumulating patent portfolios primarily in order to use them as deterrents against infringement suits by companies that follow the same strategy.

My experience leads me to conclude that the only beneficiaries of the current regime are patent attorneys and NPEs. All other parties would benefit if software were excluded from patent eligibility. In particular, I don’t believe that software patents achieve either of the two outcomes intended by the patent system: incenting inventors to disclose (i.e., teach) trade secrets, and encouraging investment in innovation.

First, let us consider the incentive to disclose trade secrets. In my experience, software patents fall into two categories. The first category focuses on interfaces or processes, avoiding narrowing the scope to any non-obvious system implementation details. Perhaps the most famous example of a patent in this category is Amazon’s “one-click” patent. The second category focuses on algorithm or infrastructure innovations that are typically implemented inside proprietary closed-source software. An example in this category is the patent on latent semantic indexing, an algorithmic approach used in search and data mining applications. For the first category, patents are hardly necessary to incent disclosure, as the invention must be disclosed to realize its value. Disclosure is meaningful for patents in the second category, but in my experience most companies do not file such patents because they are difficult to enforce. Without access to a company’s proprietary source code, it is difficult to prove that the code infringes a patent. For this reason, software companies typically focus on the first category of patents, rather than the second. And, as noted, this category of innovation requires no incentive for disclosure.

Second, let us ask whether software patents encourage investment in innovation. Specifically, do patents influence decisions by companies, individual entrepreneurs, or investors to invest time, effort, or money in innovation?

My experience suggests that they do not. Companies and entrepreneurs innovate in order to further their business goals and then file patents as an afterthought. Investors expect companies to file patents, but only because everyone else is doing it, and thus patents offer a limited deterrent value as cited above. In fact, venture capitalists investing in software companies are some of the strongest voices in favor of abolishing software patents. Here are some examples:

Chris Dixon, co-founder of software companies SiteAdvisor and Hunch and of seed-stage venture capital fund Founder Collective, says:

Perhaps patents are necessary in the pharmaceutical industry. I know very little about that industry but it would seem that some sort of temporary grants of monopoly are necessary to compel companies to spend billions of dollars of upfront R&D.

What I do know about is the software/internet/hardware industry. And I am absolutely sure that if we got rid of patents tomorrow innovation wouldn’t be reduced at all, and the only losers would be lawyers and patent trolls.

Ask any experienced software/internet/hardware entrepreneur if she wouldn’t have started her company if patent law didn’t exist. Ask any experienced venture investor if the non-existence of patent law would have changed their views on investments they made. The answer will invariably be no (unless their company was a patent troll or something related).

http://cdixon.org/2009/09/24/software-patents-should-be-abolished/

Brad Feld, co-founder of early-stage venture capital firms Foundry Group and Mobius Venture Capital, as well as of TechStars, says:

I personally think software patents are an abomination. My simple suggestion on the panel was to simply abolish them entirely. There was a lot of discussion around patent reform and whether we should consider having different patent rules for different industries. We all agreed this was impossible – it was already hard enough to manage a single standard in the US – even if we could get all the various lobbyists to shut up for a while and let the government figure out a set of rules. However, everyone agreed that the fundamental notion of a patent – that the invention needed to be novel and non-obvious – was at the root of the problem in software.

I’ve skimmed hundreds of software patents in the last decade (and have read a number of them in detail.) I’ve been involved in four patent lawsuits and a number of “threats” by other parties. I’ve had many patents granted to companies I’ve been an investor in. I’ve been involved in patent discussions in every M&A transaction I’ve ever been involved in. I’ve spent more time than I care to on conference calls with lawyers talking about patent issues. I’ve always wanted to take a shower after I finished thinking about, discussing, or deciding how to deal with something with regard to a software patent.

I’ll pause for a second, take a deep breath, and remind you that I’m only talking about software patents. I don’t feel qualified to talk about non-software patents. However, when you consider the thought that a patent has to be both novel AND non-obvious (e.g. “the claimed subject matter cannot be obvious to someone else skilled in the technical field of invention”), 99% of all software patents should be denied immediately. I’ve been in several situations where either I or my business partner at the time (Dave Jilk) had created prior art a decade earlier that – if the patent that I was defending against ever went anywhere – would have been used to invalidate the patent.

http://www.feld.com/wp/archives/2006/04/abolish-software-patents.html

Fred Wilson, managing partner of venture-capital firm Union Square Ventures:

Even the average reader of the Harvard Business Review has a gut appreciation for the fundamental unfairness of software patents. Software is not the same as a drug compound. It is not a variable speed windshield wiper. It does not cost millions of dollars to develop or require an expensive approval process to get into the market. When it is patented, the “invention” is abstracted in the hope of covering the largest possible swath of the market. When software patents are prosecuted, it is very often against young companies that independently invented their technology with no prior knowledge of the patent.

http://www.unionsquareventures.com/2010/02/software-patents-are-the-problem-not-the-answer.php

In summary, software patents act as an innovation tax rather than a catalyst for innovation. Perhaps it is possible to resolve the problems of software patents through aggressive reform. But it would be better to abolish software patents than to maintain the status quo.

Sincerely,

Daniel Tunkelang

Search at the Speed of Thought

A guiding principle in information technology has been to enable people to perform tasks at the “speed of thought”. The goal is not just to make people more efficient in their use of technology, but to remove the delays and distractions that make them focus on the technology rather than on the tasks themselves.

For example, the principal motivation for the faceted search work I did at Endeca was to eliminate hurdles that discourage people from exploring information spaces. Most sites already offered users the ability to perform this exploration through advanced or parametric search interfaces–indeed, I recall some critics of faceted search objecting that it was nothing new. But there’s a reason that most of today’s consumer-facing sites place faceted search front and center while still relegating advanced search interfaces to an obscure page for power users. Faceted search offers the fluidity and instant feedback that make exploration natural. Once you’re used to it, it’s hard to live without it, whether you’re looking for real estate (compare Zillow.com to housing search on craigslist), library books (compare the Triangle Research Libraries Network to the Library of Congress), or art (compare art.com to artnet).

Why is faceted search such a significant improvement over advanced or parametric search interfaces? Because it supports exploration at the speed of thought. If it takes you several seconds–rather than a single click–to refine a query, and if you have to repeatedly back off from pages with no results (aka dead ends), your motivation to explore a document collection fades quickly. But when that experience is fluid, you explore without even thinking about it. That is the promise (admittedly not always fulfilled) of faceted search.
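To make the dead-end point concrete, here is a minimal sketch in Python–toy data of my own invention, certainly not Endeca’s implementation–of the core faceted-search loop: each refinement is a single click, and each facet value is displayed alongside its result count, so a refinement that would lead to an empty result set is simply never offered.

```python
from collections import Counter

# Toy document collection: each document is tagged with facet values.
docs = [
    {"title": "Loft in SoHo", "borough": "Manhattan", "bedrooms": 1},
    {"title": "Brownstone", "borough": "Brooklyn", "bedrooms": 3},
    {"title": "Studio sublet", "borough": "Manhattan", "bedrooms": 0},
    {"title": "Duplex", "borough": "Brooklyn", "bedrooms": 1},
]

def facet_counts(results, facet):
    """Counts shown next to each facet value. Values with zero matches
    never appear, so no click can lead to an empty result page."""
    return Counter(d[facet] for d in results)

def refine(results, facet, value):
    """Narrow the current result set by one facet value (a single click)."""
    return [d for d in results if d[facet] == value]

results = docs
print(facet_counts(results, "borough"))   # e.g. Manhattan: 2, Brooklyn: 2
results = refine(results, "borough", "Brooklyn")
print(facet_counts(results, "bedrooms"))  # counts within the refined set
```

The essential contrast with an advanced-search form is that the counts are computed over the *current* result set before the user commits to a refinement, rather than after submitting a query that may come back empty.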

Microsoft Live Labs director Gary Flake offered a similar message in his SIGIR 2010 keynote. He argued that we need to replace our current discrete interactions with search engines with a mode of continuous, fluid interaction in which the whole of the data is greater than the sum of its parts. While he offered Microsoft’s Pivot client as an example of this vision, he could also have invoked the title of a book that Bill Gates wrote in 1999: Business @ the Speed of Thought. Indeed, anyone who has ever worked on data analysis understands that you ask fewer questions when you know you’ll have to wait for answers. Speed changes the way you interact with information.

And at Google, speed has been an obsession since day one. It makes the top 3 on the “Ten things we know to be true” list:

3. Fast is better than slow.

We know your time is valuable, so when you’re seeking an answer on the web you want it right away – and we aim to please. We may be the only people in the world who can say our goal is to have people leave our website as quickly as possible. By shaving excess bits and bytes from our pages and increasing the efficiency of our serving environment, we’ve broken our own speed records many times over, so that the average response time on a search result is a fraction of a second. We keep speed in mind with each new product we release, whether it’s a mobile application or Google Chrome, a browser designed to be fast enough for the modern web. And we continue to work on making it all go even faster.

People have made much of Google VP Marissa Mayer’s estimate that Google Instant will save 350 million hours of users’ time per year by shaving two to five seconds per search. That’s an impressive number, but I personally think it understates the impact of this interface change. Rather, I’m inclined to focus on a phrase I’ve seen repeatedly associated with Google Instant: “search at the speed of thought”.
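As a back-of-the-envelope sanity check (my own arithmetic, assuming the midpoint of the two-to-five-second range), the 350-million-hour figure is consistent with Google serving on the order of a billion searches a day:

```python
# Sanity-check the estimate: 350 million hours saved per year,
# at an assumed average of 3.5 seconds saved per search.
seconds_saved_per_year = 350e6 * 3600
seconds_saved_per_search = 3.5  # midpoint of the 2-5 second estimate
searches_per_year = seconds_saved_per_year / seconds_saved_per_search
searches_per_day = searches_per_year / 365

print(f"{searches_per_day / 1e9:.1f} billion searches per day")  # ~1.0
```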

What does that mean in practice? I see two major wins from Google Instant:

1) Typing speed and spelling accuracy don’t get in the way. For example, by the time you’ve typed [m n], you see results for M. Night Shyamalan, a name whose length and spelling might frustrate even his fans. A search for [marc z] offers results for Facebook CEO Mark Zuckerberg. Admittedly, the pre-Instant type-ahead suggestions already got us most of the way there, but the feedback of actual results offers not just guidance but certainty.

2) Users spend less time–and hopefully no time–in a limbo where they don’t know if the system has understood the information-seeking intent they have expressed as a query. For example, if I’m interested in learning more about the Bob Dylan song “Forever Young”, I might enter [forever young] as a search query–indeed, the suggestion shows up as soon as I’ve typed in “fore”. But a glance at the first few instant results for [forever young] makes it clear that there are lots of songs by this title (including those by Rod Stewart and Alphaville–as well as the recent Jay Z song “Young Forever” that reworks the latter). Realizing that my query is ambiguous, I type the single letter “d” and instantly see results for the Dylan song. Yes, I could have backed out from an unsuccessful query and then tried again, but instant feedback means far less frustration.
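To illustrate the mechanics of both wins, here is a minimal sketch–toy data and invented names, emphatically not how Google implements Instant–of serving results on every keystroke by prefix lookup against a sorted index of queries with precomputed top results:

```python
import bisect

# Tiny sorted index of (query, top result) pairs. All entries here are
# illustrative stand-ins, not real index data.
index = sorted([
    ("forever young alphaville", "Alphaville - Forever Young"),
    ("forever young dylan", "Bob Dylan - Forever Young"),
    ("forever young rod stewart", "Rod Stewart - Forever Young"),
    ("m night shyamalan", "M. Night Shyamalan"),
    ("mark zuckerberg", "Mark Zuckerberg"),
])
keys = [q for q, _ in index]

def instant_results(prefix, limit=3):
    """Results for every indexed query starting with the prefix.
    Binary search keeps each keystroke's lookup at O(log n)."""
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_right(keys, prefix + "\uffff")
    return [result for _, result in index[lo:hi]][:limit]

print(instant_results("forever young"))    # all three songs: ambiguous query
print(instant_results("forever young d"))  # one more keystroke disambiguates
```

The point of the sketch is the disambiguation loop: each keystroke immediately narrows the candidate set, so the user sees the moment a query becomes unambiguous instead of discovering it after submitting.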

Google Instant also makes it a little easier for users to explore the space of queries related to their information need, but exploration through instant suggestions is very limited compared to using related searches or the wonder wheel–let alone what we might be able to do with faceted web search. I’d love to see this sort of exploration become more fluid, but I recognize the imperative to maintain the simplicity of the search box. Good for us HCIR folks to know that there’s still lots of work to do on search interface innovation!

But, in short, speed matters. Instant communication has transformed the way we interact with one another–both personally and professionally. Instant search is more subtle, but I think it will transform the way we interact with information on the web. I am very proud of my colleagues’ collective effort to make it possible.

New Web Site for HCIR Workshop

In 2007, I persuaded MIT graduate students Michael Bernstein and Robin Stewart (who was interning at Endeca that summer) to help organize the first Workshop on Human-Computer Interaction and Information Retrieval (HCIR 2007), which we held at MIT and Endeca. Its success convinced us to keep going, and we enjoyed record attendance at this year’s HCIR 2010, held at Rutgers University.

As the workshop has grown, we as organizers have realized that we need to invest a little in its online presence. A first step in that direction is a new site for the workshop: http://hcir.info/. The site contains all of the proceedings from the four annual workshops in one place. It is powered by Google Sites, which will make it easy for a bunch of us (and perhaps some of you) to collaboratively maintain it.

I hope everyone here finds the new site useful. Please feel free to come forward with ideas for improving it! But be warned–if you have a great idea, I might ask you to implement it yourself.

David Petrou Presents Google Goggles at NY Tech Meetup

Image recognition is one of those problems that has presented long-standing challenges to computer scientists, despite being taken for granted by science fiction writers. Google Goggles represents one of the most audacious efforts to implement image recognition on a massive scale.

Tonight, I had the pleasure of watching my colleague, David Petrou, present a live demo of Goggles to about 800 people who filled the NYU Skirball Center to attend the NY Tech Meetup. Many thanks to Nate Westheimer and Brandon Diamond for giving Google the opportunity to present this cool technology to a very engaged audience and in particular to show off some of the technology that Googlers are building here in New York City.

You can’t see the live demo in the slides, so I encourage you to view a recording of the presentation here.

Also, if you’re in the New York area and interested in hearing about upcoming Google NYC events, please sign up at http://bit.ly/googlenycevents.

Slouching Towards Creepiness

One of the perks of blogging is that publishers sometimes send me review copies of new books. I couldn’t help but be curious about a book entitled “The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships”–especially when principal author Clifford Nass is the director of the Communications between Humans and Interactive Media (CHIMe) Lab at Stanford. He wrote the book with Corina Yen, the editor-in-chief of Ambidextrous, Stanford’s journal of design.

They start the book by reviewing evidence that people treat computers as social actors. Nass writes:

to make a discovery, I would find any conclusion by a social science researcher and change the sentence “People will do X when interacting with other people” to “People will do X when interacting with a computer”

They then apply this principle by using computers as confederates in social science experiments and generalizing conclusions about human-computer interaction to human-human interaction. It’s an interesting approach, and they present results about how people respond to praise and criticism, similar/opposite personalities, etc. You can get a taste of Nass’s writing from an article he published in the Wall Street Journal entitled “Sweet Talking Your Computer”.

The book is interesting and entertaining, and I won’t try to summarize all of its findings here. Rather, I’d like to explore its implications.

Applying the “computers are social actors” principle, they cite a variety of computer-aided experiments that explore people’s social behaviors. For example, they cite a Stanford study on how “Facial Similarity Between Voters and Candidates Causes Influence”, in which secretly morphing a photo of a candidate’s face to resemble the voter’s face induces a significantly positive effect on the voter’s preference. They also cite another experiment on similarity attraction that varies a computer’s “personality” to be either similar or opposite to that of the experimental subject. A similar personality draws a more positive response than an opposite one, but the most positive response comes when the computer starts off with an opposite personality and then adapts to conform to the personality of the subject. Imitation is flattery, and–as yet another of their studies shows–flattery works.

It’s hard for me to read results like these and not see creepy implications for personalized user interfaces. When I think about the upside of personalization, I envision a happy world where we see improvement in both effectiveness and user satisfaction. But clearly there’s a dark side where personalization takes advantage of knowledge about users to manipulate their emotional response. While such manipulation may not be in the users’ best interests, it may leave them feeling more satisfied. Where do we draw the line between user satisfaction and manipulation?

I’m not aware of anyone using personalization this way, but I think it’s a matter of time before we see people try. It’s not hard to learn about users’ personalities (especially when so many like taking quizzes!), and apparently it’s easy to vary the personality traits that machines project in generated text, audio, and video. How long will it be before people put these together? Perhaps we are already there.

O brave new world that has such people and machines in it. Shakespeare had no idea.