I’m a karaoke junkie and proud to admit it. But one of the challenges I regularly face, especially when I go to an unfamiliar karaoke joint, is finding a song I know well enough to sing. I’m sure I’m not the only person who encounters this micro-IR problem, and it occurred to me that there might be better technical solutions to it.
Most karaoke venues provide printed song books, typically sorted by title and by artist. This approach is certainly adequate for very limited selections, but it doesn’t scale gracefully. Indeed, one of my favorite karaoke bars, the Courtside in Cambridge, MA, has a fantastic song selection that is only accessible through printed books. Kinda frustrating for a search guy, even though the staff is very helpful!
My regular karaoke venue in New York, Second on Second, is a bit more technologically advanced: it provides computers with dedicated software that allows patrons to search through their song catalog. Aside from being faster than thumbing through books, the software makes it possible to find songs when you only remember words that are in the middle of song or artist names.
But even such a system only addresses known-item search–in this case, looking for a song or artist by name when you know precisely what you are looking for. There’s room for incremental improvement here, e.g., searching for songs based on the lyrics you remember. For example, many people remember a famous David Bowie song by its protagonist “Major Tom” rather than its title “Space Oddity”; fortunately, tools like Google’s music search are happy to make such connections.
But none of the karaoke search technology I’ve seen to date supports exploration. Specifically, I’d love to go into a karaoke bar and have a procedure for finding songs I know that is better than trial and error. For example, I’d like to be able to see my options for hard rock 80s songs with male vocals. Or to find out which downtempo bands, if any, are on the menu. A little faceted search would go a long way towards making the song-finding experience more pleasant and efficient.
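To make the idea concrete, here’s a minimal sketch of faceted search over a karaoke catalog. The catalog records, facet names, and values below are all hypothetical, just enough to show refinement and facet counts:

```python
# Minimal sketch of faceted search over a (hypothetical) karaoke catalog.
CATALOG = [
    {"title": "Rock You Like a Hurricane", "artist": "Scorpions",
     "genre": "hard rock", "decade": "80s", "vocals": "male"},
    {"title": "Space Oddity", "artist": "David Bowie",
     "genre": "rock", "decade": "60s", "vocals": "male"},
    {"title": "Pour Some Sugar on Me", "artist": "Def Leppard",
     "genre": "hard rock", "decade": "80s", "vocals": "male"},
]

def facet_counts(songs, facet):
    """Count how many songs fall under each value of a facet."""
    counts = {}
    for song in songs:
        counts[song[facet]] = counts.get(song[facet], 0) + 1
    return counts

def refine(songs, **filters):
    """Narrow the result set by one or more facet selections."""
    return [s for s in songs
            if all(s.get(k) == v for k, v in filters.items())]

# "Hard rock 80s songs with male vocals":
hits = refine(CATALOG, genre="hard rock", decade="80s", vocals="male")
```

A real system would also show the remaining facet counts after each refinement, so patrons can see which narrowing moves are still available instead of hitting dead ends.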
But why stop there? I’d really like a system that suggests songs based on what it knows about me. For example, knowing that I like to sing Scorpions songs is a reasonable basis to suggest similar artists like Def Leppard and Guns N’ Roses. Or perhaps to suggest 80s songs in general–after all, karaoke roulette notwithstanding, most people sing songs they know (or at least think they know), and their song knowledge tends to have some temporal locality. I’m sure you can imagine far more sophisticated personalization–and such personalization could be accomplished with complete transparency to the user.
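A crude version of that similar-artist suggestion can be sketched as tag overlap: rank other artists by how many style tags they share with one the patron likes. The artist tags below are illustrative, not drawn from any real dataset:

```python
# Sketch: suggest artists similar to one a patron likes,
# using Jaccard overlap of (hypothetical) style tags.
ARTIST_TAGS = {
    "Scorpions":     {"hard rock", "80s", "metal"},
    "Def Leppard":   {"hard rock", "80s", "glam"},
    "Guns N' Roses": {"hard rock", "80s"},
    "Norah Jones":   {"jazz", "downtempo", "00s"},
}

def jaccard(a, b):
    """Similarity of two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def suggest(liked_artist, k=2):
    """Rank the other artists by tag similarity to the liked one."""
    liked = ARTIST_TAGS[liked_artist]
    others = [(name, jaccard(liked, tags))
              for name, tags in ARTIST_TAGS.items()
              if name != liked_artist]
    return [name for name, _ in sorted(others, key=lambda x: -x[1])[:k]]
```

In practice the “tags” would come from the patron’s singing history plus catalog metadata, which is also what makes the transparency point tractable: the system can show exactly which shared tags drove each suggestion.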
Even if you aren’t into karaoke (and yet have managed to read this far!), I hope you can appreciate the universality of the information needs I’m describing. Exploratory search is everywhere. But I think it’s easiest to demonstrate its practical importance by working through concrete use cases. As an HCIR advocate, I’ve repeatedly learned the lesson that such demonstrations are critical in order to successfully evangelize this worldview.
7 replies on “Karaoke: A Hotbed for Micro-IR?”
Daniel, see this paper:
And the accompanying demo:
And this paper:
And the accompanying demo:
And that’s just from the last year or two. Lots more from the ISMIR conference, stretching all the way back to October 2000.
Music IR is a great domain for HCIR/exploration.
See the PDF: pampalk_ismir07_music_sun.pdf
Note also the focus on “actionable” transparency 😉
Jeremy, thanks for the links! And my bad for not checking out the discussion from my earlier post on exploratory music search.
Still, I’d love to see this stuff make it to actual karaoke bars. Micro-IR applications fall flat if they aren’t coupled to the context they’re supposed to support! I suppose I’ll just have to accept that karaoke bars (at least in the United States) aren’t the earliest technology adopters.
As Daniel mentions, successful micro-IR marshals the user’s immediate context to render complicated information needs tractable. It strikes me that a key problem here is the demand that people step out of the task at hand when they search for a song. The retrieval system loses rich information by requiring this (what songs have you sung before? how prepared are you to take a risk? how drunk are you?). Obviously my lack of karaoke knowledge is showing.
Personalization is a clear response to this problem. But personalizing based on what?
It’s interesting to consider features that might be predictive of a song’s appeal. Does it matter what time it is? How long you’ve been at the bar? What others in your party have sung tonight?
An important part of designing useful micro-IR is identifying useful features. But as Jeremy notes, feature selection can’t come at the expense of actionable transparency.
Actually, it’s worst for karaoke novices. Consider the poor soul who finally gets up the nerve to sing, only to then face the frustration of finding a song he or she actually knows–with only known-item search as a tool.
Perhaps the right answer to this problem is to make distribution of karaoke content so ubiquitous that patrons can assume an abundant song selection–ideally all songs that have been arranged for karaoke. Then, at least, the exploratory search tools wouldn’t have to be so venue specific.
One of the very earliest Music IR problems taken up by the MIR community was “query by humming”. It was a very popular research topic from 1999 to around 2003 or 2004.
I won’t go into all the details of all the work that has been done. Rather, I’ll simply say that one possibility for finding the right karaoke song would be to have the user attempt to hum a song or two that he/she likes. If the system manages to find the song, based on the humming, then the song exists and the user can use it. Problem solved.
The only catch here is that most people, esp. when drunk, probably don’t hit the right pitches that well. But that’s a research challenge, doing the fuzzy matching.
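One classic way to do that fuzzy matching is to reduce both the hummed query and the indexed songs to a pitch contour (e.g., Parsons code: U/D/R for up/down/repeat relative to the previous note) and compare contours by edit distance, so wrong pitches hurt less as long as the melodic shape survives. A minimal sketch, with a made-up two-song index:

```python
def contour(pitches):
    """Reduce a pitch sequence to Parsons code: U(p), D(own), R(epeat)."""
    out = []
    for prev, cur in zip(pitches, pitches[1:]):
        out.append("U" if cur > prev else "D" if cur < prev else "R")
    return "".join(out)

def edit_distance(a, b):
    """Levenshtein distance between two contour strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

# Hypothetical index mapping song titles to their opening contours.
INDEX = {
    "Happy Birthday": "RUDUD",
    "Twinkle Twinkle": "RURUR",
}

def match(hummed_pitches):
    """Return the indexed song whose contour is closest to the humming."""
    q = contour(hummed_pitches)
    return min(INDEX, key=lambda s: edit_distance(q, INDEX[s]))
```

Since the contour only keeps the direction of each interval, a drunk singer who is consistently flat (or sharp) still produces the right query; it’s the singers who scramble the melodic shape that remain a research challenge.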