Every day, some 30 people in the U.S. die in car crashes that involve an alcohol-impaired driver. That amounts to one death every 51 minutes, reports the Centers for Disease Control and Prevention. While drunk driving is hardly a new problem, in recent years researchers have been creating innovative ways to prevent someone from driving drunk (or at least make it a lot harder to do) – from smartphone breathalyzers to breathalyzer-controlled ignition systems.
One day, perhaps even voice-activated software will control whether or not you can get your car to start: In this scenario, your car would ask you a question and if your speech is slurred and you “sound” intoxicated, you’d need to call a cab or find a designated driver to get home.
The first steps for such a ground-breaking device are in the works now, thanks to inventors at the Bavarian Archive for Speech Signals at Ludwig Maximilian University of Munich and the Institute of Legal Medicine in Munich, Germany. Between 2007 and 2009, the researchers recorded the “drunk” speech patterns (stammering, stuttering, slurring and high-pitched tones, for instance) of 162 German men and women. Participants were placed in the passenger seat of a car while talking and experiencing varying degrees of drunkenness – from slightly buzzed to absolutely smashed. The result was the Alcohol Language Corpus (ALC), the first publicly available audio library of drunk (and sober) speech.
While turning this archive into a sophisticated system that might control whether your car is operable or not is quite a while off, work is being done to pave the way for practical uses of the ALC. In 2011, Andrew Rosenberg, PhD, an assistant professor of computer science at Queens College in New York City, co-authored a paper based on the results of a challenge at a major annual international speech science and technology conference that used the ALC. Part of the challenge for the 2011 conference participants was to figure out a way to use the archive’s data to separate sober speakers from those who were alcohol-impaired.
Rosenberg, along with a team of researchers from Columbia University, tested the speech just as they would various accents. They created an algorithm to detect drunkenness from speech patterns and discovered that it was right only about 75% of the time – meaning roughly one time in four it could misjudge you, flagging you as drunk when you’re sober or waving you through when you’re not. “The task is challenging,” Rosenberg acknowledges. “The difference between speakers – both in how their speech is changed by intoxication as well as their sober voices – makes it difficult to distinguish perfectly from one speaker to the next. It’s certainly better than chance accuracy, but still far from perfection.”
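The kind of pattern-matching involved can be sketched with a toy example. The sketch below is not the Columbia team’s actual method – the features (mean pitch and speaking rate) and all numeric values are invented for illustration, and real challenge systems extracted hundreds of acoustic features from recordings like those in the ALC. It simply shows the core idea: learn what “sober” and “intoxicated” speech measurements tend to look like, then assign a new sample to whichever profile it most resembles.

```python
from collections import defaultdict

def nearest_centroid_classify(train, labels, sample):
    """Assign `sample` to the label whose average feature profile is closest.

    Each training item is a (pitch_hz, syllables_per_sec) pair; `sample`
    is one such pair to classify. Distance is squared Euclidean.
    """
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (pitch, rate), label in zip(train, labels):
        sums[label][0] += pitch
        sums[label][1] += rate
        counts[label] += 1

    best_label, best_dist = None, float("inf")
    for label in sums:
        # Centroid (average) of this label's training examples.
        cx = sums[label][0] / counts[label]
        cy = sums[label][1] / counts[label]
        d = (sample[0] - cx) ** 2 + (sample[1] - cy) ** 2
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Synthetic training data (values invented): intoxicated speakers are
# modeled here as higher-pitched and slower-speaking, echoing the
# "high-pitched tones" and slurring noted in the ALC recordings.
train = [(120, 4.8), (125, 5.0), (118, 4.9),   # sober speakers
         (145, 3.6), (150, 3.4), (148, 3.8)]   # intoxicated speakers
labels = ["sober"] * 3 + ["intoxicated"] * 3

print(nearest_centroid_classify(train, labels, (122, 4.7)))  # sober
print(nearest_centroid_classify(train, labels, (147, 3.5)))  # intoxicated
```

In practice, as Rosenberg notes, the hard part is that speakers differ from one another as much as sober speech differs from drunk speech – which is why even far richer feature sets topped out around 75% accuracy.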
Other unresolved hurdles include privacy issues. After all, if the ALC-powered system were added to your car’s computer, your Prius, Honda or Hyundai would essentially be spying on you – and then controlling whether you could go anywhere. Unsettling as that is to consider, it should probably still pale in comparison to the possible repercussions of driving drunk.