DEARBORN, Mich. | Ford made in-car voice activation a reality for millions of drivers with SYNC, first introduced in 2007. Now, Ford engineers, working with voice technology pioneer Nuance Communications, plan to once again raise the bar with the next generation of SYNC, a system that can understand 100 times more commands than the original, delivering a more conversational experience between car and driver.
The voice upgrades will arrive with the next generation of SYNC, which powers the new driver connect technology, MyFord Touch, launching this year on the new 2011 Ford Edge. The system will make it easier for drivers to use voice control and get what they want more quickly by speaking more natural phrases.
“Ford is committed to making voice recognition the primary user interface inside of the car because it allows drivers to keep their eyes on the road and hands on the wheel,” said Jim Buczkowski, director of Ford electronics and electrical systems engineering. “The improvements we’ve made will make it easier for drivers to use and interact with it, even those customers that have never used voice recognition before.”
At the heart of SYNC is the speech engine, and Ford is working with speech technology leader Nuance to create and integrate a vast library of possible driver requests. This library will enable the SYNC speech engine to listen for and respond to more voice commands directly, recognize different words that mean the same thing (aliases), and integrate a vast number of point-of-interest (POI) names and business types into its navigation system.
“With this latest generation of SYNC, users can control the system without having to learn nearly as many commands or navigate as many menus,” said Brigitte Richardson, Ford global voice control technology and speech systems lead engineer. “As we’ve gained processing power and learned more about how drivers use the system, we’ve been able to refine the interface. Customers can do more and say more from the top-level menu, helping them accomplish their tasks more quickly and efficiently.”
Improvements to SYNC in MyFord Touch-equipped vehicles include:
More direct, first-level commands
“Call John Smith” dials the phone number associated with John in a connected phone’s phonebook directly - the user isn’t required to say “Phone” first.
Direct commands related to destinations, like “Find a shoe store” or “Find a hotel,” place users in the navigation system menu where they will be walked through the POI search process.
The command, “Add a phone,” will enter the phone pairing menu and walk users through the connection process - users don’t have to enter a phone submenu to initiate the pairing process.
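The routing idea behind first-level commands can be sketched in a few lines. This is an illustrative toy dispatcher only, not Ford's or Nuance's implementation; the handler names (`phone.dial`, `nav.poi_search`, `phone.pairing_wizard`) are hypothetical.

```python
import re

# Toy dispatcher: each spoken phrase maps straight to a feature handler,
# with no "Phone" or "Navigation" prefix required first.
# Handler names below are invented for illustration.
ROUTES = [
    (re.compile(r"^call (?P<name>.+)$", re.I), "phone.dial"),
    (re.compile(r"^find (?:a |an )?(?P<poi>.+)$", re.I), "nav.poi_search"),
    (re.compile(r"^add a phone$", re.I), "phone.pairing_wizard"),
]

def route(utterance: str):
    """Return (handler, captured arguments) for a command, or None."""
    for pattern, handler in ROUTES:
        m = pattern.match(utterance.strip())
        if m:
            return handler, m.groupdict()
    return None

# route("Call John Smith") -> ("phone.dial", {"name": "John Smith"})
```

The point of the sketch is the flat command space: every phrase is matched at the top level, so the user never has to announce a submenu before speaking the request.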
Quicker, easier entry and search
Navigation entries can be spoken as a single one-shot command; for example, “One American Road, Dearborn,” instead of requiring individual city, street and building number entries.
Brand names are recognized by the navigation POI menu, allowing drivers to look for chain restaurants, shoe stores, department stores and more, as well as regional and local favorites.
Direct tuning of radio stations by simply saying “AM 1270” or “FM 101.1,” or using SIRIUS station names or numbers such as “21” or “Alt-Nation”.
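The one-shot destination entry described above amounts to splitting a single utterance into the fields the system would otherwise prompt for one at a time. A minimal sketch, assuming a simple "street, city" phrase shape (real address parsing is far more involved):

```python
def parse_destination(utterance: str):
    """Split a one-shot destination such as 'One American Road, Dearborn'
    into street and city fields, instead of prompting for each entry.
    Illustrative only; assumes a comma between street and city."""
    parts = [p.strip() for p in utterance.split(",")]
    if len(parts) < 2:
        return None  # fall back to step-by-step entry
    return {"street": parts[0], "city": parts[1]}
```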
Use of aliases
Within the climate menu, users can voice-request the same function using several different phrases, such as “Warmer,” “Increase temperature” or “Temperature up” - helping reduce the need for drivers to learn specific commands.
When requesting a specific song from an MP3 player, users can now say “Play song [title]” in addition to saying “Play track [title]”.
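Alias support boils down to many spoken phrases normalizing to one canonical action. The table below is a hypothetical sketch of that mapping; the phrase list and action names are invented, not SYNC's actual grammar.

```python
# Several different phrases resolve to the same canonical action,
# so drivers need not memorize one specific command.
ALIASES = {
    "warmer": "climate.temperature_up",
    "increase temperature": "climate.temperature_up",
    "temperature up": "climate.temperature_up",
    "play song": "media.play_title",
    "play track": "media.play_title",
}

def canonical_action(phrase: str):
    """Map a spoken phrase to its canonical action, or None if unknown."""
    return ALIASES.get(phrase.strip().lower())
```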
Personalized access
If an occupant’s USB-connected device, such as an MP3 player, has been named, users can simply say the device name, such as “John Smith’s iPod,” rather than the less personal “USB” command.
Ford voice engineers refined SYNC beginning with the two features customers interact with first: the voice recognition system and Samantha, the digital voice behind system commands.
To help SYNC react to driver commands more quickly and accurately, the team integrated Nuance’s Unsupervised Speaker Adaptation (USA) technology. USA learns the voice of a driver within the first three voice commands, quickly creating a user profile and adapting to tone, inflection and even dialect for a 50 percent improvement in recognition performance. USA then continues to learn during that same trip, even picking out another user and creating a second profile if the voice is markedly different. Currently, SYNC can actively adapt to voices in English, Canadian French and Mexican Spanish, with more languages on tap.
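The core idea of distinguishing speakers can be illustrated with a greatly simplified sketch: keep a running mean feature vector per speaker, adapt the closest profile with each utterance, and open a new profile when a voice is far from every known one. Real unsupervised speaker adaptation operates on acoustic models, not raw mean vectors; everything below, including the distance threshold, is an invented toy.

```python
def _distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class SpeakerProfiles:
    """Toy speaker tracker: one running mean vector per speaker."""

    def __init__(self, new_speaker_threshold=2.0):
        self.threshold = new_speaker_threshold
        self.profiles = []  # list of (mean_vector, utterance_count)

    def assign(self, features):
        """Return the profile index for an utterance, updating the
        matched profile or creating a new one for an unfamiliar voice."""
        best, best_d = None, None
        for i, (mean, _) in enumerate(self.profiles):
            d = _distance(features, mean)
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is None or best_d > self.threshold:
            self.profiles.append((list(features), 1))
            return len(self.profiles) - 1
        # Adapt the matched profile with a running average.
        mean, n = self.profiles[best]
        mean = [(m * n + f) / (n + 1) for m, f in zip(mean, features)]
        self.profiles[best] = (mean, n + 1)
        return best
```

In this toy, a driver with a cold would still land near their existing profile and keep refining it, while a markedly different voice, such as someone borrowing the car, would exceed the threshold and get a profile of its own.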
“The power of the SYNC voice control system is its ability to understand and respond to more natural language commands - and the advanced adaptability of the speech recognition technology enables the system to train itself with each successive use,” said Michael Thompson, senior vice president and general manager, Nuance Mobile. “The adaptability of SYNC is pretty remarkable - a feature functionality Nuance and Ford worked hard to develop to ensure seamless customer interaction with the system every time it starts up. So even if the car owner has a cold or someone borrows the car, SYNC will adapt to the changed voice and process spoken commands without missing a beat.”
Initial interactions also involve Samantha, the “voice” of SYNC. In an attempt to help Samantha sound less computerized, Ford boosted the size of her speech profile approximately fivefold. The additional speech units will help Samantha speak in a smoother, more human voice as she helps vehicle occupants accomplish their in-car tasks such as making phone calls, playing songs from a connected digital device and getting directions.
With smartphones expected to replace desktop and laptop PCs as the primary web access point by 2015, some industry analysts believe voice control will replace touch devices like keyboards and screens as the primary method of search. Philip E. Hendrix, Ph.D., founder and director of immr and analyst with GigaOM Pro, says that a majority of smartphones will offer an optimized voice user interface by the end of 2012.
Research trends show strong consumer acceptance of voice recognition technology. The Harris Interactive 2010 AutoTECHCAST study found that 35 percent of drivers say they would be likely to adopt voice-activated controls or features in their vehicle, up from just over one-quarter (27 percent) in 2009. In recent Ford-conducted market research of SYNC owners, more than 60 percent reported they use the voice controls while driving.
Datamonitor, an independent research firm, predicts that the global market for advanced speech recognition in the mobile world will triple from 2009 to 2014. The market for speech recognition in vehicles is expected to grow at a similar rate, from $64.3 million in 2009 to $208.2 million in 2014.
Ford knows that customers are increasingly using mobile electronics while driving, and studies show hands-free, voice-activated systems such as Ford SYNC offer significant safety benefits versus hand-held devices.
According to a 100-car study conducted by the Virginia Tech Transportation Institute, driver inattention that may involve looking away from the road for more than a few seconds is a factor in nearly 80 percent of accidents. The improvements to SYNC should help drivers accomplish tasks hands-free using natural speech patterns and fewer commands, enabling them to focus on the task of driving.