We could all use a personal assistant, especially when driving. Instead of hunting for the defrost button, you could simply ask your assistant to turn it on. Looking for a highly recommended Chinese restaurant in a particular shopping district? Just ask. AT&T is developing a voice-enabled virtual assistant for the car in a collaborative effort with Panasonic Automotive Systems Company of America and QNX Software Systems.

How Did the Idea Hatch?

Automotive manufacturers look first to safety when designing a vehicle, and then to the infotainment system. AT&T Watson℠ speech recognition technology is a natural fit because it lets drivers keep their focus on the road while speaking naturally to their vehicles to perform tasks. Recent advances in in-car technology have made the connected vehicle possible, and as the trend has caught on, vehicles are consuming more and more data each month. With data usage continuing to rise, researchers are also considering alternatives to mobile Internet connectivity to support vehicles.

About the Project

The QNX concept car with AT&T Watson℠ speech recognition technology is a prototype that brings a connected, voice-enabled virtual assistant into the car. AT&T provides network services and connectivity, while Panasonic Automotive Systems Company supplies the hardware and integration services. The collaboration gives these industry leaders a framework for jointly creating customized products for global automotive manufacturers in the U.S. Cloud-based speech recognition from AT&T Watson℠ technology lets the driver interact with the car through simple voice commands and, potentially, body gestures. The connected vehicle platform is designed to limit driver distraction, focusing on customer needs and system interface usability to improve safety and comfort.

The Future

The QNX concept car with AT&T Watson℠ speech recognition technology is currently testing an initial concept for the connected in-car infotainment system and its interface to emerging mobile devices in Peachtree City, GA, through an initiative managed by AT&T's emerging devices organization. The future holds an array of opportunities to enhance this technology, including:

  • Wi-Fi hotspots. A number of auto manufacturers are currently exploring the potential for embedding hotspots into cars in order to augment existing mobile Internet features.
  • Vehicle-to-vehicle (V2V) connectivity. This technology would allow cars to communicate with one another. If one car is stopped in traffic, for example, it could alert a car five miles behind it so that driver could avoid the congested area. V2V could also improve safety by sensing when two cars are too close to one another and possibly headed for a collision.
  • Vehicle-to-infrastructure (V2I) connectivity. This would involve adding short-range radio capabilities to physical structures on the road, such as construction sites, highways and overpasses, enabling drivers to receive the most up-to-date information on delays and lane closures.
  • Network-based advancements. As more cars are connected and more people carry mobile devices, we may be able to use the data points from each to more effectively predict traffic trends.
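The V2V congestion-alert scenario above can be pictured as a broadcast message plus a simple decision rule on the receiving car. The sketch below is purely illustrative: the `CongestionAlert` structure, the `should_reroute` function, and the five-mile warning radius are hypothetical, not part of the actual platform.

```python
from dataclasses import dataclass

@dataclass
class CongestionAlert:
    """A hypothetical V2V broadcast from a stopped or slowed vehicle."""
    road_id: str         # which road the sender is on
    mile_marker: float   # position of the stopped vehicle
    severity: str        # e.g. "stopped", "slow"

def should_reroute(alert: CongestionAlert, my_road: str,
                   my_mile_marker: float, warn_radius: float = 5.0) -> bool:
    """Return True if this car is on the same road, behind the congestion,
    and within the warning radius, so the driver should consider rerouting."""
    if alert.road_id != my_road:
        return False
    gap = alert.mile_marker - my_mile_marker  # positive if congestion is ahead
    return 0 < gap <= warn_radius

# A car stopped at mile 42 broadcasts an alert; a car at mile 38 on the
# same road is 4 miles behind, so it receives the warning.
alert = CongestionAlert(road_id="I-85", mile_marker=42.0, severity="stopped")
print(should_reroute(alert, "I-85", 38.0))  # True
print(should_reroute(alert, "I-85", 30.0))  # False: more than 5 miles back
```

A real deployment would ride on a short-range radio link and standardized safety messages rather than in-memory function calls, but the decision logic a receiving car applies would have this general shape.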

About the Researchers

Michael Johnston, Ph.D., is a Principal Member of Technical Staff at AT&T Labs. He has over 21 years of experience in speech and language technology and has worked at the forefront of multimodal interface research for 15 years. He is currently responsible for AT&T's research program in advanced multimodal interfaces, holds 14 U.S. patents, has published over 50 technical papers, and serves as editor and chair of the W3C EMMA (Extensible MultiModal Annotation) standard.

Vivek Kumar Rangarajan Sridhar is a Senior Member of Technical Staff at AT&T Labs. He holds an M.S. and a Ph.D. in Electrical Engineering from the University of Southern California (2008). Before joining AT&T, he spent two years as a scientist at BBN Technologies. His research interests are in spoken language understanding, machine translation, automatic speech recognition and text-to-speech synthesis. He has published in several prestigious journals and conferences and currently holds one patent, with several more under review. His Ph.D. dissertation also served as the basis for a National Science Foundation grant titled "An Integrated Approach to Context Enriched Speech-to-Speech Translation," on which he collaborates with the University of Southern California. He has been instrumental in AT&T's translation effort and was a key contributor to the team that released AT&T Translator in the iTunes and Android stores. He also received the AT&T Labs President's Excellence Award in 2012 for his work on text and speech translation.

Innovation Space Blog