FUJITSU TEN Develops English Version of ECLIPSE Navigation Unit

February 17, 2014


FUJITSU TEN LIMITED has developed a prototype English version of its Interactive Voice-Recognition Car Navigation Unit.

Development partners include Nuance Communications, iNAGO, and ANIMO.

A system providing the same type of search has been released in Japan under the ECLIPSE brand.

The prototype English unit uses a smartphone app and a central server to perform searches. Users can add conditions while conversing with the unit, which understands the context and updates the search accordingly. There is little need for the driver to check the display screen.

Interactive Voice Recognition Car Navigation Unit Features

  • Recognizes naturally spoken language even if standard phrases are not used.
  • Results are returned vocally.
  • When conditions are added, the unit understands the context and refines the search accordingly (a narrowing-down process).
  • Destinations can be set vocally.

Sample Conversation (D: Driver, A: Agent)

D: Find me an Italian restaurant around here.

A: Here is what I found. There is “XXXX” 0.3 miles from the current location.

D: I’ve changed my mind. I’ll go with Spanish instead.

A: Here is what I found. There is “YYYY” 1.2 miles from the current location.

D: Navigate me.

A: Destination sent to the car navigation unit.
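The narrowing-down behavior in the conversation above amounts to keeping the dialog context across turns, so a follow-up utterance replaces only the condition it mentions. A minimal sketch of that idea, with hypothetical class and condition names (not FUJITSU TEN's actual implementation):

```python
# Minimal sketch of the narrowing-down process from the conversation above.
# Class and condition names are illustrative, not FUJITSU TEN's actual code.

class DialogContext:
    """Accumulates search conditions across dialog turns."""

    def __init__(self):
        self.conditions = {}

    def update(self, **mentioned):
        # A follow-up utterance overrides only the conditions it mentions;
        # everything else from earlier turns is carried over.
        self.conditions.update(mentioned)
        return dict(self.conditions)

ctx = DialogContext()
# "Find me an Italian restaurant around here."
ctx.update(category="restaurant", cuisine="Italian", near="current location")
# "I'll go with Spanish instead." -- only the cuisine changes.
result = ctx.update(cuisine="Spanish")
print(result)
# {'category': 'restaurant', 'cuisine': 'Spanish', 'near': 'current location'}
```

Because the earlier conditions persist, the driver never has to repeat "restaurant" or "around here" when changing their mind.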

Interactive Search

By speaking into a specialized car-navigation microphone, users can search for facilities or weather information and receive answers vocally through the car-mounted speakers.

Voice Processing Performed at the Central Server:

  • Extracting, with high accuracy, the speech segment required as preprocessing for speech recognition, and cancelling noise specific to the in-vehicle environment;
  • Recognizing natural speech using a large recognition dictionary;
  • Estimating the content and intent of the speaker's utterance using advanced language understanding and an inference engine;
  • Executing the processing that corresponds to the speaker's request, drawing on various content sources as well as the latest real-time information; and
  • Returning the execution and search results as natural speech, using a large speech dictionary database.
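The server-side order of operations described above can be sketched as a simple pipeline. Every function name below is an illustrative stand-in (not FUJITSU TEN's actual API), and each stage is stubbed with a placeholder:

```python
# Schematic sketch of the server-side pipeline described above.
# Function names are hypothetical; each stage is a placeholder stub.

def extract_speech_segment(audio):
    # Preprocessing: isolate the spoken portion of the recording.
    return audio.strip()

def cancel_cabin_noise(speech):
    # Remove noise specific to the in-vehicle environment.
    return speech.replace("[engine noise]", "")

def recognize(speech):
    # Natural speech recognition against a large recognition dictionary.
    return speech

def infer_intent(text):
    # Language understanding + inference engine estimate the speaker's intent.
    return {"action": "search", "query": text}

def execute(intent):
    # Fulfil the request using content sources and real-time information.
    return f"1 result for '{intent['query']}'"

def synthesize_speech(result):
    # Return the result as natural speech (text stands in for audio here).
    return f"Here is what I found. {result}"

def handle_utterance(audio):
    speech = cancel_cabin_noise(extract_speech_segment(audio))
    intent = infer_intent(recognize(speech))
    return synthesize_speech(execute(intent))

reply = handle_utterance(" [engine noise]Italian restaurant near here ")
print(reply)
# Here is what I found. 1 result for 'Italian restaurant near here'
```

The point of the sketch is the ordering: noise handling and segment extraction happen before recognition, intent inference sits between recognition and execution, and speech synthesis is the final step back to the driver.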

The system will be unveiled at the 2014 GSMA Mobile World Congress in Barcelona, February 24-27, 2014.

