The days of ringing a taxi company only to speak to a machine that fails to recognise what you are saying are numbered, according to a UNSW researcher.

Dr Emery Schubert, from UNSW's School of English, Media and Performing Arts, says music could hold the key to improving voice recognition systems, which currently cannot cope with emotion-charged voices or strong accents.

"Some researchers are suggesting that if you look at the emotion that music is conveying - the undulation and the pitch of music, for example - we might be able to model that and plug it into speech models to improve their performance," said Dr Schubert, who is the chair of the inaugural International Conference on Music Communication Science (ICOMCS) which is being held at UNSW this week.

The conference brings together musicologists, psychologists, educators, linguists, composers, engineers, computer scientists, speech scientists, physicists, philosophers and performance artists to generate new insights into areas such as speech, next-generation search technology and communication disorders.

Another possible application for this type of research is finding music online.

"People often have a tune in their head and they really like that song, but they have no idea what it is called," said Dr Schubert. "We hope that one day, people might be able to whistle a song into a database and the technology suggests what song it might be."

Nearly 100 papers will be presented by researchers from 16 countries, alongside performances and four-minute 'speed' papers.

The Australian Research Council's Network on Human Communication Sciences is supporting the three-day conference (Wednesday 5 to Friday 7 December). For the full conference program, go to the website.

For media planning to attend the event or for interview requests, please contact: Susi Hamilton, UNSW media unit, ph: 0422 934 024