Tuesday, March 18, 2008

Generating vocal output with TaDA

What would the first vocal gestures of hominids have been? Ape vocalizations could serve as inspiration, but, for now, something even simpler will do.

The idea is to use simple vowel gestures, possibly with nasalization and lip rounding. Lip rounding is particularly attractive because it is a visible speech gesture that can be mimicked easily.
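To make this concrete, here is a minimal sketch of how such a gesture bundle might be represented. The tract-variable names (TBCL, TBCD, LP, VEL) follow the usual task-dynamics conventions, but the Gesture class and all the numeric targets are placeholders of my own, not TaDA's actual values:

    from dataclasses import dataclass

    # One gesture: a target for a single vocal-tract variable,
    # active over a fixed interval (stiffness/damping omitted).
    @dataclass
    class Gesture:
        tract_variable: str   # e.g. "TBCL", "TBCD", "VEL", "LP"
        target: float         # constriction target (placeholder units)
        onset_ms: float       # start of activation
        offset_ms: float      # end of activation

    # A rounded, nasalized [u]-like vowel as a small gesture bundle.
    u_nasal = [
        Gesture("TBCL", 100.0, 0.0, 300.0),  # tongue body location
        Gesture("TBCD",  10.0, 0.0, 300.0),  # wide constriction (vowel)
        Gesture("LP",     0.8, 0.0, 300.0),  # lip protrusion (rounding)
        Gesture("VEL",    0.5, 0.0, 300.0),  # open velum (nasalization)
    ]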

A (very simple) model of vowel perception will need to be adopted, so that the agents can form a vowel space and start developing preferences. Maybe de Boer's model, or something similar?
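For reference, de Boer's perceptual distance between two vowels is a weighted Euclidean distance over Bark-scaled formants, with the effective second formant (F2') dimension down-weighted (the weight usually quoted is 0.3). A rough sketch, using Traunmüller's Hz-to-Bark approximation and skipping the computation of F2' from the higher formants:

    import math

    def hz_to_bark(f_hz: float) -> float:
        # Traunmuller's Hz-to-Bark approximation
        return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

    LAMBDA = 0.3  # down-weighting of the F2' dimension

    def vowel_distance(v1, v2) -> float:
        # v1, v2 are (F1, F2') pairs in Hz; distance computed in Bark
        d1 = hz_to_bark(v1[0]) - hz_to_bark(v2[0])
        d2 = hz_to_bark(v1[1]) - hz_to_bark(v2[1])
        return math.sqrt(d1 * d1 + LAMBDA * d2 * d2)

    # e.g. [i] vs [u]: far apart mainly along the F2' dimension
    print(vowel_distance((250.0, 2300.0), (250.0, 600.0)))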

Implementation of gestural score generation starts today, since I am more familiar with those parameters. Output from TaDA to HLSyn without the graphical user interface (GUI) currently exists only for coupling graphs, not for gestural scores; that is the part that will need the most code adaptation.
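As a placeholder until the real adaptation is done, here is a sketch of what generating a score directly might look like. Gestures are (tract_variable, target, onset_ms, offset_ms) tuples, and the tab-separated layout is invented for illustration; it is not TaDA's actual gestural score format, which the GUI-free TaDA-to-HLSyn path would still have to produce:

    # Emit a time-sorted, tab-separated score (hypothetical layout).
    def write_score(gestures, path):
        with open(path, "w") as f:
            f.write("TV\tTARGET\tONSET_MS\tOFFSET_MS\n")
            for tv, target, onset, offset in sorted(gestures, key=lambda g: g[2]):
                f.write(f"{tv}\t{target}\t{onset}\t{offset}\n")

    # the rounded, nasalized [u]-like vowel from the sketch above
    write_score([("TBCL", 100.0, 0.0, 300.0),
                 ("TBCD",  10.0, 0.0, 300.0),
                 ("LP",     0.8, 0.0, 300.0),
                 ("VEL",    0.5, 0.0, 300.0)], "u_nasal.score")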

The good news is that we will be able to start simulating BP patterns quite soon, since gestural scores are easy to implement directly.
