Using Neural Networks in Multimodal Human-Computer Interfaces
In recent years there has been much interest in developing novel human-computer interfaces (HCIs) based on speech and visual processing, aiming at more natural and effective interaction between humans and computers. Examples of HCI tasks include person identification, speech recognition, emotion recognition, and combining speech with vision-based gesture analysis for display control. Artificial Neural Networks (ANNs) are potentially excellent candidates for some of these tasks. Their advantages include low latency; the ability to model nonlinear interactions; the ability to handle time-varying patterns (e.g., with Time-Delay Neural Networks); and suitability for unsupervised clustering (e.g., with Self-Organizing Maps). In this talk, we shall describe the application of ANNs to some of these HCI tasks.
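To make the unsupervised-clustering point concrete, the following is a minimal sketch of a one-dimensional Self-Organizing Map, written in plain Python. It is illustrative only: the function names (`som_train`, `bmu`), the grid size, and the toy two-blob data are assumptions for this example, not material from the talk.

```python
import math
import random

def bmu(weights, x):
    """Index of the best-matching unit: the weight vector closest to x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def som_train(data, n_units=10, epochs=50, lr=0.5, sigma=2.0, seed=0):
    """Train a 1-D SOM: pull the winning unit and its neighbors toward each sample."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        # Decay the learning rate and neighborhood radius over time.
        a = lr * (1 - t / epochs)
        s = max(sigma * (1 - t / epochs), 0.5)
        for x in data:
            b = bmu(weights, x)
            for i, w in enumerate(weights):
                # Gaussian neighborhood on the 1-D unit grid.
                h = math.exp(-((i - b) ** 2) / (2 * s * s))
                for d in range(dim):
                    w[d] += a * h * (x[d] - w[d])
    return weights

# Toy usage: 2-D points drawn from two well-separated blobs.
rng = random.Random(1)
data = ([[rng.gauss(0.2, 0.05), rng.gauss(0.2, 0.05)] for _ in range(50)] +
        [[rng.gauss(0.8, 0.05), rng.gauss(0.8, 0.05)] for _ in range(50)])
weights = som_train(data)
```

After training, different regions of the input space map to different units, which is the property that makes SOMs useful for the kind of unsupervised clustering mentioned above.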