Hugh Jordan
Title: Multi-Lingual Lip-Sync for Photorealistic Virtual Humans with Emotion
Supervision Team: Rachel McDonnell, TCD / Peter Corcoran, UoG
Description: This project will improve the naturalness of speech synthesis for photorealistic digital humans by predicting visual speech features, including emotion, from a linguistic input, using a combination of advanced computer graphics and deep learning methods. A large database of training data will be created using photorealistic virtual humans and used to train generative adversarial networks (GANs). The focus will be on high-quality multi-lingual lip animations with emotion, leading to better user experiences in a wide range of applications such as computer games, movie subtitles, and intelligent assistants. The researcher on this project will have a unique opportunity to collaborate with engineers from Xperi, the supporting industry partner.
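To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the kind of model the project outlines: a conditional GAN whose generator maps a sequence of linguistic features (e.g. phoneme embeddings) plus an emotion label to per-frame visual speech parameters (e.g. facial blendshape weights). All dimensions, names, and the architecture itself are illustrative assumptions, not the project's actual design.

```python
import torch
import torch.nn as nn

PHONEME_DIM = 64      # assumed size of the linguistic (phoneme) embedding
EMOTION_CLASSES = 7   # assumed number of basic-emotion categories
VISUAL_DIM = 52       # assumed number of facial blendshape weights per frame

class Generator(nn.Module):
    """Maps (phoneme features, emotion) -> per-frame blendshape weights."""
    def __init__(self, hidden=128):
        super().__init__()
        self.emotion_emb = nn.Embedding(EMOTION_CLASSES, 16)
        self.rnn = nn.GRU(PHONEME_DIM + 16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VISUAL_DIM)

    def forward(self, phonemes, emotion):
        # phonemes: (batch, frames, PHONEME_DIM); emotion: (batch,)
        emo = self.emotion_emb(emotion)                        # (batch, 16)
        emo = emo.unsqueeze(1).expand(-1, phonemes.size(1), -1)
        out, _ = self.rnn(torch.cat([phonemes, emo], dim=-1))
        return torch.sigmoid(self.head(out))                   # weights in [0, 1]

class Discriminator(nn.Module):
    """Scores whether a (linguistic, visual) sequence pair looks real."""
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(PHONEME_DIM + VISUAL_DIM, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, phonemes, visual):
        _, h = self.rnn(torch.cat([phonemes, visual], dim=-1))
        return self.head(h.squeeze(0))  # one raw logit per sequence

# One adversarial training step on synthetic stand-in data.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

phonemes = torch.randn(8, 30, PHONEME_DIM)          # 8 clips, 30 frames each
emotion = torch.randint(0, EMOTION_CLASSES, (8,))
real_visual = torch.rand(8, 30, VISUAL_DIM)         # stand-in ground truth

# Discriminator step: real pairs -> 1, generated pairs -> 0.
fake_visual = G(phonemes, emotion).detach()
loss_d = bce(D(phonemes, real_visual), torch.ones(8, 1)) + \
         bce(D(phonemes, fake_visual), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
loss_g = bce(D(phonemes, G(phonemes, emotion)), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

In a real system the stand-in tensors would come from the database of photorealistic virtual humans mentioned above, with linguistic features extracted per language so that one generator can serve multi-lingual input.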