Yisong Yue, assistant professor of computing and mathematical sciences, and his team developed a machine-learning tool that automatically generates talking animations from audio recordings of speech. Using eight hours of video of "neutral" speech (speech without emotional inflection), comprising 2,543 sentences, the team trained a neural network to convert speech audio into the corresponding animated facial movements. Because the model operates on audio alone, it can animate speech in any language, including English and Chinese. The work was done in collaboration with Disney Research (@DisneyResearch).
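The task described above, sequences of audio feature frames in, sequences of facial-animation parameters out, can be sketched as a toy regression. Everything below (the feature counts, the single-layer model, the tanh activation, the random weights) is an illustrative assumption for showing the shape of the problem, not the team's actual trained network:

```python
import numpy as np

# Toy sketch, not the published system: a trained network would replace
# the random weights, and real audio features (e.g. MFCCs) would replace
# the random inputs. All sizes here are illustrative assumptions.

rng = np.random.default_rng(0)

N_AUDIO_FEATURES = 13    # audio features per frame (assumed, e.g. MFCCs)
N_FACE_PARAMS = 34       # facial-animation parameters per frame (assumed)

# Random weights stand in for a trained model.
W = rng.normal(size=(N_AUDIO_FEATURES, N_FACE_PARAMS))
b = np.zeros(N_FACE_PARAMS)

def audio_to_face(audio_frames: np.ndarray) -> np.ndarray:
    """Map (T, N_AUDIO_FEATURES) audio frames to (T, N_FACE_PARAMS) curves."""
    return np.tanh(audio_frames @ W + b)

# One second of audio at an assumed 100 frames/s yields one second of
# animation-parameter curves at the same rate.
audio = rng.normal(size=(100, N_AUDIO_FEATURES))
face = audio_to_face(audio)
print(face.shape)  # (100, 34)
```

Because the mapping consumes only acoustic features, nothing in it is tied to a particular language, which is why the approach extends to English, Chinese, or any other spoken input.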