Real-time, offline, and cross-platform lip sync for MetaHuman and custom characters. Now featuring two quality models: Standard for broad compatibility and Realistic for premium MetaHuman experiences.
Choose our Standard model for broad compatibility and efficient performance, or the new Realistic model for enhanced visual fidelity, optimized specifically for MetaHuman characters.
Best for: High-fidelity MetaHuman projects where visual quality is paramount
Best for: Mobile, VR/AR applications, and custom character systems
Runtime MetaHuman Lip Sync provides a comprehensive system for dynamic lip sync animation, enabling characters to speak naturally in response to audio input from various sources. Despite its name, the plugin works with a wide range of characters beyond just MetaHumans.
Generate lip sync animations in real-time from microphone input or pre-recorded audio
Works with MetaHuman, Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, ReadyPlayerMe, and more
Supports FACS-based systems, Apple ARKit, Preston Blair phoneme sets, and 3ds Max phoneme systems
Process audio from microphones, imported files, text-to-speech, or any PCM audio data
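Since the plugin consumes float PCM audio, raw capture data in other sample formats must be converted first. The sketch below is a conceptual illustration of that conversion (the function name `int16_pcm_to_float` is ours, not part of the plugin's API), assuming little-endian interleaved 16-bit samples:

```python
import struct

def int16_pcm_to_float(pcm_bytes: bytes) -> list[float]:
    """Convert interleaved little-endian 16-bit PCM samples to float PCM
    in [-1.0, 1.0], the general format a float-PCM pipeline consumes.
    Illustrative helper only; not the plugin's API."""
    count = len(pcm_bytes) // 2
    samples = struct.unpack(f"<{count}h", pcm_bytes)
    return [s / 32768.0 for s in samples]

# Example: one full-negative and one half-positive sample.
raw = struct.pack("<2h", -32768, 16384)
print(int16_pcm_to_float(raw))  # [-1.0, 0.5]
```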
Get up to speed quickly with our comprehensive video tutorials covering both the Standard and Realistic models, various audio sources, and integration techniques.
Learn to use the Realistic model with ElevenLabs & OpenAI for premium quality lip sync.
Real-time lip sync from microphone input using the enhanced Realistic model.
Efficient real-time lip sync using the Standard model for broad compatibility.
Offline lip sync using the Standard model with local TTS for complete independence.
Combine Standard model efficiency with high-quality external TTS services.
Comprehensive guide covering plugin setup and configuration fundamentals.
Set up lip sync animation with just a few Blueprint nodes. Choose between Standard and Realistic generators based on your project needs.
Integrate lip sync into your character's animation blueprint with dedicated blending nodes for each model.
Fine-tune the lip sync behavior with adjustable parameters for interpolation speed and reset timing for both models.
Controls how quickly lip movements transition between visemes. Higher values produce faster, more abrupt transitions; lower values produce smoother, more gradual blending.
The duration in seconds after which lip sync is reset when audio stops, preventing lingering mouth positions.
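Conceptually, the two parameters behave like an interpolation step and a silence timer. The following sketch is an assumption-level illustration of that behavior (function names and the linear-interpolation formula are ours, not the plugin's internals):

```python
def step_viseme_weight(current: float, target: float,
                       interp_speed: float, dt: float) -> float:
    """Move a viseme weight toward its target over one frame of length dt.
    Higher interp_speed gives faster, more abrupt transitions."""
    alpha = min(1.0, interp_speed * dt)
    return current + (target - current) * alpha

def should_reset(silence_time: float, reset_delay: float) -> bool:
    """Reset lip sync to a neutral pose once the audio has been silent
    longer than the configured delay, so mouth shapes don't linger."""
    return silence_time >= reset_delay

# At interp_speed=10 and a 50 ms frame, a weight moves halfway to target:
print(step_viseme_weight(0.0, 1.0, 10.0, 0.05))  # 0.5
```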
✓ MetaHuman-exclusive
✓ Enhanced visual quality
✓ More natural mouth movements
✓ External TTS compatible
✓ Universal character support
✓ Optimized performance
✓ Mobile/VR friendly
✓ Local TTS compatible
The plugin uses a standard set of visemes (visual phonemes) that are mapped to your character's morph targets:
Audio data is received in float PCM format with a specified channel count and sample rate
The plugin processes the audio to generate visemes using the selected model (Standard or Realistic)
Visemes drive the lip sync animation using the character's pose asset or advanced facial controls
The animation is applied to the character in real-time with smooth transitions
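The steps above can be sketched as a minimal audio-to-viseme loop. This is a toy stand-in, not the plugin's models: the placeholder "model" below merely opens the jaw in proportion to loudness, whereas the Standard and Realistic models perform real phoneme inference. All names are illustrative assumptions.

```python
def amplitude(chunk: list[float]) -> float:
    """Crude loudness measure over one float PCM chunk."""
    return sum(abs(s) for s in chunk) / len(chunk) if chunk else 0.0

def generate_visemes(chunk: list[float]) -> dict[str, float]:
    """Placeholder 'model': drives a single jaw-open viseme ('aa') from
    loudness. Real lip sync models map audio to a full viseme set."""
    return {"aa": min(1.0, amplitude(chunk) * 4.0)}

# One chunk of float PCM in, viseme weights out; the weights would then
# be smoothed and applied to the character's morph targets each frame.
weights = generate_visemes([0.25, -0.25, 0.25, -0.25])
print(weights)  # {'aa': 1.0}
```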
Android/Quest: Disable x86_64 architecture in project settings for compatibility
UE 5.0 - 5.6
MetaHuman plugin enabled in your project (UE 5.5 and earlier)
Runtime Audio Importer plugin
Runtime Text To Speech or Runtime AI Chatbot Integrator plugins
Runtime MetaHuman Lip Sync works seamlessly with other plugins to create complete audio, speech, and animation solutions for your Unreal Engine projects.
Import, stream, and capture audio at runtime to drive lip sync animations. Process audio from files, memory, or microphone input.
Add speech recognition to create interactive characters that respond to voice commands while animating with lip sync.
Generate realistic speech from text offline with 900+ voices and animate character lips in response to the synthesized audio.
Create AI-powered talking characters that respond to user input with natural language and realistic lip sync. Perfect for the Realistic model.
Combine Runtime MetaHuman Lip Sync with our other plugins to create fully interactive characters that can listen, understand, speak, and animate naturally. From voice commands to AI-driven conversations with premium visual quality using the Realistic model, our plugin ecosystem provides everything you need for next-generation character interaction.
Despite its name, Runtime MetaHuman Lip Sync works with a wide range of characters beyond just MetaHumans. The Standard model provides flexible viseme mapping for various character systems.
The Standard model supports mapping between different viseme standards, allowing you to use characters with various facial animation systems:
Map ARKit blendshapes to visemes
Map Action Units to visemes
Classic animation mouth shapes
Standard 3ds Max phonemes
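As a rough illustration of cross-standard mapping, the sketch below translates a few ARKit blendshape weights into viseme weights. The specific blendshape-to-viseme pairs and function names here are assumptions for demonstration; the actual correspondence is configured inside the plugin:

```python
# Illustrative ARKit-blendshape -> viseme pairs (assumed, not the
# plugin's authoritative mapping table).
ARKIT_TO_VISEME = {
    "jawOpen": "aa",
    "mouthPucker": "ou",
    "mouthFunnel": "oh",
    "mouthPressLeft": "PP",
}

def map_blendshapes(blend_weights: dict[str, float]) -> dict[str, float]:
    """Translate ARKit blendshape weights into viseme weights, keeping
    the strongest contribution when several shapes target one viseme."""
    visemes: dict[str, float] = {}
    for shape, weight in blend_weights.items():
        viseme = ARKIT_TO_VISEME.get(shape)
        if viseme is not None:
            visemes[viseme] = max(visemes.get(viseme, 0.0), weight)
    return visemes

print(map_blendshapes({"jawOpen": 0.8, "mouthPucker": 0.3}))
# {'aa': 0.8, 'ou': 0.3}
```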
Get started quickly with our detailed documentation covering both Standard and Realistic models. From basic setup to advanced character configuration, we're here to help you succeed.
Step-by-step guides for both lip sync models, MetaHumans, and custom characters
Latest tutorials covering Realistic model and various integration methods
Get real-time help from developers and users
Contact [email protected] for tailored solutions
Demo Project Walkthrough
Bring your MetaHuman and custom characters to life with real-time lip sync animation. Choose between our Standard model for broad compatibility or the new Realistic model for premium MetaHuman experiences.
Enhanced visual quality with more natural mouth movements specifically for MetaHuman characters
Universal compatibility with MetaHumans and custom characters, optimized for all platforms