Real-time, offline, and cross-platform lip sync for MetaHuman and custom characters. Universal language support - works with any spoken language, on any platform.
Choose the model that fits your project's requirements - from broad platform compatibility to cinematic-quality facial animation with emotional expression.
Optimized for real-time performance across all supported platforms. Works with MetaHumans and any character with morph targets.
MetaHuman-exclusive model with 81 facial controls for natural, high-quality mouth movement. Available in two variants: standard Realistic and Mood-Enabled for emotional expression.
Mood-Enabled Variant
The plugin analyzes raw audio phonemes directly - no text transcription required - making it fully language-independent and suitable for any spoken language.
1. Microphone, TTS, or audio files - any audio data source
2. Language-independent audio processing, on-device
3. Visemes or facial controls generated with optional mood context
4. Applied to the character with smooth blending transitions (see the C++ sketch below)
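In C++ terms, the pipeline above boils down to "hand PCM samples to a generator and let the Animation Blueprint blend the result". The sketch below is illustrative only: `URuntimeVisemeGenerator` and `ProcessAudioData` are hypothetical stand-ins for the plugin's actual generator class and audio-feed call, so check the documentation for the exact API in your engine version.

```cpp
// Illustrative sketch of the four-step pipeline above.
// NOTE: URuntimeVisemeGenerator and ProcessAudioData are hypothetical
// stand-ins for the plugin's real generator class and audio-feed call;
// verify the exact names against the plugin documentation.

#include "CoreMinimal.h"
// Include the lip sync plugin's generator header here.

void FeedAudioToLipSync(URuntimeVisemeGenerator* VisemeGenerator,
                        const TArray<float>& PcmSamples,
                        int32 SampleRate,
                        int32 NumChannels)
{
	if (!VisemeGenerator || PcmSamples.Num() == 0)
	{
		return;
	}

	// Steps 1-2: raw PCM goes straight to the generator; phoneme analysis
	// runs on-device, so no text transcription or network call is involved.
	VisemeGenerator->ProcessAudioData(PcmSamples, SampleRate, NumChannels);

	// Steps 3-4: the generator exposes viseme (or facial-control) weights
	// every frame; the character's Animation Blueprint blends them into the
	// face pose through the plugin's lip sync blend node.
}
```

Because the entry point only needs PCM samples, the same call works regardless of where the audio comes from, which is why the audio sources below are interchangeable.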
Any audio source that provides PCM data is supported.
Real-time lip sync from live microphone capture using Runtime Audio Importer's capturable sound wave.
Compatible with local offline TTS and external services including ElevenLabs, OpenAI, Azure, Google Cloud, and more.
Process pre-recorded audio files, streaming sources, or any custom PCM provider.
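As a concrete, hedged example of the microphone path, the sketch below wires Runtime Audio Importer's capturable sound wave to the feed function from the pipeline sketch above. The names `UCapturableSoundWave`, `OnPopulateAudioData`, and `StartCapture` follow that plugin's documented workflow but should be verified against your installed version; `UMicLipSyncDriver` and the 48 kHz mono assumption are purely illustrative.

```cpp
// Hedged sketch: live microphone capture driving lip sync.
// UCapturableSoundWave / OnPopulateAudioData / StartCapture follow Runtime
// Audio Importer's workflow but may differ by version; URuntimeVisemeGenerator
// and FeedAudioToLipSync are the hypothetical helpers from the sketch above.

#pragma once

#include "CoreMinimal.h"
#include "UObject/Object.h"
#include "MicLipSyncDriver.generated.h"

UCLASS()
class UMicLipSyncDriver : public UObject
{
	GENERATED_BODY()

public:
	void Start(URuntimeVisemeGenerator* InGenerator)
	{
		Generator = InGenerator;

		// Create a capturable sound wave and subscribe to incoming PCM chunks.
		MicWave = UCapturableSoundWave::CreateCapturableSoundWave();
		MicWave->OnPopulateAudioData.AddDynamic(this, &UMicLipSyncDriver::OnAudioChunk);
		MicWave->StartCapture(/*DeviceId=*/0);
	}

private:
	UFUNCTION()
	void OnAudioChunk(const TArray<float>& PcmData)
	{
		// Forward each captured chunk to the generator. Sample rate and channel
		// count are assumptions here; query the capture device for real values.
		FeedAudioToLipSync(Generator, PcmData, /*SampleRate=*/48000, /*NumChannels=*/1);
	}

	UPROPERTY()
	TObjectPtr<URuntimeVisemeGenerator> Generator = nullptr;

	UPROPERTY()
	TObjectPtr<UCapturableSoundWave> MicWave = nullptr;
};
```

The TTS and file paths look the same from the plugin's point of view: decode or synthesize to PCM, then forward the samples in chunks.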
Despite the name, the plugin works well beyond MetaHumans.
All three models support MetaHumans. Realistic and Mood-Enabled are MetaHuman-exclusive with direct facial control output.
Standard model supports Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, ReadyPlayerMe, and more via flexible viseme mapping.
Any character with morph targets can be configured. Supports FACS, Apple ARKit, Preston Blair, and 3ds Max phoneme systems.
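For a custom (non-MetaHuman) character, configuration comes down to mapping the Standard model's visemes onto whatever morph targets the rig exposes. The snippet below is only an illustration of that idea: the viseme identifiers use common OVR-style labels as an assumption, the targets are Apple ARKit blendshape names, and production setups typically blend several morph targets per viseme; the plugin's actual viseme names and mapping workflow are covered in its documentation.

```cpp
// Illustrative viseme-to-morph-target mapping for an ARKit-style rig.
// Viseme names below use common OVR-style labels as an assumption; the
// Standard model's exact identifiers and mapping setup are in the plugin docs.
// Real rigs usually drive several morph targets per viseme with weights.

#include "Containers/Map.h"
#include "UObject/NameTypes.h"

TMap<FName, FName> BuildExampleVisemeMap()
{
	TMap<FName, FName> VisemeToMorphTarget;

	VisemeToMorphTarget.Add(TEXT("PP"), TEXT("mouthClose"));    // bilabial P/B/M
	VisemeToMorphTarget.Add(TEXT("aa"), TEXT("jawOpen"));       // open vowel
	VisemeToMorphTarget.Add(TEXT("ou"), TEXT("mouthPucker"));   // rounded vowel
	VisemeToMorphTarget.Add(TEXT("E"),  TEXT("mouthSmileLeft")); // spread vowel (pair with mouthSmileRight)
	// ...remaining visemes map onto whichever shapes your rig provides
	// (FACS AUs, Preston Blair mouths, or 3ds Max phoneme morphs work the same way).

	return VisemeToMorphTarget;
}
```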
Standard Model - Custom Character Examples
The plugin processes raw audio phonemes directly rather than relying on text transcription, making it inherently language-independent.
English, Spanish, French, German, Japanese, Chinese, Korean, Russian, Arabic, Hindi, Portuguese - and any other spoken language.
Quick-reference for platform support and model capabilities.
| Capability | Standard | Realistic | Mood-Enabled |
|---|---|---|---|
| Animation controls | 14 visemes | 81 controls | 81 + 12 moods |
| MetaHuman | ✓ | ✓ | ✓ |
| Custom characters | ✓ | — | — |
| Laughter detection | | | |
| Windows | ✓ | | |
| Mac / Linux / iOS | ✓ | | |
| Android / Quest | ✓ | | |
Step-by-step guides covering all models, audio sources, and integration workflows.
Full workflow: speech recognition, AI chatbot, TTS, and real-time lip sync.
High-quality lip sync with emotional expression.
Realistic model with ElevenLabs & OpenAI for premium quality output.
Real-time lip sync from microphone input using the Realistic model.
Efficient real-time lip sync using the Standard model.
Fully offline lip sync with local TTS — no internet required.
Runtime MetaHuman Lip Sync integrates with the broader Georgy Dev plugin suite to build complete interactive character pipelines.
Import, stream, and capture audio at runtime to drive lip sync animations from any source.
Add speech recognition for interactive characters that respond to voice commands.
Offline speech synthesis with 900+ voices, fully compatible with all lip sync models.
AI-driven conversations with natural language responses and synchronized lip animation.
Comprehensive documentation covers all three models - from initial setup to advanced character configuration, emotional animation, and custom viseme mapping.
Step-by-step guides for all models, MetaHuman integration, and custom characters
Active Discord community with developer support
Tailored integration or feature development - [email protected]
Available on Fab for UE 5.0 – 5.7. Includes all three models, full documentation, and demo projects.