Runtime MetaHuman Lip Sync

Real-time, offline, and cross-platform lip sync for MetaHuman and custom characters. Universal language support - works with any spoken language, on any platform.

UE 5.0 – 5.7
Blueprints & C++
Windows, Mac, iOS, Linux, Android, Quest
MetaHuman & custom characters

Choose Your Model

Choose the model that fits your project's requirements - from broad platform compatibility to cinematic-quality facial animation with emotional expression.

Standard

Universal Compatibility

Optimized for real-time performance across all supported platforms. Works with MetaHumans and any character with morph targets.

Standard model preview
  • 14 visemes with flexible mapping
  • Works with all character systems
  • Laughter detection
  • Windows, Android, Quest
Realistic

Enhanced Visual Fidelity

MetaHuman-exclusive model with 81 facial controls for natural, high-quality mouth movement. Available in two variants: standard Realistic and Mood-Enabled for emotional expression.

Realistic model preview
  • 81 facial controls
  • MetaHuman-exclusive
  • Full Face or Mouth-Only modes
  • All platforms including Mac & iOS

Mood-Enabled Variant

  • 12 mood types with configurable intensity
  • Happy, Sad, Angry, Confident, Excited & more
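Conceptually, a mood acts as a weighted bias on the generated facial controls. The sketch below illustrates intensity scaling with hypothetical names; it is not the plugin's API, which exposes mood selection through Blueprint/C++ configuration.

```cpp
#include <string>

// Hypothetical mood description. The plugin offers 12 mood types with
// configurable intensity; these field names are made up for illustration.
struct Mood
{
    std::string Type;   // e.g. "Happy", "Sad", "Angry"
    float Intensity;    // 0.0 = neutral, 1.0 = full expression
};

// Bias a single facial-control value toward a mood-specific target,
// scaled by intensity. The real model blends across 81 controls.
float ApplyMood(float ControlValue, float MoodTarget, const Mood& M)
{
    return ControlValue + M.Intensity * (MoodTarget - ControlValue);
}
```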

How It Works

The plugin detects phonemes directly from the raw audio - no text transcription required - making it fully language-independent and suitable for any spoken language.

  1. Audio Input - microphone, TTS, or audio files: any audio data source
  2. Phoneme Analysis - language-independent audio processing, on-device
  3. Animation Data - visemes or facial controls generated, with optional mood context
  4. Real-Time Playback - applied to the character with smooth blending transitions
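The analysis and playback stages can be sketched as a minimal data flow. Everything below is a hypothetical illustration in standard C++ - the real plugin exposes these stages through Blueprint nodes and C++ components, not these names.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Hypothetical viseme weight set: one weight per viseme channel
// (the Standard model produces 14 visemes).
constexpr std::size_t NumVisemes = 14;
using VisemeWeights = std::array<float, NumVisemes>;

// Stage 2/3 stand-in: map an audio chunk's energy to a single
// open-mouth viseme. A real analyzer classifies phonemes; this is
// only a placeholder to show the data flow.
VisemeWeights AnalyzeChunk(const std::vector<float>& Pcm)
{
    float Energy = 0.0f;
    for (float Sample : Pcm) Energy += Sample * Sample;
    Energy /= Pcm.empty() ? 1.0f : static_cast<float>(Pcm.size());

    VisemeWeights Weights{};                     // all channels start at zero
    Weights[0] = Energy > 0.01f ? 1.0f : 0.0f;   // crude "mouth open" signal
    return Weights;
}

// Stage 4 stand-in: move the current pose toward the new target each
// frame, giving the smooth blending transitions described above.
void Blend(VisemeWeights& Current, const VisemeWeights& Target, float Alpha)
{
    for (std::size_t i = 0; i < NumVisemes; ++i)
        Current[i] += Alpha * (Target[i] - Current[i]);
}
```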

Audio Sources

Any audio source that provides PCM data is supported.

Microphone Input

Real-time lip sync from live microphone capture using Runtime Audio Importer's capturable sound wave.

Text-to-Speech

Compatible with local offline TTS and external services including ElevenLabs, OpenAI, Azure, Google Cloud, and more.

Audio Files & Streams

Process pre-recorded audio files, streaming sources, or any custom PCM provider.
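Any PCM buffer can drive the pipeline once it is in a normalized float format. As a neutral illustration (not the plugin's API - PCM ingestion actually goes through Runtime Audio Importer), converting 16-bit signed PCM to floats looks like:

```cpp
#include <cstdint>
#include <vector>

// Convert interleaved 16-bit signed PCM samples to floats in [-1.0, 1.0].
// Purely illustrative helper; the plugin consumes PCM via Runtime Audio
// Importer rather than through a function like this.
std::vector<float> PcmToFloat(const std::vector<int16_t>& Pcm)
{
    std::vector<float> Out;
    Out.reserve(Pcm.size());
    for (int16_t Sample : Pcm)
    {
        Out.push_back(static_cast<float>(Sample) / 32768.0f);
    }
    return Out;
}
```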

Character Support

Despite the name, the plugin works well beyond MetaHumans.

Unreal MetaHuman

All three models support MetaHumans. Realistic and Mood-Enabled are MetaHuman-exclusive with direct facial control output.

Commercial Character Systems

Standard model supports Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, ReadyPlayerMe, and more via flexible viseme mapping.

Custom Characters

Any character with morph targets can be configured. Supports FACS, Apple ARKit, Preston Blair, and 3ds Max phoneme systems.
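Viseme mapping conceptually pairs each of the plugin's viseme channels with a morph target on the character. The snippet below sketches a small ARKit-style mapping; the channel and target names are hypothetical examples, not the plugin's actual configuration format, which is set up in the editor.

```cpp
#include <map>
#include <string>

// Illustrative subset of a viseme-to-morph-target mapping for an
// ARKit-style character. All names here are made up for the example.
std::map<std::string, std::string> MakeVisemeMap()
{
    return {
        {"PP", "mouthPucker"},   // bilabial closure (p, b, m)
        {"FF", "mouthFunnel"},   // labiodental (f, v)
        {"AA", "jawOpen"},       // open vowel
        {"OH", "mouthRound"},    // rounded vowel
    };
}
```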

Standard Model - Custom Character Examples

Custom character example

Universal Language Support

The plugin extracts phonemes directly from the raw audio rather than relying on text transcription, making it inherently language-independent.

English, Spanish, French, German, Japanese, Chinese, Korean, Russian, Arabic, Hindi, Portuguese - and any other spoken language.


Platform & Model Reference

Quick-reference for platform support and model capabilities.

| Capability         | Standard   | Realistic   | Mood-Enabled            |
|--------------------|------------|-------------|-------------------------|
| Animation controls | 14 visemes | 81 controls | 81 controls + 12 moods  |
| MetaHuman          | ✓          | ✓           | ✓                       |
| Custom characters  | ✓          | —           | —                       |
| Laughter detection | ✓          | —           | —                       |
| Windows            | ✓          | ✓           | ✓                       |
| Mac / Linux / iOS  | —          | ✓           | ✓                       |
| Android / Quest    | ✓          | ✓           | ✓                       |

Video Tutorials

Step-by-step guides covering all models, audio sources, and integration workflows.

Speech-to-Speech AI Demo

Full workflow: speech recognition, AI chatbot, TTS, and real-time lip sync.

Mood-Enabled Model

High-quality lip sync with emotional expression.

External TTS Integration

Realistic model with ElevenLabs & OpenAI for premium quality output.

Live Microphone — Realistic

Real-time lip sync from microphone input using the Realistic model.

Live Microphone — Standard

Efficient real-time lip sync using the Standard model.

Local Text-to-Speech

Fully offline lip sync with local TTS — no internet required.

Plugin Ecosystem

Runtime MetaHuman Lip Sync integrates with the broader Georgy Dev plugin suite to build complete interactive character pipelines.

Runtime Audio Importer

Import, stream, and capture audio at runtime to drive lip sync animations from any source.

Runtime Speech Recognizer

Add speech recognition for interactive characters that respond to voice commands.

Runtime Text To Speech

Offline speech synthesis with 900+ voices, fully compatible with all lip sync models.

AI Chatbot Integrator

AI-driven conversations with natural language responses and synchronized lip animation.

Documentation & Support

Comprehensive documentation covers all three models - from initial setup to advanced character configuration, emotional animation, and custom viseme mapping.

Full Documentation

Step-by-step guides for all models, MetaHuman integration, and custom characters

Community & Support

Active Discord community with developer support

Custom Development

Tailored integration or feature development - [email protected]

Ready to Add Lip Sync to Your Project?

Available on Fab for UE 5.0 – 5.7. Includes all three models, full documentation, and demo projects.