Runtime MetaHuman Lip Sync

Real-time, offline, and cross-platform lip sync for MetaHuman and custom characters. Three models to choose from: Standard for universal compatibility, Realistic for premium visual quality, and Mood-Enabled for emotional facial animation. Universal language support - works with any spoken language.

UE 5.0 - 5.6
Blueprints & C++
Multi-platform (Windows, Mac, iOS, Linux, Android, Quest)
Multiple character systems

Two Model Categories to Suit Your Project Needs

Choose the Standard model for broad compatibility and efficient performance, or the Realistic model family for enhanced visual fidelity and emotional expression, optimized specifically for MetaHuman characters.

Standard Model

Standard MetaHuman Lip Sync Example
Standard Model - Universal Compatibility
  • Works with MetaHumans and all custom character types
  • Optimized for real-time performance on all platforms
  • Includes laughter detection and animation
  • Full compatibility with local TTS
  • 14 visemes with flexible mapping system

Platforms: Windows, Android, Quest

Realistic Model Family

Realistic MetaHuman Lip Sync Example
Realistic Models - Enhanced Quality

Two Variants Available:

Standard Realistic

81 facial controls for natural mouth movements

Mood-Enabled Realistic (New)

Adds 12 emotional moods with configurable intensity

  • MetaHuman-exclusive with advanced facial animation
  • Three optimization levels for performance tuning
  • Mood support: Happy, Sad, Confident, and 9 other emotions
  • Mood-Enabled model supports local TTS
  • Full Face or Mouth-Only control options

Platforms: Windows, Mac, iOS, Linux

Custom Character Support (Standard Model)

Custom Character Lip Sync Example 2
Standard Model with Custom Character
Custom Character Lip Sync Example 3
Standard Model with Different Viseme System

Universal Language Support

Works with any spoken language: English, Spanish, French, German, Japanese, Chinese, Korean, Russian, Italian, Portuguese, Arabic, Hindi, and more. The plugin analyzes audio phonemes directly, making it language-independent.

🇺🇸 English
🇪🇸 Spanish
🇫🇷 French
🇩🇪 German
🇯🇵 Japanese
🇨🇳 Chinese
🇰🇷 Korean
🌐 Any Language

Real-Time Character Lip Sync Made Simple

Runtime MetaHuman Lip Sync provides a comprehensive system for dynamic lip sync animation, enabling characters to speak naturally in response to audio input from various sources. Despite its name, the plugin works with a wide range of characters beyond just MetaHumans.

Real-Time & Offline Processing

Generate lip sync animations in real-time from microphone input or pre-recorded audio

Universal Character Compatibility

Works with MetaHuman, Daz Genesis 8/9, Reallusion CC3/CC4, Mixamo, ReadyPlayerMe, and more

Multiple Animation Standards

Supports FACS-based systems, Apple ARKit, Preston Blair phoneme sets, and 3ds Max phoneme systems

Emotional Expression Control

12 mood types with configurable intensity for enhanced character expressiveness

Latest Tutorial - Mood-Enabled Lip Sync
High-Quality Lip Sync with Local TTS

Latest Video Tutorials

Get up to speed quickly with our comprehensive video tutorials covering Standard, Realistic, and Mood-Enabled models, various audio sources, and integration techniques.

Mood-Enabled with Local Text-to-Speech

Learn to use the Realistic Mood-Enabled model with Local TTS (Piper and Kokoro) for premium quality lip sync.

Realistic Model Local TTS

High-Quality AI TTS Integration

Learn to use the Realistic model with ElevenLabs & OpenAI for premium quality lip sync.

Realistic Model External TTS

High-Quality Live Microphone

Real-time lip sync from microphone input using the enhanced Realistic model.

Realistic Model Microphone

Standard Live Microphone

Efficient real-time lip sync using the Standard model for broad compatibility.

Standard Model Microphone

Local Text-to-Speech

Offline lip sync using the Standard model with local TTS for complete independence.

Standard Model Local TTS

Standard with External TTS

Combine Standard model efficiency with high-quality external TTS services.

Standard Model External TTS

Key Features

Easy Blueprint Integration

Set up lip sync animation with just a few Blueprint nodes. Choose from Standard, Realistic, and Mood-Enabled generators based on your project needs; an illustrative C++ sketch follows the examples below.

Creating Runtime Viseme Generator (Standard Model)
Creating Realistic Lip Sync Generator (Realistic Model)
Creating Mood-Enabled Lip Sync Generator (Mood-Enabled Model)
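
The same setup can be driven from C++. The sketch below is a minimal illustration under assumed names: ULipSyncFunctionLibrary, CreateRealisticLipSyncGenerator, and ProcessAudioData are hypothetical stand-ins for the plugin's Blueprint-exposed factory and processing calls, so check the plugin documentation for the actual classes and signatures.

    // Illustrative sketch only: ULipSyncFunctionLibrary,
    // CreateRealisticLipSyncGenerator, and ProcessAudioData are hypothetical
    // names standing in for the plugin's actual Blueprint-exposed API.
    #include "CoreMinimal.h"

    void SetupAndDriveLipSync(const TArray<float>& PCMData, int32 SampleRate, int32 NumChannels)
    {
        // 1. Create a generator once: Standard, Realistic, or Mood-Enabled,
        //    depending on the target platform and desired quality.
        auto* Generator = ULipSyncFunctionLibrary::CreateRealisticLipSyncGenerator();

        // 2. Stream float PCM into the generator as audio arrives
        //    (microphone capture, TTS output, or file playback).
        Generator->ProcessAudioData(PCMData, SampleRate, NumChannels);

        // 3. In the character's Animation Blueprint, blend the generator's
        //    output into the face pose each frame.
    }

In Blueprint-only projects, the node chain pictured above covers the same three steps.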

Mood-Enabled Emotional Animation

Advanced emotional expression control with 12 mood types, configurable intensity, and selectable output controls; a small configuration sketch follows the mood list below.

😊 Happy
😢 Sad
😠 Angry
😮 Surprise
😌 Confident
🎉 Excited
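
To illustrate what the mood settings amount to, here is a minimal sketch of a mood configuration. The enum lists only the six moods named on this page, and the 0-1 intensity range, the Full Face / Mouth-Only switch, and all type names are assumptions for illustration, not the plugin's actual types.

    // Illustrative only: shows the shape of a mood configuration, not the
    // plugin's actual types. The 0..1 intensity range is an assumption.
    enum class ELipSyncMood
    {
        Happy, Sad, Angry, Surprise, Confident, Excited
        // ...the remaining six moods are listed in the plugin documentation
    };

    enum class ELipSyncOutputMode
    {
        FullFace,   // drive the full facial rig
        MouthOnly   // restrict output to mouth controls
    };

    struct FLipSyncMoodSettings
    {
        ELipSyncMood       Mood      = ELipSyncMood::Confident;
        float              Intensity = 0.6f;   // configurable intensity
        ELipSyncOutputMode Output    = ELipSyncOutputMode::FullFace;
    };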

Performance Optimization

Three optimization levels for Realistic models, configurable processing parameters, and platform-specific optimizations.

Highly Optimized

Best performance for real-time applications

Semi-Optimized

Balanced quality and performance

Original

Highest quality for cinematic use

Technical Details

Model Comparison

Standard Model
  • 14 visemes
  • Universal character support
  • Windows, Android, Quest
  • Laughter detection & local TTS

Realistic Models
  • 81 facial controls
  • MetaHuman-exclusive
  • Windows, Mac, iOS, Linux
  • 3 optimization levels & 12 moods

Platform Support

Standard: Windows, Android, Meta Quest

Realistic: Windows, Mac, iOS, Linux

Animation Output

Standard Model

14 visemes plus a silence pose

Maps to character morph targets: Sil, PP, FF, TH, DD, KK, CH, SS, NN, RR, AA, E, IH, OH, OU
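
If your character's morph targets use different names, the mapping is simply a lookup from the plugin's viseme names to your mesh's curve names. A minimal sketch, assuming hypothetical morph target names on the right (only the viseme names on the left come from the list above):

    // Viseme names (keys) are the Standard model outputs listed above.
    // Morph target names (values) are hypothetical examples; replace them
    // with the curves your character's skeletal mesh actually exposes.
    #include "CoreMinimal.h"

    TMap<FName, FName> BuildVisemeToMorphTargetMap()
    {
        TMap<FName, FName> Map;
        Map.Add("Sil", "Mouth_Neutral");
        Map.Add("PP",  "Mouth_Closed");
        Map.Add("FF",  "Mouth_TeethLip");
        Map.Add("TH",  "Mouth_Tongue");
        Map.Add("AA",  "Mouth_OpenWide");
        Map.Add("OH",  "Mouth_Round");
        Map.Add("OU",  "Mouth_Pucker");
        // ...map the remaining visemes (DD, KK, CH, SS, NN, RR, E, IH) the same way
        return Map;
    }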

Realistic Models

81 controls

Generate MetaHuman facial controls directly, without predefined viseme poses

Mood-Enabled Model

12 moods

Full Face or Mouth-Only modes with emotional context

How It Works

1. Audio Input: float PCM samples from any source (microphone, TTS, files); see the conversion sketch after this list

2. Phoneme Analysis: language-independent audio processing

3. Animation Generation: visemes or facial controls, with optional mood context

4. Real-Time Application: applied to the character with smooth transitions
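
Step 1 expects float PCM. If your audio source delivers 16-bit integer samples (a common microphone and WAV format), converting them is a per-sample scale into the -1.0 to 1.0 range, as sketched below; the hand-off comment at the end is a placeholder, not a real plugin call.

    // Convert interleaved 16-bit PCM into the float PCM described in step 1.
    #include "CoreMinimal.h"

    TArray<float> ConvertPcm16ToFloat(const TArray<int16>& Pcm16)
    {
        TArray<float> PcmFloat;
        PcmFloat.Reserve(Pcm16.Num());

        for (const int16 Sample : Pcm16)
        {
            // Scale signed 16-bit samples into the -1.0..1.0 float range.
            PcmFloat.Add(static_cast<float>(Sample) / 32768.0f);
        }

        // A lip sync generator would then receive PcmFloat together with the
        // source's sample rate and channel count (names vary per generator).
        return PcmFloat;
    }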

Requirements

  • Unreal Engine: UE 5.0 - 5.6
  • Standard Model: extension plugin required (included in the documentation)
  • Audio capture: Runtime Audio Importer plugin

Powerful Integrations

Runtime MetaHuman Lip Sync works seamlessly with other plugins to create complete audio, speech, and animation solutions for your Unreal Engine projects.

Runtime Audio Importer

Import, stream, and capture audio at runtime to drive lip sync animations. Process audio from files, memory, or microphone input.

Runtime Speech Recognizer

Add speech recognition to create interactive characters that respond to voice commands while animating with lip sync.

Runtime Text To Speech

Generate realistic speech from text offline with 900+ voices and animate character lips in response to the synthesized audio.

Runtime AI Chatbot Integrator

Create AI-powered talking characters that respond to user input with natural language and realistic lip sync. Perfect for all models.

Complete Interactive Character Solution

Combine Runtime MetaHuman Lip Sync with our other plugins to create fully interactive characters that can listen, understand, speak, and animate naturally with emotions. From voice commands to AI-driven conversations with premium visual quality, our plugin ecosystem provides everything you need for next-generation character interaction in any language.

Custom Character Support

Despite its name, Runtime MetaHuman Lip Sync works with a wide range of characters beyond just MetaHumans. The Standard model provides flexible viseme mapping for various character systems.

Flexible Viseme Mapping

The Standard model supports mapping between different viseme standards, allowing you to use characters with various facial animation systems (an illustrative ARKit mapping sketch follows the list below):

  • Apple ARKit: map ARKit blendshapes to visemes
  • FACS-based systems: map Action Units to visemes
  • Preston Blair system: classic animation mouth shapes
  • 3ds Max phoneme system: standard 3ds Max phonemes
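
As one concrete example of such a mapping, an ARKit-based character can approximate each viseme with a weighted combination of ARKit blendshapes. The blendshape names below are standard ARKit identifiers, but the combinations and weights are illustrative guesses rather than values shipped with the plugin, so treat them as a starting point to tune against your character.

    // Rough, illustrative viseme-to-ARKit mapping. The blendshape names are
    // standard ARKit identifiers; the weights are example values to tune.
    #include "CoreMinimal.h"

    struct FBlendShapeWeight
    {
        FName BlendShape;
        float Weight;
    };

    TMap<FName, TArray<FBlendShapeWeight>> BuildArkitVisemeMap()
    {
        TMap<FName, TArray<FBlendShapeWeight>> Map;

        Map.Add("PP") = { { "mouthPressLeft", 0.7f }, { "mouthPressRight", 0.7f }, { "mouthClose", 0.5f } };
        Map.Add("FF") = { { "mouthRollLower", 0.6f }, { "jawOpen", 0.1f } };
        Map.Add("AA") = { { "jawOpen", 0.7f } };
        Map.Add("OU") = { { "mouthPucker", 0.8f }, { "mouthFunnel", 0.3f } };
        // ...the remaining visemes follow the same pattern

        return Map;
    }

The same table-driven approach works for FACS Action Units or Preston Blair mouth shapes; only the names and weights change.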

Documentation & Support

Get started quickly with our detailed documentation covering Standard, Realistic, and Mood-Enabled models. From basic setup to advanced character configuration and emotional animation, we're here to help you succeed.

Comprehensive Documentation

Step-by-step guides for all three lip sync models, MetaHumans, and custom characters

Video Tutorials

Latest tutorials covering all models and various integration methods

Discord Community

Get real-time help from developers and users

Custom Development

Contact [email protected] for tailored solutions

Demo Project Walkthrough

Create Emotionally Expressive Characters

Bring your MetaHuman and custom characters to life with real-time lip sync animation and emotional expression. Choose from three powerful models to match your project's needs, with universal language support included.

Standard Model

Universal compatibility with all character types, optimized for mobile and VR platforms

Realistic Model

Enhanced visual quality with 81 facial controls specifically for MetaHuman characters

Mood-Enabled

Emotional facial animation with 12 mood types and configurable intensity settings