Overview
**UELTX2: Curated Generation** is a streamlined bridge between **Unreal Engine 5** and **Lightricks' LTX-2** generative video model.
Unlike external tools (Runway, Sora) that require cloud subscriptions and break your pipeline, UELTX2 runs **locally** on your hardware via ComfyUI. It allows you to generate animated placeholders, dynamic textures, and cinematic animatics directly inside the Editor Viewport.
Your IP never leaves the local network. Perfect for NDA-bound projects.
Right-click a texture to animate it. Drag-and-drop generation.
Architecture
Understanding how the pieces fit together is crucial for stability.
The plugin sends a JSON payload to your configured ComfyURL (default: http://127.0.0.1:8188). ComfyUI processes the Diffusion Transformer (DiT) steps, saves an output file to disk, and UELTX2 auto-imports it as a FileMediaSource.
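The request shape follows ComfyUI's standard `/prompt` API. A minimal Python sketch of that exchange (the endpoint and envelope are ComfyUI's; the `client_id` value and function names here are illustrative, not the plugin's internals):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # the plugin's default ComfyURL

def build_payload(workflow: dict, client_id: str = "ueltx2") -> bytes:
    """Wrap a ComfyUI node graph in the envelope /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def submit(workflow: dict) -> dict:
    """POST the graph to ComfyUI; the response carries a prompt_id for tracking."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

ComfyUI answers with a `prompt_id`, which is what makes the later progress tracking and output retrieval possible.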
Phase 1: Backend Setup
UELTX2 currently supports ComfyUI only. SwarmUI support has been removed in this build to ensure reliability and consistent output handling.
1. Install ComfyUI (Recommended)
Download the standalone Windows version to avoid Python dependency issues.
2. Node Requirements
UELTX2’s default workflows run on core ComfyUI nodes (no custom nodes required). If you plan to use GGUF models, install the GGUF node package.
- Optional: install `ComfyUI-GGUF` for GGUF quantized models.
3. Auto-Setup (Recommended)
In UE, open the UELTX2 panel and use ComfyUI Setup → Auto-Setup to configure extra model paths. Use Auto-Setup + Downloads to download the LTX-2 model and the required text encoder. Restart ComfyUI after it finishes.
4. Download Models
Place the LTX-2 weights in your checkpoints folder if you did not use auto-download.
# Recommended: GGUF Quantized (16GB - 24GB VRAM)
C:\ComfyUI\models\checkpoints\LTX-2-dev-Q4_K_M.gguf (example)
# Optional: Full Precision (24GB+ VRAM)
C:\ComfyUI\models\checkpoints\ltx219BNextGenAIVideo_ltx2FP8VERSION.safetensors (example)
5. Confirm Output Folder
UELTX2 can fall back to reading ComfyUI's local output directory if HTTP downloads fail.
Make sure Output Directory (Fallback) in Project Settings points to your ComfyUI output folder.
Phase 2: Plugin Setup
1. Installation
Steps to integrate into your C++ project:
1. Close Unreal Engine.
2. Copy the 'UELTX2' folder to 'YourProject/Plugins/'.
3. Right-click YourProject.uproject -> Generate Visual Studio Files.
4. Build the solution.
2. Configuration
Once compiled, open the Editor and navigate to Project Settings > Game > UELTX2 Generation. The plugin now exposes granular control over the generation pipeline.
- Connection & Backend (backend: ComfyUI, URL: http://127.0.0.1:8188)
- Model Management (e.g., LTX-2-dev-Q4_K_M.gguf)
- Generation Defaults
- Presets & UI State
- Asset Automation
Text-to-Video Workflow
The standard generation method for cinematic placeholders.
- Start your local ComfyUI server (run `run_nvidia_gpu.bat`).
- In UE5, click the LTX-2 Icon in the main toolbar.
- When the panel opens, enter your prompt (e.g., "A spinning golden coin, 4k, loopable").
- Click Generate.
- Wait for the `OnGenerationComplete` notification. The file will appear in your Content Browser automatically.
Image-to-Video (Texture Animating)
The "Curated Generation" power-feature. Turn static screenshots into dynamic elements.
Running Right-Click > Scripted Actions > Animate with LTX-2 on any texture creates a 2-second video variation of that texture, preserving its color and composition.
Mastering the Workflow
A deep dive into the interface, prompt engineering for LTX-2, and converting static assets into motion.
1. The Interface Panel
The UELTX2 Panel is a native Slate window (no Blueprint UI dependencies). It is designed to be dockable anywhere in the Unreal Editor layout (we recommend docking it next to the World Outliner).
2. Health & Connection Monitoring
UELTX2 now exposes a Health section that reports HTTP status, WebSocket status, polling fallback, output directory, and the last backend error. This makes it obvious whether you are connected, running on polling fallback, or fully real-time.
- HTTP OK confirms the REST API is reachable.
- WS Connected enables real-time progress updates.
- Polling On indicates WebSockets were unavailable; the plugin is still retrieving outputs.
- Retries shows remaining automatic retries if a job stalls.
If a job stalls beyond JobTimeoutSeconds, UELTX2 will retry automatically up to JobMaxRetries. Queue position is displayed in the Progress area when multiple jobs are waiting.
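The stall handling above amounts to a poll loop whose clock restarts on each retry. A sketch under that reading (function names are illustrative; the real values come from `JobTimeoutSeconds` and `JobMaxRetries` in settings):

```python
import time

def wait_for_job(poll_fn, timeout_s=300.0, max_retries=2, interval_s=1.0):
    """Poll poll_fn() until it returns a non-None result.

    If nothing arrives within timeout_s (JobTimeoutSeconds), the attempt
    is abandoned and retried, up to max_retries (JobMaxRetries) extra tries.
    """
    for _attempt in range(max_retries + 1):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            result = poll_fn()  # e.g., checking ComfyUI's /history for the prompt_id
            if result is not None:
                return result
            time.sleep(interval_s)
    raise TimeoutError("job stalled after all retries")
```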
3. Presets & Last-Used Parameters
Presets let you apply a curated configuration instantly (resolution, frames, steps, sampler, etc.). UELTX2 also persists your last-used parameters so reopening the panel restores the exact state you left it in.
4. Model Scanning & Auto-Selection
Model directories are scanned for .gguf and .safetensors. The first LTX model found is auto-selected, and your selection is persisted in settings.
Model folders are also watched in the editor, so adding or removing a model updates the list without restarting.
If you mix GGUF and SafeTensors, UELTX2 selects the appropriate template and warns if a template/model mismatch is detected.
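A rough sketch of that scan-and-select logic, under the assumption that an "LTX model" is detected by filename (the plugin's actual heuristic may differ):

```python
from pathlib import Path

MODEL_EXTS = {".gguf", ".safetensors"}

def scan_models(model_dir: str):
    """Return all candidate checkpoints plus an auto-selected default.

    The first model whose filename contains 'ltx' wins; otherwise fall
    back to the first file found (or None if the folder is empty).
    """
    found = sorted(p.name for p in Path(model_dir).iterdir()
                   if p.suffix.lower() in MODEL_EXTS)
    selected = next((n for n in found if "ltx" in n.lower()),
                    found[0] if found else None)
    return found, selected
```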
Image-only checkpoints (e.g., SDXL 1.0) are supported. UELTX2 will generate a single image and repeat it into an MP4 (static video). For real motion, use an LTX video model.
Motion induction for image models is enabled by default via randomized latent batches. For subtler motion, lower Denoise (e.g., 0.2–0.5).
5. Prompt Engineering for LTX-2
LTX-2 is a Diffusion Transformer. Unlike older U-Net models, it understands scene composition better but requires specific "trigger words" to get the best motion.
Prompts are sanitized automatically (control characters removed, whitespace trimmed). If enabled in settings, prompts are also clamped to a maximum length for stability.
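That sanitization step amounts to something like the following (the 1000-character clamp is a hypothetical value; the real limit is whatever you configure in Project Settings):

```python
import re

MAX_PROMPT_LEN = 1000  # hypothetical; the actual clamp comes from settings

def sanitize_prompt(raw: str, clamp: bool = True) -> str:
    """Strip control characters, normalize whitespace, optionally clamp length."""
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", raw)   # remove control characters
    cleaned = re.sub(r"\s+", " ", cleaned).strip()   # trim/collapse whitespace
    return cleaned[:MAX_PROMPT_LEN] if clamp else cleaned
```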
Weak prompt: "A car driving."
Result: Often static, blurry, or the car morphs into the road. Lacks temporal context.

Strong prompt: "Cinematic shot, low angle, joyful movement, a red sports car driving fast on a wet highway, motion blur, 4k, hyperrealistic"
Result: Dynamic camera movement, reflections, and consistent object permanence.
Key Tokens for Motion
- "Slow pan right" - Control Camera
- "Zoom in" - Dynamic depth
- "Fluid motion" - Smooth liquids/smoke
- "Loopable" - Good for textures
6. Context-Aware Generation (I2V)
The most powerful feature of UELTX2 is generating motion from existing project assets. This runs a different pipeline that preserves the color palette and composition of your source image.
Resolution Warning
Aspect Ratio Matters: LTX-2 is trained natively on 768x512 (approx) resolutions. If you input a square 1024x1024 texture, ComfyUI may crop it or stretch it. For best results, ensure your Source Texture is landscape aspect ratio.
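If you prefer to pre-fit textures yourself rather than let ComfyUI crop, a scale-and-center-crop toward the 768x512 training resolution looks roughly like this (a sketch, not the plugin's resizing code):

```python
def fit_to_ltx(width: int, height: int, target_w: int = 768, target_h: int = 512):
    """Scale so the target is fully covered, then center-crop to target size.

    Returns (scaled_w, scaled_h, crop_x, crop_y): the scaled image size and
    the top-left corner of the target_w x target_h crop window.
    """
    scale = max(target_w / width, target_h / height)
    scaled_w, scaled_h = round(width * scale), round(height * scale)
    crop_x = (scaled_w - target_w) // 2
    crop_y = (scaled_h - target_h) // 2
    return scaled_w, scaled_h, crop_x, crop_y
```

For a square 1024x1024 input this gives a 768x768 scale with a 128-pixel crop offset from the top, i.e. 256 rows of the source are lost, which is exactly why landscape sources are recommended.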
7. Output, Import, and Persistent Media
Generated videos are imported automatically as FileMediaSource assets (if auto-import is enabled). UELTX2 also copies the output to a persistent location at
Content/Movies/UELTX2 so your media source remains valid even if the original ComfyUI output gets deleted.
- Open Output Folder jumps to the last output on disk.
- Open Imported Asset highlights the most recent asset in the Content Browser.
- Output Validation checks the file signature (mp4/webm) before import to prevent corrupt media issues.
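The signature check can be approximated by looking at well-known magic bytes: MP4 files carry an `ftyp` box at byte offset 4, and WebM files start with the EBML header. A sketch (the function name is illustrative):

```python
def classify_video_header(header: bytes):
    """Return 'mp4', 'webm', or None for an unrecognized (likely corrupt) file."""
    if len(header) >= 8 and header[4:8] == b"ftyp":
        return "mp4"                      # ISO BMFF: 4-byte box size, then 'ftyp'
    if header[:4] == b"\x1a\x45\xdf\xa3":
        return "webm"                     # EBML magic used by WebM/Matroska
    return None
```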
8. Custom Workflows (Advanced)
While Project Settings now handle resolution, steps, and model selection, you can still fully customize the underlying ComfyUI node graph.
The plugin uses JSON templates located in Plugins/UELTX2/Content/Templates/. You can create your own (e.g., adding LoRA nodes, Upscalers, or ControlNets) and link them in the Project Settings under "Workflows".
{
  "9": {
    "class_type": "KSampler",
    "inputs": {
      "seed": {{SEED}},
      "steps": {{STEPS}},
      "cfg": {{CFG}},
      "sampler_name": "{{SAMPLER}}"
    }
  },
  "1": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {
      "ckpt_name": "{{MODEL_NAME}}"
    }
  }
}

The {{SEED}}, {{STEPS}}, and {{CFG}} values are injected by the plugin at submission time.
Supported Placeholders:
{{WIDTH}}, {{HEIGHT}}, {{FPS}}, {{FRAMES}}, {{SEED}}, {{STEPS}}, {{CFG}},
{{MODEL_NAME}}, {{MODEL_PATH}}, {{MODEL}},
{{PROMPT}}, {{NEGATIVE_PROMPT}}, {{DENOISE}},
{{SOURCE_IMAGE}}, {{CLIP_NAME}}, {{VAE_NAME}}.
You can supply separate templates for T2V and I2V via Custom Workflow Template and Custom I2V Workflow Template. Built-in templates are cached asynchronously for faster submission.
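Placeholder injection is plain text substitution performed before the JSON is parsed; a minimal sketch (note that numeric placeholders like `{{SEED}}` stand unquoted in the template, while string placeholders like `{{SAMPLER}}` must be quoted):

```python
import json

def fill_template(template_text: str, values: dict) -> dict:
    """Replace each {{KEY}} with its value, then parse as a ComfyUI graph."""
    for key, val in values.items():
        template_text = template_text.replace("{{" + key + "}}", str(val))
    return json.loads(template_text)

tmpl = '{"9": {"class_type": "KSampler", "inputs": {"seed": {{SEED}}, "sampler_name": "{{SAMPLER}}"}}}'
graph = fill_template(tmpl, {"SEED": 42, "SAMPLER": "euler"})
```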
"Living Texture" Workflow for I2V
Turn static screenshots into dynamic, loopable materials in one click.
**Image-to-Video (I2V)** is the most powerful feature of UELTX2. It takes an existing texture from your project, encodes it into the Latent Space, and uses LTX-2 to hallucinate motion while preserving the original composition and color palette.
The "Zero-Click" Pipeline: Unlike standard video generation, UELTX2 automatically handles the entire Unreal Engine Media Framework setup for you. It creates the Player, sets it to loop, builds the Material, and starts playback instantly.
- Auto-Looping
- Material Generation
- Shader Linking
Step-by-Step Guide
Select Source Asset
Locate a static texture in your Content Browser. Single-click to select it.
Recommendation: Use Landscape aspect ratios (e.g., 2:1) for best LTX-2 results.
Set Source in Panel
In the UELTX2 Panel, click the "Set Source from Selection" button.
The "Zero Click" Result
After generation triggers the OnGenerationComplete event, navigate to your output folder (Default: /Game/UELTX2_Generations/).
You will see a complete Asset Family created for you: the imported video, a looping Media Player setup, and a ready-made Material. Drag the _Mat asset onto your mesh to apply the animated texture.
Pro Tips for LTX-2 I2V
Always include words like loopable, seamless, infinite in your prompt if this texture is for a background element. LTX-2 will try to match the end frame to the start frame.
Use the Denoise value in the JSON config: 0.75 allows free motion, while 0.40 keeps the original pixels and only warps them slightly (good for "breathing" characters).
Rapid Pre-Visualization Workflow for T2V
Generate "Instant Animatics" to block out cinematic timing before 3D production begins.
The "Greybox" Problem
Level Designers usually place static mannequins or text actors to represent complex events (e.g., "Building Collapses Here", "Explosion"). This makes it impossible to judge timing, sound design sync, lighting intensity, or camera cuts until weeks later.
The UELTX2 Solution
UELTX2 generates a bespoke video of the event and automatically wraps it into a native Level Sequence Asset. You simply drop this asset into your level, and the event plays on the timeline immediately.
# The Workflow
Describe the Camera & Action
Open the LTX-2 Panel. In Text-to-Video mode (ensure no texture is selected), describe the shot. Use camera terminology.
"Wide shot, low angle, camera shake, a stone bridge collapsing into a river, dust, debris, physics destruction, cinematic lighting, 4k"
Generate Animatic
Click Generate.
The plugin will generate the MP4, import it, create a FileMediaSource, create a Level Sequence, add a Media Track, and bind the source to the track automatically.
Drag & Drop Integration
Navigate to your output folder. You will find a distinct asset type:
Drag this _Animatic asset directly into your Level Viewport.
In the Details Panel, verify "Auto Play" is checked. Hit Simulate or Play to watch your AI-generated cutscene happening in 3D space.
"Niagara Source" Workflow for VFX
Generate fluid simulations (Fire, Smoke, Magic) and inject them immediately into Niagara.
UELTX2 now auto-creates a Niagara template system on editor startup: /UELTX2/Templates/NS_LTX2_Template (with an emitter at /UELTX2/Templates/NE_LTX2_TemplateEmitter).
When VFX creation is enabled, the plugin duplicates this template and swaps in your generated media material automatically.
The "Simulation Killer"
Normally, creating a photorealistic explosion loop requires external software such as EmberGen or Houdini. With UELTX2, you prompt for the effect, and the plugin automatically generates a ready-to-simulate asset family.
// Simplified sketch of the VFX material the plugin builds (not literal engine API):
Material->BlendMode = BLEND_Additive;        // black pixels contribute nothing -> read as transparent
Material->ShadingModel = MSM_Unlit;          // particles ignore scene lighting
LinkNode(Texture, ParticleColor, Multiply);  // texture sample is multiplied by particle color
# The Workflow
Prompting for Alpha
Use Text-to-Video mode. LTX-2 does not output Alpha channels (transparency), so you must rely on Additive Blending (Black = Transparent). You MUST explicitly instruct the model to create a black background.
"Black background, isolated subject, massive fireball explosion, billowing smoke, slow motion, high contrast, 4k"
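Why the black background matters: under additive blending the source color is simply added to the scene, so a zero (black) pixel contributes nothing and reads as fully transparent. A toy per-channel illustration:

```python
def additive_blend(dest_rgb, src_rgb):
    """Additive blend per 0-255 channel: out = clamp(dest + src, 255)."""
    return [min(d + s, 255) for d, s in zip(dest_rgb, src_rgb)]

scene = [40, 60, 80]
assert additive_blend(scene, [0, 0, 0]) == scene  # black source = invisible
```

Bright fire pixels push the scene toward white, while the black backdrop disappears, which is the transparency trick this workflow relies on.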
Generate Assets
Click Generate. Navigate to output folder.
Unlike the other workflows, UELTX2 detects the context and creates a special material suffix: _VFXMat.
Niagara Assembly
The plugin auto-creates NS_LTX2_Template on startup. When VFX is enabled, it duplicates that template into a _VFX system and assigns your media-driven material.
If the template is missing or corrupted, recreate it by restarting the editor.
| Module | Setting |
|---|---|
| Sprite Renderer | Drag `_VFXMat` into the Material slot. Set Alignment to Velocity Aligned (optional). |
| Particle Update | Use Color Over Life to tint the fire; the generated material graph multiplies the texture by the particle color. |
| SubUV / Animation | Do not use. This is a video texture, not a SubUV sheet; it plays automatically via the Media Player. |
Video textures have a tiny startup delay. For instant explosions (e.g., Gunshots), LTX-2 videos are slightly too slow. This workflow is best for looping elements (torches, burning wreckage, magic portals).
Troubleshooting & Diagnostics
Solve connection refusals, memory spikes, and missing assets.
1. Quick Diagnostics
> POST http://127.0.0.1:8188/prompt
> LogUELTX2: Error: Connection Refused. Is ComfyUI running?
Fix: Ensure `run_nvidia_gpu.bat` is running, the port matches Project Settings, and the ComfyURL is correct.
> POST http://127.0.0.1:8188/prompt
> LogUELTX2: Error: 400 Bad Request. Node(s) missing.
Fix: Update ComfyUI and ensure required nodes exist. If you are using GGUF models, install ComfyUI-GGUF. Check the ComfyUI console for missing node names.
2. VRAM & Out of Memory (OOM)
LTX-2 is a heavy model. If Unreal Engine crashes or ComfyUI reports CUDA Out of Memory, check your configuration against this gauge.

| VRAM | Requirement |
|---|---|
| 16-24 GB | Must use GGUF quantization (e.g., Q5_K_M). Max resolution 768x512. |
| 24 GB+ | Can run bf16 full precision, or high-res GGUF up to 1280x720. |
Optimization Checklist
- Limit Frame Rate: In UE5 Editor Preferences, enable "Use Less CPU when in Background" to free resources for ComfyUI.
- Close High-VRAM assets: Close Metahuman blueprints or massive levels in UE5 before generating.
- Use GGUF: Installing the `ComfyUI-GGUF` node is the #1 way to fix memory crashes.
3. The "Red Node" Problem
If the console logs "Success" but no video appears, drop the template JSON into ComfyUI manually to verify integrity.
- ComfyUI loads the workflow but blocks appear Red.
- The console says `ImportError: failed to load...`
4. Unreal Engine Quirks
Toolbar Button Missing ▼
- Check Window > Generative AI in the top menu bar.
- Ensure `Plugins/UELTX2/Resources/ButtonIcon.png` exists.
- Check the Output Log for `LogUELTX2Editor` errors during startup.
Generated Video is Black / Corrupt ▼
- Ensure the ComfyUI `SaveVideo` node outputs `h264-mp4` (the default in the built-in templates).
- Unreal requires the Electra Player or WmfMedia plugin to be enabled to play H.264 inside the editor. Check Plugins > Built-in > Media Players.
Remote URL Blocked ▼
Allow Remote Connections in Project Settings if you are connecting to another machine.
No Output / Stuck Generating ▼
- If WS is disconnected, polling fallback should be On.
- If the job stalls past `JobTimeoutSeconds`, it will retry automatically.
- Use Reinitialize Backend if ComfyUI was restarted.
Imported Asset is an Image ▼
Ensure the workflow ends in video output nodes (`CreateVideo` + `SaveVideo`) and that the model supports video generation.
Media Source Path Breaks ▼
Generated files are copied to `Content/Movies/UELTX2` to keep the FileMediaSource path stable.
If a media source still breaks, ensure the persistent file exists on disk and that the project has write access to the Content folder.
Panel Won't Open ▼
- Use Window > Generative AI > Open LTX-2 Panel.
- Check the Output Log for `LogUELTX2Editor` errors during startup.
- Rebuild the project to regenerate plugin binaries.
GregOrigin
Created by Andras Gregori @ Gregorigin, a single dad currently based outside Budapest, HU.
"Building tools that empower creators to shape worlds."