
πŸŽ™οΈ Talk2Scene

Audio-driven intelligent animation generation β€” from dialogue to visual storytelling.

Python 3.11+ Β· Apache-2.0 Β· uv Β· Hydra Β· GPT-4o


Talk2Scene is an audio-driven, intelligent animation tool. It parses voice dialogue files, transcribes the text with timestamps, and uses AI to recommend matching character stances (STA), expressions (EXP), actions (ACT), backgrounds (BG), and CG illustrations inserted at the right moments. The output is structured scene event data plus a composed preview video in which the AI character performs dynamically across scenes.

Designed for content creators, educators, virtual streamers, and AI enthusiasts β€” Talk2Scene turns audio into engaging visual narratives for interview videos, AI interactive demos, educational presentations, and more.

πŸ’‘ Why Talk2Scene

Manually composing visual scenes for dialogue-driven content is tedious and error-prone. Talk2Scene automates the entire workflow: feed in audio or a transcript, and the pipeline produces time-synced scene events β€” ready for browser playback or video export β€” without touching a single frame by hand.

πŸ—οΈ Architecture

flowchart LR
    A[Audio] --> B[Transcription\nWhisper / OpenAI API]
    T[Text JSONL] --> C
    B --> C[Scene Generation\nLLM]
    C --> D[JSONL Events]
    D --> E[Browser Viewer]
    D --> F[Static PNG Render]
    D --> G[Video Export\nffmpeg]
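
For a sense of what the pipeline emits, a single scene event might look like the JSONL line below. The field names are illustrative assumptions rather than the tool's guaranteed schema; inspect a real session's output for the exact format.

```jsonl
{"start": 0.0, "end": 3.2, "text": "Welcome back to the lab!", "sta": "STA_Stand_Front", "exp": "EXP_Smile_EyesClosed", "act": "ACT_WaveGreeting", "bg": "BG_Lab_Modern", "cg": null}
```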

Scenes are composed from five layer types. Four of them are stacked bottom-up:

flowchart LR
    BG --> STA --> ACT --> EXP

The fifth, a CG illustration, replaces the entire layered scene when active.

πŸ–ΌοΈ Example Output

Example Video

Example output video

Rendered Scenes


Left: Basic scene (Lab + Stand Front + Neutral) Β· Center: Cafe scene (Cafe + Stand Front + Thinking) Β· Right: CG mode (Pandora's Tech)

Asset Layers

Each scene is composed by stacking transparent asset layers on a background. Below is one sample from each category:

| Layer | Sample Code | Description |
| --- | --- | --- |
| πŸŒ… BG | BG_Lab_Modern | Background (opaque) |
| 🧍 STA | STA_Stand_Front | Stance / pose (transparent) |
| 🎭 EXP | EXP_Smile_EyesClosed | Expression overlay (transparent) |
| 🀚 ACT | ACT_WaveGreeting | Action overlay (transparent) |
| ✨ CG | CG_PandorasTech | Full-scene illustration (replaces all layers) |
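
To make the stacking concrete, here is a minimal compositing sketch using Pillow. This is not the project's actual renderer; the asset paths, file names, and bottom-up order are assumptions taken from the diagram and table above.

```python
from pathlib import Path

from PIL import Image

ASSETS = Path("assets")  # hypothetical asset directory


def compose_scene(bg: str, sta: str, act: str, exp: str, cg: str | None = None) -> Image.Image:
    """Stack transparent overlays on an opaque background; a CG replaces everything."""
    if cg is not None:
        # CG mode: the full-scene illustration replaces all layers.
        return Image.open(ASSETS / f"{cg}.png").convert("RGBA")

    scene = Image.open(ASSETS / f"{bg}.png").convert("RGBA")
    for layer in (sta, act, exp):  # assumed bottom-up order: STA, ACT, EXP
        overlay = Image.open(ASSETS / f"{layer}.png").convert("RGBA")
        scene.alpha_composite(overlay)
    return scene


frame = compose_scene("BG_Lab_Modern", "STA_Stand_Front", "ACT_WaveGreeting", "EXP_Smile_EyesClosed")
frame.save("preview.png")
```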

πŸ“¦ Install

Important

Requires Python 3.11+, uv, and FFmpeg.

Install dependencies with uv:

uv sync

Set your OpenAI API key:

export OPENAI_API_KEY="your-key"

πŸš€ Usage

uv run talk2scene --help

πŸ“ Text Mode

Generate scenes from a pre-transcribed JSONL file:

uv run talk2scene mode=text io.input.text_file=path/to/transcript.jsonl
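
The transcript is expected to carry one utterance per line with its text and timing. The exact field names below are an assumption for illustration; match them to whatever your transcription step actually produces.

```jsonl
{"start": 0.0, "end": 3.2, "text": "Welcome back to the lab!"}
{"start": 3.2, "end": 7.8, "text": "Today we're generating scenes straight from audio."}
```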

🎧 Batch Mode

Process an audio file end-to-end (place audio in input/):

uv run talk2scene mode=batch
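
A minimal input layout might look like the sketch below; the file name and extension are placeholders, not requirements imposed by the project.

```
input/
└── interview.wav
```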

🎬 Video Mode

Render a completed session into video:

uv run talk2scene mode=video session_id=SESSION_ID

πŸ“‘ Stream Mode

Consume audio or pre-transcribed text from Redis in real time:

uv run talk2scene mode=stream
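
As a rough sketch of the producer side, the snippet below publishes a transcript segment to Redis with redis-py. Whether the pipeline reads a pub/sub channel or a Redis stream, and under which key, depends on your configuration; the channel name and message schema here are assumptions.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Hypothetical channel name and payload shape, for illustration only.
segment = {"start": 12.4, "end": 15.1, "text": "And that's how the CG layer works."}
r.publish("talk2scene:transcript", json.dumps(segment))
```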

πŸ“¬ Contact

  • βœ‰οΈ Email: hobart.yang@qq.com
  • πŸ› Issues: Open an issue on GitHub

πŸ“„ License

Licensed under the Apache License 2.0.