Long-Form Understanding
Uses Memories.ai-powered indexing and comprehension to reason over long videos.
Narrative Generation
Generates recap/highlight scripts from your prompt and project context.
Clip Planning + Rendering
Selects relevant clips, composes timeline, and renders final output.
Production Outputs
Exports both .mp4 deliverables and Final Cut Pro XML for downstream editing.
Pipeline
- Index source media to build searchable understanding artifacts.
- Generate structured response plan (recap, highlights, or custom narrative).
- Select and assemble clips based on semantic relevance.
- Apply narration/music/cropping/subtitles according to options.
- Export outputs for distribution and editor handoff.
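The five stages above can be sketched end to end. This is a hypothetical outline, not the repository's actual API: every function name, field, and path here is an illustrative assumption.

```python
# Illustrative sketch of the five pipeline stages; all names are
# assumptions, not the project's real interfaces.
from dataclasses import dataclass, field

@dataclass
class ClipPlan:
    clips: list = field(default_factory=list)

def index_media(paths):
    # Stage 1: build a searchable understanding artifact per source file.
    return {p: f"index:{p}" for p in paths}

def plan_narrative(prompt, index):
    # Stage 2: structured response plan (recap, highlights, or custom).
    return {"prompt": prompt, "sources": sorted(index)}

def select_clips(plan):
    # Stage 3: pick clips by semantic relevance to the plan.
    return ClipPlan(clips=[{"source": s, "start": 0.0, "end": 5.0}
                           for s in plan["sources"]])

def apply_options(clip_plan, narration=True, music=False):
    # Stage 4: narration/music/cropping/subtitles according to options.
    return {"clips": clip_plan.clips, "narration": narration, "music": music}

def export_outputs(timeline, project="Demo"):
    # Stage 5: .mp4 + Final Cut Pro XML deliverables (timeline internals
    # are ignored in this sketch; only the output layout is shown).
    return [f"data/outputs/{project}/final.mp4",
            f"data/outputs/{project}/timeline.fcpxml"]

index = index_media(["ep1.mov"])
plan = plan_narrative("3-minute recap", index)
timeline = apply_options(select_clips(plan))
outputs = export_outputs(timeline)
```

The point of the sketch is the data flow: each stage consumes the previous stage's artifact, so any stage can be re-run in isolation for iteration.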
API Workflow
Index first.
Setup Essentials
- MEMORIES_API_KEY
- GOOGLE_CLOUD_PROJECT
- ELEVENLABS_API_KEY
- Python 3.11+
- FFmpeg installed
- Run `gcloud auth application-default login` before runtime
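A simple preflight check can confirm the essentials above before a run. The environment variable names match this section; the helper itself and its parameters are illustrative assumptions, not part of the project.

```python
# Hypothetical preflight check for the setup essentials listed above.
import os
import shutil

# Credentials named in the Setup Essentials section.
REQUIRED_ENV = ["MEMORIES_API_KEY", "GOOGLE_CLOUD_PROJECT", "ELEVENLABS_API_KEY"]

def preflight(env=os.environ, ffmpeg_path=None):
    """Return a list of missing prerequisites (empty means ready)."""
    missing = [name for name in REQUIRED_ENV if not env.get(name)]
    # FFmpeg must be installed and on PATH; ffmpeg_path allows overriding
    # the lookup for testing.
    if (ffmpeg_path or shutil.which("ffmpeg")) is None:
        missing.append("ffmpeg")
    return missing
```

Running `preflight()` before the pipeline turns a mid-run credential failure into an upfront, actionable error list.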
Output Structure
Generated artifacts are typically saved under data/outputs/{ProjectName}/:
- Final rendered video (.mp4)
- Final Cut Pro XML (.fcpxml)
- Intermediate assets such as clip plan, narration files, and music assets
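The layout above can be resolved programmatically for editor handoff. The exact file names inside the project folder are assumptions here; only the data/outputs/{ProjectName}/ root comes from this section.

```python
# Illustrative helper for locating a project's deliverables under the
# documented output root; the file names are assumed, not guaranteed.
from pathlib import Path

def deliverables(project, root="data/outputs"):
    base = Path(root) / project
    return {
        "video": base / f"{project}.mp4",      # final rendered video
        "fcpxml": base / f"{project}.fcpxml",  # Final Cut Pro timeline
    }

paths = deliverables("MyRecap")
```

Keeping both deliverables keyed by project name makes batch handoff to editors a matter of iterating over project folders.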
Best Fit
- Movie/documentary recap generation
- Highlight reel automation for media teams
- Long-to-short content repurposing pipelines
- Editor-assist workflows that still need timeline-level control
Repository: Memories-ai-labs/vea-open-source
