Memories.ai Video Agents are open-source reference implementations built on top of the Memories.ai ecosystem. Use them to bootstrap real workflows, then customize them for your own product and data.
Documentation Index
Fetch the complete documentation index at: https://api-tools.memories.ai/llms.txt
Use this file to discover all available pages before exploring further.
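The llms.txt file is a plain-text/Markdown index of documentation pages. A minimal sketch of discovering pages from it, assuming the file lists pages as Markdown-style links (the sample contents and URLs below are hypothetical; fetch the real index from https://api-tools.memories.ai/llms.txt):

```python
import re

# Hypothetical excerpt of an llms.txt index; the real file lives at
# https://api-tools.memories.ai/llms.txt.
sample = """\
# Memories.ai Docs
- [Video Searching Agent](https://docs.example/video-searching-agent)
- [Video Editing Agent](https://docs.example/vea)
"""

# Extract (title, url) pairs from Markdown-style links.
links = re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", sample)
for title, url in links:
    print(f"{title}: {url}")
```

In practice you would fetch the file over HTTP first (e.g. with `urllib.request`) and then apply the same parsing to enumerate all available pages.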
Video Searching Agent
Official OpenClaw skill on ClawHub for searching and analyzing videos across YouTube, TikTok, Instagram, and X.
Video Editing Agent (VEA)
Turn long-form videos into short-form outputs with indexing, script generation, clip planning, TTS, and rendering.
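The pipeline above chains five stages: indexing, script generation, clip planning, TTS, and rendering. A sketch of that orchestration, where every function name and data shape is a hypothetical stand-in rather than the actual VEA API:

```python
# Hypothetical stand-ins for the five VEA stages; none of these names
# come from the vea-open-source repository.

def index_video(path: str) -> dict:
    """Stand-in for indexing: transcript, scene boundaries, metadata."""
    return {"path": path, "scenes": [(0.0, 12.5), (12.5, 40.0)], "transcript": "..."}

def generate_script(index: dict) -> str:
    """Stand-in for LLM script generation over the indexed video."""
    return f"Recap of {index['path']}"

def plan_clips(index: dict, script: str) -> list:
    """Stand-in for selecting source segments that match the script."""
    return index["scenes"][:1]

def synthesize_tts(script: str) -> bytes:
    """Stand-in for text-to-speech narration of the script."""
    return script.encode()

def render(clips: list, audio: bytes) -> str:
    """Stand-in for the final export step."""
    return f"short.mp4 ({len(clips)} clip(s), {len(audio)} audio bytes)"

def run_pipeline(video_path: str) -> str:
    index = index_video(video_path)
    script = generate_script(index)
    clips = plan_clips(index, script)
    audio = synthesize_tts(script)
    return render(clips, audio)

print(run_pipeline("lecture.mp4"))
```

The point of the sketch is the stage ordering and the data handed between stages, which is where most customization (prompts, clip scoring, voice selection) happens.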
Which Agent Should You Use?
| Goal | Recommended Agent | Why |
|---|---|---|
| Discover trends and creators from social platforms | Video Searching Agent | Multi-platform search, query parsing, ranking, and references |
| Build recap/highlight generation from long videos | Video Editing Agent (VEA) | End-to-end editing pipeline from indexing to final export |
| Do both discovery and automated repurposing | Use both | Source ideas with Video Searching Agent, then generate deliverables with VEA |
Shared Foundation
- Video Understanding: Both agents can leverage Memories.ai for video metadata, transcripts, and semantic understanding.
- Agentic Execution: Both projects orchestrate multiple tools/components instead of single-shot prompting.
- Extensible by Design: You can add tools, prompts, models, and routing logic for your own domain.
Typical Integration Path
- Pick the agent matching your workflow objective.
- Configure required keys and environment variables.
- Run locally and validate with a small set of real videos.
- Customize prompts, tools, and scoring logic for production use.
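For step 2, a fail-fast check of required keys is a useful first validation. A minimal sketch, assuming a single `MEMORIES_API_KEY` variable (the actual variable names each agent expects are documented in its own README):

```python
import os

def check_env(required, env=None):
    """Return the subset of required variable names absent from env
    (defaults to os.environ)."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

# MEMORIES_API_KEY is a hypothetical name; consult each repo's README
# for the variables it actually reads.
missing = check_env(["MEMORIES_API_KEY"], env={})
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```

Running such a check before the first real video keeps configuration errors from surfacing mid-pipeline.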
Source repositories: Memories-ai-labs/video-searching-agent and Memories-ai-labs/vea-open-source.
