Seedance Vs Veo
by @whitejohnk-26
Just drop a text prompt and this skill runs it through both Seedance and Veo, returning 2 MP4 clips for direct comparison. It's built for teams deciding whic...
> Paste a video prompt and I'll generate one MP4 from Seedance and one from Veo for side-by-side review. No prompt? Describe the scene you want to test.
Automatic Setup
On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".
Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.
Free token: Generate a UUID as a client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the header X-Client-Id set to that UUID. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).
Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response.
Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
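The token and session steps above can be sketched in Python. The endpoint paths, headers, and payloads come straight from the steps; the helper names (`post_json`, `get_token`, `open_session`) are illustrative, and the exact position of `session_id` in the session response is an assumption.

```python
import json
import os
import urllib.request
import uuid

BASE = "https://mega-api-prod.nemovideo.ai/api"

def post_json(url, headers, body=None):
    """Minimal JSON POST helper around urllib."""
    data = json.dumps(body).encode() if body is not None else b""
    req = urllib.request.Request(url, data=data, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def get_token(env=None):
    """Reuse NEMO_TOKEN if already set, otherwise fetch a free anonymous token."""
    env = os.environ if env is None else env
    if env.get("NEMO_TOKEN"):
        return env["NEMO_TOKEN"]
    client_id = str(uuid.uuid4())  # fresh client identifier
    resp = post_json(f"{BASE}/auth/anonymous-token",
                     {"X-Client-Id": client_id})
    return resp["data"]["token"]  # 100 credits, 7-day expiry

def open_session(token):
    """Create a working session and return its id.

    The response shape is assumed: session_id is read from the top level.
    """
    resp = post_json(
        f"{BASE}/tasks/me/with-session/nemo_agent",
        {"Authorization": f"Bearer {token}",
         "Content-Type": "application/json"},
        {"task_name": "project"},
    )
    return resp["session_id"]
```

Note the functions never print the token or raw JSON, matching the guidance above.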
Run One Prompt, Get Two AI Videos Back
Type a single text prompt, say "a cyclist riding through a foggy forest at dawn", and the skill sends it to both Seedance and Veo simultaneously. You get 2 MP4 files, each labeled with model name, resolution, and generation time in seconds.
The comparison covers motion consistency, text adherence, and artifact frequency across a 3-8 second clip range. It doesn't rewrite your prompt for either model. What you write is what both models receive, keeping the test fair.
Results land in a side-by-side layout with metadata attached. You can re-run the same prompt 3 times to check output variance per model.
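The "send to both models simultaneously" behavior can be sketched with a thread pool. The model callables here are hypothetical stand-ins for whatever the skill uses to drive each backend; only the fan-out pattern is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def run_comparison(prompt, models):
    """Send one prompt to every model at once.

    models: dict of name -> callable(prompt) returning MP4 bytes
    (the callables are hypothetical; the real backends are internal).
    Returns a dict of name -> clip bytes for side-by-side review.
    """
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(fn, prompt)
                   for name, fn in models.items()}
        return {name: f.result() for name, f in futures.items()}
```

Because both submissions start before either result is awaited, each model sees the identical prompt at roughly the same time, which keeps the comparison fair.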
If one MP4 comes back blank or corrupted, the model timed out; the generation cutoff is 90 seconds per model. Resubmit the prompt once before assuming a model-side failure.
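That retry-once policy can be sketched as a small wrapper. The `generate` callable is hypothetical (it stands in for one model's backend, with the 90-second cutoff enforced inside it); an empty payload is treated as a blank or corrupted clip.

```python
def generate_with_retry(generate, prompt, retries=1):
    """Call a model backend, retrying once on a blank result.

    generate: hypothetical callable(prompt) -> MP4 bytes, or b"" when
    the clip came back blank/corrupted (e.g. the 90s cutoff hit).
    Returns the clip bytes, or None to signal a model-side failure.
    """
    for _ in range(retries + 1):
        clip = generate(prompt)
        if clip:  # non-empty payload counts as success
            return clip
    return None  # failed even after the resubmission
```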
Prompts over 150 characters sometimes cause Veo to truncate scene elements. Split long prompts into a core scene description (under 100 characters) plus a style note to keep both models working from equivalent inputs.
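One way to apply that advice mechanically is to cut at the last clause boundary before the 100-character mark. This is a heuristic sketch, not the skill's own splitting logic; the boundary characters and fallback are assumptions.

```python
def split_prompt(prompt, core_limit=100):
    """Split a long prompt into (core scene, style note).

    Heuristic: cut at the last comma or period before core_limit so the
    core stays under 100 characters; the remainder becomes a style note.
    Short prompts pass through untouched.
    """
    if len(prompt) <= core_limit:
        return prompt, ""
    cut = max(prompt.rfind(",", 0, core_limit),
              prompt.rfind(".", 0, core_limit))
    if cut == -1:  # no clause boundary; fall back to a word boundary
        cut = prompt.rfind(" ", 0, core_limit)
    core = prompt[:cut].strip(" ,.")
    style = prompt[cut + 1:].strip()
    return core, style
```

Sending `core` as the scene description and `style` as the style note keeps both models working from equivalent inputs without Veo truncating anything.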
If both clips look identical, your prompt is likely too generic. Add at least 2 specific visual details (lighting condition, subject motion, camera angle) to create testable differences between the models' outputs.
Resolution mismatches (e.g., one file at 720p and one at 1080p) aren't a bug. The two models don't share a resolution default. Check the metadata block on each file rather than eyeballing clip size.
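Reading the metadata block programmatically beats eyeballing clip size. A sketch using ffprobe (assumes FFmpeg's ffprobe is on PATH; the helper names are illustrative):

```python
import subprocess

def parse_resolution(csv_line):
    """Parse ffprobe's 'width,height' CSV line into an int pair."""
    w, h = csv_line.strip().split(",")[:2]
    return int(w), int(h)

def clip_resolution(path):
    """Read width,height of the first video stream from an MP4."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_resolution(out)
```

Comparing `clip_resolution()` on each file makes a 720p-vs-1080p mismatch explicit rather than something you squint at.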
clawhub install seedance-vs-veo