Generating unique variations shouldn't be manual work
You have a video. You need 10 unique copies. Each one needs to be different enough that duplicate detection can't match them, but identical enough that viewers see the same content.
Doing this manually means running FFmpeg 10 times with different parameters. That's tedious and error-prone. A script does it in seconds.
This post covers three approaches: shell scripts (local), Python scripts (API), and n8n workflows (no code).
Approach 1: Shell script (local FFmpeg)
If you have FFmpeg installed locally, this bash script generates N variations:
Run it:
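Assuming the script is saved as `generate_variations.sh` (a name chosen here for illustration):

```shell
chmod +x generate_variations.sh
./generate_variations.sh input.mp4 10
```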
Output:
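Ten new files, one per variation, following whatever naming pattern the script uses — for example:

```
variation_1.mp4
variation_2.mp4
...
variation_10.mp4
```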
Pros: Fast, no API costs, works offline. Cons: Uses your local CPU, doesn't scale, blocks your machine during encoding.
The same batch pattern works for other conversions too. If you need to convert a folder of MP4s to GIF, the approach is similar but with palette optimization in the filter chain.
Approach 2: Python script with API
Offload the encoding to RenderIO. Your machine stays free. Works from any environment.
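A sketch of the API approach using only the standard library. The endpoint URL, auth header, and payload fields below are assumptions for illustration — check the RenderIO API reference for the real shapes:

```python
import concurrent.futures
import json
import os
import random
import urllib.request

# Hypothetical endpoint and payload shape -- confirm against the RenderIO docs.
API_URL = "https://api.renderio.example/v1/commands"
API_KEY = os.environ.get("RENDERIO_API_KEY", "")

def random_params():
    """Slight, viewer-invisible tweaks for one copy."""
    return {
        "crop": random.choice([0, 2, 4, 6]),                # px trimmed
        "bright": round(random.uniform(-0.03, 0.03), 3),    # eq brightness
        "speed": round(random.uniform(0.98, 1.02), 3),      # playback speed
    }

def build_command(i, p):
    """Assemble the FFmpeg command string for variation i."""
    return (
        f'-i {{input}} '
        f'-vf "crop=iw-{p["crop"]}:ih-{p["crop"]},'
        f'eq=brightness={p["bright"]},setpts=PTS/{p["speed"]}" '
        f'-af "atempo={p["speed"]}" variation_{i}.mp4'
    )

def submit(i, input_url):
    body = json.dumps({
        "input": input_url,
        "command": build_command(i, random_params()),
    }).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(f"Submitted variation {i}")
        return json.load(resp)

def submit_all(input_url, count=10):
    """Fire off all jobs at once; the API encodes them in parallel."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=count) as pool:
        return list(pool.map(lambda i: submit(i, input_url), range(1, count + 1)))

# Usage:
#   submit_all("https://example.com/source.mp4")
```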
Run it:
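With your API key in the environment (the script filename is an assumed example):

```shell
export RENDERIO_API_KEY=your_key_here
python batch_variations.py
```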
Output:
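If the script prints a line per submission, the output looks something like this — though the order may interleave, since requests run concurrently:

```
Submitted variation 1
Submitted variation 2
...
Submitted variation 10
```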
All 10 encode simultaneously in isolated containers. Total time: same as encoding one video (they run in parallel).
Approach 3: n8n workflow (no code)
For non-developers, build it as an n8n workflow:
Node 1: Webhook Trigger
Node 2: Code node (generate parameters)
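A sketch of that Code node (n8n Code nodes run JavaScript). The `crop` and `bright` field names match the expressions referenced later; `index` and `speed` are additional fields assumed here:

```javascript
// Emit one item per variation, each carrying its own randomized parameters.
const count = 10;
const out = [];
for (let i = 1; i <= count; i++) {
  out.push({
    json: {
      index: i,
      crop: Math.floor(Math.random() * 4) * 2,               // 0, 2, 4, or 6 px
      bright: +((Math.random() - 0.5) * 0.06).toFixed(3),    // -0.03 .. 0.03
      speed: +(0.98 + Math.random() * 0.04).toFixed(3),      // 0.98 .. 1.02
    },
  });
}
// In the n8n Code node, end with:
// return out;
```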
Node 3: Split in Batches (size: 5)
Node 4: HTTP Request (submit to RenderIO)
Reference {{ $json.crop }}, {{ $json.bright }}, etc. in the FFmpeg command.
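For example, the command field might look like this — `$json.input`, `$json.speed`, and `$json.index` assume the parameter node emitted fields with those names:

```
-i {{ $json.input }} -vf "crop=iw-{{ $json.crop }}:ih-{{ $json.crop }},eq=brightness={{ $json.bright }},setpts=PTS/{{ $json.speed }}" -af "atempo={{ $json.speed }}" variation_{{ $json.index }}.mp4
```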
Nodes 5-7: Poll and collect
Parameter randomization strategies
Fully random
Each parameter is independently random within a range. Simplest approach. Small chance of two similar combinations.
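A minimal sketch, with illustrative ranges:

```python
import random

def fully_random():
    """Each parameter drawn independently from its own range."""
    return {
        "crop": random.choice([0, 2, 4, 6]),                # px trimmed
        "bright": round(random.uniform(-0.03, 0.03), 3),    # eq brightness
        "speed": round(random.uniform(0.98, 1.02), 3),      # playback speed
    }

params = [fully_random() for _ in range(10)]
```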
Grid-based
Distribute parameters evenly across the range. Guarantees maximum diversity.
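One way to sketch this — instead of drawing randomly, space each parameter evenly so no two variations cluster:

```python
def spread(n, lo, hi):
    """n values spaced evenly across [lo, hi] -- maximum separation."""
    step = (hi - lo) / (n - 1)
    return [round(lo + i * step, 3) for i in range(n)]

variations = [
    {"bright": b, "speed": s}
    for b, s in zip(spread(10, -0.03, 0.03), spread(10, 0.98, 1.02))
]
```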
Seeded random (deterministic)
Same seed always produces the same parameters. Useful when you need to reproduce variations.
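A sketch using a private `random.Random` instance so the global generator is untouched:

```python
import random

def seeded_params(seed):
    rng = random.Random(seed)   # dedicated generator: fully reproducible
    return {
        "crop": rng.choice([0, 2, 4, 6]),
        "bright": round(rng.uniform(-0.03, 0.03), 3),
        "speed": round(rng.uniform(0.98, 1.02), 3),
    }
```

Calling `seeded_params(42)` twice returns identical parameters, so any variation can be regenerated exactly by recording only its seed.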
Verifying uniqueness
After batch generation, verify each file is unique:
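A quick check with standard tools (GNU coreutils assumed; the `variation_*.mp4` pattern is whatever naming scheme your script used). `uniq -c -w 32` counts lines whose first 32 characters — the MD5 hash — match:

```shell
# Hash every variation and print any duplicate groups.
check_unique() {
  md5sum "$@" | sort | uniq -c -w 32 | awk '$1 > 1'
}

# Empty output means every file is unique:
# check_unique variation_*.mp4
```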
If any two files have the same MD5 hash, the parameter randomization produced identical values. Re-run with wider ranges.
Scaling considerations
| Variations | API Processing Time | Approach |
|------------|---------------------|----------|
| 1-5 | 1-3 min | Any approach |
| 5-20 | 2-5 min | API (parallel) |
| 20-50 | 3-8 min | API (parallel) |
| 50-100 | 5-15 min | API (batched parallel) |
| 100+ | 15+ min | API (batched, monitor rate limits) |
All variations process in parallel on the API. The main bottleneck at high volume is rate limiting. Batch your submissions to stay within limits.
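Batching can be sketched as a generator that pauses between groups. The batch size and pause here are placeholders — tune them to your plan's actual rate limits:

```python
import time

def batched(jobs, batch_size=20, pause=5.0):
    """Yield jobs in groups, pausing between groups to stay under rate limits."""
    for start in range(0, len(jobs), batch_size):
        yield jobs[start:start + batch_size]
        if start + batch_size < len(jobs):
            time.sleep(pause)

# for group in batched(all_jobs):
#     submit every job in `group`; the generator sleeps before yielding the next
```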
Get started
Pick your approach:
Shell script: For quick local generation
Python + API: For reliable, scalable generation
n8n workflow: For no-code automation
The Growth plan ($99/mo, 20,000 commands) covers most needs as volume grows. For TikTok-specific automation, the TikTok content automation pipeline in n8n builds on these techniques. Dropshippers can check the dropshipping video automation guide for a supplier-to-TikTok pipeline.