One video is easy. A hundred is a workflow problem.
Processing a single video via API is straightforward: submit, poll, download. Processing 500 videos requires orchestration. You need batching, parallelism, error handling, and progress tracking.
n8n's Split in Batches node was designed for exactly this. Combined with RenderIO's API, you can process hundreds of videos without writing a single line of backend code.
Use the native RenderIO node
RenderIO has a partner-verified community node on the n8n marketplace. It provides a visual interface for running FFmpeg commands — including a "Run Multiple" operation that submits up to 10 parallel commands in a single call. Install it from Settings → Community Nodes → search "renderio".
For batch workflows, the node simplifies each iteration inside your Split in Batches loop. The examples below use HTTP Request nodes for full control, but the same FFmpeg commands work with the native node.
The batch video processing workflow
Here's the complete architecture:
Trigger → Get Video List → Split in Batches → Submit to RenderIO → Wait → Check Status → IF Complete → Collect Results → Done
Each batch item processes independently. Failed items don't block the rest.
Step 1: Get your video list
The video list can come from anywhere:
From a webhook: an n8n Webhook node that receives a JSON array of video URLs from your app or form.
From Google Sheets: a sheet with columns video_url, output_name, status.
From an S3 bucket (via HTTP Request): List objects in a bucket and map URLs.
From a database: Query your Postgres/MySQL for unprocessed video records.
The key requirement: each item needs a video URL and an output filename.
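Whatever the source, it helps to normalize each item to a consistent shape before the loop. The field names below are this guide's convention, not something the API requires:

```json
[
  { "video_url": "https://cdn.example.com/raw/clip-001.mp4", "output_name": "clip-001-720p.mp4", "status": "pending" },
  { "video_url": "https://cdn.example.com/raw/clip-002.mp4", "output_name": "clip-002-720p.mp4", "status": "pending" }
]
```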
Step 2: Prepare items (Code node)
If your source data isn't already in the right format, use a Code node:
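A minimal sketch of that Code node, assuming rows with video_url and an optional output_name (in n8n you'd read the input with $input.all() and return the mapped items; a sample payload stands in here so the logic runs anywhere):

```javascript
// Sample input standing in for $input.all() in n8n.
const incoming = [
  { json: { video_url: "https://cdn.example.com/a.mp4", output_name: "a-720p.mp4" } },
  { json: { video_url: "https://cdn.example.com/b.mp4" } }, // output_name missing
];

// One output item per video: fall back to a generated output name and seed
// a retry counter used later by the error-handling step.
const prepared = incoming.map((item, i) => ({
  json: {
    video_url: item.json.video_url,
    output_name: item.json.output_name || `video-${i + 1}.mp4`,
    _retryCount: 0,
  },
}));
// In n8n: return prepared;
```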
This outputs N items, one per video.
Step 3: Split in Batches
Add the Split in Batches node:
Batch Size: 5 (process 5 videos at a time)
Why 5 and not 100? Rate limits and reliability. Processing 5 at a time means:
You stay within API rate limits
Failed batches are smaller and easier to retry
You get results sooner (first batch finishes while others queue)
Adjust batch size based on your plan's rate limits.
Step 4: Submit each video (HTTP Request)
Method: POST
URL: https://renderio.dev/api/v1/run-ffmpeg-command
Authentication: Header Auth
Body:
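Something like the following, with expressions pulling the URL and filename from the current item (the field names here are illustrative — check RenderIO's API reference for the exact request schema):

```json
{
  "command": "-i {{ $json.video_url }} -c:v libx264 -crf 23 -preset fast -c:a aac {{ $json.output_name }}"
}
```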
Enable "Continue on Fail" so one bad video doesn't kill the batch.
Step 5: Poll for each result
Add a Wait node set to 5 seconds, then an HTTP Request to check the status:
Method: GET
URL: https://renderio.dev/api/v1/commands/{{ $json.command_id }}
Authentication: Header Auth
Follow it with an IF node: {{ $json.status }} equals completed.
True: Continue to results.
False: Check if the status is failed. If it's neither completed nor failed, loop back to the Wait node.
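The branching can be written out as plain logic (the maxPolls guard anticipates the loop counter used in the error-handling version later in this guide):

```javascript
// Decide what to do with a status value from the status endpoint.
function nextAction(status, pollCount, maxPolls = 60) {
  if (status === "completed") return "collect";   // True branch: go to results
  if (status === "failed") return "handle-error"; // permanent failure path
  if (pollCount >= maxPolls) return "timeout";    // safety valve against endless polling
  return "wait";                                  // loop back to the Wait node
}
```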
Step 6: Collect results
After each item completes, store the result. You can write to Google Sheets, update a database record, send a Slack notification, or use n8n's Aggregate node to collect all results and process them together at the end.
Performance benchmarks
Processing time varies by video duration, resolution, and the FFmpeg operation. Here's what to expect on RenderIO's infrastructure:
| Video type | Operation | Typical time |
| --- | --- | --- |
| 30s 1080p clip | H.264 transcode (CRF 23) | 3-8 seconds |
| 2min 1080p | Resize to 720p + compress | 10-20 seconds |
| 30s 4K | Downscale to 1080p | 8-15 seconds |
| 5min 1080p | Full re-encode + watermark | 30-60 seconds |
| 10min 1080p | Two-pass compress | 60-120 seconds |
These numbers assume the source file downloads quickly (CDN-hosted or S3). If the source URL is slow, add download time on top.
Batch size recommendations by plan
Starter (500 commands/mo): Batch size 3-5. ~16 videos/day, good for testing and light production.
Growth (1,000 commands/mo): Batch size 5-10. ~33 videos/day. Covers most content workflows.
Scale (5,000 commands/mo): Batch size 10-20. ~166 videos/day. E-commerce catalogs and multi-platform distribution.
The batch size controls how many videos you submit before waiting for results. Larger batches are faster overall but use more concurrent API slots.
Complete error handling with Code node
The workflow outline above covers the happy path. Real batch jobs need proper error handling. Here's a complete Code node that wraps the submission and retry logic:
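A sketch of that Code node's core logic, assuming each item carries the status and error fields from the status check (in n8n you'd map this over $input.all() and route items by the route field):

```javascript
const maxRetries = 2;

// Classify one item after a status check: completed items pass through,
// transient failures are resubmitted up to maxRetries, permanent failures
// are logged and skipped.
function classify(item) {
  const { status, _retryCount = 0, error } = item.json;
  if (status === "completed") {
    return { json: item.json, route: "done" };
  }
  if (status === "failed" && _retryCount < maxRetries) {
    // Resubmit: bump the counter so a flaky job can't loop forever.
    return { json: { ...item.json, _retryCount: _retryCount + 1 }, route: "retry" };
  }
  // Permanent failure: keep the error message for the audit trail.
  return { json: { ...item.json, _error: error || "failed permanently" }, route: "skip" };
}
```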
This handles three scenarios: completed (pass through), failed with retry (resubmit), and failed permanently (log and move on). The _retryCount tracker prevents infinite retry loops.
Complete workflow with error handling
Here's the robust version:
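The loop counter can be sketched as a small Code node placed before the Wait node (the _pollCount field name is this example's convention):

```javascript
const MAX_POLLS = 60; // 60 iterations × 5 s Wait = 5 minutes maximum per video

// Increment the per-item poll counter; past the limit, mark the item failed
// so it exits through the error-handling path instead of looping forever.
function tick(item) {
  const polls = (item.json._pollCount || 0) + 1;
  if (polls > MAX_POLLS) {
    return { json: { ...item.json, status: "failed", _error: "poll timeout" } };
  }
  return { json: { ...item.json, _pollCount: polls } };
}
```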
The counter prevents infinite loops. 60 iterations at 5 seconds each = 5 minutes maximum wait per video.
Parallel batch processing
For faster throughput, submit all videos first, then poll all at once.
Phase 1: Submit all
Phase 2: Poll all
This approach is faster because you don't wait for each video to finish before submitting the next one. The downside is more complex workflow logic.
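The two phases look like this as plain logic — submit and checkStatus stand in for the two HTTP Request calls (both names are illustrative, not RenderIO API functions):

```javascript
// Phase 1: submit every video up front and collect command IDs.
// Phase 2: poll the whole set until each job is completed or failed.
async function processAll(videos, submit, checkStatus) {
  const jobs = [];
  for (const v of videos) jobs.push({ video: v, id: await submit(v) });

  const pending = new Set(jobs);
  while (pending.size > 0) {
    for (const job of [...pending]) {
      job.status = await checkStatus(job.id);
      if (job.status === "completed" || job.status === "failed") pending.delete(job);
    }
    // Wait 5 s between polling sweeps, but only if jobs are still running.
    if (pending.size > 0) await new Promise((r) => setTimeout(r, 5000));
  }
  return jobs;
}
```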
Dynamic FFmpeg commands per video
Not every video needs the same processing. Use expressions to build commands dynamically. For example, you might resize some clips and merge others into a single video depending on your workflow:
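One way to do this in a Code node — the task and width fields are this example's convention for deciding which command an item gets:

```javascript
// Attach an FFmpeg command to each item based on its task field.
// The HTTP Request node then references {{ $json.ffmpegCommand }}.
function buildCommand(item) {
  const { video_url, output_name, task, width } = item;
  let cmd;
  if (task === "resize") {
    // Scale to the requested width; -2 keeps the aspect ratio with an even height.
    cmd = `-i ${video_url} -vf scale=${width}:-2 -c:a copy ${output_name}`;
  } else {
    // Default: straight H.264 re-encode.
    cmd = `-i ${video_url} -c:v libx264 -crf 23 -c:a aac ${output_name}`;
  }
  return { ...item, ffmpegCommand: cmd };
}
```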
Then in the HTTP Request body, reference {{ $json.ffmpegCommand }} instead of a hardcoded command. If you need to transcode to different codecs per video (H.265 for archival, H.264 for web delivery), the same pattern works with codec-specific commands.
Monitoring batch progress
Add a counter to track progress. Use an n8n Set node to maintain state:
Before the batch: Set totalVideos = number of items
After each item: Increment processedCount
At intervals: Send a Slack message with {{ $json.processedCount }}/{{ $json.totalVideos }} processed
Or use n8n's built-in execution view to monitor in real time.
Handling rate limits
If you submit too many videos too fast, the API returns 429 Too Many Requests. Handle this:
Use smaller batch sizes (3-5 instead of 10-20)
Add a Wait node between batch submissions (2-3 seconds)
On 429 response: Wait 10 seconds and retry
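The wait-and-retry behavior can be sketched as a wrapper, where request stands in for the HTTP call (an illustrative name, not an n8n or RenderIO function):

```javascript
// Retry a request on 429 Too Many Requests: wait delayMs between attempts,
// give up after maxRetries retries.
async function withRateLimitRetry(request, maxRetries = 3, delayMs = 10000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await request();
    if (res.status !== 429) return res;
    if (attempt < maxRetries) await new Promise((r) => setTimeout(r, delayMs));
  }
  throw new Error("rate limited after retries");
}
```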
Combining batch processing with compression
A common pattern: batch process a library of videos to reduce storage costs. You can compress before uploading to a CDN, or compress as part of a migration pipeline. The FFmpeg command for aggressive but visually clean compression:
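As a request body for the submit step, that command looks something like this (the body schema is illustrative — the FFmpeg flags are the point):

```json
{
  "command": "-i {{ $json.video_url }} -c:v libx264 -crf 26 -preset slow -c:a aac -b:a 128k {{ $json.output_name }}"
}
```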
CRF 26 with -preset slow produces smaller files than CRF 23 with -preset fast, and the visual difference is hard to spot on talking-head or product video content.
For n8n-specific FFmpeg setup, the n8n FFmpeg cloud guide covers authentication and common configuration issues. The broader n8n video processing guide walks through single-video workflows if you want to start simpler before scaling to batches.
Pricing
The Growth plan at $29/mo covers 1,000 commands. For a batch workflow processing 5 videos/day, that gives you 6+ commands per video (submit + poll + retries) with headroom.
If you need to resize for multiple platforms, the n8n resize video guide has platform-specific commands. For TikTok-specific pipelines, the TikTok content automation guide covers variation generation and scheduling across multiple accounts.
FAQ
How many videos can I batch process at once?
There's no hard limit on the number of videos per batch. The constraint is your plan's rate limit. On the Growth plan, you can submit about 10 requests per minute. A batch of 100 videos at 5-per-batch with 5-second waits takes roughly 2 minutes to submit, then processing happens in parallel on RenderIO's side.
What happens if one video in the batch fails?
If you enable "Continue on Fail" on the HTTP Request node, the workflow keeps going. The failed item gets logged, the rest of the batch processes normally. The error handling Code node above shows how to retry transient failures and skip permanent ones.
Can I process videos in parallel instead of sequentially?
Yes. The "Parallel batch processing" section above covers the two-phase approach: submit all jobs first, then poll for all results. This is faster but requires more complex workflow logic. For most use cases, sequential batching with a batch size of 5-10 is fast enough.
How do I track which videos succeeded and which failed?
Use an Aggregate node at the end of the workflow to collect all results into a single summary. Write the results to Google Sheets or a database with columns for input URL, output URL, status, and error message. This gives you a complete audit trail.
What batch size should I use?
Start with 5. If you're on the Scale plan and processing many short clips, you can go up to 20. The sweet spot depends on your video sizes and processing complexity. Larger batches mean more API calls in flight at once, which is faster but burns through rate limits quicker.