Batch Video Processing in n8n: FFmpeg at Scale

February 26, 2026 · RenderIO

One video is easy. A hundred is a workflow problem.

Processing a single video via API is straightforward: submit, poll, download. Processing 500 videos requires orchestration. You need batching, parallelism, error handling, and progress tracking.

n8n's Split in Batches node was designed for exactly this. Combined with RenderIO's API, you can process hundreds of videos without writing a single line of backend code.

Use the native RenderIO node

RenderIO has a partner-verified community node on the n8n marketplace. It provides a visual interface for running FFmpeg commands — including a "Run Multiple" operation that submits up to 10 parallel commands in a single call. Install it from Settings → Community Nodes → search "renderio".

For batch workflows, the node simplifies each iteration inside your Split in Batches loop. The examples below use HTTP Request nodes for full control, but the same FFmpeg commands work with the native node.

The batch video processing workflow

Here's the complete architecture:

Trigger → Get Video List → Split in Batches → Submit to RenderIO → Wait → Check Status → IF Complete → Collect Results → Done

Each batch item processes independently. Failed items don't block the rest.

Step 1: Get your video list

The video list can come from anywhere:

From a webhook:

{
  "videos": [
    { "url": "https://example.com/video1.mp4", "name": "output1.mp4" },
    { "url": "https://example.com/video2.mp4", "name": "output2.mp4" }
  ]
}

From Google Sheets: a sheet with columns video_url, output_name, and status.

From an S3 bucket (via HTTP Request): List objects in a bucket and map URLs.

From a database: Query your Postgres/MySQL for unprocessed video records.

The key requirement: each item needs a video URL and an output filename.
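If you pull from the Google Sheets source, a Code node can normalize the rows into that shape before batching. The sketch below assumes the video_url / output_name / status columns described above; the 'done' status value is a placeholder for whatever marker your sheet uses for already-processed rows:

```javascript
// Sketch: normalize sheet rows into { inputUrl, outputName } items.
// Assumes video_url / output_name / status columns; "done" is a
// placeholder status value for rows that were already processed.
function prepareSheetRows(rows) {
  return rows
    .filter(row => row.status !== 'done' && row.video_url)
    .map((row, i) => ({
      inputUrl: row.video_url,
      outputName: row.output_name || `processed_${i + 1}.mp4`
    }));
}

// In an n8n Code node:
// return prepareSheetRows($input.all().map(item => item.json))
//   .map(json => ({ json }));
```

Rows with an empty video_url are dropped rather than submitted, so a half-filled sheet doesn't produce failed jobs.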

Step 2: Prepare items (Code node)

If your source data isn't already in the right format, use a Code node:

const videos = $input.first().json.videos;

return videos.map(video => ({
  json: {
    inputUrl: video.url,
    outputName: video.name || `processed_${Date.now()}.mp4`
  }
}));

This outputs N items, one per video.

Step 3: Split in Batches

Add the Split in Batches node:

  • Batch Size: 5 (process 5 videos at a time)

Why 5 and not 100? Rate limits and reliability. Processing 5 at a time means:

  • You stay within API rate limits

  • Failed batches are smaller and easier to retry

  • You get results sooner (first batch finishes while others queue)

Adjust batch size based on your plan's rate limits.

Step 4: Submit each video (HTTP Request)

  • Method: POST

  • URL: https://renderio.dev/api/v1/run-ffmpeg-command

  • Authentication: Header Auth

  • Body:

{
  "ffmpeg_command": "-i {{in_video}} -c:v libx264 -preset fast -crf 23 -c:a aac {{out_video}}",
  "input_files": {
    "in_video": "{{ $json.inputUrl }}"
  },
  "output_files": {
    "out_video": "{{ $json.outputName }}"
  }
}

Enable "Continue on Fail" so one bad video doesn't kill the batch.

Step 5: Poll for each result

Add a Wait node set to 5 seconds, then an HTTP Request to check the status:

  • Method: GET

  • URL: https://renderio.dev/api/v1/commands/{{ $json.command_id }}

  • Authentication: Header Auth

Follow it with an IF node: {{ $json.status }} equals completed.

  • True: Continue to results

  • False: Check if failed. If neither, loop back to Wait.
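The IF branching above collapses into one small decision helper if you prefer a Code node over chained IF nodes — a sketch, using the lowercase status values checked elsewhere in this workflow, with a poll cap so a stuck job can't wait forever:

```javascript
// Sketch: decide the next action for a polled command.
// Status values ("completed", "failed") follow the checks used in
// this workflow; maxPolls caps the loop (60 polls x 5s = 5 minutes).
function nextPollAction(status, pollCount, maxPolls = 60) {
  if (status === 'completed') return 'collect';
  if (status === 'failed') return 'log_error';
  if (pollCount >= maxPolls) return 'timeout';
  return 'wait'; // loop back to the Wait node
}
```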

Step 6: Collect results

After each item completes, store the result. You can write to Google Sheets, update a database record, send a Slack notification, or use n8n's Aggregate node to collect all results and process them together at the end.
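If you use the Aggregate node, a Code node after it can reduce the collected items into a summary object for the final notification. A sketch, assuming each result carries the status, error, and originalInput fields produced earlier in the workflow:

```javascript
// Sketch: reduce collected batch results into a summary for the
// end-of-run notification. Assumes each result has a status field
// and, on failure, error / originalInput fields.
function summarizeResults(results) {
  const failed = results.filter(r => r.status === 'failed');
  return {
    total: results.length,
    succeeded: results.length - failed.length,
    failed: failed.length,
    errors: failed.map(r => ({ input: r.originalInput, error: r.error }))
  };
}
```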

Performance benchmarks

Processing time varies by video duration, resolution, and the FFmpeg operation. Here's what to expect on RenderIO's infrastructure:

Video type      | Operation                  | Typical time
30s 1080p clip  | H.264 transcode (CRF 23)   | 3-8 seconds
2min 1080p      | Resize to 720p + compress  | 10-20 seconds
30s 4K          | Downscale to 1080p         | 8-15 seconds
5min 1080p      | Full re-encode + watermark | 30-60 seconds
10min 1080p     | Two-pass compress          | 60-120 seconds

These numbers assume the source file downloads quickly (CDN-hosted or S3). If the source URL is slow, add download time on top.

Batch size recommendations by plan

  • Starter (500 commands/mo): Batch size 3-5. ~16 videos/day, good for testing and light production.

  • Growth (1,000 commands/mo): Batch size 5-10. ~33 videos/day. Covers most content workflows.

  • Scale (5,000 commands/mo): Batch size 10-20. ~166 videos/day. E-commerce catalogs and multi-platform distribution.

The batch size controls how many videos you submit before waiting for results. Larger batches are faster overall but use more concurrent API slots.
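The daily-throughput figures above are just the monthly quota spread over 30 days — a back-of-the-envelope sketch, treating each video as one submitted command and ignoring polls and retries:

```javascript
// Sketch: rough videos/day from a monthly command quota.
// One submitted command per video; polls and retries not counted.
function dailyThroughput(commandsPerMonth) {
  return Math.floor(commandsPerMonth / 30);
}
```

Plugging in the plan quotas gives the numbers listed: 500 → 16, 1,000 → 33, 5,000 → 166. If your workflow resubmits failures, budget extra commands per video and scale these down accordingly.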

Complete error handling with Code node

The workflow outline above covers the happy path. Real batch jobs need proper error handling. Here's a complete Code node that checks each command's status and applies the retry logic:

// Error-handling Code node
// Place after the HTTP Request (check status) node
const item = $input.first().json;
const maxRetries = 3;
const currentRetry = item._retryCount || 0;

if (item.status === 'completed') {
  return [{
    json: {
      status: 'completed',
      outputUrl: item.output_files?.out_video?.storage_url,
      originalInput: item._originalInput,
      processingTime: item.total_processing_seconds
    }
  }];
}

if (item.status === 'failed') {
  const errorMsg = item.error || 'Unknown error';

  // Retry on transient errors
  if (currentRetry < maxRetries && isTransient(errorMsg)) {
    return [{
      json: {
        ...item._originalPayload,
        _retryCount: currentRetry + 1,
        _originalInput: item._originalInput,
        _resubmit: true
      }
    }];
  }

  // Permanent failure - log and continue
  return [{
    json: {
      status: 'failed',
      error: errorMsg,
      originalInput: item._originalInput,
      retries: currentRetry
    }
  }];
}

// Still processing - will loop back to Wait
return [{ json: { ...item, _pollCount: (item._pollCount || 0) + 1 } }];

function isTransient(error) {
  return error.includes('timeout') || 
         error.includes('rate limit') || 
         error.includes('502') ||
         error.includes('503');
}

This handles three scenarios: completed (pass through), failed with retry (resubmit), and failed permanently (log and move on). The _retryCount tracker prevents infinite retry loops.

Complete workflow with error handling

Here's the robust version:

Webhook Trigger
  → Code Node (prepare items)
  → Split in Batches (size: 5)
    → HTTP Request (submit) [Continue on Fail: ON]
    → IF (submit succeeded?)
      → True:
        → Set (store command_id + attempt counter = 0)
        → Wait (5s)
        → HTTP Request (check status)
        → Switch:
          → "completed": Store result → Back to Split in Batches
          → "failed": Log error → Back to Split in Batches
          → "processing": Increment counter
            → IF (counter > 60):
              → True: Log timeout → Back to Split in Batches
              → False: Back to Wait
      → False:
        → Log error → Back to Split in Batches
  → All batches done
  → Aggregate results
  → Send summary notification

The counter prevents infinite loops. 60 iterations at 5 seconds each = 5 minutes maximum wait per video.

Parallel batch processing

For faster throughput, submit all videos first, then poll all at once.

Phase 1: Submit all

Get Videos → Split in Batches (size: 10)
  → HTTP Request (submit)
  → Store command_id in array
  → Back to Split in Batches

Phase 2: Poll all

Code Node (get all command_ids)
  → Split in Batches (size: 10)
    → HTTP Request (check status)
    → IF complete?
      → True: Collect result
      → False: Re-queue for next poll cycle

This approach is faster because you don't wait for each video to finish before submitting the next one. The downside is more complex workflow logic.
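The core of Phase 2 is the split at the IF node: each poll cycle separates finished jobs from those that need re-queuing. A sketch of that partition, assuming each polled item carries a command_id and a status field:

```javascript
// Sketch: split one poll cycle's results into finished vs. still
// pending. Pending items are re-queued for the next poll cycle.
function partitionPollResults(items) {
  const done = [];
  const pending = [];
  for (const item of items) {
    if (item.status === 'completed' || item.status === 'failed') {
      done.push(item);
    } else {
      pending.push(item);
    }
  }
  return { done, pending };
}
```

The cycle repeats until `pending` is empty, at which point `done` holds every result, successes and failures alike.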

Dynamic FFmpeg commands per video

Not every video needs the same processing. Use expressions to build commands dynamically. For example, you might resize some clips and merge others into a single video depending on your workflow:

// Code node: generate per-video commands
const videos = $input.all();

return videos.map(item => {
  const v = item.json;
  let command;

  if (v.targetPlatform === "tiktok") {
    command = '-i {{in_video}} -vf "scale=1080:1920:force_original_aspect_ratio=decrease,pad=1080:1920:(ow-iw)/2:(oh-ih)/2:black" -c:v libx264 -crf 23 -c:a aac {{out_video}}';
  } else if (v.targetPlatform === "youtube") {
    command = '-i {{in_video}} -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:black" -c:v libx264 -crf 22 -c:a aac {{out_video}}';
  } else {
    command = '-i {{in_video}} -c:v libx264 -crf 23 -c:a aac {{out_video}}';
  }

  return {
    json: {
      inputUrl: v.url,
      outputName: `${v.targetPlatform}_${v.id}.mp4`,
      ffmpegCommand: command
    }
  };
});

Then in the HTTP Request body, reference {{ $json.ffmpegCommand }} instead of a hardcoded command. If you need to transcode to different codecs per video (H.265 for archival, H.264 for web delivery), the same pattern works with codec-specific commands.

Monitoring batch progress

Add a counter to track progress. Use an n8n Set node to maintain state:

  • Before batch: Set totalVideos = number of items

  • After each item: Increment processedCount

  • At intervals: Send Slack message with {{ $json.processedCount }}/{{ $json.totalVideos }} processed

Or use n8n's built-in execution view to monitor in real time.
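The Slack message from the list above can be built in a Code node. A sketch — the every-N interval is an assumption to avoid sending one message per video on large batches:

```javascript
// Sketch: build a progress message, but only at a fixed interval
// (and at completion) so a 500-video batch doesn't produce 500
// Slack messages. Returns null when no message should be sent.
function progressMessage(processedCount, totalVideos, every = 10) {
  if (processedCount % every !== 0 && processedCount !== totalVideos) {
    return null;
  }
  const pct = Math.round((processedCount / totalVideos) * 100);
  return `${processedCount}/${totalVideos} processed (${pct}%)`;
}
```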

Handling rate limits

If you submit too many videos too fast, the API returns 429 Too Many Requests. Handle this:

  1. Use smaller batch sizes (3-5 instead of 10-20)

  2. Add a Wait node between batch submissions (2-3 seconds)

  3. On 429 response: Wait 10 seconds and retry

IF: {{ $json.statusCode }} equals 429
  → Wait (10 seconds)
  → Retry submission

Combining batch processing with compression

A common pattern: batch process a library of videos to reduce storage costs. You can compress before uploading to a CDN, or compress as part of a migration pipeline. The FFmpeg command for aggressive but visually clean compression:

{
  "ffmpeg_command": "-i {{in_video}} -c:v libx264 -crf 26 -preset slow -c:a aac -b:a 96k -movflags +faststart {{out_video}}",
  "input_files": { "in_video": "{{ $json.inputUrl }}" },
  "output_files": { "out_video": "{{ $json.outputName }}" }
}

CRF 26 with -preset slow produces smaller files than CRF 23 with -preset fast, and the visual difference is hard to spot on talking-head or product video content.

For n8n-specific FFmpeg setup, the n8n FFmpeg cloud guide covers authentication and common configuration issues. The broader n8n video processing guide walks through single-video workflows if you want to start simpler before scaling to batches.

Pricing

The Growth plan at $29/mo covers 1,000 commands. For a batch workflow processing 5 videos/day, that gives you 6+ commands per video (submit + poll + retries) with headroom.

If you need to resize for multiple platforms, the n8n resize video guide has platform-specific commands. For TikTok-specific pipelines, the TikTok content automation guide covers variation generation and scheduling across multiple accounts.

FAQ

How many videos can I batch process at once?

There's no hard limit on the number of videos per batch. The constraint is your plan's rate limit. On the Growth plan, you can submit about 10 requests per minute. A batch of 100 videos at 5-per-batch with 5-second waits takes roughly 2 minutes to submit, then processing happens in parallel on RenderIO's side.

What happens if one video in the batch fails?

If you enable "Continue on Fail" on the HTTP Request node, the workflow keeps going. The failed item gets logged, the rest of the batch processes normally. The error handling Code node above shows how to retry transient failures and skip permanent ones.

Can I process videos in parallel instead of sequentially?

Yes. The "Parallel batch processing" section above covers the two-phase approach: submit all jobs first, then poll for all results. This is faster but requires more complex workflow logic. For most use cases, sequential batching with a batch size of 5-10 is fast enough.

How do I track which videos succeeded and which failed?

Use an Aggregate node at the end of the workflow to collect all results into a single summary. Write the results to Google Sheets or a database with columns for input URL, output URL, status, and error message. This gives you a complete audit trail.

What batch size should I use?

Start with 5. If you're on the Scale plan and processing many short clips, you can go up to 20. The sweet spot depends on your video sizes and processing complexity. Larger batches mean more API calls in flight at once, which is faster but burns through rate limits quicker.