Postproduction API Guide
Scene-level endpoints for editing pipelines, timeline prep, and structured postproduction metadata.
Base URL: https://api.creatornode.io/postproduction

What is the Postproduction API?
The Postproduction API converts creative assets into structured metadata that is easy to plug into editing tools, timeline builders, and automation workflows.
- Compose OTIO - send an explicit manifest of assets, trims, transitions, and audio layers and receive a deterministic OpenTimelineIO artifact for downstream editorial workflows.
- Describe Scenes - upload ordered scene images plus optional narration context and receive concise scene descriptions, optional cue fields, and stable scene mapping by index.
- Scene Timestamps - upload narration audio plus ordered scenes with required startCueText and anchorText fields, and receive suggested scene start times and transition points on the final timeline.
📼 Deterministic timeline export
Compose explicit manifests into OpenTimelineIO artifacts with stable clip trims, transitions, and layered audio decisions.
🎬 Scene-first workflows
Describe storyboard frames, then align downstream scene cue fields back to narration audio for editing workflows.
⏱️ Timeline-ready timing
Return ordered scene start times and transition recommendations you can feed into editors, automations, and cut planners.
📦 Creator-native inputs
Use standard multipart form data for scene images or narration audio plus structured metadata in the same request.
Endpoints
Click an endpoint to see the full guide — examples, tips & tricks, and limits.
Compose OTIO
Compose an OpenTimelineIO timeline from an explicit manifest and deterministic editorial timing.
Describe Scenes
Generate concise per-scene descriptions from uploaded scene images and optional narration context.
Scene Timestamps
Align ordered scene descriptions and optional cue hints to narration audio, then return transition timestamps.
Credit Costs
Scene metadata generation
| Endpoint | Cost | Description | |
|---|---|---|---|
| /postproduction/v1/compose-otio | 4 credits | Deterministic timeline composition from an explicit asset manifest. Returns an OTIO artifact in file mode by default, with optional JSON wrapper transport for API-first consumers. Free tier: max 20 manifest items, 180 seconds total timeline duration. Premium: max 200 manifest items, 3600 seconds total timeline duration. | Full guide → |
| /postproduction/v1/describe-scenes | 5+ credits | Includes up to 5 images, then +1 credit per additional image. Returns scene descriptions you can use directly in timeline and editing workflows. With narration context, responses may also include cue hints for downstream alignment. Premium and Enterprise can also mark up to 5 scenes with sceneOptions[].extraDetail for deeper analysis at +1 credit each. Free tier: max 5 images (2 MB each), 2K chars narration, 200 chars/description. Premium: max 50 images (5 MB each), 20K chars narration, 500 chars/description. | Full guide → |
| /postproduction/v1/scene-timestamps | 9+ credits | Starts at 9 credits. That minimum already includes the 6-credit request base and the first started 5-minute audio block. Each additional started 5-minute audio block adds +3 credits, then +1 credit per additional 10-scene block after the first 10 scenes. Audio file size is a tier limit only, not a billing dimension. Returns timeline-aligned scene start times and transition recommendations from required per-scene cue fields. Free tier: max 5 MB audio, 5 min duration, 10 scenes, 5K narration chars, 4K total scene text. Premium: max 25 MB audio, 20 min duration, 100 scenes, 50K narration chars, 40K total scene text. | Full guide → |
See the Pricing page for credit packages and tier comparison.
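The per-endpoint credit formulas above can be sketched as small helpers. This is an illustrative sketch only; the function names are ours, not part of the API, and the math simply restates the billing rules from the table (started 5-minute audio blocks at +3 credits beyond the 9-credit minimum, and +1 credit per started 10-scene block after the first 10 scenes):

```javascript
// Sketch of the scene-timestamps formula: 6-credit base, +3 per started
// 5-minute audio block (first block included in the 9-credit minimum),
// +1 per started 10-scene block after the first.
function sceneTimestampsCredits(audioSeconds, sceneCount) {
  const base = 6;
  const audioBlocks = Math.max(1, Math.ceil(audioSeconds / 300)); // started 5-min blocks
  const sceneBlocks = Math.max(1, Math.ceil(sceneCount / 10));    // started 10-scene blocks
  return base + 3 * audioBlocks + (sceneBlocks - 1);
}

// Sketch of the describe-scenes formula: 5 credits covers the first 5 images,
// +1 per additional image, +1 per scene marked with sceneOptions[].extraDetail.
function describeScenesCredits(imageCount, extraDetailScenes = 0) {
  return 5 + Math.max(0, imageCount - 5) + extraDetailScenes;
}

console.log(sceneTimestampsCredits(240, 8));  // 4-min audio, 8 scenes → 9 (the minimum)
console.log(sceneTimestampsCredits(720, 25)); // 12-min audio, 25 scenes → 17
console.log(describeScenesCredits(8, 2));     // 8 images, 2 extraDetail scenes → 10
```

Both functions round up by "started" block, matching the table: a 12-minute narration spans three started 5-minute blocks, and 25 scenes span three started 10-scene blocks.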
Usage Examples
Compose an OTIO timeline
Send one explicit manifest and receive a deterministic OTIO timeline artifact in file mode by default:
// Node.js example
const response = await fetch('https://api.creatornode.io/postproduction/v1/compose-otio', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.CREATORNODE_KEY,
},
body: JSON.stringify({
project: {
name: 'Forest Morning Cut',
fps: 24,
resolution: { w: 1920, h: 1080 },
audioSampleRate: 48000,
},
folder: {
items: [
{ id: 'img-opening', kind: 'image', path: 'storyboards/forest-opening.png', meta: { w: 1920, h: 1080 } },
{ id: 'vid-stream', kind: 'video', path: 'rushes/stream-walk.mp4', meta: { durationSec: 8, fps: 24, hasAudio: true } },
{ id: 'aud-music', kind: 'audio', path: 'audio/forest-piano.wav', meta: { durationSec: 6, sampleRate: 48000, channels: 2 } },
],
},
intent: {
kind: 'explicit',
videoSequence: [
{ itemId: 'img-opening', fromSec: 0, toSec: 2, clipNameHint: 'Opening Still' },
{ itemId: 'vid-stream', fromSec: 1, toSec: 5, clipNameHint: 'Forest Walk' },
],
audioLayers: [
{ itemId: 'aud-music', fromSec: 0, toSec: 5, placement: 'trimToFit', role: 'music' },
],
},
output: {
includeClipNames: true,
otioSchema: 'Timeline.1',
},
}),
});
const otioArtifact = await response.text();
console.log(response.headers.get('content-disposition'));

Describe three storyboard scenes
Upload images and metadata as multipart form data, then consume structured scene descriptions:
// Node.js example
const fs = require('fs');
const form = new FormData();
form.append('images', new Blob([fs.readFileSync('scene-01.png')]), 'scene-01.png');
form.append('images', new Blob([fs.readFileSync('scene-02.png')]), 'scene-02.png');
form.append('images', new Blob([fs.readFileSync('scene-03.png')]), 'scene-03.png');
form.append('metadata', JSON.stringify({
narrationText: 'A cyclist crosses the bridge, then enters a crowded market, and ends on a sunset skyline.',
sceneIds: ['scene-1', 'scene-2', 'scene-3'],
hints: { languageCode: 'en', style: 'normal' }
}));
const response = await fetch('https://api.creatornode.io/postproduction/v1/describe-scenes', {
method: 'POST',
headers: { 'X-API-Key': process.env.CREATORNODE_KEY },
body: form,
});
const result = await response.json();
console.log(result.data.scenes);