Designing n8n workflow
The user wants an automation file for generating long videos with n8n. The pipeline will likely involve generating a script (perhaps with GPT), converting it to speech via TTS, gathering b-roll clips, assembling everything with FFmpeg, adding subtitles, and creating thumbnails. I'll package the workflow as a JSON export, ready to import into n8n.
Creating n8n workflow JSON
I’ll craft the n8n workflow as JSON, using environment variables for API keys and including the required "nodes" and "connections" keys so the file matches n8n's export structure. I’ll make sure it’s formatted correctly, but it won’t rely on real-time data. The user will get a downloadable link to the JSON file.
Structuring the workflow
To create the n8n workflow, I'll start by defining an entry node: a "Manual Trigger" for on-demand runs or a "Webhook" for automated ones. I’ll include a "Set" node for video settings (e.g., title, topic) and an OpenAI node to generate the script. The "id" field on each node may not be strictly necessary, but I'll keep the structure clean for import.
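A minimal sketch of the export structure described above, built as a Python dict and serialized to JSON. The node names, parameter values, and positions are illustrative assumptions; the essential constraint is that each key in "connections" matches the "name" of a node in "nodes".

```python
import json

# Sketch of an n8n export: a Manual Trigger feeding a Set node that
# holds the video settings. Names and positions are placeholders.
workflow = {
    "name": "Long Video Generator",
    "nodes": [
        {
            "parameters": {},
            "name": "Manual Trigger",
            "type": "n8n-nodes-base.manualTrigger",
            "typeVersion": 1,
            "position": [250, 300],
        },
        {
            "parameters": {
                "values": {
                    "string": [
                        {"name": "topic", "value": "deep-sea exploration"},
                        {"name": "title", "value": "Mysteries of the Deep"},
                    ]
                }
            },
            "name": "Video Settings",
            "type": "n8n-nodes-base.set",
            "typeVersion": 1,
            "position": [450, 300],
        },
    ],
    "connections": {
        "Manual Trigger": {
            "main": [[{"node": "Video Settings", "type": "main", "index": 0}]]
        }
    },
}

exported = json.dumps(workflow, indent=2)
```

Further nodes (OpenAI, TTS, ffmpeg steps) would be appended to "nodes" and chained the same way through "connections".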
Configuring OpenAI and TTS nodes
I’ll configure a generic OpenAI node for script generation, using a model like "gpt-4o-mini" or "gpt-4.1". Since n8n exports don’t include credential secrets, I’ll reference environment variables for the API key. For text-to-speech, I’ll use either ElevenLabs or OpenAI’s own TTS endpoint via an HTTP Request node to produce the audio.
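The TTS call can be sketched as a plain HTTP request spec, mirroring what the HTTP Request node would send. The endpoint and body fields follow OpenAI's `/v1/audio/speech` API; reading the key from `OPENAI_API_KEY` is an assumption about how the user stores credentials, and no network call is made here.

```python
import os

def build_tts_request(text: str, voice: str = "alloy") -> dict:
    """Build a request spec for OpenAI's speech endpoint.

    Sketch only: the Authorization header pulls the key from an
    environment variable so no secret is baked into the export.
    """
    return {
        "url": "https://api.openai.com/v1/audio/speech",
        "headers": {
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "json": {"model": "tts-1", "voice": voice, "input": text},
    }
```

In n8n, the same fields map onto the HTTP Request node's URL, header, and JSON-body parameters, with the response saved as a binary audio file.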
Creating a video automation pipeline
To meet the user's request for a ready-to-upload YouTube automation, I'll define a robust pipeline: trigger via Webhook, generate the script with OpenAI, produce narration with ElevenLabs, fetch b-roll from Pexels, assemble the media with ffmpeg, create subtitles, and prepare metadata. The final output will be a complete n8n JSON template with the nodes structured for efficient execution.
Designing the automation workflow
I'll set up a streamlined pipeline using basic nodes: a manual trigger, HTTP requests to OpenAI for script and TTS generation, and a Pexels search for video clips. The generated files will be saved to disk, and ffmpeg will combine everything into a final video the user can upload to YouTube. I'll generate the JSON from these settings.
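The Pexels b-roll lookup can likewise be sketched as a request spec. The endpoint and Authorization header follow the public Pexels Videos API; the `PEXELS_API_KEY` environment variable is an assumption about where the key lives.

```python
import os
from urllib.parse import urlencode

def build_pexels_search(query: str, per_page: int = 5) -> dict:
    """Build a video-search request for the Pexels API.

    Sketch only: Pexels expects the raw API key in the
    Authorization header (no "Bearer" prefix).
    """
    params = urlencode({"query": query, "per_page": per_page})
    return {
        "url": f"https://api.pexels.com/videos/search?{params}",
        "headers": {"Authorization": os.environ.get("PEXELS_API_KEY", "")},
    }
```

The response's `video_files` entries include direct download links, which a follow-up HTTP Request node can fetch as binary data.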
Creating the workflow steps
I'll design the flow to produce a YouTube-ready video, starting from a manual trigger. The configuration will include topic, language, duration, and voice settings. From there, I’ll generate the script with OpenAI, synthesize narration via TTS, gather video clips from Pexels, and assemble everything into the final video.
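The final assembly step can be sketched as an ffmpeg invocation that concatenates the b-roll (via a concat list file), lays the TTS narration over it, and burns in the subtitles. The file names are placeholders; the flags are standard ffmpeg options.

```python
def build_ffmpeg_command(clip_list: str, narration: str,
                         subtitles: str, output: str) -> list:
    """Build an ffmpeg command for final video assembly (sketch)."""
    return [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", clip_list,  # concatenated b-roll
        "-i", narration,                                # TTS audio track
        "-vf", f"subtitles={subtitles}",                # burn in subtitles
        "-map", "0:v", "-map", "1:a",                   # video from clips, audio from TTS
        "-c:v", "libx264", "-c:a", "aac",
        "-shortest",                                    # stop at the shorter stream
        output,
    ]

cmd = build_ffmpeg_command("clips.txt", "narration.mp3", "subs.srt", "final.mp4")
# In n8n this would run inside an Execute Command node, e.g.
# subprocess.run(cmd, check=True) once the input files exist.
```

Using `-shortest` keeps the video from outrunning the narration when the clip total is longer than the audio.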
Thought for 2m 39s