VideoMaker gives you one practical workspace to write, segment, match clips, assemble, and publish. It is built for people who need output fast, not another bloated editor to manage.
The current app is strongest when it behaves like an execution system. The homepage should reflect that clearly: fewer vague claims, more direct explanation of what someone can actually do with it.
Start from your own script, pull from a URL, use a template, or generate a first version with AI so the blank-page problem disappears early.
Split the story into scenes, generate clip search cues, auto-pick visuals, and only step in where the app still needs human judgement.
Assemble the video, refine titles and metadata, then move toward YouTube publishing and delivery without jumping into a second toolchain.
The best version of this product is one where the next action is obvious. This public page now mirrors that logic instead of looking like a leaked internal dashboard.
Open a new video, import a source, and get the script into a usable starting state fast.
Generate scenes, improve clip searches, and fill the first visual pass automatically before doing manual cleanup.
Review pacing, assemble the final output, and move toward release from the same workspace.
This pricing block is intentionally simple. The goal here is to get someone into the product with the right expectations, not to overwhelm them with plan mechanics before they even try it.
Use the workspace, create projects, and see how the automation-assisted flow works before you commit.
Unlock more AI help, more output, and the faster path from draft to completed video when the workflow becomes part of regular execution.
The biggest improvement here is practical clarity: the public site now explains the product simply, and the app remains the place where the real work happens.
Transform a single idea into a professional 50-slide video course module instantly.
Generates an HTML deck from your saved JSON data, captures screenshots via Puppeteer, synthesizes Google TTS audio, and weaves them together with FFmpeg into a single MP4.
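The final weaving step above boils down to one FFmpeg invocation per slide: loop the screenshot for the length of its narration track, then concatenate the per-slide clips. A minimal sketch of the per-slide command (file names are illustrative, not the app's actual paths):

```python
def slide_render_cmd(screenshot_png: str, narration_wav: str, out_mp4: str) -> list[str]:
    """Build an FFmpeg command that turns one slide screenshot plus its
    TTS narration into a short clip; -shortest ends the clip when the
    audio ends."""
    return [
        "ffmpeg", "-y",
        "-loop", "1", "-i", screenshot_png,  # loop the still image
        "-i", narration_wav,                 # narration audio
        "-c:v", "libx264", "-tune", "stillimage",
        "-c:a", "aac",
        "-pix_fmt", "yuv420p",               # broad player compatibility
        "-shortest",
        out_mp4,
    ]
```

Each resulting clip can then be joined into the module with a concat step.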
API keys, TTS, video settings, and publishing credentials
Use any OpenAI-compatible API — including Ollama (local, free), Groq (free tier), OpenAI, Anthropic (via proxy), etc. Set provider to "None" to use built-in mock data.
Ollama: http://localhost:11434/v1 | Groq: https://api.groq.com/openai/v1
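Because all of these providers share the OpenAI request shape, switching between them is just a base-URL change. A sketch of building the request against the two base URLs above (the model name is a placeholder):

```python
import json

# Base URLs from the settings above; any OpenAI-compatible server fits.
ENDPOINTS = {
    "ollama": "http://localhost:11434/v1",
    "groq": "https://api.groq.com/openai/v1",
}

def build_chat_request(provider: str, model: str, prompt: str) -> tuple[str, str]:
    """Return (url, body) for a /chat/completions call against any
    OpenAI-compatible endpoint."""
    url = f"{ENDPOINTS[provider]}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body
```

The same body works unchanged whether it is POSTed to a local Ollama instance or a hosted provider.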
Google TTS is free and requires no key. ElevenLabs and OpenAI TTS require your own API key.
Add royalty-free background music to assembled YouTube videos. Music will be mixed at low volume behind narration.
Adds a fade effect between slides using FFmpeg. "None" = instant cut (default, fastest).
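A crossfade like this maps to FFmpeg's `xfade` filter. A sketch of the filtergraph string (the duration and offset values are illustrative):

```python
def xfade_filtergraph(transition: str, duration: float, offset: float) -> str:
    """Crossfade the second input over the first. `offset` is when
    (in seconds of the first clip) the transition begins."""
    return (f"[0:v][1:v]xfade=transition={transition}"
            f":duration={duration}:offset={offset}[v]")
```

The string is passed via `-filter_complex`; the "None" setting simply skips this filter and concatenates hard cuts, which is why it is fastest.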
Burns a PNG logo into the final video. Use a transparent PNG for best results.
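Burning in a logo is an FFmpeg `overlay` filter. A sketch that pins the PNG to the bottom-right corner (the 20-pixel margin is an assumption, not the app's setting):

```python
def logo_overlay_cmd(video: str, logo_png: str, out: str, margin: int = 20) -> list[str]:
    """Overlay a (transparent) PNG in the bottom-right corner.
    W/w and H/h are FFmpeg's main-video / overlay dimensions."""
    return [
        "ffmpeg", "-y", "-i", video, "-i", logo_png,
        "-filter_complex", f"overlay=W-w-{margin}:H-h-{margin}",
        "-c:a", "copy",  # audio passes through untouched
        out,
    ]
```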
Prepend a channel intro and/or append an outro to every rendered video. Must be local MP4 files.
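Stitching intro + main + outro can be done with FFmpeg's `concat` filter, which re-encodes so the three files do not need identical encoding parameters. A sketch:

```python
def concat_cmd(parts: list[str], out: str) -> list[str]:
    """Concatenate MP4s; each input must have one video and one audio stream."""
    inputs: list[str] = []
    for p in parts:
        inputs += ["-i", p]
    pads = "".join(f"[{i}:v][{i}:a]" for i in range(len(parts)))
    graph = f"{pads}concat=n={len(parts)}:v=1:a=1[v][a]"
    return ["ffmpeg", "-y", *inputs,
            "-filter_complex", graph,
            "-map", "[v]", "-map", "[a]", out]
```

For example, `concat_cmd(["intro.mp4", "main.mp4", "outro.mp4"], "final.mp4")` builds the three-part join.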
Provide a local path to any MP3/WAV file. It will be looped under the narration at the specified volume.
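Looping and ducking the music maps to `-stream_loop` plus a `volume`/`amix` filtergraph in FFmpeg. A sketch (the 0.15 volume is an illustrative default, not the app's):

```python
def music_mix_cmd(video: str, music: str, out: str, volume: float = 0.15) -> list[str]:
    """Loop `music` indefinitely under the video's narration at low volume.
    duration=first keeps the mix exactly as long as the narration track."""
    graph = (f"[1:a]volume={volume}[m];"
             f"[0:a][m]amix=inputs=2:duration=first[a]")
    return [
        "ffmpeg", "-y",
        "-i", video,
        "-stream_loop", "-1", "-i", music,  # -1 = loop forever
        "-filter_complex", graph,
        "-map", "0:v", "-map", "[a]",
        "-c:v", "copy",  # video stream is untouched, only audio is re-mixed
        out,
    ]
```

Note that `-stream_loop` must appear before the `-i` it applies to.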
Add keys for both sources to get the most clips. The app searches all active sources simultaneously and ranks results by video quality (resolution plus how close the duration is to ideal). All free.
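One plausible ranking weights resolution by how close each clip's duration is to a target. A sketch (the 10-second ideal, the scoring formula, and the provider names are assumptions, not the app's exact heuristic):

```python
IDEAL_DURATION = 10.0  # seconds -- assumed target B-roll length

def clip_score(width: int, height: int, duration: float) -> float:
    """Higher resolution is better; durations far from ideal are penalized."""
    duration_fit = 1.0 / (1.0 + abs(duration - IDEAL_DURATION))
    return width * height * duration_fit

# Results pooled from all active sources, then ranked together.
results = [
    {"source": "pexels",  "w": 1920, "h": 1080, "dur": 9.0},
    {"source": "pixabay", "w": 1280, "h": 720,  "dur": 30.0},
]
ranked = sorted(results, key=lambda r: clip_score(r["w"], r["h"], r["dur"]),
                reverse=True)
```

Here the Full-HD clip near the ideal length outranks the longer 720p clip, even though both matched the search.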
Generate custom B-roll clips using AI instead of searching stock footage. Each segment shows a 🤖 AI button when configured.
Create a Google Cloud project, enable YouTube Data API v3, create OAuth 2.0 credentials, and paste them here. All free with your own Google account.
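With those OAuth credentials, uploading goes through the YouTube Data API's `videos.insert` endpoint. A sketch of the metadata body that call expects (values are placeholders; the upload itself needs google-api-python-client and a completed OAuth flow):

```python
def video_metadata(title: str, description: str, tags: list[str],
                   privacy: str = "private") -> dict:
    """Request body for youtube.videos().insert(part="snippet,status", ...)."""
    return {
        "snippet": {
            "title": title,
            "description": description,
            "tags": tags,
        },
        "status": {"privacyStatus": privacy},
    }

# The upload step, assuming `creds` came from the OAuth flow above:
#   youtube = googleapiclient.discovery.build("youtube", "v3", credentials=creds)
#   request = youtube.videos().insert(
#       part="snippet,status",
#       body=video_metadata("My video", "Description", ["ai"]),
#       media_body=googleapiclient.http.MediaFileUpload("final.mp4"),
#   )
#   response = request.execute()
```

Defaulting to `private` is a safe choice so a bad render never goes public by accident.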
Script → AI narration → B-roll → MP4
Start free, scale when ready. No hidden fees, cancel anytime.