Captions that feel.
Turn flat transcripts into subtitles that show when the joke lands, when the speaker softens, when the tension rises, and where the viewer should feel the beat. Built for creators, podcasters, musicians, educators, and SMM teams.
You rewrite captions by hand
Auto-captions are flat. You delete, retype, break lines, add emojis, redo intros. Every clip. Every Sunday.
Your voice gets lost
TikTok needs punch. LinkedIn needs restraint. Podcast clips need rhythm. One platform's caption looks wrong on the next.
Generic AI doesn't help
ChatGPT writes captions but never the same way twice. There's no brand voice, no platform rules, no pacing. You still rewrite.
Paste a transcript. Pick a feeling. Ship.
The engine runs in your browser. Try it with sample text, switch tones, switch tiers — everything below is real.
Output
Same line. Different feelings.
Each tone is a rule set — emphasis pattern, punctuation flavor, emoji budget, pacing. Pick the one your clip needs.
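As a sketch, a tone could be modeled as a small rule object with exactly those four knobs. The field names below are illustrative, not Emotype's actual schema:

```typescript
// Hypothetical tone rule set — field names are illustrative only.
interface ToneRules {
  emphasis: "caps" | "bold" | "none"; // emphasis pattern for key words
  punctuation: string;                // punctuation flavor for beat ends
  emojiBudget: number;                // max emojis per caption
  maxLineChars: number;               // pacing via line length
}

// Two contrasting rule sets: one punchy, one restrained.
const hype: ToneRules = { emphasis: "caps", punctuation: "!!", emojiBudget: 3, maxLineChars: 24 };
const calm: ToneRules = { emphasis: "none", punctuation: ".", emojiBudget: 0, maxLineChars: 60 };
```

Switching tones is then just swapping which rule object the generator reads.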
Three packages, three workflows.
Same captioning brain. Different muscles. Pick what fits how you actually post.
Starter
- 1 transcript at a time, up to 1,500 chars
- 8 emotional tones · 6 platforms
- 3 hook variants per generation
- 1 social caption + emoji set
- Copy-ready text export
- Lifetime access · no login
14-day refund · pay once · keep forever
Pro
- Unlimited generations, up to 8,000 chars
- Brand voice profile (saved & reusable)
- Audience presets & tone mapping
- Intensity & pacing scale
- Before / after comparison
- Batch mode · 5 clips at once
- Export TXT / SRT / kinetic-ready JSON
Cancel anytime · save 31% annually
Studio
- Saved client profiles & brand-safe rules
- Campaign-level caption system
- Team style guide & tone lock
- Multi-platform content packs
- Token usage dashboard (v1.1)
- API access & webhooks (v1.2)
- Priority support · onboarding call
3-seat minimum · onboarding included
| Feature | Starter | Pro | Studio |
|---|---|---|---|
| Expressive caption generation | ✓ | ✓ | ✓ |
| Character limit per generation | 1,500 | 8,000 | 24,000 |
| Hook variants per run | 3 | 5 | 7 |
| Brand voice profile | — | ✓ | ✓ |
| Batch mode (clips/run) | 1 | 5 | 25 |
| SRT / kinetic-JSON export | — | ✓ | ✓ |
| Saved client profiles | — | — | ✓ |
| Multi-platform content packs | — | — | ✓ |
| Team seats | 1 | 1 | 3+ |
| API access | — | — | v1.2 |
Does Emotype actually understand emotion?
It's not magic and we're not pretending. Emotype reads sentence structure, punctuation, and emotionally loaded vocabulary, then applies tone-and-platform rules built with content editors who post for a living. You stay in control — pick tone, pick intensity, edit anywhere.
Is this just a ChatGPT wrapper?
No. The captioning engine is a deterministic, rules-based system built on platform-specific patterns (TikTok rhythm, LinkedIn cadence, podcast pacing). LLM features come later as an optional layer — clearly labeled when active.
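"Deterministic" means the same input and the same rules always produce the same caption — there is no model sampling. A minimal sketch of such a rules pass (the function and rule names are hypothetical, not Emotype's code):

```typescript
// Illustrative deterministic rules pass: same input + same rules → same output.
type Rules = { shout: boolean; suffix: string; maxLen: number };

function applyRules(line: string, rules: Rules): string {
  // Emphasis: upper-case the line when the tone calls for shouting.
  let out = rules.shout ? line.toUpperCase() : line;
  // Pacing: hard-trim lines that run past the platform's length budget.
  if (out.length > rules.maxLen) out = out.slice(0, rules.maxLen).trimEnd() + "…";
  // Flavor: append the tone's suffix (punctuation or emoji).
  return out + rules.suffix;
}

const tiktok: Rules = { shout: true, suffix: " 🔥", maxLen: 40 };
console.log(applyRules("this is the moment it clicks", tiktok));
// → THIS IS THE MOMENT IT CLICKS 🔥
```

Run it twice, get the identical caption twice — which is the property a brand voice depends on.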
Do I need to upload my video?
Not in the MVP. You paste a transcript or subtitle text. In v1.1 we'll add direct VTT/SRT import. Direct video transcription is on the roadmap (Pro / Studio).
What about my data?
Generation runs locally in this browser tab. Your transcript never leaves your device. When AI-assisted features arrive, they'll be opt-in and clearly labeled.
Can I use this for client work?
Yes. Studio is built for it: saved client profiles, brand-safe tone rules, campaign mode, and team seats.
How do I get a refund?
Starter is a one-time purchase with a 14-day no-questions refund. Pro and Studio are subscriptions — cancel any time, prorated.
What languages are supported?
The engine works on any Latin-script language in the MVP (English-tuned defaults). Dedicated DE, ES, PT, FR, and RU presets ship in v1.1.
Will Emotype work with Premiere / CapCut / Descript?
Yes via SRT export (Pro / Studio). Pro also exports kinetic-ready JSON for Remotion or custom kinetic-caption workflows.
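SRT itself is a plain-text format: a cue index, a `HH:MM:SS,mmm --> HH:MM:SS,mmm` timing line, the caption text, then a blank line. A minimal formatter (helper names are illustrative, not Emotype's export API):

```typescript
// Format milliseconds as an SRT timestamp: HH:MM:SS,mmm
function toTimestamp(ms: number): string {
  const pad = (n: number, w = 2) => String(n).padStart(w, "0");
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms % 1000, 3)}`;
}

// Build one SRT cue: index, timing line, text, trailing blank line.
function srtCue(index: number, startMs: number, endMs: number, text: string): string {
  return `${index}\n${toTimestamp(startMs)} --> ${toTimestamp(endMs)}\n${text}\n`;
}

console.log(srtCue(1, 0, 2400, "when the joke lands 😅"));
```

Any editor that accepts subtitle sidecar files — Premiere, CapCut, Descript among them — can ingest cues in this shape.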