Your code knows shadcn, tRPC, Prisma, Redis, and Vercel. Your dictation should too.
Test with your own phrases. If you edit less, you ship more.
Example outputs follow. Use the copy blocks below, or try your own phrases.
It works in VS Code, Xcode, Cursor, JetBrains IDEs, and your browser. The app types wherever the cursor is focused.
Shipped a Next.js app with shadcn/ui, tRPC, and Zustand. Deployed on Vercel. Used pnpm and Homebrew. Added Memcached as a sidecar.
Upgrade plan: migrate to Postgres with Prisma, add Redis for queues, keep Nginx in front of Node. Switch to TypeScript strict mode.
Set up GitHub Actions, run Vitest, then publish to npm. Add a changelog and semantic release.
Tip: copy a line, place your cursor in a text field, and read it out. Then try the same line with Apple Dictation. Compare the raw text before any rewrite.
Dictate the same line with the built-in tool and with Voice Type. Look for library names, service names, and the overall edit time. Use your own phrase if you prefer. Product names are a good test.
Trim low rumble. Normalize loudness. Cut silence. Feed the recognizer a clear signal. Result: product names and short codes survive the trip.
Room noise and uneven levels blur word boundaries. Result: names and acronyms fall apart, edits pile up.
This is not a claim about a specific vendor. It is a simple summary of why input conditioning helps.
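To make those steps concrete, here is a minimal sketch of the conditioning chain in TypeScript, operating on raw PCM in a Float32Array. The 80 Hz cutoff, the -3 dBFS target, and the silence floor are illustrative assumptions, not Voice Type's actual pipeline.

```ts
// A minimal sketch of input conditioning. All constants are
// illustrative assumptions, not Voice Type's real values.
function conditionAudio(samples: Float32Array, sampleRate: number): Float32Array {
  // 1. Trim low rumble: one-pole high-pass filter around 80 Hz.
  const rc = 1 / (2 * Math.PI * 80);
  const dt = 1 / sampleRate;
  const alpha = rc / (rc + dt);
  const filtered = new Float32Array(samples.length);
  let prevIn = 0;
  let prevOut = 0;
  for (let i = 0; i < samples.length; i++) {
    prevOut = alpha * (prevOut + samples[i] - prevIn);
    prevIn = samples[i];
    filtered[i] = prevOut;
  }

  // 2. Normalize loudness: scale the peak to roughly -3 dBFS.
  let peak = 0;
  for (const s of filtered) peak = Math.max(peak, Math.abs(s));
  const gain = peak > 0 ? Math.pow(10, -3 / 20) / peak : 1;

  // 3. Cut silence: drop 20 ms frames whose RMS falls below a floor.
  const frame = Math.round(sampleRate * 0.02);
  const floor = 0.01;
  const kept: number[] = [];
  for (let start = 0; start < filtered.length; start += frame) {
    const end = Math.min(start + frame, filtered.length);
    let sumSquares = 0;
    for (let i = start; i < end; i++) sumSquares += (filtered[i] * gain) ** 2;
    if (Math.sqrt(sumSquares / (end - start)) >= floor) {
      for (let i = start; i < end; i++) kept.push(filtered[i] * gain);
    }
  }
  return Float32Array.from(kept);
}
```

Gating silence per frame, rather than per sample, keeps word boundaries intact; sample-level gating would chop into voiced sounds.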
Audio stays on your Mac. The last chunk finalizes quickly, even on older M1 machines. That means long PR descriptions and tickets still feel responsive.
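For intuition on why the tail stays fast: transcription can run per chunk while you are still speaking, so when you stop, only the short final chunk remains in flight. Here is a hypothetical sketch where `transcribe` stands in for a local speech model call; this is not Voice Type's actual API.

```ts
// Hypothetical chunked pipeline. `transcribe` is a stand-in for a
// local speech model call, not Voice Type's real API.
type Transcribe = (pcm: Float32Array) => Promise<string>;

async function dictate(
  chunks: AsyncIterable<Float32Array>, // audio arriving as you speak
  transcribe: Transcribe
): Promise<string> {
  const parts: Promise<string>[] = [];
  for await (const chunk of chunks) {
    // Start transcription as each chunk arrives instead of waiting for
    // the recording to end. When you stop, only the short final chunk
    // is still being processed, so the tail latency stays small.
    parts.push(transcribe(chunk));
  }
  return (await Promise.all(parts)).join(" ");
}
```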