OpenAPI → MCP server → blog post, a deterministic pipeline
At some point in 2025 I got curious whether I could mechanise the "integrate a service with Cursor" loop end-to-end: feed in an OpenAPI spec, get out a published blog post about an MCP server that wraps it. The pipeline works, and I used it to ship nine servers (Notion, GitHub, GitLab, Jira, Confluence, Google Calendar, Google Tasks, Habitify, HeadHunter) along with a generated article for each. The results are interesting — not quite in the way I hoped.
The pipeline
Four stages, each deterministic:
- OpenAPI spec → typed client. For each target service (Notion, GitHub, Jira, …) take its public OpenAPI spec and run a code generator to produce a typed TypeScript SDK. Standard stuff.
- Typed client → MCP server. A template maps each endpoint to an MCP tool: a tool name, a Zod schema for the input, and a handler that forwards to the SDK. Authentication comes from an env var. Output: a standalone @sargonpiraev/&lt;service&gt;-mcp npm package.
- MCP server → blog post. The server's tool list, along with a few fixed prompt templates, feeds an LLM that produces a post with a standard structure: motivation, project structure, installation, configuration, example prompts, and a mermaid sequence diagram of how a request flows through the server.
- Blog post → website. Published on my old blog deployment.
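Stage 2 is the mechanical core of the pipeline. Here is a minimal sketch of the endpoint-to-tool mapping, under assumptions: the EndpointInfo and ToolDefinition shapes and the toTool name are illustrative, not the actual template, and real servers use Zod schemas rather than this hand-rolled required-params check.

```typescript
// Illustrative shape for one endpoint of the generated, typed SDK.
type EndpointInfo = {
  operationId: string; // e.g. "getIssue", taken from the OpenAPI spec
  params: string[];    // required parameter names
  call: (args: Record<string, string>) => Promise<unknown>; // typed SDK method
};

// Illustrative shape for the MCP tool the template emits per endpoint.
type ToolDefinition = {
  name: string;
  inputSchema: { type: "object"; required: string[] };
  handler: (args: Record<string, string>) => Promise<unknown>;
};

// The whole "template" is this: name, schema, forwarding handler — per endpoint.
function toTool(endpoint: EndpointInfo): ToolDefinition {
  return {
    name: endpoint.operationId,
    inputSchema: { type: "object", required: endpoint.params },
    handler: async (args) => {
      // Reject calls missing a required param before forwarding to the SDK.
      for (const p of endpoint.params) {
        if (!(p in args)) throw new Error(`missing param: ${p}`);
      }
      return endpoint.call(args);
    },
  };
}

// Example: wrap a fake "getIssue" endpoint.
const tool = toTool({
  operationId: "getIssue",
  params: ["issueKey"],
  call: async (args) => ({ key: args.issueKey, status: "Open" }),
});
```

Repeated over every endpoint in the spec, this loop is the entire server — which is why the pattern held up across all nine services.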
Nine real services went through this (plus one test pass). The output is now archived here on mindex under artifacts.
What actually worked
The infrastructure part is solid. For any well-documented service with an OpenAPI spec, generating a typed SDK is boring and reliable. Wrapping that SDK as an MCP server is also template-friendly: tool name, input schema, handler — repeat per endpoint. Nine servers in, the pattern didn't break once. All of them are still runnable, and all of them do exactly what it says on the tin.
The pipeline answers a real question I had at the time: are MCP servers a commodity? Short answer, yes — for any CRUD-y SaaS API, the MCP wrapper is mostly a translation exercise. If you have OpenAPI, you can have an MCP server in hours. That changes how you think about which integrations deserve a bespoke server (ones where you need custom reasoning, filtering, orchestration across endpoints) and which don't (straight passthroughs).
What didn't
The article step is where the honesty gets uncomfortable. The LLM-generated posts all follow the same visible template — opening "As a daily Cursor user…", the same "Project Structure" section, the same "Installation → Configuration → Example Prompts" arc, the same sequence diagram. Read one and you've read all nine. They don't have a voice; they describe an artifact rather than tell a story. Even with "write in the author's voice" prompt nudges, the result ends up structurally identical because the pipeline's input is structurally identical — spec in, spec out.
This is the useful negative result: scaffolding generation works, editorial generation doesn't. Code generators produce legitimate code because code has a correct answer; article generators from the same input produce legitimate-looking filler because writing has no correct answer from a spec. A structural prompt plus a structural input gives you structural output, and that output reads exactly as flat as it is.
I kept the nine posts as artifacts rather than delete them. They're not dishonest — they describe real packages I published — but they're not blog posts either. They're documentation, and treating them as editorial content was the wrong frame. Separated out, they do the job docs do: if you want to use @sargonpiraev/jira-mcp, you can read its artifact and set it up.
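Concretely, setup is a few lines of MCP client config. This is a sketch of a Cursor-style mcp.json entry; the package name is real, but JIRA_API_TOKEN here is a placeholder — the actual env var names come from each server's generated README.

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "@sargonpiraev/jira-mcp"],
      "env": { "JIRA_API_TOKEN": "<your-token>" }
    }
  }
}
```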
What I'd change
If I ran the pipeline again today I'd:
- Keep stages 1 and 2. They earn their keep — mechanical work that would have cost me days per service is free.
- Drop stage 3 entirely. Don't try to generate an article per server. Let the server's README (also generated — that part works) be the documentation; write one editorial post (this one) about the pipeline overall.
- Use stage 3's capacity for something code-shaped instead. Generate example scripts, integration tests, or a web playground for each server — things where "structural input → structural output" is the right frame.
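To make "code-shaped" concrete: the same tool list that fed the article generator can feed a generated smoke test instead. A sketch, assuming an illustrative ToolDefinition shape (not the pipeline's actual types) — it checks that every tool has a unique name and a callable handler, which is exactly the kind of structural check a structural input can produce honestly.

```typescript
// Illustrative tool shape; handler is optional so the check has something to catch.
type ToolDefinition = {
  name: string;
  handler?: (args: object) => Promise<unknown>;
};

// Return a list of problems; an empty list means the server's tool list is sane.
function smokeTest(tools: ToolDefinition[]): string[] {
  const problems: string[] = [];
  const seen = new Set<string>();
  for (const t of tools) {
    if (seen.has(t.name)) problems.push(`duplicate tool name: ${t.name}`);
    seen.add(t.name);
    if (typeof t.handler !== "function") problems.push(`no handler: ${t.name}`);
  }
  return problems;
}
```

Unlike the articles, this output has a correct answer: the test either passes against the generated server or it doesn't.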
The lesson I keep re-learning: pipelines that produce code scale; pipelines that produce opinion don't. MCP servers are code. Blog posts aren't.