Every blog post on icanbefitter.com is produced with the help of AI. Not written by AI. Produced with the help of AI. That distinction matters, and I am going to explain exactly how the system works because transparency is not optional when AI is involved.
I built a 4-agent pipeline that handles the mechanical parts of content production — research, strategy, writing assistance, and SEO optimization. Each agent has a specific role, a specific model, and a specific output that feeds into the next agent. The result is content that carries my voice, my experience, and my perspective, produced at a speed that would be impossible without AI.
Here is the full breakdown. No secrets. No vague hand-waving about "AI-assisted content." The exact system, agent by agent.
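Before walking through each agent, here is the whole flow sketched as code. This is a hypothetical illustration only — the function names, the `call_model` placeholder, and the prompt strings are my assumptions for clarity, not the actual implementation:

```python
# Hypothetical sketch of the four-agent flow described below.
# call_model() stands in for a real LLM API call; agent names,
# prompts, and signatures are illustrative, not the real codebase.

def call_model(model: str, prompt: str) -> str:
    """Placeholder for an LLM API call."""
    return f"[{model} output for: {prompt[:40]}...]"

def run_pipeline(topic_brief: str) -> str:
    # Each agent's output feeds the next, exactly as described below.
    research = call_model("claude-sonnet", f"Research this topic: {topic_brief}")
    strategy = call_model("claude-sonnet", f"Plan a post from: {research}")
    draft_html = call_model("claude-opus", f"Write HTML using: {strategy}\n{research}")
    seo_meta = call_model("claude-sonnet", f"Optimize for search: {draft_html}")
    # The draft goes to human review — it is never published automatically.
    return draft_html

draft = run_pipeline("Why consistency beats intensity in training")
```

The point of the sketch is the shape: one brief in, four sequential model calls, a draft out, and a human at the end of the line.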
Agent 0: The Researcher
Before a single word is written, Agent 0 goes to work. This agent thinks like a journalist. Its job is to gather facts, statistics, unique angles, and relevant context about the topic.
Agent 0 runs on Claude Sonnet — fast, capable, and cost-effective for research tasks. It receives a topic brief — usually a few sentences about what the post should cover — and returns a structured research document. That document includes: key facts and statistics, common misconceptions about the topic, unique angles that most articles miss, relevant personal experience points to weave in, and potential counterarguments to address.
The research document is not the post. It is the raw material. Think of it as a journalist's notebook — full of facts and angles, waiting for a writer to turn them into a story.
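The research document's shape, as listed above, could be modeled as a simple structure. The field names here are my assumptions — the post does not specify the actual format:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchDoc:
    # Mirrors the five things the post says Agent 0 returns;
    # field names are assumed for illustration.
    key_facts: list[str] = field(default_factory=list)
    misconceptions: list[str] = field(default_factory=list)
    unique_angles: list[str] = field(default_factory=list)
    personal_experience_points: list[str] = field(default_factory=list)
    counterarguments: list[str] = field(default_factory=list)

doc = ResearchDoc(key_facts=["Protein needs scale with training volume"])
```

A typed structure like this is what makes the handoff clean: the strategist downstream knows exactly what raw material it is getting.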
Why a separate research agent instead of just having the writer research and write simultaneously? Because separation of concerns produces better results. When an AI tries to research and write at the same time, the research suffers. It grabs the first relevant fact instead of finding the best one. By separating research from writing, each agent can focus entirely on its specialty.
Agent 1: The Strategist
Agent 1 takes the research document and creates the strategic plan for the post. This agent thinks like an editor-in-chief.
Also running on Claude Sonnet, Agent 1 produces: title options (usually 3-5, ranked by impact), the slug (URL-friendly, SEO-optimized), a detailed outline with H2 headings and bullet points for each section, the tone and voice specifications, the target audience definition, and the hook strategy for the opening paragraph.
The strategist does not write the content. It architects the content. It decides the structure, the flow, and the emotional arc. Where should the personal story go? Where should the data land? Where should the reader feel challenged? Where should they feel inspired? These are editorial decisions, and Agent 1 makes them based on what has performed well in previous posts.
This is the agent I interact with most. I review the strategy, adjust the outline if it misses something important, change the angle if it feels too generic, and approve it before the writer touches it. The strategy is where the post succeeds or fails — a well-structured mediocre post outperforms a brilliantly written chaotic one every time.
AI is not magic. It is a well-prompted function. The quality of the input determines the quality of the output. Always.
Agent 2: The Writer
Agent 2 is the writer. This is the agent that produces the actual HTML content you read on icanbefitter.com. And this is the agent where I refuse to compromise on model quality.
Agent 2 runs on Claude Opus. Not Sonnet. Opus.
Here is why. Sonnet is excellent for structured, analytical tasks — research, strategy, SEO. It is fast, it is cheap, and its output quality for those tasks is more than sufficient. But for long-form writing that needs to capture a specific voice, weave personal stories naturally, vary sentence rhythm, and create emotional resonance — Opus is in a different class.
The difference is subtle but real. Sonnet writes like a very competent content writer. Opus writes like it understands what makes writing feel human. The sentence variation is more natural. The transitions between sections are smoother. The personal anecdotes land with more emotional weight. For a platform built on authenticity, that quality gap justifies the higher cost.
Agent 2 receives: the research document from Agent 0, the strategic outline from Agent 1, voice guidelines (my specific writing style — short punchy sentences alternating with longer philosophical ones, Hindi phrases where natural, direct first person always), and structural requirements (H2 sections, blockquotes for key insights, no generic filler phrases).
The output is complete HTML content — formatted, structured, and ready for review. Not ready for publishing. Ready for review. Nothing goes live without my explicit approval. That is a non-negotiable rule. The AI produces drafts. I produce decisions.
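As a rough illustration of how the writer's four inputs might be combined into a single prompt — the section labels and the helper function are my assumptions, not the real prompt template:

```python
def build_writer_prompt(research: str, outline: str, voice: str, structure: str) -> str:
    # Joins the four inputs the post names (research, outline, voice
    # guidelines, structural requirements) into one prompt for Opus.
    sections = [
        ("RESEARCH", research),
        ("OUTLINE", outline),
        ("VOICE GUIDELINES", voice),
        ("STRUCTURAL REQUIREMENTS", structure),
    ]
    body = "\n\n".join(f"## {label}\n{content}" for label, content in sections)
    return f"Write the full HTML post using the material below.\n\n{body}"

prompt = build_writer_prompt(
    research="Key facts and angles from Agent 0...",
    outline="H2 headings and bullets from Agent 1...",
    voice="Short punchy sentences alternating with longer ones. Direct first person.",
    structure="H2 sections, blockquotes for key insights, no generic filler.",
)
```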
Agent 3: The SEO Specialist
Agent 3 takes the finished content and optimizes it for search. This agent thinks like a search marketing specialist.
Running on Claude Sonnet (SEO is an analytical task, not a creative one), Agent 3 produces: the focus keyword (a long-tail term people actually search for), secondary keywords (4-5 related terms woven naturally into the content), the SEO title (different from the post title, optimized for search result clicks), the meta description (150-160 characters, compelling, includes the focus keyword), tag suggestions, and internal link recommendations to other posts on the site.
Agent 3 also reviews the content for SEO issues: is the focus keyword in the first paragraph? Are the H2 headings descriptive and keyword-relevant? Is the content length competitive for the target keyword? Are there opportunities for featured snippet formatting?
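Those review checks are mechanical enough to sketch in code. A minimal version, assuming the content arrives as an HTML string — the regex tag-matching here is deliberately crude, and a real implementation would use a proper HTML parser:

```python
import re

def seo_checks(html: str, focus_keyword: str) -> dict[str, bool]:
    # Crude extraction of paragraphs and H2 headings for illustration only.
    paragraphs = re.findall(r"<p>(.*?)</p>", html, flags=re.S)
    h2s = re.findall(r"<h2>(.*?)</h2>", html, flags=re.S)
    kw = focus_keyword.lower()
    return {
        "keyword_in_first_paragraph": bool(paragraphs) and kw in paragraphs[0].lower(),
        "keyword_in_any_h2": any(kw in h.lower() for h in h2s),
        "has_h2_headings": bool(h2s),
    }

report = seo_checks(
    "<h2>Home workout basics</h2><p>A home workout needs no equipment.</p>",
    "home workout",
)
```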
The SEO agent is the least glamorous part of the pipeline but arguably the most valuable for long-term traffic. A well-written post with bad SEO reaches nobody. A well-written post with good SEO compounds traffic for years.
Why Opus for Writing, Sonnet for Everything Else
This is the question I get most when I explain the pipeline. Why not use Opus for everything? Or why not use Sonnet for everything?
Cost versus quality, applied strategically.
Opus costs significantly more per token than Sonnet. If I ran all four agents on Opus, the cost per post would be prohibitive for daily publishing. But running the writer on Sonnet would produce content that sounds competent but not compelling — technically correct but emotionally flat.
The solution is to use each model where its strengths justify its cost. Sonnet handles research, strategy, and SEO brilliantly — these are structured, analytical tasks where the quality gap between Sonnet and Opus is minimal. Opus handles writing — the one task where the quality gap between models is significant and directly impacts reader experience.
This is not just a cost optimization. It is a quality optimization. Each agent gets the model best suited to its task. The result is a pipeline that is both affordable to run daily and produces content quality that justifies publishing under my name.
The Biggest Lesson: AI Is a Tool, Not a Creator
After running this pipeline for months and producing dozens of posts, here is what I know for certain: AI is a phenomenally powerful tool. It is not a creator.
The AI cannot live my life. It cannot train calisthenics for 14 years, leave the Navy with no pension, build a startup from nothing, or raise a son named Avyaansh. The raw material of authentic content — lived experience, earned perspective, real emotion — does not come from a language model. It comes from a human who has done the work.
What the AI can do is take that raw material and produce polished content at a speed and consistency that would be impossible for a solo creator. It handles the craft — structure, formatting, SEO, research synthesis. I handle the soul — the ideas, the stories, the decisions about what matters and what does not.
I handle the soul. AI handles the craft. The day that equation flips, I stop publishing.
Why Transparency Matters
Some creators use AI and pretend they do not. They present AI-generated content as entirely hand-crafted, hoping their audience never finds out. I think that is a mistake — both ethically and strategically.
Ethically, because audiences deserve to know how content is produced. Not every detail of the pipeline, but the honest acknowledgment that AI is part of the process.
Strategically, because the audience will find out eventually. AI detection is improving. And when the audience discovers the deception, the trust damage is catastrophic. It is far better to be transparent from the start and let the quality of the content speak for itself.
I use AI. Every post on this blog is produced through a 4-agent pipeline. And every post is reviewed, approved, and published by me — with my experience, my judgment, and my name behind it. If the content is good, the AI deserves some credit. If the content is bad, I deserve all the blame. That is the honest deal.
All model assignments are stored in the settings table in Supabase — never hardcoded. If Anthropic releases a model that outperforms Opus for writing, I change one setting in the admin panel and the entire pipeline upgrades. No code changes. No deployments. The architecture is designed to evolve as AI improves.
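That settings-driven design can be sketched like this. In the real system the table lives in Supabase; here a plain dict stands in for the fetched rows, and the key names and fallback default are my assumptions:

```python
# A dict stands in for rows fetched from the settings table;
# key names and the default fallback are illustrative assumptions.
DEFAULT_MODEL = "claude-sonnet"

def model_for_agent(settings: dict[str, str], agent: str) -> str:
    # One lookup per agent. Changing the stored value re-routes the
    # pipeline with no code change and no deployment.
    return settings.get(f"model.{agent}", DEFAULT_MODEL)

settings = {
    "model.researcher": "claude-sonnet",
    "model.strategist": "claude-sonnet",
    "model.writer": "claude-opus",
    "model.seo": "claude-sonnet",
}
```

Swapping the writer to a newer model is one row update: change `model.writer` and every subsequent post uses it.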
That is the pipeline. Four agents, two models, one creator. Full transparency. Now you know exactly how every post on this blog is made.
Har Har Mahadev. Go Win!

