I Run My Entire Company From a Single Git Repo
No Zapier, no Notion, no n8n. One monorepo, one AI (Claude Code), and a 245-line CLAUDE.md file. Here's how I built an operating system for a one-person company: 10 agents, 10 commands, 139 commits in 5 weeks.

Search “solopreneur AI stack 2026” and articles like Entrepreneur’s “7 AI Tools That Run a One-Person Business” and PrometAI’s solopreneur tech stack guide appear everywhere. Zapier for automation, n8n for workflows, Notion for knowledge, Make for logic, ChatGPT for writing. Stack them together and you’ve got yourself a solopreneur operating system.

I tried that. It didn’t work.

Not because the tools are bad — they’re excellent at what they do. But for a one-person company, they create a specific problem: your tools don’t know about each other. Notion doesn’t know your content strategy. Zapier doesn’t know your brand voice. ChatGPT doesn’t remember what you decided last Tuesday.

So I built something different. My entire company runs from a single Git repository. One monorepo, one AI agent (Claude Code), and a 245-line file called CLAUDE.md that serves as the company’s operating manual.

5 weeks in: 139 commits, 12 bilingual blog posts, 10 AI agents, 10 orchestration commands, and a complete system for content production, signal detection, social distribution, and demand mining.

The Backstory: Two Products, Zero Users

I shipped two products in 10 days. A Chrome extension and a competitor analysis tool. Both worked. Neither found users.

The postmortem was uncomfortable. AI made building fast — I could ship an MVP in days instead of months. But building wasn’t the bottleneck. Distribution was.

During that period I was using the “standard” indie hacker stack: Notion for ideas, Zapier for auto-posting, ChatGPT for copy. The result was a collection of disconnected islands. One time Zapier auto-posted a tweet promoting a product feature — but I’d pivoted the product direction the day before. The tweet contradicted reality. I only found out when a stranger replied pointing out the inconsistency. My demand research in Notion had no way to flow into my content strategy.

That’s when I asked a different question: what if AI wasn’t a tool I use, but a team member who lives in my codebase?

The answer: it needs context. Not the fragmented kind where you re-explain your business every conversation. Complete, persistent, evolving context — your business logic, content strategy, brand voice, technical architecture, past decisions, and current state, all in one place.

The Architecture: Three Layers, One Repo

Ability Layer    agents/              engines that do the work
                   ↑ called by
Orchestration    .claude/commands/    chains abilities into tasks
                   ↑ packages
Asset Layer      brand/ + products/   everything user-facing

Everything lives in one Git repository. When Claude Code starts, it automatically loads CLAUDE.md from the root — a 245-line file that defines the company’s roles, collaboration rules, AI calling conventions, deployment standards, and memory architecture.

Ability Layer: 10 Agents

Three examples that show the range:

demand-mining (Python script) — scrapes Reddit and HN comment threads, uses LLM clustering to extract pain points, outputs a ranked list by emotional intensity. This is the only agent currently implemented as real code, because it needs web scraping and a database.
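The real agent uses LLM clustering, but the shape of its output step can be sketched with a crude stand-in heuristic. Everything below — the keyword weights and the `intensity` function — is an illustrative assumption, not the actual implementation:

```python
# Hypothetical sketch of demand-mining's ranking step. The real agent
# clusters pain points with an LLM; this stand-in uses a keyword
# heuristic just to show the output shape: comments ranked by
# emotional intensity.

INTENSITY_WORDS = {"hate": 3, "impossible": 3, "frustrating": 2,
                   "annoying": 2, "wish": 1, "tedious": 1}

def intensity(comment: str) -> int:
    # Sum the weights of intensity-signaling words in the comment.
    words = comment.lower().split()
    return sum(INTENSITY_WORDS.get(w, 0) for w in words)

comments = [
    "I hate that exporting is impossible without the paid plan",
    "kind of annoying to re-auth every day",
    "wish the CLI had colors",
]
ranked = sorted(comments, key=intensity, reverse=True)
print(ranked[0])  # the highest-pain complaint surfaces first
```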

blog-writer (672-line prompt spec) — defines a 12-step writing pipeline: read content calendar → assess material type → SEO research → competitive scan → choose narrative strategy → write → taste check → launch separate subagent for adversarial review → GEO optimization → generate images → create social media drafts. Not a single line of Python — it’s all rules written for Claude Code. This article was produced using it.

radar (Python script) — daily automated scan of GitHub Trending, HN front page, ProductHunt, and Twitter. Scores signals on 4 dimensions (velocity × relevance × content potential × timeliness) and recommends projects worth studying or blogging about.
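The four dimensions come straight from the article; the weights, the 1–5 scale, and the multiplicative combination below are assumptions for illustration. Multiplying rather than adding means one weak dimension tanks the total, which is one plausible way to filter "popular but irrelevant" noise:

```python
# Hypothetical sketch of radar's 4-dimension scoring. Dimension names
# are from the article; the 1-5 scale and multiplicative formula are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Signal:
    title: str
    velocity: float           # how fast it's gaining traction (1-5)
    relevance: float          # fit with my niche (1-5)
    content_potential: float  # could this become a blog post? (1-5)
    timeliness: float         # is the window still open? (1-5)

def score(s: Signal) -> float:
    # Multiplicative: a 1 in any dimension drags the product down hard.
    return s.velocity * s.relevance * s.content_potential * s.timeliness

signals = [
    Signal("fast-growing repo, off-topic", 5, 1, 2, 4),
    Signal("mid-velocity repo, perfect fit", 3, 5, 5, 4),
]
ranked = sorted(signals, key=score, reverse=True)
print(ranked[0].title)  # the well-rounded signal wins
```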

The remaining 7 cover social media replies, community participation, competitive analysis, opportunity assessment, cold-start strategy, and more — ranging from prompt specs to methodology templates.

This is deliberate. The maturity path for a capability is: prompt → template → script → full agent → product. You don’t need to over-engineer something that a well-written 400-line Markdown file can handle. Claude Code executes precise Markdown specs like it executes code.

Orchestration Layer: 10 Commands

10 commands cover the full lifecycle — /scan (quick-scan projects), /study (deep teardown), /blog (write blog posts), /prospect (demand mining funnel), /launch (product kickoff), /distribute (multi-channel distribution), and more. Each maps to a .claude/commands/<name>.md file.

Key design principle: commands call abilities, abilities don’t know about commands. The blog-writer agent doesn’t know whether /study or /blog invoked it — it only cares about the input type. This makes capabilities reusable across workflows.
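The one-way dependency can be made concrete with a minimal dispatcher sketch. The file names and the `run_command`/`run_agent` helpers are hypothetical; the point is the direction of knowledge:

```python
# Minimal sketch of the one-way dependency: commands know which agent
# specs they chain; agents only ever see an input payload. File names
# and function names are illustrative.

COMMANDS = {
    "/blog":  ["agents/blog-writer.md"],
    "/study": ["agents/radar.md", "agents/blog-writer.md"],
}

def run_agent(spec_path: str, payload: dict) -> dict:
    # The agent receives only the payload; nothing tells it which
    # command invoked it, so it stays reusable across workflows.
    return {"agent": spec_path, "input_type": payload["type"]}

def run_command(name: str, payload: dict) -> list:
    return [run_agent(spec, payload) for spec in COMMANDS[name]]

results = run_command("/study", {"type": "github-repo"})
print([r["agent"] for r in results])
```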

All agent and command output lands in brand/ (blog site, 12 bilingual articles) and products/ (product directory, currently paused). Five toolkit modules support the infrastructure: a unified LLM calling layer, a Twitter posting CLI, a JSON-based scheduler, a design system, and an image optimizer.

Why This Beats Tool Stacking

I’m not saying Zapier and n8n are useless. They solve “how do apps connect to each other.” But a one-person company’s core problem isn’t connection — it’s decision-making with full context.

1. Context stays whole

When Claude Code runs /blog, it can read:

  • Brand voice definitions in CLAUDE.md
  • Pillar balance in content-calendar.json (tech-depth 42%, build-narrative 17%, trend-insight 42%)
  • All 12 published articles’ slugs, keywords, and internal link relationships
  • 8 narrative strategies and quality checklists in the blog-writer agent

All in the same file tree. No API connections, no middleware, no “sync.” The AI reads your codebase like a new hire reads the company wiki — except it actually reads all of it.
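The pillar-balance numbers above are the kind of thing a simple tally over content-calendar.json produces. The field names (`posts`, `pillar`) in this sketch are assumptions about the file's shape:

```python
# Hypothetical sketch of deriving pillar balance from
# content-calendar.json. The JSON schema here is an assumption.

import json
from collections import Counter

calendar_json = """
{"posts": [
  {"slug": "a", "pillar": "tech-depth"},
  {"slug": "b", "pillar": "build-narrative"},
  {"slug": "c", "pillar": "trend-insight"},
  {"slug": "d", "pillar": "tech-depth"},
  {"slug": "e", "pillar": "trend-insight"},
  {"slug": "f", "pillar": "tech-depth"}
]}
"""

posts = json.loads(calendar_json)["posts"]
counts = Counter(p["pillar"] for p in posts)
total = len(posts)
balance = {pillar: round(100 * n / total) for pillar, n in counts.items()}
print(balance)  # {'tech-depth': 50, 'build-narrative': 17, 'trend-insight': 33}
```

With a live repo you would read the real file instead of the inline string; the AI then checks these percentages against the target mix before picking the next post.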

2. Capabilities compound automatically

After every Twitter engagement session, data gets written to engagement-log.json. Next time, Claude Code reads the history first, analyzes which reply styles got the most views, then adjusts the current session’s approach.

I didn’t write analytics code. A single rule in CLAUDE.md (“Step 0: review previous results”) makes the AI do this on every execution. Write the rule once, get smarter every time.
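The "Step 0" review the rule triggers amounts to something like this. The shape of engagement-log.json (a `style` and `views` per entry) is a guess for illustration:

```python
# Sketch of "Step 0: review previous results", assuming a hypothetical
# engagement-log.json shape: one reply style and view count per entry.

import json
from collections import defaultdict

log_json = """
[
  {"style": "contrarian-take", "views": 1200},
  {"style": "data-point",      "views": 300},
  {"style": "contrarian-take", "views": 800},
  {"style": "question",        "views": 150}
]
"""

views_by_style = defaultdict(list)
for entry in json.loads(log_json):
    views_by_style[entry["style"]].append(entry["views"])

# Average views per reply style; lean on the winner this session.
avg = {style: sum(v) / len(v) for style, v in views_by_style.items()}
best = max(avg, key=avg.get)
print(best)
```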

3. Cross-pollination is built in

CLAUDE.md contains this rule: “After completing a task, proactively think about cross-feeding — what other directory can this output feed into?”

So:

  • A /study session checks if reusable code emerged → proposes moving it to toolkit/
  • A /prospect finding related to an existing agent → proposes enhancing that agent
  • Narrative techniques refined during blog writing → feed back into twitter-engagement reply quality

In a SaaS tool stack, your Notion notes don’t automatically influence your Zapier workflows. In a monorepo, everything connects.

The AI Remembers Where It Left Off

The most surprising thing about this system isn’t efficiency — it’s continuity.

Every morning I open the terminal and say “continue.” Claude Code reads ROADMAP.md (strategic plan) and memory/MEMORY.md (snapshot of where we left off), then reports: “Last completed: Blog #3 adversarial review. Next step: social media drafts. No blockers.”

No re-explaining context. No scrolling through chat history. It remembers.

This is the real value of “AI living in your codebase” — not how smart any single conversation is, but the cross-session memory and continuous context accumulation. ROADMAP.md is strategic memory, MEMORY.md is operational memory, DECISIONS.md is decision memory. Three files, and the AI can resume from any interruption point.
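The morning "continue" report can be sketched as a trivial parse of the snapshot file. The `Last completed` / `Next step` / `Blockers` line format below is an assumed layout for MEMORY.md, not the documented one:

```python
# Sketch of the resume step: parse a snapshot out of MEMORY.md.
# The key: value line format here is an illustrative assumption.

memory_md = """\
# MEMORY.md
Last completed: Blog #3 adversarial review
Next step: social media drafts
Blockers: none
"""

snapshot = {}
for line in memory_md.splitlines():
    if ":" in line:
        key, _, value = line.partition(":")
        snapshot[key.strip()] = value.strip()

print(f"Last completed: {snapshot['Last completed']}. "
      f"Next step: {snapshot['Next step']}. "
      f"Blockers: {snapshot['Blockers']}.")
```

Because the snapshot lives in the repo, it survives any session crash or context reset; the AI reconstructs state from files, not from chat history.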

5 Weeks of Output

  • Git commits: 139
  • Blog posts: 12 (bilingual, across 3 pillars)
  • AI agents: 10 (from prompt specs to Python scripts)
  • Orchestration commands: 10
  • Toolkit modules: 5 (AI calling layer + tweet CLI + scheduler + design system + image optimizer)
  • Strategy pivots: 1 (product-first → brand-first)

Revenue? Still zero. This system is currently a content production and distribution machine. The monetization engine comes later. But the ceiling for a solo company is distribution — build the distribution capability first, worry about products after.

What You Need to Replicate This

Honest assessment — the barrier isn’t trivial:

  1. Claude Code (or similar code-aware AI). ChatGPT and Cursor work too, but Claude Code’s CLAUDE.md mechanism lets you version-track your company operating manual
  2. Git and monorepo thinking. Not because Git is magic, but because “everything in one tree” is the prerequisite for AI to have complete context
  3. Ability to write Markdown. This system is 80% Markdown files — CLAUDE.md, agent specs, command definitions, content templates. Code is only 20%

The most important thing isn’t tooling. It’s a mindset shift: AI isn’t an app you open. AI is a team member who lives in your codebase, and you need to give it a complete onboarding manual.

CLAUDE.md is that onboarding manual.


FAQ

How is this different from using Cursor or Windsurf?

Cursor and Windsurf are AI code editors — they excel at helping you write code. This approach isn’t primarily about writing code. It’s about having AI execute business processes: writing blog posts, analyzing demand, managing social media, evaluating opportunities. Claude Code’s CLAUDE.md + commands mechanism lets you define arbitrary workflows triggered by natural language. The fundamental difference: code editors help you develop; this system helps you operate.

Do I need programming experience?

You need basic Git and command line skills. But most of the “programming” is writing Markdown files that define rules and processes, not writing Python or TypeScript. If you can write a well-structured document, you can write an agent spec. Claude Code turns your spec into execution.

Is 245 lines of CLAUDE.md too long? Does the AI actually read all of it?

Yes, it reads every line. Claude Code force-loads CLAUDE.md on startup — this is a core mechanism, not optional. The file is negligible relative to the 200K context window. In practice: more detailed specs → fewer AI mistakes. According to Claude Code’s creator, Anthropic’s own team has every member contribute to a shared CLAUDE.md, adding rules whenever the AI makes an error — exactly how I use it.
