If you write software for a living, AI isn't a nice-to-have anymore—it's the new baseline. The people who lean in will ship faster, learn faster, and iterate more. If you wait, you'll fall behind.
How I built this site (with AI, on purpose)
This site was built with a lot of AI help because it's simply faster than typing every character myself. From scaffolding components to wiring up APIs to wrangling CSS—AI accelerates the boring parts so I can focus on the interesting ones. Even the writing you're reading now is AI-assisted: ideation, drafting, and editing. But nothing goes out unchecked—every post is fully read and reviewed by me before it's published.
The overwhelming tool zoo
There are so many tools that it's hard to keep track. Let's have a look:
Editor/IDE assistants
Your editor is where momentum lives. I like Cursor for day-to-day work. GitHub Copilot inside VS Code is good, but I haven't used it much since switching to Cursor. If you're in IntelliJ, JetBrains AI Assistant fits naturally. Beyond that, there are plenty of other options: Amazon Q Developer and Kiro (currently waitlist-only), Tabnine, and Windsurf.
CLI/code agents
On the command line, I use Claude Code today. I'll lean on Gemini CLI more as the tooling matures. I'm looking to try the OpenAI Codex CLI next. I'm also exploring running OpenCode locally with Ollama and Qwen3 on my MacBook. I don't use GitHub Copilot in the CLI, but it's available as well. Other tools include Aider and Warp, though I haven't tried them.
Background/autonomous agents
For longer-running tasks, I'm using Cursor Agents and GitHub Copilot Agents. Others in this space include Devin, the open-source OpenHands, and Replit AI / Agent. Reliability varies—keep the loop tight and review diffs.
Extensions for CLI agents
You can layer higher-level specs or helpers on top of your CLI agents to make them more reliable and composable. GitHub's Spec Kit provides a way to define structured specs that agents and CLIs can follow across repos and tools, and SuperClaude augments Claude-driven workflows with concise commands and orchestration for complex tasks.
MCP servers
Model Context Protocol (MCP) servers expose tools and resources that agents can call through your editor or CLI. Think of them as plug-ins that let models read data, hit APIs, or run actions with a consistent interface. There's an explosion of MCP servers right now, and it's hard to keep track of which ones are high quality versus experimental.
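To make that concrete, here's a minimal sketch of an MCP server that exposes a single tool over stdio. It assumes the official @modelcontextprotocol/sdk TypeScript package plus zod, and the tool itself (word_count) is just a placeholder, not something I actually run:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A toy server; real servers usually wrap an API, database, or local resource.
const server = new McpServer({ name: "demo-tools", version: "0.1.0" });

// Register one tool the model can discover and call with a validated argument.
server.tool("word_count", { text: z.string() }, async ({ text }) => {
  const words = text.trim().split(/\s+/).filter(Boolean);
  return { content: [{ type: "text", text: String(words.length) }] };
});

// Speak MCP over stdio so an editor or CLI agent can launch this as a subprocess.
await server.connect(new StdioServerTransport());
```

Point a client like Cursor or Claude Code at that process and the model can call word_count like any built-in tool. The same pattern extends to tools that touch real data, which is exactly where the cautions below come in.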
Two cautions worth calling out:
- Security: MCP servers introduce real risks and attack surfaces (see Equixly's "MCP: The New Security Nightmare?").
- Effectiveness: giving a model too many tools can reduce performance—the "too many tools" problem.
In short: add MCP servers deliberately, review permissions, and prefer a small, well-understood set over a kitchen sink.
Models you can choose inside tools
Claude Sonnet 4.5, GPT-5, Gemini 2.5, Llama 4, Grok 4, Qwen, etc.
Every vendor says theirs is the best. In practice, what matters is: does it actually help you finish work faster with fewer mistakes? Alternatively, does it enable you to do something you otherwise couldn't?
My take: pick one per category and ship
Don't boil the ocean. Choose one (maybe two) in each category, get good with it, and only swap when there's a clear productivity win.
- Editor: pick one where you spend most of your time.
- CLI agent: pick one that reliably executes multi-step tasks in your codebase.
- Background agent: pick one that can run longer tasks without babysitting.
- Model defaults: pick a fast, cheap default and a "heavy hitter" for hard tasks.
Consistency beats constant tool-chasing. The key metric is whether you're more productive. Don't be afraid to drop something and move on if it's getting in your way.
What I'm using
Right now, my setup is simple:
- Editor: Cursor — tight integration with tools/agents and it stays out of my way.
- CLI: Claude Code — most mature CLI I've used; Claude Sonnet generally handles multi-step edits well.
- Background agents: Cursor Agents + GitHub Copilot Agents — new in my flow; useful for longer tasks when light supervision is fine, or for quick tasks when I'm away from a computer.
- MCP servers: Brave Search, Firecrawl, and Context7 — they provide a lot of power with minimal setup.
What I'm keeping an eye on
- CLI: Gemini CLI (waiting for it to improve), OpenAI Codex CLI (haven't had a chance to dive in yet)
- Other tools: SuperClaude, Spec Kit, and options for improved memory/added context for CLI tools
Use whatever makes you fastest end-to-end. If it saves you time and you trust it, it's a win.
A simple framework for adopting AI
- Start with a daily loop
  - Triage tasks with an agent; let it draft plans and diffs.
  - Use your editor assistant for inline code, tests, and docs.
  - Use the CLI agent for multi-file edits, refactors, and scripted chores.
- Define a "done" bar
  - Code compiles, tests pass, lint is clean, and you can explain the change.
- Keep a human in the loop
  - Read everything. Don't rubber-stamp. Ask "What did it change and why?"
- Optimize for throughput
  - Keep a scratch/sandbox repo for risky experiments.
  - Save effective prompts as snippets/macros, like a Claude custom slash command (see the sketch after this list).
  - Favor smaller, testable steps over giant agent runs.
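To make the snippet idea concrete: Claude Code picks up project-scoped custom slash commands from markdown files under .claude/commands/ (double-check the exact invocation in your version's docs). The tiny Node/TypeScript helper below is purely illustrative; the command name and prompt are placeholders:

```ts
// save-command.ts: illustrative helper that stores a reusable prompt as a
// Claude Code project slash command (a markdown file under .claude/commands/).
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const name = "review"; // hypothetical command name, surfaced as /review
const prompt = [
  "Review the staged diff for bugs, missing tests, and unclear naming.",
  "List risky changes first, then suggest minimal fixes.",
].join("\n");

// Write the prompt to .claude/commands/review.md in the current repo.
const dir = join(process.cwd(), ".claude", "commands");
mkdirSync(dir, { recursive: true });
writeFileSync(join(dir, `${name}.md`), prompt + "\n");
console.log(`Saved reusable prompt as a slash command: /${name}`);
```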
Where AI shines today
- Greenfield scaffolding: project setup, boilerplate, scaffolds.
- Migration and refactors: repetitive code changes across modules.
- Tests and docs: generating coverage and usage examples quickly.
- Data wrangling: JSON/CSV transforms, one-off scripts, shell glue.
- Research and ideation: surfacing approaches and trade-offs fast.
- Otherwise-unwritten code: small internal apps/tools that only exist because AI makes them fast enough to build.
Where you still need to be careful
- Integrations and tool calls: great when they work; brittle when they don't.
- Configuration sprawl (MCP servers, tool wiring): MCP servers are exploding in popularity, but each tool tends to want its own configuration. If I add a new MCP server to my workflow, I often have to wire it up separately in every editor, CLI, and agent I use. It'd be lovely to converge on a single, discoverable location and schema, something like ~/.ai/mcp.json, so tools can just pick it up automatically (see the sketch after this list).
- Security and privacy: mind secrets, licenses, and data boundaries.
- Subtle logic changes: always diff, test, and verify reasoning.
- Non-obvious constraints: make them explicit in your prompt or spec.
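To illustrate the configuration-sprawl point, here's what a shared, tool-agnostic MCP config could look like, sketched as a TypeScript type plus an example entry. This is hypothetical: there's no ~/.ai/mcp.json standard today, and the schema below is just a guess at a reasonable shape (it mirrors the command/args/env blocks most MCP clients already use):

```ts
// Hypothetical schema for a shared ~/.ai/mcp.json; no tool reads this file today.
interface McpServerEntry {
  command: string;              // executable that starts the server
  args?: string[];              // arguments passed to the command
  env?: Record<string, string>; // environment variables (keep real secrets out of the file)
}

interface SharedMcpConfig {
  servers: Record<string, McpServerEntry>;
}

// Example entry every editor, CLI, and agent could, in theory, pick up unchanged.
const example: SharedMcpConfig = {
  servers: {
    "brave-search": {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-brave-search"],
      env: { BRAVE_API_KEY: "${BRAVE_API_KEY}" }, // placeholder, resolved by the client
    },
  },
};

console.log(JSON.stringify(example, null, 2)); // roughly what the file would contain
```

Until something like that exists, the practical answer is to keep the server list small so re-wiring it in each tool stays cheap.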
Editorial note
AI speeds up the work, but it doesn't replace taste or responsibility. No post goes live here without me reading it end-to-end. The same goes for code—AI can suggest, but I'm accountable for what ships.
In practice, this often feels like being my own editor-in-chief with a bench of tireless assistants. I can hand a rough idea to an agent, get back a more polished draft, and keep iterating—in between feeding hay bales to the animals, cooking dinner, or tackling projects around the house. The throughput boost is real because I can pause and resume the collaboration on my schedule.
That said, this isn't an argument for hollowing out newsrooms. A world where every publication fires its staff and one editor-in-chief talks to ChatGPT all day would be worse for readers and for the craft. Reporting, editing, fact-checking, and lived expertise matter. I'm just one person with a few things to say about software; I don't have a team. Leaning on LLMs to get to a result I'm happy with—while I keep the final say on what publishes or ships—is a win for me.
Closing thought
AI won't replace programmers, but programmers who use AI will outpace those who don't. Pick a stack, practice with it, and measure by outcomes: working software, shipped faster, with fewer mistakes. That's the goal.
- Key tools: choose one editor assistant, one CLI agent, one background agent, plus a default and "heavy" model.
- Process: small steps, tight loops, human review.
- Principle: productivity over hype—use what helps you ship.