# The Autocomplete That Grew Up
A year ago, AI coding assistance meant autocomplete. You’d type a function signature and Copilot would suggest the body. Useful, but limited. Now I run two AI tools daily — one that lives in my editor and another that runs in my terminal — and they’ve fundamentally changed how I work. Not by writing code for me, but by handling the mechanical parts so I can focus on the decisions that actually matter.
The workflow is simple: you drive, AI assists.
## Two Tools, Different Jobs
I use two AI coding tools daily, and they’re not interchangeable.
Cursor is an AI-powered IDE (VS Code fork). The AI lives inside your editor — inline completions as you type, select-and-edit with Cmd+K, multi-file generation with Composer. It’s fast, visual, and integrated. When I’m writing a feature, fixing a bug, or refactoring code, Cursor is what I open.
Claude Code is a terminal CLI. No GUI. You describe what you want, and it reads your codebase, plans an approach, creates and edits files, runs commands, and checks its own work. It’s autonomous in a way that Cursor isn’t. When I’m doing something large — setting up infrastructure, building a pipeline, scaffolding a project — Claude Code is the tool.
The split is simple: Cursor for writing code, Claude Code for everything else.
## Cursor: The Daily Driver
Most of my working hours are spent in Cursor. Here’s what that looks like:
Inline completion is the baseline. You start typing, the AI suggests the rest. Tab to accept. It’s Copilot-style autocomplete but with better context awareness — it reads your project structure, open files, and recent edits. I use this for boilerplate, repetitive patterns, and any code where I know exactly what I want and just need someone to type it faster.
Cmd+K is where it gets useful. Select a block of code, describe what you want: “add error handling,” “refactor to use the repository pattern,” “convert to TypeScript.” The AI modifies the code in-place. It’s the fastest path from “this code should be different” to “this code is different.”
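To make that concrete, here's a hypothetical before/after for a Cmd+K request like "add error handling." The `parseConfig` function is invented for illustration, not taken from any real project:

```typescript
// Before: any malformed input surfaces as a raw SyntaxError
// from deep inside JSON.parse.
function parseConfig(raw: string): Record<string, unknown> {
  return JSON.parse(raw);
}

// After a "add error handling" edit: same result on valid input,
// a descriptive error with context on invalid input.
function parseConfigSafe(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw);
  } catch (err) {
    throw new Error(`Invalid config JSON: ${(err as Error).message}`);
  }
}
```

The edit is small and mechanical, which is exactly the point: you describe the delta, the AI types it, you review it.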
Composer handles multi-file changes. Describe a feature — “add a user settings page with API endpoint, service layer, and tests” — and it generates or modifies code across files. This is where you need to be careful. Composer is confident. It will generate plausible-looking code that might not match your architecture. Review everything.
The Vim plugin makes Cursor usable for me. I can’t work without modal editing at this point (blame The Primeagen). Cursor’s Vim mode isn’t NeoVim — no flash.nvim, no custom Lua plugins — but for reviewing and editing AI-generated code, it’s enough.
## Claude Code: The Power Tool
Claude Code is different. It’s not an autocomplete or an inline editor. It’s an agent that plans and executes multi-step tasks.
### CLAUDE.md: Teaching the AI Your Project
Every project gets a CLAUDE.md file — a markdown document that Claude Code reads at the start of every conversation. Build commands, code style, architecture decisions, front-matter schemas, banned patterns. Anything the AI can’t infer from the code alone.
The key insight: when Claude does something wrong, add a rule so it doesn’t repeat it. Over time, CLAUDE.md becomes a living document of project conventions — the AI reads it before every session and follows it consistently.
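A sketch of what such a file might look like — every command, rule, and path here is invented for illustration, not a real project's configuration:

```markdown
# CLAUDE.md (illustrative sketch)

## Build
- `npm run build` — production build
- `npm test` — run the full suite before claiming a task is done

## Code style
- TypeScript strict mode; no `any`
- Functions under 50 lines; prefer immutable data

## Gotchas
- Posts need `date` and `tags` in front-matter or the build fails
- Never hand-edit files under `generated/`
```

The "Gotchas" section is where the rules-from-mistakes accumulate.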
### Rules: Global Conventions
Beyond per-project CLAUDE.md, I have global rules that apply to everything:
- Coding style — immutability, small files, functions under 50 lines
- Git workflow — conventional commits, PR process
- Testing — TDD, 80%+ coverage
- Security — no hardcoded secrets, parameterized queries
- Performance — model selection by task complexity
These rules are instructions the AI follows every session, in every project. They encode my preferences so I don’t have to repeat them.
### Agent Orchestration
Claude Code can spawn specialized sub-agents:
- Planner — creates implementation plans before writing code
- Code reviewer — reviews changes with severity levels
- TDD guide — enforces write-test-first workflow
- Security reviewer — scans for vulnerabilities
- Build error resolver — fixes build failures in a loop
I don’t use all of these for every task. The planner and code reviewer see the most action. For larger projects, I’ll run research agents in parallel to gather context from multiple sources simultaneously, then use the planner to structure the implementation before writing any code.
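As I understand Claude Code's subagent format (verify against the current docs), each agent is a markdown file under `.claude/agents/` with YAML front-matter naming it and scoping its tools. A hedged sketch of what a code-reviewer agent might look like:

```markdown
---
name: code-reviewer
description: Reviews changed files and reports issues by severity.
tools: Read, Grep, Glob
---

You are a code reviewer. Examine the diff, flag bugs, style
violations, and security issues, and label each finding as
critical, warning, or nit. Do not modify any files.
```

The body is the agent's system prompt; the `tools` line is what keeps a reviewer read-only.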
### MCP: External Tool Integration
Model Context Protocol connects Claude Code to external services. My setup includes Obsidian MCP — Claude can read and write notes directly to my vault. This means I can ask Claude to research a topic, pull relevant notes for context, and work with my existing knowledge base without copy-pasting anything. It also connects to Railway for deployments and Context7 for fetching up-to-date library documentation.
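MCP servers are typically declared in a JSON config (project-level `.mcp.json` in Claude Code, as I understand it). The package names and paths below are placeholders, not the actual servers from this setup:

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["-y", "example-obsidian-mcp-server"],
      "env": { "OBSIDIAN_VAULT_PATH": "/path/to/vault" }
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "example-context7-mcp"]
    }
  }
}
```

Each entry tells Claude Code how to launch the server; the tools it exposes then become available in the session.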
### The Permission Model
Claude Code has granular permissions. Read operations (git status, ls, cat) are auto-approved. Write operations prompt me first. This is important — I had a session where I accidentally auto-approved Obsidian writes and had to dial it back. The permission model prevents the “AI deleted my files” scenario while keeping the workflow fast for safe operations.
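Permissions live in Claude Code's settings as allow/deny rules matched against tool invocations (the exact schema may have evolved; the rule patterns below are illustrative, not copied from a real config):

```json
{
  "permissions": {
    "allow": [
      "Bash(git status:*)",
      "Bash(git diff:*)",
      "Bash(ls:*)"
    ],
    "deny": [
      "Read(./.env)",
      "Bash(rm -rf:*)"
    ]
  }
}
```

Anything not matched by a rule falls back to an interactive prompt, which is what keeps reads fast and writes deliberate.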
## The Honest Assessment
### Where AI Saves Time
- Boilerplate and scaffolding — generating file structures, config files, YAML manifests
- Researching and synthesizing — pulling sources, creating structured notes, comparing tools
- Multi-file generation — scaffolding a service with route, controller, service, and test files in one go
- Mechanical refactoring — rename a variable across 20 files, convert between formats
- Debugging — AI has gotten genuinely good at indexing a codebase and tracing bugs. Point it at an error, give it the relevant files, and it’ll often find the root cause faster than you would manually. This used to be a weakness — it’s now one of the strongest use cases
- First drafts — of documentation, READMEs, and technical specs where you’ll review and refine
### Where AI Still Struggles
- Complex architecture from scratch — AI generates plausible designs, but “plausible” isn’t “correct for your specific constraints.” It doesn’t know your team’s capabilities, your timeline, or the political reasons why the last rewrite failed. Architecture still needs a human who understands the full context
- False positives — AI is confident. Always. It will present a plausible explanation for a bug, a reasonable-sounding fix, or a convincing code review comment — and be completely wrong. You still need to verify that its reasoning aligns with what you know to be true. The danger isn’t that AI gives bad answers — it’s that bad answers look identical to good ones
- Unfamiliar territory — if you can’t evaluate the output, you can’t use it safely. This was my AI take and it hasn’t changed. AI that generates code you don’t understand is AI that generates bugs you can’t fix
## The Rule
If you can’t do it yourself, you probably shouldn’t be using AI to do it. Not because AI can’t produce the code — it can. But because you won’t know if the code is any good. AI is a force multiplier. It multiplies your ability to ship code, including your ability to ship bugs.
## The Workflow That Works
1. Understand the problem (human)
2. Plan the approach (human, optionally with AI planner)
3. Generate the code (AI — Cursor or Claude Code)
4. Review every line (human — this is non-negotiable)
5. Test it (AI writes tests, human verifies they test the right things)
6. Commit with clear message (human decides what to commit)
Steps 1, 2, 4, and 6 are human. Steps 3 and 5 are AI. The AI handles volume. You handle judgment.
That’s the split. That’s the workflow. It’s not glamorous, and “use AI but verify everything” doesn’t make for a viral tweet. But it’s honest, and it works.
