Why professional developers lead AI projects instead of serving them
Give AI the right prompt and it'll generate a working ASP.NET Core website in 20 minutes. Controllers, services, database migrations, authentication: the whole stack. Click run, it works.
Six months later, that website is unmaintainable. Service lifetimes cause memory leaks under load. Configuration is hardcoded. Tests don't test anything meaningful. The architecture doesn't scale.
AI built exactly what you asked for. The problem was you didn't know what to ask for.
I use Anthropic's Claude Code every day. It generates boilerplate, refactors repetitive changes, writes first-draft implementations. It's made me significantly faster at shipping production code.
But it hasn't made me less valuable. If anything, the opposite.
AI is powerful, but you need to be its architect, not its typist.
The skill that matters isn't whether you can write code faster than AI (you can't). It's whether you can lead a project that uses AI. Whether you can recognize good architecture, direct AI toward better solutions, and be a peer to the tools instead of a passive consumer.
⚡ What AI Can Actually Do
AI can generate a complete website in minutes:
- `CRUD` controllers for your domain models
- `Entity Framework` migrations and `DbContext` setup
- `Repository pattern` implementation
- `JWT` authentication middleware
- Basic model validation
- `Swagger` documentation
And it'll work. You can run it locally, see data flowing, deploy a demo in under an hour.
Here's what AI can't tell you:
- Whether `Singleton` or `Scoped` is the right lifetime for your caching service
- If your configuration will survive deployment to production
- Whether your error handling will make debugging impossible six months from now
- If your API structure will scale to 100 endpoints
- Whether your test strategy validates behavior or just checks that mocks return what you told them to
- If your data retention strategy will pass an audit
The difference:
AI optimizes for "make it work right now." You need "make it work at scale, under load, for two years, with three other developers maintaining it."
🎯 THE SHIFT
The bottleneck isn't "how fast can you type"
It's "how good is your judgment"
🎯 Being a Peer to AI
The Service That Leaks Memory
❌ Vague Prompt:
"Create a caching service for user data"
AI gives you:
```csharp
public class UserCacheService
{
    private readonly ApplicationDbContext _db;

    public UserCacheService(ApplicationDbContext db)
    {
        _db = db;
    }

    public User GetUser(int id)
    {
        // Cache logic here
        return _db.Users.Find(id);
    }
}
```
You register it as Singleton for performance:
```csharp
builder.Services.AddSingleton<UserCacheService>();
builder.Services.AddDbContext<ApplicationDbContext>(...); // Scoped by default
```
It compiles. It runs. It even passes your tests.
In production: Memory leaks. Stale data. Random errors. Why?
The Singleton service captured a Scoped DbContext from the first request. That DbContext never gets disposed. Every future request uses a dead database connection.
✅ Architect Prompt:
"Create UserCacheService as a Singleton. Inject IDbContextFactory<ApplicationDbContext> and create a short-lived DbContext inside each method, so the Singleton never captures a Scoped dependency."
That's being a peer. AI did the typing. You did the architectural thinking that prevents production incidents.
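Here's a sketch of what that factory-based prompt should yield. The caching detail is illustrative (a simple `ConcurrentDictionary`); the point is that the Singleton holds the factory, never a `DbContext`:

```csharp
using System.Collections.Concurrent;
using Microsoft.EntityFrameworkCore;

public class UserCacheService
{
    private readonly IDbContextFactory<ApplicationDbContext> _factory;
    private readonly ConcurrentDictionary<int, User> _cache = new();

    // The factory is safe to hold in a Singleton; individual contexts are not.
    public UserCacheService(IDbContextFactory<ApplicationDbContext> factory)
        => _factory = factory;

    public async Task<User?> GetUserAsync(int id, CancellationToken ct = default)
    {
        if (_cache.TryGetValue(id, out var cached))
            return cached;

        // A short-lived context per call: created, used, disposed.
        await using var db = await _factory.CreateDbContextAsync(ct);
        var user = await db.Users.FindAsync(new object?[] { id }, ct);
        if (user is not null)
            _cache.TryAdd(id, user);
        return user;
    }
}
```

Registration changes accordingly: `AddDbContextFactory<ApplicationDbContext>(...)` alongside `AddSingleton<UserCacheService>()`. (This sketch skips invalidation and expiry; a real cache needs both.)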
Directing Architecture
❌ Vague: "Create a service to handle orders"
AI gives you:
```csharp
public class OrderService
{
    public void CreateOrder(Order order)
    {
        var db = new AppDbContext();
        db.Orders.Add(order);
        db.SaveChanges();
    }
}
```
It works. Ship it.
It's also unmaintainable: Newing up DbContext directly, no dependency injection, synchronous, no error handling, exposing entities.
✅ Architect Prompt:
"Create an IOrderService interface and OrderService implementation. Constructor-inject IOrderRepository. Accept OrderDto, not the entity. Return Task<Result<OrderDto>>."
AI gives you professional-grade code because you directed it.
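Under that prompt, the shape AI should produce looks roughly like this. `Result<T>` is the error wrapper the governance rules mandate; `IOrderRepository.AddAsync` and the DTO fields (`CustomerId`, `Total`) are illustrative assumptions, and `OrderDto` is assumed to be a record:

```csharp
public interface IOrderService
{
    Task<Result<OrderDto>> CreateOrderAsync(OrderDto dto, CancellationToken ct);
}

public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;

    // Injected dependency; the service never news up a DbContext.
    public OrderService(IOrderRepository repository) => _repository = repository;

    public async Task<Result<OrderDto>> CreateOrderAsync(OrderDto dto, CancellationToken ct)
    {
        var order = new Order { CustomerId = dto.CustomerId, Total = dto.Total };
        await _repository.AddAsync(order, ct);

        // Return a DTO, never the tracked entity.
        return Result<OrderDto>.Success(dto with { Id = order.Id });
    }
}
```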
Catching Subtle Mistakes
AI generates this controller:
```csharp
[HttpPost]
public IActionResult CreateOrder(Order order)
{
    _service.CreateOrder(order);
    return Ok();
}
```
Can you spot the issues?
- Exposing domain entity instead of a `DTO`
- No validation
- Wrong `HTTP` status code (should be `201 Created`)
- No location header
- No error handling
- Synchronous
- No cancellation token
- No observability (logging belongs in the service layer, but nothing here is observable)
If you don't know these patterns, you ship it.
✅ Precise Fix:
"Refactor to accept OrderDto, return CreatedAtAction with a location header, make it async Task<IActionResult>, and accept a CancellationToken."
Being a peer: AI types, you architect.
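Applied to the controller above, that fix looks roughly like this. It assumes an async `IOrderService` returning a Result-style wrapper and an existing `GetOrder` action for the location header to point at:

```csharp
[HttpPost]
public async Task<IActionResult> CreateOrder(
    [FromBody] OrderDto dto, CancellationToken ct)
{
    var result = await _service.CreateOrderAsync(dto, ct);
    if (!result.IsSuccess)
        return BadRequest(result.Error);

    // 201 Created, with a Location header pointing at the new resource.
    return CreatedAtAction(nameof(GetOrder),
        new { id = result.Value!.Id }, result.Value);
}
```

With the `[ApiController]` attribute, model validation failures return `400 Bad Request` automatically; without it, check `ModelState.IsValid` explicitly.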
🏗️ My Architectural Governance System
I don't give AI vague prompts. I've built a governance system that Claude Code reads before touching my codebase.
The structure:
```
📁 ARCHITECTURAL GOVERNANCE
│
├── CLAUDE.md (~1500 words)
│   ├── Domain context (PHI + financial data)
│   ├── 10 Inviolable Rules
│   └── Architecture pattern (Clean + CQRS)
│
├── /rules/ (12 detailed files)
│   ├── database.md
│   ├── api-design.md
│   ├── testing.md
│   ├── error-handling.md
│   ├── validation.md
│   ├── authentication.md
│   ├── configuration.md
│   ├── deployment.md
│   └── ... (4 more)
│
└── 🤖 5 Validation Agents
    └── Run on every commit
```
Before Claude Code generates a line, it knows:
- Architecture pattern (`Clean Architecture` with `CQRS`)
- Layer boundaries (strictly enforced)
- Technology stack (`PostgreSQL`, `EF Core`, `Minimal APIs`, `Blazor Server`)
- Patterns we follow (`Result<T>` for errors, no hard deletes, all tests pass before commit)
- Domain constraints (healthcare systems handling `PHI` and financial data)
I'm not sharing my actual rules; they're specific to my domain and employer. But I'll show you how I thought through building it, so you can build yours.
💡 Domain Context Shapes Everything
My CLAUDE.md starts with:
"We build internal tools for healthcare operations. Every application handles financial data, protected health information, or both. The standard is not 'good enough' โ it is beyond reproach."
That one statement tells AI:
- Mistakes have regulatory consequences (`HIPAA`, financial audits)
- Audit trails are non-negotiable
- Security failures are compliance violations
- "It works" isn't the bar; "it's auditable" is
Your context is different. Maybe you build e-commerce (Black Friday performance matters). Maybe internal tools (maintainability is critical). Maybe public APIs (backwards compatibility is sacred).
Write that context explicitly. It shapes every architectural decision.
🚫 Inviolable Rules
My 10 non-negotiable rules:
1. No layer reaches into another layer's internals: all communication goes through defined interfaces and contracts
2. All database access goes through the data/infrastructure layer: no other project writes directly to the database
3. No hard deletes: audit history is permanent (soft delete via `IsDeleted`)
4. No secrets in code: environment variables or Key Vault only
5. Mock data only in dev: no real PHI or client data anywhere
6. Handle exceptions at layer boundaries: translate to `Result<T>` before returning
7. Seed scripts never run in production: production must structurally prevent this
8. No self-merging PRs: peer review required
9. Never commit to main: feature branches only
10. All tests pass before commit: no broken code, no exceptions
These aren't preferences. In regulated environments, violations mean failed audits, HIPAA violations, financial penalties.
AI doesn't know my domain. It doesn't know deleting claims data violates retention. It doesn't know PHI in development is a compliance violation. It doesn't know silent exceptions make audits impossible.
I tell it once, in CLAUDE.md. It follows the rules consistently, and the validation agents catch the times it doesn't.
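Rule 6 leans on a `Result<T>` type. A minimal sketch of the shape is enough to see the idea (real implementations usually add error codes and mapping helpers):

```csharp
public sealed class Result<T>
{
    public bool IsSuccess { get; }
    public T? Value { get; }
    public string? Error { get; }

    private Result(bool isSuccess, T? value, string? error)
        => (IsSuccess, Value, Error) = (isSuccess, value, error);

    public static Result<T> Success(T value) => new(true, value, null);
    public static Result<T> NotFound(string error = "Not found") => new(false, default, error);
    public static Result<T> Failure(string error) => new(false, default, error);
}
```

At a layer boundary, catch the exception, log it, and return `Result<T>.Failure(...)` upward; callers branch on `IsSuccess` instead of catching.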
🎯 What a Production Prompt Looks Like
Here's a real prompt I use for data extraction in production systems. Notice the specificity: explicit workflow, routing rules, verification checklist, anti-patterns called out. This is what "directing AI" means in practice.
```
You are a data extraction and verification assistant.

FILE: {fileName}
SOURCE FORMAT: {excel sheets as CSV / PDF pages as text}

FISCAL YEAR NOTE: If the filename contains a year range (e.g. "2024 - 2025"),
FiscalYearNumber is the earlier year. If single year, use that.

WORKFLOW (follow this exactly):
1. Read schema.json and ALL .csv files in ./sheets/ directory
2. Extract data from sheets into schema tables
3. Verify: check layer routing, suffix stripping, data types, completeness
4. Write final verified output to ./output.json
5. Write verification notes to ./verification-details.txt
Do NOT re-read files after step 1. Do NOT rewrite output.json after writing it.

LAYER ROUTING:
- NO suffix → Quarterly tables (default)
- "- Agg" or "- Aggregate" suffix → Aggregate tables
IMPORTANT: Strip the layer suffix before storing Customer.

ROW CONSOLIDATION: When multiple sheets contain different metrics
for the same Customer + Year, merge into a SINGLE row. Each sheet
populates its columns on that one row.

EXTRACTION RULES:
1. Extract data into schema tables
2. Only include tables where data was found; omit empty tables
3. Omit columns null/empty for ALL rows
4. Numbers as plain numbers. Dates as ISO 8601 (YYYY-MM-DD)
5. Data that doesn't fit schema → add to "recommendations" array

VERIFICATION CHECKLIST:
1. LAYER ROUTING correct
2. SUFFIX STRIPPING: no layer suffixes in Customer values
3. DATA COMPLETENESS: key data from source captured
4. DATA TYPES: numbers not strings, dates ISO 8601
5. TABLE NAMES: match schema exactly
6. FiscalYearNumber: matches filename year
7. ROW CONSOLIDATION: one row per Customer + Year per layer

Output format:
{"tables":{"TableName":[{"Col":"val"}]},"recommendations":[...]}

Write verification summary to ./verification-details.txt
Do NOT include any explanation in stdout; only file writes.
```
This isn't a beginner prompt. It encodes decisions about data normalization, error handling, output format, and edge cases. AI executes the pattern. I designed the pattern.
⚖️ Pragmatic Tradeoffs
My guidance evolves with project reality.
Example: PostgreSQL best practice is snake_case. My team's data warehouse is 70% complete in PascalCase.
I could:
- Insist on "best practice" and create mapping hell
- Force data team to refactor 70% of their work
- Live with dual naming in every query
I adapted: Updated CLAUDE.md to enforce PascalCase for this project.
📐 ARCHITECTURAL JUDGMENT
AI knows best practices from documentation.
You know context from maintaining the system.
Sometimes the right answer is breaking the "rule" because consistency costs outweigh theoretical purity.
🤖 Validation Agents
Rules in a file are a starting point. But rules you don't enforce are suggestions, and suggestions get ignored under deadline pressure.
I built five automated checks that validate compliance with my architectural rules. They run as Claude Code hooks or standalone prompts: not manual code review, not hoping the LLM remembers the rules.
Agent 1: Service Lifetime Validator
Catches captive dependencies: Singletons that inject Scoped services like DbContext. Also flags IOptionsSnapshot<T> in Singleton services (should be IOptionsMonitor<T>). ASP.NET Core's scope validation catches some of these at startup, but not all, especially in BackgroundService and factory-created scopes.
Agent 2: Boundary Guardian
Ensures controllers never return entities. Every API response and request body must use DTOs. If an entity type appears in a controller's return type or [FromBody] parameter, this agent flags it before it ships.
Agent 3: Error Flow Checker
Catches exceptions used for control flow, such as throw new NotFoundException() instead of Result<T>.NotFound(). Also flags bare catch blocks that swallow errors and try/catch in controllers (unhandled exceptions belong in global middleware).
Agent 4: Async & CancellationToken Checker
Flags .Result, .Wait(), synchronous EF Core calls (.ToList() instead of .ToListAsync()), and controller actions missing CancellationToken. These are invisible under normal load and catastrophic under high load.
Agent 5: Test Coverage Gate
Verifies new service methods have tests, and that the tests actually validate behavior. A test that sets up a mock to return X, then asserts the result is X, proves nothing. This agent catches false-confidence tests that pass regardless of whether the code works.
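To make that concrete, here's the pattern Agent 5 flags versus the one it accepts, sketched with xUnit and Moq (the service, repository, and method names are hypothetical):

```csharp
// ❌ False confidence: the assertion only echoes the mock's setup.
[Fact]
public async Task GetUser_MockEcho_ProvesNothing()
{
    var repo = new Mock<IUserRepository>();
    repo.Setup(r => r.GetAsync(42, It.IsAny<CancellationToken>()))
        .ReturnsAsync(new User { Id = 42 });

    var service = new UserService(repo.Object);
    var user = await service.GetUserAsync(42, CancellationToken.None);

    Assert.Equal(42, user!.Id); // passes even if UserService has no logic at all
}

// ✅ Behavior: asserts a decision the service itself makes.
[Fact]
public async Task GetUser_UnknownId_ReturnsFailure()
{
    var repo = new Mock<IUserRepository>();
    repo.Setup(r => r.GetAsync(99, It.IsAny<CancellationToken>()))
        .ReturnsAsync((User?)null);

    var service = new UserService(repo.Object);
    var result = await service.GetUserResultAsync(99, CancellationToken.None);

    Assert.False(result.IsSuccess); // the null-to-failure translation is real behavior
}
```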
What an agent looks like in practice
Here's Agent 4 as a Claude Code hook that runs automatically after every file write:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Check the file that was just written. If it's a controller or service: (1) Flag .Result, .Wait(), .GetAwaiter().GetResult(). (2) Flag synchronous EF Core calls (ToList, Find, SaveChanges -> should be async). (3) Verify controller actions accept CancellationToken. (4) Verify CancellationToken is passed to service calls. Report violations."
          }
        ]
      }
    ]
  }
}
```
This runs every time Claude Code writes a file. If it introduces a synchronous database call or forgets a CancellationToken, the hook catches it immediately, before it ever reaches a commit.
The other agents work the same way: encode the rule as a prompt, attach it to a hook event, let the system enforce what you'd otherwise catch in code review. The rules are yours. The automation means they actually get followed.
This is "being the architect" at scale. Don't catch mistakes manually โ build systems that prevent them. (In compliance-heavy domains, I add domain-specific agents too โ audit field verification, soft delete enforcement, data retention checks. Your agents should reflect your domain's non-negotiables.)
🛠️ How You Build Your Own
You don't need my rules. You need your rules based on your experience and domain.
Step 1: Audit Your Patterns
For one week, notice:
- What patterns do you use consistently?
- What architectural decisions do you want enforced?
- What mistakes have juniors made repeatedly?
- What broke in production that should have been prevented?
Write them down. These become your rules.
Step 2: Identify What's Inviolable
Ask for each pattern:
- Does violating this cause production issues?
- Does this protect data integrity, security, or compliance?
- Would I reject a `PR` that ignored this?
- Have I debugged this mistake more than once?
Yes to multiple = inviolable.
Usually qualify: Exception handling strategy, configuration management, error propagation, testing requirements.
Usually don't: Naming conventions, comment formatting, logging verbosity.
Step 3: Start Simple, Evolve
💡 START HERE
You don't need 1500 words on day one.
Five rules is enough to direct AI meaningfully.
Your first 5 rules:
- Service lifetimes (Scoped for most, Singleton only when thread-safe)
- Configuration via `IOptions<T>`, never hardcoded
- DTOs at API boundaries, entities internal
- Async all the way (no `.Result` or `.Wait()`)
- All tests pass before commit
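Those five rules fit in a CLAUDE.md small enough to write in an afternoon. A starter sketch (the context line and stack are placeholders; adapt them to your project):

```markdown
# CLAUDE.md

## Context
Internal line-of-business APIs. ASP.NET Core, EF Core, PostgreSQL.

## Rules
1. Service lifetimes: Scoped by default. Singleton only for stateless,
   thread-safe services. Never inject a Scoped service into a Singleton.
2. Configuration: bind settings to IOptions<T> from appsettings or
   environment variables. No hardcoded connection strings or secrets.
3. Boundaries: controllers accept and return DTOs only. Entities never
   cross the API surface.
4. Async all the way: no .Result, .Wait(), or synchronous EF Core calls.
   Every controller action accepts and forwards a CancellationToken.
5. Tests: all tests pass before any commit. New service methods ship with
   tests that assert behavior, not mock echoes.
```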
Then iterate:
- AI does something wrong? Add a rule.
- Correcting the same mistake repeatedly? Codify it.
- Discover a pattern that works? Document it.
Your guidance should be living documentation capturing lessons learned, not static dogma.
💼 Why This Makes You More Valuable
The market is splitting:
Group 1: AI as Magic Box
- Prompt → Code → Ship
- Can't explain structure
- Can't identify flaws
- Can't debug failures
- Stuck when AI produces wrong output
Group 2: AI as Force Multiplier
- Know good architecture
- Direct AI toward solutions
- Review critically
- Catch issues before production
- Move faster, think harder
Which gets hired? Promoted? Handles 2 AM incidents?
Companies I've worked at (healthcare processing millions of claims, real estate platforms serving tens of millions of homes, healthcare operations handling PHI) need developers who lead projects, make architectural decisions, and deliver systems that scale.
AI hasn't changed that. It's made it more important.
Bad code can now be generated at scale. A developer with AI creates 50 endpoints in a day: all with the same flaw, all compiling, all failing in production.
⚡ THE NEW BOTTLENECK
From "how fast can you type"
To "how good is your judgment"
🎓 What This Means for Your Career
I'm speaking about AI in software development at a university panel April 23rd. Students ask: "What jobs exist in five years if AI can code?"
Jobs that exist:
- Design systems, not just implement features
- Make architectural decisions, not accept AI defaults
- Lead projects using AI as a tool
- Review AI code with informed judgment
- Debug production regardless of who wrote it
- Build governance preventing mistakes
Jobs that disappear:
- Can only do what AI does
- Accept code without understanding
- Can't evaluate quality
- Can't debug others' code
- Treat development as typing
- Have no architectural principles
The patterns in this blog (DI, configuration, Web API design, error handling, testing) aren't about speed. They're about decisions.
AI writes code in seconds. You need to know if it's the right code.
Learn the patterns. Use AI to accelerate. But be the architect, not the typist.
That's the career that scales.
🧭 Key Takeaways
- AI amplifies judgment, good or bad
- Being a peer means directing with architectural knowledge
- Build a governance system encoding your decisions
- Start with domain context, inviolable rules, core patterns
- Organize by concern, not technology
- Evolve guidance as you learn
- Validate automatically, not manually
Next steps:
Start your governance file today. Five rules:
- What service lifetimes and why?
- How do you handle configuration?
- How do you expose data at boundaries?
- What's your error handling strategy?
- What must pass before commit?
That's enough to direct AI meaningfully instead of hoping it guesses right.
In the next post: exception handling in production โ why exceptions for control flow fail audits, how Result<T> patterns prevent it, and building error handling that survives regulatory review.