ATLAS + GOTCHA -- Part 7
Advanced Patterns and the Top 10 Anti-Gotchas
The Problem
You’ve been using ATLAS (the 5-phase architecture checklist) and GOTCHA (the 6-layer AI prompt format) for a few weeks now. The workflow is faster. The AI output is better. You’re spending less time fixing generated code and more time reviewing it.
But as projects get bigger and teams grow, new problems appear. Problems that these frameworks can prevent — if you know where they hide.
I call them anti-gotchas. Not bugs in your code. Not missing features. Mistakes in how you use the structured approach. Patterns that look fine on small projects but break at scale. Ways to write prompts that seem complete but leave critical gaps the AI fills with assumptions.
Over six articles, you’ve seen the frameworks applied to a single API and a single pipeline. Real projects aren’t single services. They’re multi-tenant SaaS platforms. They’re K8s operators managing complex state. They’re serverless functions replacing legacy monoliths. The frameworks scale — but only if you avoid the traps.
The Solution
Ten anti-gotchas. Then three advanced patterns with prompts. Then a library of 20 ready-to-use prompts you can adapt for your own work.
The Top 10 Anti-Gotchas
1. Skipping the Architect Phase Because “It’s Obvious”
The most common mistake. You know what you’re building, so you skip ATLAS Architect and jump to Trace. Two days later you discover the system needs to handle two use cases you didn’t define, the scope has doubled, and your AI-generated code is structured for use case A, not B.
The fix: always write “out of scope” explicitly. The AI will build what you describe. If you don’t say what you’re NOT building, it guesses — and usually guesses too much.
2. Goals That Describe Implementation, Not Outcomes
# Bad: this is implementation, not a goal
GOALS: Use the repository pattern with async/await,
return DTOs from controllers, validate with FluentValidation.
# Good: this is a goal
GOALS: Build a REST API that registers users, authenticates with JWT,
and returns RFC 7807 error responses. Handles 500 concurrent requests.
Goals answer “what does done look like?” Repository pattern is a Heuristic. JWT expiry time is an Arg. Keep Goals clean.
3. Context Without Your Actual Constraints
Developers write Context that sounds detailed but contains no real constraints:
# Vague context (useless)
CONTEXT: This is a microservices architecture deployed to the cloud.
We follow standard patterns and best practices.
# Real context (useful)
CONTEXT: 10 services, all on AKS West Europe. Database per service.
Service Bus for async. Managed identities — no connection strings.
All logs to Azure Monitor via Serilog. EU data residency required.
“Standard patterns and best practices” means nothing to the AI. “EU data residency required” means no data leaves the West Europe region — and the AI will respect that in every storage and integration decision.
4. Heuristics That Are Too Vague to Check
# Vague (the AI can't verify this)
DON'T: Write bad code.
# Specific (the AI can check each one)
DON'T:
- No synchronous calls to external services (no .Result or .Wait())
- No secrets in code — all from environment variables
- No catching Exception base class — catch specific exceptions only
Every heuristic should be a yes/no check: “Did I do this?” If you can’t check it, rewrite it.
5. Missing Args for Anything That Has a Number
If there’s a number in your system — a timeout, a retry count, a replica count, a max request size, a token expiry — it belongs in Args. If you don’t put it there, the AI invents a value. Sometimes it invents a reasonable one. Sometimes it invents 2 retries when you need 5, or 30 minutes when JWT should expire in 1 hour.
Go through your Stress-test scenarios. Every number in those scenarios is an Arg.
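As a sketch of what that looks like, here is a hypothetical ARGS block derived from stress-test scenarios — every value below is a placeholder, not a recommendation:

```
=== ARGS ===
JWT_EXPIRY_MINUTES: 60       # from scenario "token expires after 1 hour"
MAX_RETRIES: 5               # from scenario "Service Bus transient failure"
RETRY_BACKOFF_SECONDS: 2     # exponential backoff base
MAX_REQUEST_SIZE_MB: 10      # from scenario "oversized payload rejected with 413"
K8S_REPLICAS: 3              # from scenario "one pod down, traffic still served"
```

Each line traces back to a scenario, which is exactly the check: if a number appears in a scenario but not in Args, the AI will invent it.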
6. One Prompt for the Entire System
GOTCHA works best at the service level, not the system level. One prompt for a whole e-commerce platform will produce something generic. One prompt per service — Order Service, Notification Service, Users API — produces something specific.
Use ATLAS to design the whole system. Then write one GOTCHA prompt per component. Each prompt is focused, complete, and scoped.
7. Not Updating GOTCHA When Requirements Change
You write a GOTCHA prompt. The AI generates code. Three weeks later the requirements change — you add refresh tokens, you change from PostgreSQL to Cosmos DB, you add a new event type. You update the code directly and forget to update the GOTCHA prompt.
Next time you ask the AI to extend the system, it works from the old prompt. It reintroduces the old patterns. It uses the old database.
Keep your GOTCHA prompts in the repository. Treat them like documentation — because that’s what they are.
8. Ignoring the “What We Adjusted” Step
In Articles 5 and 6, I showed three things the AI got wrong in each project. That section is not optional. After every AI-generated output, spend 10 minutes checking against each GOTCHA layer:
- Does the code match the Goals?
- Is the build order in the code consistent with Orchestration?
- Are all the Tools present and correctly versioned?
- Does the code reflect the Context (conventions, managed identities, etc.)?
- Does the code follow all the Heuristics?
- Are all the Args used correctly?
Most of the AI’s mistakes are small and fixable in minutes. But you only catch them if you look.
9. Using GOTCHA as a Shortcut, Not a Framework
“I’ll just write a long prompt and call it GOTCHA.” This misses the point. The value of GOTCHA is the separation of concerns between layers. Goals separate from Heuristics. Tools separate from Context. When you mix them, the AI also mixes them — and you get code that’s trying to be a goal, a constraint, and a tool at the same time.
If you can’t easily assign a sentence to exactly one GOTCHA layer, rewrite it.
10. Skipping ATLAS for “Small” Tasks
“This is just a small feature — I don’t need ATLAS.” Small features in large systems are not small. Adding a new field to a database schema can require a migration, an API change, a contract update, a frontend update, and a re-deploy of three services. That’s five ATLAS phases worth of thinking.
You don’t need a full ATLAS document for every task. But you should ask the five questions, even informally: What are the boundaries? What’s the flow? What connects to what? What’s the build order? How do I validate?
Two minutes of thinking. Hours of debugging saved.
Advanced Patterns
Pattern 1: Multi-Tenant SaaS
Multi-tenancy is where most AI-generated code fails. The AI generates single-tenant code by default. You have to be explicit.
Key additions to your GOTCHA prompt:
=== CONTEXT ===
Multi-tenant SaaS. TenantId on every database entity.
All queries must include WHERE tenant_id = @tenantId.
TenantId extracted from JWT claim 'tid' in middleware.
Never return data from one tenant to another — this is a security boundary.
EF Core global query filter: .HasQueryFilter(e => e.TenantId == _tenantContext.TenantId)
=== HEURISTICS ===
DO:
- Inject ITenantContext into every repository
- Set TenantId from ITenantContext in every entity before saving
- Test cross-tenant data isolation explicitly (Scenario: TenantA token accessing TenantB data → 404)
DON'T:
- Don't accept TenantId from the request body — always from JWT
- Don't write any query without the tenant filter
The EF Core global query filter is the most important piece. Without it, you rely on every developer remembering to add the tenant filter to every query. With it, it’s automatic and impossible to forget.
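To make that concrete, here is a minimal EF Core sketch of the pattern — the `ITenantContext`, `AppDbContext`, and `Order` names are hypothetical placeholders, and the filter-on-a-context-field approach follows the documented EF Core multi-tenancy pattern:

```csharp
// Hypothetical tenant context, resolved per request from the JWT 'tid' claim.
public interface ITenantContext
{
    Guid TenantId { get; }
}

public class AppDbContext : DbContext
{
    private readonly Guid _tenantId;

    public AppDbContext(DbContextOptions<AppDbContext> options, ITenantContext tenantContext)
        : base(options) => _tenantId = tenantContext.TenantId;

    public DbSet<Order> Orders => Set<Order>();  // Order is a stand-in entity

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Global filter: every query against Order is automatically scoped
        // to the current tenant -- no per-query WHERE clause to forget.
        modelBuilder.Entity<Order>()
            .HasQueryFilter(o => o.TenantId == _tenantId);
    }

    public override Task<int> SaveChangesAsync(CancellationToken ct = default)
    {
        // Stamp TenantId on every new entity so writes are scoped too --
        // never trust a TenantId supplied in the request body.
        foreach (var entry in ChangeTracker.Entries<Order>()
                     .Where(e => e.State == EntityState.Added))
        {
            entry.Entity.TenantId = _tenantId;
        }
        return base.SaveChangesAsync(ct);
    }
}
```

The filter references a field on the context instance, so each request (with its own scoped `DbContext`) gets its own tenant boundary.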
Pattern 2: Kubernetes Operators
Kubernetes operators are complex — they manage custom resources and reconcile state continuously. ATLAS + GOTCHA is especially useful here because the scope is hard to define without it.
Key ATLAS additions:
[T] TRACE (reconciliation loop)
1. Controller watches MyResource CRD for create/update/delete events
2. On event: fetch current state from cluster
3. Compare with desired state (spec)
4. If drift: apply corrective actions (create/update/delete child resources)
5. Update status subresource with current phase + conditions
6. Requeue after N minutes for periodic drift check
[A] ASSEMBLE
Phase 1: CRD definition (schema + validation)
Phase 2: Reconciler skeleton (kubebuilder scaffolding)
Phase 3: Reconcile logic (desired vs actual state comparison)
Phase 4: Status conditions (ready, degraded, progressing)
Phase 5: Error handling (permanent vs transient errors, max retries)
The key Heuristic for operators:
=== HEURISTICS ===
DO:
- Make reconcile idempotent (running it twice must produce the same result)
- Use status conditions to communicate state to users (not just logs)
- Distinguish permanent errors (don't retry) from transient errors (requeue with backoff)
- Always update status before returning an error from Reconcile
Idempotency is the critical one. An operator that’s not idempotent will create duplicate resources, break on retry, and be impossible to debug.
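The idempotency requirement can be illustrated with a stripped-down sketch — the types below are invented for illustration and stand in for a real operator SDK such as kubebuilder or KubeOps:

```csharp
// Reconcile compares desired vs actual state and acts only on drift,
// so running it twice in a row changes nothing the second time.
public record DesiredSpec(string Name, int Replicas);

public class FakeCluster
{
    // Stand-in for the real cluster: deployment name -> replica count.
    public Dictionary<string, int> Deployments { get; } = new();
}

public static class Reconciler
{
    // Returns true if a corrective action was taken.
    public static bool Reconcile(FakeCluster cluster, DesiredSpec spec)
    {
        if (!cluster.Deployments.TryGetValue(spec.Name, out var actual))
        {
            cluster.Deployments[spec.Name] = spec.Replicas;  // create child resource
            return true;
        }
        if (actual != spec.Replicas)
        {
            cluster.Deployments[spec.Name] = spec.Replicas;  // correct drift
            return true;
        }
        return false;  // no drift: reconcile is a no-op
    }
}
```

Calling `Reconcile` with the same spec twice acts at most once; the second call is a no-op. That property is what makes requeues and retries safe.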
Pattern 3: Serverless Migration
Migrating a monolith to serverless (Azure Functions, for example) is a common project where ATLAS Trace is essential and often skipped.
The monolith is full of implicit dependencies — one method calls another, which writes to a database, which yet another component reads through a cache. When you break those into separate functions, each implicit dependency becomes an explicit integration. Miss one and you have a silent data consistency bug.
Key ATLAS additions for serverless migration:
[A] ARCHITECT
Out of scope: Don't migrate everything at once. This iteration:
only the order processing path. Leave payments and notifications in the monolith.
[T] TRACE
For each function to migrate, trace:
- What triggers it? (HTTP, queue message, timer, event)
- What data does it read? Where from?
- What does it write? Where?
- What does it call? (other functions, external APIs, databases)
- What happens on failure?
[L] LINK
(Map each implicit monolith call to an explicit Azure service)
| Implicit call in the monolith | Becomes |
| --- | --- |
| Direct DB call | Durable Function with retry |
| In-process event | Service Bus message |
| Shared cache | Azure Cache for Redis (shared) |
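As a sketch of the “in-process event becomes a Service Bus message” row, here is what the explicit version might look like — the `OrderEvents` class, `OrderPlaced` comment, and `order-placed` topic name are hypothetical, while the `Azure.Messaging.ServiceBus` calls are the real SDK surface:

```csharp
using Azure.Messaging.ServiceBus;
using System.Text.Json;

// Before (monolith): an in-process event, invisible to ATLAS Trace.
//   _events.Publish(new OrderPlaced(orderId));
//
// After: an explicit Service Bus message -- the dependency is now
// visible, durable, and retryable.
public class OrderEvents
{
    private readonly ServiceBusSender _sender;

    public OrderEvents(ServiceBusClient client) =>
        _sender = client.CreateSender("order-placed");  // hypothetical topic name

    public async Task PublishAsync(Guid orderId, CancellationToken ct = default)
    {
        var message = new ServiceBusMessage(JsonSerializer.Serialize(new { orderId }))
        {
            CorrelationId = orderId.ToString()  // lets the consumer correlate logs
        };
        await _sender.SendMessageAsync(message, ct);
    }
}
```

The point of the Link phase is exactly this shift: the dependency now has a name, a contract, and a failure mode you can trace.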
The 20-Prompt Library
Here are 20 ready-to-use GOTCHA prompts for common tasks. Each is a template — fill in the [YOUR VALUES] sections.
1. CRUD REST API (.NET)
GOALS: Build a .NET 10 REST API for [ENTITY] management. CRUD + JWT auth. RFC 7807 errors.
ORCHESTRATION: Entity → Repository → Service → Controller → Validators → Middleware
TOOLS: .NET 10, EF Core 10, Npgsql, BCrypt.Net-Next, FluentValidation, Serilog
CONTEXT: Part of [SYSTEM]. Deploys to AKS. Managed identities. Soft delete via DeletedAt.
HEURISTICS: Repository pattern. Return DTOs not entities. Async everywhere. No logic in controllers.
ARGS: DB_CONNECTION env var. JWT_SECRET env var. Port 8080. K8s replicas: [N].
2. Event-Driven Worker (.NET + Service Bus)
GOALS: .NET 10 worker service. Consumes [EVENT] from Service Bus. Processes and writes to [DESTINATION].
ORCHESTRATION: Consumer → Deserialize → Validate → Process → Persist → Acknowledge
TOOLS: .NET 10, Azure.Messaging.ServiceBus, EF Core, Serilog, Polly
CONTEXT: [SYSTEM]. Messages are idempotent — deduplicate by [FIELD]. At-least-once delivery.
HEURISTICS: Acknowledge only after successful processing. Retry transient errors with backoff. Log correlation ID.
ARGS: SERVICEBUS_CONNECTION env var. Topic: [TOPIC]. Subscription: [SUB]. Max concurrent: [N].
3. Azure DevOps CI/CD Pipeline
GOALS: Azure DevOps pipeline for [SERVICE]. Build → test → scan → push to ACR → deploy to AKS.
ORCHESTRATION: 5 stages: build, scan (Trivy HIGH/CRITICAL), image push, deploy, smoketest.
TOOLS: Azure DevOps YAML, dotnet 10 SDK, Docker, Trivy, kubectl, kubelogin.
CONTEXT: ACR: [NAME]. AKS: [CLUSTER], namespace [NS]. Secrets from variable group.
HEURISTICS: Deploy only on main. Use dependsOn. Never use latest tag. rollout status before smoketest.
ARGS: Pool: ubuntu-latest. Image tag: BuildId+gitsha. Trivy: HIGH,CRITICAL. Rollout timeout: 5m.
4. PostgreSQL Schema Migration
GOALS: EF Core migration to add [FEATURE] to [TABLE]. Zero downtime. Backward compatible.
ORCHESTRATION: New nullable column → deploy new code → backfill data → add NOT NULL constraint.
TOOLS: EF Core 10, Npgsql, dotnet ef migrations.
CONTEXT: [SERVICE]. Production table has [N] million rows. Migration runs during rolling deploy.
HEURISTICS: Never drop columns in this migration. New columns nullable until backfill. Index on [FIELD].
ARGS: Migration name: Add[Feature]To[Table]. Backfill batch size: 1000. DB_CONNECTION env var.
5. JWT Authentication Middleware
GOALS: ASP.NET Core JWT middleware. Validate HS256 tokens. Extract sub, email, [CLAIMS] as user context.
ORCHESTRATION: Register → configure validation → inject ICurrentUser into DI.
TOOLS: .NET 10, System.IdentityModel.Tokens.Jwt, Microsoft.AspNetCore.Authentication.JwtBearer.
CONTEXT: Tokens issued by [SERVICE]. Must validate issuer, audience, expiry, and signature.
HEURISTICS: Never trust claims without signature validation. Return 401 (not 403) on invalid token.
ARGS: JWT_SECRET env var. Issuer: [ISSUER]. Audience: [AUDIENCE]. Clock skew: 30s.
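A minimal `Program.cs` sketch of how that prompt's Args and Heuristics might land in code — the issuer and audience strings are placeholders standing in for [ISSUER] and [AUDIENCE]:

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;
using System.Text;

var builder = WebApplication.CreateBuilder(args);

// Args: JWT_SECRET from environment, never from code.
var secret = Environment.GetEnvironmentVariable("JWT_SECRET")
             ?? throw new InvalidOperationException("JWT_SECRET not set");

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            // Heuristic: never trust claims without full validation.
            ValidateIssuer = true,
            ValidIssuer = "https://auth.example.com",   // placeholder [ISSUER]
            ValidateAudience = true,
            ValidAudience = "my-api",                   // placeholder [AUDIENCE]
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(secret)),
            ClockSkew = TimeSpan.FromSeconds(30)        // Args: clock skew 30s
        };
    });
```

An invalid or expired token then yields 401 from the authentication middleware, matching the Heuristic (401, not 403).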
6. React Form with Validation
GOALS: React form for [ACTION]. Client-side validation + server error display. Accessible.
ORCHESTRATION: Form state → validate on submit → POST to [ENDPOINT] → handle success/error.
TOOLS: React 19, React Hook Form, Zod, fetch API, Tailwind CSS.
CONTEXT: Part of [APP]. API returns RFC 7807 errors. Auth via Bearer token in localStorage.
HEURISTICS: Field-level errors from server mapped to form fields. No double submit. ARIA labels.
ARGS: API endpoint: [URL]. Token: localStorage.getItem('token'). Redirect on success: [PATH].
7. Kubernetes Deployment Manifest
GOALS: K8s manifests for [SERVICE]. Deployment + Service + Ingress. Production-ready.
ORCHESTRATION: Namespace → Secret → ConfigMap → Deployment → Service → Ingress.
TOOLS: Kubernetes 1.29+, NGINX Ingress Controller, cert-manager (TLS).
CONTEXT: AKS West Europe. Image in ACR. Secrets from K8s Secret. Azure managed identity via workload identity.
HEURISTICS: runAsNonRoot. allowPrivilegeEscalation: false. Liveness + readiness probes. Resource limits.
ARGS: Image: [ACR]/[SERVICE]:[TAG]. Namespace: [NS]. Replicas: [N]. CPU: [REQ]/[LIM]. Memory: [REQ]/[LIM].
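For reference, here is roughly what the Deployment portion of that prompt's output should contain — every name, image, and number below is a placeholder for the bracketed Args:

```yaml
# Deployment fragment illustrating the Heuristics above. Substitute your Args.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # [SERVICE]
  namespace: my-namespace     # [NS]
spec:
  replicas: 3                 # [N]
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: myacr.azurecr.io/my-service:1.2.3   # pinned tag, never :latest
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
          resources:
            requests: { cpu: 250m, memory: 256Mi }   # [REQ]
            limits:   { cpu: 500m, memory: 512Mi }   # [LIM]
          livenessProbe:
            httpGet: { path: /healthz, port: 8080 }
            initialDelaySeconds: 10
          readinessProbe:
            httpGet: { path: /health, port: 8080 }
            initialDelaySeconds: 5
```

If the AI's output is missing any of these blocks, that is a Heuristics violation to catch in your “what we adjusted” pass.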
8. Azure Function HTTP Trigger
GOALS: Azure Function v4 (Node/TypeScript or .NET) for [PURPOSE]. HTTP trigger. Returns [RESPONSE].
ORCHESTRATION: Validate input → [BUSINESS LOGIC] → return result.
TOOLS: Azure Functions v4, [SDK FOR DEPENDENCIES], Application Insights.
CONTEXT: Part of [SYSTEM]. Consumes [INPUT]. Connects to [SERVICES].
HEURISTICS: Anonymous auth level. CORS from [ALLOWED_ORIGIN]. Structured logging. No secrets in code.
ARGS: PORT: 7071 local. Env vars: [LIST]. CORS: [ORIGIN].
9. Integration Test Suite
GOALS: Integration test suite for [SERVICE] using Testcontainers. Real PostgreSQL, no mocks for DB.
ORCHESTRATION: Spin up PostgreSQL → run migrations → seed data → execute tests → teardown.
TOOLS: xUnit, Testcontainers.PostgreSql, Microsoft.AspNetCore.Mvc.Testing, Bogus (fake data).
CONTEXT: Tests run in CI (Azure DevOps, Ubuntu agent). Must complete in under 2 minutes.
HEURISTICS: One container per test collection (not per test). Reset DB between test classes. Test happy + sad paths.
ARGS: PostgreSQL image: postgres:16. Test DB name: [SERVICE]_test. Parallel: false.
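A sketch of the “one container per test collection” Heuristic using xUnit collection fixtures — the fixture, collection, and test names are illustrative, while `PostgreSqlBuilder` and `GetConnectionString()` come from the real Testcontainers.PostgreSql package:

```csharp
using Testcontainers.PostgreSql;
using Xunit;

// One PostgreSQL container shared by every class in the collection,
// started once and torn down once -- not per test.
public sealed class PostgresFixture : IAsyncLifetime
{
    public PostgreSqlContainer Container { get; } = new PostgreSqlBuilder()
        .WithImage("postgres:16")   // Args: PostgreSQL image
        .Build();

    public Task InitializeAsync() => Container.StartAsync();
    public Task DisposeAsync() => Container.DisposeAsync().AsTask();
}

[CollectionDefinition("postgres")]
public class PostgresCollection : ICollectionFixture<PostgresFixture> { }

[Collection("postgres")]
public class UserApiTests
{
    private readonly PostgresFixture _fixture;
    public UserApiTests(PostgresFixture fixture) => _fixture = fixture;

    [Fact]
    public void ConnectionStringIsAvailable()
    {
        // Real database, no mocks: tests run against a live connection string.
        Assert.False(string.IsNullOrEmpty(_fixture.Container.GetConnectionString()));
    }
}
```

Resetting the database between test classes (the other Heuristic) would hang off this same fixture, for example by re-running migrations in a class-level setup.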
10. OpenAPI Specification
GOALS: OpenAPI 3.0 spec for [SERVICE]. All endpoints, request/response schemas, auth, error responses.
ORCHESTRATION: Info → servers → security schemes → paths → components.
TOOLS: OpenAPI 3.0 YAML. Validated against [openapi-generator or similar].
CONTEXT: [SERVICE] endpoints: [LIST]. JWT Bearer auth on all except [PUBLIC ENDPOINTS].
HEURISTICS: All 4xx/5xx responses include RFC 7807 schema. Response schemas never include passwords or secrets.
ARGS: API version: [VERSION]. Server: [URL]. Security scheme: Bearer JWT.
11–20 (Quick Reference)
11. Dockerfile (multi-stage, .NET): FROM sdk → restore → build → FROM runtime → copy → runAsUser 1000
12. Helm Chart (K8s): deployment + service + ingress + values.yaml — parameterized for dev/prod
13. Terraform module (Azure Storage + Function): idempotent, parameterized, managed identity via azurerm provider
14. GitHub Actions workflow: same structure as Azure DevOps article — adapt trigger syntax
15. Rate limiting middleware (.NET): per-IP + per-user limits, return 429 with Retry-After header
16. Distributed cache (Redis): cache-aside pattern, TTL per entity type, invalidation on update
17. Audit log table: append-only, records who changed what when, EF Core interceptor
18. Health check endpoint: DB ping + downstream services + disk space — detailed for /health, simple for /healthz
19. Multi-tenant EF Core: global query filter + TenantId on all entities + ITenantContext
20. CORS policy (.NET): allow specific origins from config, handle OPTIONS preflight, credentials support
For prompts 11–20, the pattern is the same: Goals (what + measurable outcome), Orchestration (order), Tools (specific), Context (your environment), Heuristics (dos/don’ts), Args (concrete values). The templates above give you the structure — fill in your specifics.
Template
The meta-template: how to write any new GOTCHA prompt from scratch.
Before writing the prompt, answer these six questions:
1. What does "done" look like? (→ Goals)
2. What gets built first, second, third? (→ Orchestration)
3. What technologies are we using? (→ Tools)
4. What does the AI need to know about our environment? (→ Context)
5. What rules must the AI follow? What must it never do? (→ Heuristics)
6. What are the concrete values — numbers, names, connection strings? (→ Args)
If you can't answer all six, you're not ready to prompt yet. Go back to ATLAS.
Challenge
Before Article 8, do a quick audit of the last AI-assisted task you completed. Go through the ten anti-gotchas. How many applied? Which ones cost you time?
In Article 8, we’ll close the series with a self-assessment, a 90-day roadmap for turning this into a habit, and a look at where ATLAS + GOTCHA can take you next.
If this series helps you, consider buying me a coffee.