ATLAS + GOTCHA -- Part 8

Your 90-Day Roadmap: From Developer to ATLAS Architect

#ai #atlas #gotcha #career #roadmap #architecture

The Problem

Eight articles. Two frameworks. Six hands-on projects.

You’ve learned ATLAS and GOTCHA. You’ve seen them applied to a users API, a notification service, a CI/CD pipeline. You’ve read the anti-gotchas. You have 20 prompt templates.

But knowing and doing are different things. Reading about swimming doesn’t make you a swimmer. The frameworks only work if you use them — consistently, on real projects, under real pressure.

The problem with most techniques you learn from articles is that you try them once, they work reasonably well, and then you forget them. Two weeks later you’re back to writing one-line prompts because that’s faster in the moment.

ATLAS + GOTCHA needs to become a habit. Not a technique you remember to use — a way of working you don’t have to think about anymore. The 5 ATLAS questions become automatic. The 6 GOTCHA layers become the default format for every prompt.

This article gives you a concrete plan for making that happen in 90 days.

The Solution

A self-assessment, a 90-day roadmap, and a clear picture of what “ATLAS Architect” looks like in practice.

Self-Assessment: Where Are You Now?

Answer honestly. There are no wrong answers — this is a starting point, not a grade.

Level 0 — Unstructured

  • You write AI prompts as single paragraphs
  • You get inconsistent results and spend time fixing generated code
  • You rarely think about architecture before you start coding
  • You’ve read about ATLAS + GOTCHA but haven’t used them on a real project

Level 1 — Aware

  • You know the ATLAS phases and GOTCHA layers by name
  • You’ve tried the frameworks on one or two tasks
  • You sometimes skip phases because “it’s obvious” or “it’s too small”
  • Your GOTCHA prompts are incomplete — usually missing Context or Args

Level 2 — Practicing

  • You use ATLAS + GOTCHA on most significant tasks
  • You have the templates open in a tab while you work
  • You catch AI mistakes during the review step (the “what we adjusted” part)
  • You still struggle with Trace and Link — the integration phases

Level 3 — Fluent

  • ATLAS questions are automatic: you think through them before touching any tool
  • You write GOTCHA prompts in 5–10 minutes without needing the template
  • You recognize which anti-gotchas you’re most prone to and guard against them
  • You’ve adapted the frameworks to your team’s stack and conventions

Level 4 — Architect

  • You use ATLAS to design systems, not just single services
  • You coach teammates on GOTCHA prompts
  • You maintain a project-specific prompt library
  • You spot the gap between “what the AI generated” and “what the system needs” immediately
  • You know when NOT to use AI — when the task needs human judgment, not code generation

Most developers reading this series are at Level 1 or Level 2. The 90-day plan is designed to get you to Level 3. Level 4 comes with practice over months.

The 90-Day Roadmap

Month 1: Build the Habit

Goal: Use ATLAS + GOTCHA on every AI task, even small ones. Speed doesn’t matter yet — completeness does.

Week 1–2:

  • Print the ATLAS checklist and the GOTCHA template. Put them next to your screen.
  • For every AI task this week, fill in at least 3 ATLAS phases before prompting.
  • Don’t skip Trace and Link. These are the two phases you’ll be most tempted to cut.

Week 3–4:

  • Start keeping a prompt log. When you write a GOTCHA prompt, save it. Note what the AI got right and what it got wrong.
  • After each AI task, do the 6-layer review: Goals ✓, Orchestration ✓, Tools ✓, Context ✓, Heuristics ✓, Args ✓.
  • Identify which anti-gotcha you hit most often. Write it on a sticky note.
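The 6-layer review can even be partially automated. As a sketch, here is a small Python check that scans a saved prompt for the six GOTCHA layer headers; the header names and the example draft are illustrative, not part of the framework itself:

```python
# Sketch: a quick completeness check for saved GOTCHA prompts.
# The six layer names match this series' template; adjust them
# if your team renames the layers.

GOTCHA_LAYERS = [
    "GOALS",
    "ORCHESTRATION",
    "TOOLS",
    "CONTEXT",
    "HEURISTICS",
    "ARGS",
]

def missing_layers(prompt_text: str) -> list[str]:
    """Return the GOTCHA layers that never appear in the prompt."""
    upper = prompt_text.upper()
    return [layer for layer in GOTCHA_LAYERS if layer not in upper]

# Example: a draft prompt that forgot Context and Args -- the two
# layers most commonly missing at Level 1.
draft = """
=== GOALS ===
Build a users API with CRUD endpoints.
=== ORCHESTRATION ===
Models first, then repository, then endpoints.
=== TOOLS ===
ASP.NET Minimal API, EF Core, PostgreSQL.
=== HEURISTICS ===
DO: validate input. DON'T: expose internal IDs.
"""

print(missing_layers(draft))  # ['CONTEXT', 'ARGS']
```

Run it over your prompt log once a week and the gaps in your habit become visible data, not a vague feeling.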

End of Month 1 goal: You’ve used ATLAS + GOTCHA on at least 10 tasks. You know which phases you skip by default. You have a growing prompt log.

Month 2: Apply to a Real Project

Goal: Run ATLAS on a complete feature or service, end to end.

Week 5–6:

  • Pick a real project — a new feature, a service you’re refactoring, a new API. Not a toy project.
  • Fill in the full ATLAS checklist. All five phases. For a real project this takes 30–45 minutes.
  • Write the GOTCHA prompt. All six layers. Keep it in your repository.

Week 7–8:

  • Use the prompt. Build the feature with AI assistance.
  • Document what the AI generated, what you adjusted, and why.
  • Update the GOTCHA prompt to reflect what you changed (the adjusted prompt is now better than the original).

End of Month 2 goal: You have one complete ATLAS document and one refined GOTCHA prompt for a real project. You know the difference between a first-draft prompt and a production prompt.

Month 3: Scale and Share

Goal: Apply the frameworks to a multi-service project. Teach someone else.

Week 9–10:

  • Design a system with 2–3 services using ATLAS. One ATLAS document for the system, one GOTCHA prompt per service.
  • Build the integration map (Link phase) carefully. This is where multi-service designs break.
  • Pay attention to how GOTCHA Context changes between services — each one has different dependencies.

Week 11–12:

  • Teach the frameworks to one person on your team. You don’t need to be an expert. Show them ATLAS, show them GOTCHA, do one task together.
  • Teaching forces you to articulate what you know. The questions you can’t answer clearly are the gaps you still need to fill.
  • Review the anti-gotchas list with them. Ask which ones they recognize from their own experience.

End of Month 3 goal: You’re at Level 3. The frameworks are a habit, not a technique. You’ve applied them to a multi-service system. You’ve shared them with at least one person.

What Comes Next

ATLAS + GOTCHA are frameworks for thinking. They apply beyond AI prompts.

System design reviews. Use ATLAS as a review checklist. When a colleague presents an architecture, run it through the five phases mentally: Are the boundaries defined? Is the data flow documented? Are the integrations mapped? Is there a build plan? Is there a validation plan?

Technical interviews. Architecture questions in interviews follow the ATLAS structure exactly. “Design a notification system” — that’s ATLAS. Define boundaries, trace the flow, map the integrations, decide the build order, validate under load.

Incident reviews. Most production incidents trace back to a gap in one of the five ATLAS phases. A circular dependency nobody traced. An integration that had no failure mode defined. A load scenario that was never stress-tested. ATLAS is also a post-mortem framework.

Writing technical specs. A good technical spec answers the same five questions ATLAS asks. Use it as an outline: problem (Architect), data flow (Trace), integrations (Link), implementation plan (Assemble), acceptance criteria (Stress-test).
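As a sketch, a spec outline built on that mapping might look like this (the section names are illustrative, not a standard):

```text
1. Problem & Scope       (Architect)    -- what, for whom, what's out of scope
2. Data Flow             (Trace)        -- one request, start to finish
3. Integrations          (Link)         -- contracts, protocols, failure modes
4. Implementation Plan   (Assemble)     -- build order, dependencies
5. Acceptance Criteria   (Stress-test)  -- load, failure, data integrity
```

If a spec can't fill one of these sections, that's usually the phase where the design is still fuzzy.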

The Next Series

This series covered the foundations: why AI prompts fail, how ATLAS structures your thinking, how GOTCHA structures the AI’s instructions, and how to apply both to real projects.

The next series is AI-Driven Cloud DevSecOps. You’ll learn how to use Claude CLI as a conversational interface for your entire infrastructure — Terraform, Kubernetes, CI/CD pipelines, security scanning — all through dialogue instead of tab-switching. Same approach: structured thinking before tools, real code, honest about what works and what doesn’t. And yes, ATLAS + GOTCHA will be there too — as the governance layer for what the AI can and cannot do.

If you want to be notified when that series starts, subscribe below.

Template

The complete ATLAS + GOTCHA quick reference. Print this. Keep it with you.

═══════════════════════════════
ATLAS — THINK BEFORE YOU PROMPT
═══════════════════════════════

[A] ARCHITECT
  What am I building? What are the boundaries?
  Who uses it? What are the constraints?
  What is explicitly OUT OF SCOPE?

[T] TRACE
  Follow one request from start to finish.
  Every step is a decision point.

[L] LINK
  How do components talk to each other?
  What's the protocol? The contract?
  What happens when a dependency fails?

[A] ASSEMBLE
  What gets built first? In what order?
  What depends on what?

[S] STRESS-TEST
  How do I know it works?
  Load scenario. Failure scenario. Data integrity scenario.

═══════════════════════════════════
GOTCHA — PROMPT FOR THE AI
═══════════════════════════════════

=== GOALS (from Architect) ===
What must the AI achieve? Measurable outcomes.

=== ORCHESTRATION (from Trace + Assemble) ===
What gets built first? What depends on what?

=== TOOLS (from Link — technology names) ===
Frameworks, libraries, APIs. Specific versions.

=== CONTEXT (from Link + Trace — environment) ===
Your system. Your conventions. Your constraints.

=== HEURISTICS (from Assemble — rules) ===
DO:
DON'T:

=== ARGS (from Stress-test — concrete values) ===
Numbers, names, connection strings, limits.

═══════════════════════════════
THE MAPPING
═══════════════════════════════

Architect → Goals
Trace     → Orchestration
Link      → Tools + Context
Assemble  → Heuristics
Stress    → Args
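To make the mapping concrete, here is a hypothetical, compressed GOTCHA prompt for a small task — adding rate limiting to a login endpoint. Every value in it (endpoint, limits, stack details) is invented for illustration:

```text
=== GOALS ===
Add rate limiting to POST /api/login. Measurable: max 5 attempts
per IP per minute; respond 429 with a Retry-After header on breach.

=== ORCHESTRATION ===
1. Rate-limit policy first. 2. Wire into the pipeline. 3. Tests last.

=== TOOLS ===
ASP.NET Core 8 built-in rate limiting (fixed-window policy).

=== CONTEXT ===
Minimal API project; middleware registered in Program.cs;
no Redis -- an in-memory store is acceptable for a single instance.

=== HEURISTICS ===
DO: return Retry-After in seconds.
DON'T: rate-limit health-check endpoints.

=== ARGS ===
Window: 60s. Limit: 5 requests. Status code: 429.
```

Even at this size, all six layers are present — which is exactly what separates a Level 1 prompt from a Level 3 one.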

Proof It Works: ScraperAgent

Everything in this series came from real work. And the best proof is a real project built with ATLAS + GOTCHA from day one.

ScraperAgent is an AI-powered market intelligence platform I built using the exact workflow described in these articles. It scrapes expert Twitter/X accounts across financial markets and crypto, runs sentiment analysis through Azure OpenAI, and delivers structured daily reports to subscribers by email.

The tech stack is exactly what you’ve seen in this series: ASP.NET Minimal API, Channel<T> + BackgroundService for async jobs, Entity Framework Core with PostgreSQL, Next.js frontend, QuantumID for authentication, and Kubernetes on Scaleway Kapsule for deployment. The goals/ folder in the repo contains the actual ATLAS documents and GOTCHA prompts used to build each feature — not cleaned-up versions for the article, the real ones.

What makes it a good example is that it’s not a toy project. It handles subscription management with Mollie payments, has a cron scheduler for automated analysis runs, and stores reports as JSONB in PostgreSQL. Every component was designed with ATLAS and generated with GOTCHA. The adjustments made after AI generation — the “what we adjusted” step from Articles 5 and 6 — are visible in the commit history.

If you want to see what ATLAS + GOTCHA looks like applied to a real product, not just a tutorial API, browse the repo. Fork it, read the prompts, and try extending it with your own GOTCHA prompt.

Final Words

When I started using AI for development, my prompts were as vague as anyone’s. “Create an API for user management.” “Add authentication.” “Write a pipeline.”

The results were frustrating. Not because the AI was bad — because I hadn’t done my homework. I was asking a probabilistic system for deterministic results, without giving it enough information to narrow the probability space.

ATLAS is the homework. GOTCHA is the format. Together, they don’t eliminate the probabilistic nature of AI models — nothing does. But they dramatically reduce the space of possible answers by being precise about what you need.

I’m a cloud architect working with Azure, Kubernetes, and distributed systems. These frameworks came from real experience — from the frustration of getting wrong answers and the satisfaction of getting right ones. They work on the kind of projects I work on every day: multi-service platforms, enterprise deployments, regulated environments.

They’ll work on yours too. Start with the checklist. Fill in all five phases, even when it feels obvious. Write the prompt in six layers, even when it feels like extra work. Review the output against the prompt, even when the code looks fine.

Do it ten times. Then it starts to feel natural. Do it thirty times. Then it’s a habit. Do it a hundred times. Then you’re an ATLAS Architect.

See you in the next series.

Victor

If this series helps you, consider buying me a coffee.
