ATLAS + GOTCHA -- Part 4

The Perfect Mapping: ATLAS Meets GOTCHA

#ai #atlas #gotcha #mapping #prompts

The Problem

You now know two frameworks. ATLAS makes you think. GOTCHA organizes that thinking for the AI. But so far, we’ve treated them separately. ATLAS in one article, GOTCHA in another.

In practice, developers struggle with the gap between the two. They fill in ATLAS, then stare at an empty GOTCHA template and wonder: “Where does this go? Is my data flow a Goal or a Context? Is my build order Orchestration or Heuristics?”

This confusion is normal. It happens because ATLAS and GOTCHA use different words for related concepts. ATLAS speaks in architecture language: phases, flows, integrations. GOTCHA speaks in AI language: goals, context, heuristics. The connection between them isn’t obvious until you see it.

That’s what this article is about. The mapping between ATLAS and GOTCHA. Once you see it, you’ll never struggle to fill in a GOTCHA prompt again — because your ATLAS checklist already has every answer.

The Solution

Here’s the complete mapping:

| ATLAS Phase | GOTCHA Layer | You decide… | The AI receives… |
| --- | --- | --- | --- |
| Architect | Goals | What to build and the boundaries | The mission and success criteria |
| Trace | Orchestration | How data flows through the system | The sequence and dependencies |
| Link | Tools + Context | How components connect | What to use and the environment |
| Assemble | Heuristics | How to build it right | The rules and quality gates |
| Stress-test | Args | How to validate it | The concrete inputs and targets |

Five ATLAS phases. Six GOTCHA layers. The mapping is not one-to-one in a strict sense — Link maps to two GOTCHA layers (Tools and Context). But every ATLAS decision has a clear home in GOTCHA.

Let’s go through each mapping.
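Before that, the whole mapping fits in a small lookup. Here is a sketch in Python; the phase and layer names come straight from the table above:

```python
# ATLAS phase -> GOTCHA layer(s) it feeds, per the mapping table.
ATLAS_TO_GOTCHA = {
    "Architect": ["Goals"],
    "Trace": ["Orchestration"],
    "Link": ["Tools", "Context"],  # the one phase that feeds two layers
    "Assemble": ["Heuristics"],
    "Stress-test": ["Args"],
}

def gotcha_layers_for(phase: str) -> list[str]:
    """Return the GOTCHA layer(s) a given ATLAS phase maps to."""
    return ATLAS_TO_GOTCHA[phase]

print(gotcha_layers_for("Link"))  # ['Tools', 'Context']
```

Five keys, six values in total: that asymmetry is the whole "not strictly 1:1" caveat.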

Architect → Goals

Your Architect phase defines what you’re building and what’s out of scope. That becomes the AI’s mission.

| You wrote in Architect | It becomes in Goals |
| --- | --- |
| "Notification service for e-commerce platform" | "Build a .NET 10 worker service that sends notifications" |
| "Must process events within 30 seconds" | "Processes each event within 30 seconds of receipt" |
| "At-least-once delivery" | "Guarantees at-least-once delivery" |
| "Out of scope: marketing emails" | (You leave it out of Goals — the AI only builds what's listed) |
The “out of scope” from Architect is powerful. By not including something in Goals, you prevent the AI from building it. You don’t need to say “don’t build marketing emails.” You just don’t mention them.

Trace → Orchestration

Your Trace phase follows data from start to finish. That sequence becomes the AI’s build order.

| You wrote in Trace | It becomes in Orchestration |
| --- | --- |
| "Event arrives at Service Bus" | "Step 1: Service Bus consumer" |
| "Service queries user preferences" | "Step 3: User preference service" |
| "Load template, render, send" | "Steps 4-6: Template engine, dispatcher, providers" |
| "Retry 3 times if fail" | "Step 7: Retry logic" |
| "Acknowledge after processing" | (Implicit in the consumer step) |

Trace gives you the flow. Orchestration gives the AI the build sequence. They’re the same information in different formats. The flow tells you what happens at runtime. The orchestration tells the AI what to build first.
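That "same information, different format" point can be made literal. A small Python sketch, with stage names paraphrased from the Trace table (illustrative only):

```python
# The runtime flow from Trace, in order.
flow = ["Service Bus consumer", "user preference service",
        "template engine", "dispatcher", "providers", "retry logic"]

# Orchestration is the same list, read as a numbered build sequence.
build_order = [f"Step {i}: {stage}" for i, stage in enumerate(flow, start=1)]
```

Nothing is added or reordered; the flow simply gets step numbers and becomes a build plan.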

Link → Tools + Context

Link is the only ATLAS phase that maps to two GOTCHA layers. This makes sense because integrations have two parts: WHAT you use (Tools) and HOW your environment works (Context).

From the same Link table:

| You wrote in Link | Tools gets | Context gets |
| --- | --- | --- |
| "AMQP subscription to Service Bus" | Azure.Messaging.ServiceBus | "Inter-service communication via Service Bus topics" |
| "TCP/EF Core to PostgreSQL" | Entity Framework Core + Npgsql | "Database per service pattern" |
| "HTTPS REST to ACS Email" | Azure.Communication.Email | "Managed identities for Azure access" |
| "Circuit breaker, 5s timeout" | (not a tool) | "Failure mode: circuit breaker after 5 failures" |

See the split? The technology names go to Tools. The architectural patterns and environment details go to Context. The same Link row produces information for both layers.

This is why Link is the most important ATLAS phase. It feeds the two GOTCHA layers that prevent the most mistakes. Wrong tools = wrong code. Wrong context = code that works in isolation but fails in your environment.
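One way to see the split in code: each Link row carries both a technology name and an environment fact, and they route to different layers. A Python sketch using the rows above (the data structure itself is illustrative):

```python
# Each Link row splits: technology names go to Tools,
# architecture/environment details go to Context.
link_rows = [
    {"tool": "Azure.Messaging.ServiceBus",
     "context": "Inter-service communication via Service Bus topics"},
    {"tool": "Entity Framework Core + Npgsql",
     "context": "Database per service pattern"},
    {"tool": "Azure.Communication.Email",
     "context": "Managed identities for Azure access"},
    {"tool": None,  # a circuit-breaker policy is not a tool
     "context": "Failure mode: circuit breaker after 5 failures"},
]

tools = [row["tool"] for row in link_rows if row["tool"]]
context = [row["context"] for row in link_rows]
```

Note that every row contributes to Context, but only rows naming a technology contribute to Tools.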

Assemble → Heuristics

Your Assemble phase defines how things get built. The rules, the patterns, the quality standards. That becomes the AI’s behavioral guardrails.

| You wrote in Assemble | It becomes in Heuristics |
| --- | --- |
| "Repository → service → controller" | "Use the repository pattern for all database access" |
| "Unit tests for each layer" | "Write unit tests for services, integration tests for repositories" |
| "Phase 4: Reliability — retry, dead letter, idempotency" | "Check notification_log before sending (idempotency)" |

Assemble is about build order AND build quality. The order goes to Orchestration (from Trace). The quality rules go to Heuristics. When you fill in Assemble and find yourself writing rules like “always use async/await” or “no business logic in controllers,” those are heuristics.
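As a concrete illustration, here is what the idempotency heuristic ("check notification_log before sending") might look like once the AI writes it. A minimal Python sketch; `processed` stands in for the real notification_log table, and `send_notification` is a hypothetical name:

```python
# Idempotency heuristic: check the log before sending,
# so replayed events become no-ops instead of duplicates.
processed: set[str] = set()  # stand-in for the notification_log table

def send_notification(event_id: str, send) -> bool:
    """Send at most once per event_id. Returns True if actually sent."""
    if event_id in processed:  # already handled -> skip (idempotent)
        return False
    send(event_id)             # dispatch to the provider
    processed.add(event_id)    # record in the log
    return True
```

Replaying the same event twice results in exactly one send — which is also what the Stress-test phase later validates.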

Stress-test → Args

Your Stress-test phase defines concrete validation scenarios with specific numbers. Those numbers become the AI’s parameters.

| You wrote in Stress-test | It becomes in Args |
| --- | --- |
| "500 concurrent users" | "K8s: 2 replicas, 250m/500m CPU" |
| "P95 latency < 200ms" | "Health: /healthz on port 8080" |
| "1000 events in 60 seconds" | "Service Bus: max concurrent calls 10" |
| "Replay events → no duplicates" | (Already a heuristic: idempotency) |

Stress-test gives you the numbers. Args gives those numbers to the AI. Connection strings, replica counts, memory limits, port numbers — all the concrete values that turn a generic service into YOUR service.
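In code terms, Args is just a bag of concrete values. A Python sketch using the numbers from this article's example (the key names are illustrative, not part of the framework):

```python
# Concrete values from Stress-test, handed to the AI as Args.
args = {
    "k8s": {"replicas": 2,
            "cpu_request": "250m", "cpu_limit": "500m",
            "memory_request": "256Mi", "memory_limit": "512Mi"},
    "health": {"path": "/healthz", "port": 8080},
    "service_bus": {"max_concurrent_calls": 10},
    "throughput_target": {"events": 1000, "window_seconds": 60},
}
```

Everything in it is a specific number or path — the kind of detail a generic prompt leaves the AI to invent.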

Execute

Let’s do the full exercise. We’ll take the notification service ATLAS from Article 2 and map it to a GOTCHA prompt, step by step.

Here’s the ATLAS checklist (condensed):

[A] ARCHITECT
  Notification service. Email, SMS, push. 30s processing. At-least-once.

[T] TRACE
  Event → consumer → preferences → template → dispatch → provider → log

[L] LINK
  Service Bus (AMQP), PostgreSQL (EF Core), ACS Email, ACS SMS,
  Notification Hubs. Managed identities. Circuit breakers.

[A] ASSEMBLE
  Foundation → Core → Providers → Reliability → Deployment.
  Repository pattern. Async everywhere. DI. Tests per layer.

[S] STRESS-TEST
  1000 events/60s. Provider failure handling. Idempotency.
  2 replicas, 250m-500m CPU, 256Mi-512Mi memory.

And here’s where each piece lands in GOTCHA:

graph LR
    A[Architect] -->|defines mission| G[Goals]
    T[Trace] -->|defines sequence| O[Orchestration]
    L[Link] -->|defines tools| TL[Tools]
    L -->|defines environment| C[Context]
    AS[Assemble] -->|defines rules| H[Heuristics]
    S[Stress-test] -->|defines values| AR[Args]

    style A fill:#3b82f6,color:#fff
    style T fill:#3b82f6,color:#fff
    style L fill:#3b82f6,color:#fff
    style AS fill:#3b82f6,color:#fff
    style S fill:#3b82f6,color:#fff
    style G fill:#10b981,color:#fff
    style O fill:#10b981,color:#fff
    style TL fill:#10b981,color:#fff
    style C fill:#10b981,color:#fff
    style H fill:#10b981,color:#fff
    style AR fill:#10b981,color:#fff

Blue is ATLAS (human thinking). Green is GOTCHA (AI instructions). Each arrow is a mapping. Link has two arrows because it feeds both Tools and Context.

The result? A complete GOTCHA prompt where every line traces back to an ATLAS decision. Nothing is invented. Nothing is guessed. The AI receives structured instructions that come from structured thinking.

Here’s a side-by-side comparison of the same decision flowing through both frameworks:

| Decision | ATLAS (you think) | GOTCHA (AI reads) |
| --- | --- | --- |
| "Use Azure Communication Services for email" | Link: Notification Service → ACS Email, HTTPS REST | Tools: Azure.Communication.Email. Context: managed identities for Azure access |
| "Retry 3 times with backoff" | Assemble: Phase 4 Reliability | Heuristics: DO retry 3x with exponential backoff |
| "2 replicas on Kubernetes" | Stress-test: Scenario 1 throughput | Args: K8s replicas: 2, CPU 250m/500m |
| "Process events within 30 seconds" | Architect: Constraints | Goals: processes each event within 30 seconds |
| "Event → consumer → preferences → send" | Trace: Steps 1-7 | Orchestration: build consumer first, then preferences, then providers |

Five decisions. Five clear paths from human thinking to AI instruction. No ambiguity about where things go.

Template

Here’s the master prompt template. It combines ATLAS and GOTCHA into a single workflow:

=== MASTER PROMPT: ATLAS → GOTCHA ===

Step 1: Fill in ATLAS (your thinking)
─────────────────────────────────────
[A] ARCHITECT → What + boundaries + constraints
[T] TRACE → Data flow from start to finish
[L] LINK → Integrations + protocols + failure modes
[A] ASSEMBLE → Build order + patterns + quality rules
[S] STRESS-TEST → Validation scenarios + concrete numbers

Step 2: Map to GOTCHA (AI instructions)
─────────────────────────────────────
=== GOALS (from Architect) ===
(Mission + measurable success criteria)

=== ORCHESTRATION (from Trace) ===
(Build sequence with dependencies)

=== TOOLS (from Link — technology names) ===
(Frameworks, libraries, APIs with versions)

=== CONTEXT (from Link — environment details) ===
(Architecture patterns, conventions, constraints)

=== HEURISTICS (from Assemble — quality rules) ===
DO:
-
DON'T:
-

=== ARGS (from Stress-test — concrete values) ===
(Connection strings, ports, limits, replicas)

Print this. Use it every time you work with an AI on a development task. The first time takes 10-15 minutes. After a few projects, you’ll fill it in within 5 minutes. And the quality of the AI’s output will be noticeably better every time — because you stopped guessing and started thinking.
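The two-step template above also lends itself to automation: fill in ATLAS, render GOTCHA. A minimal Python sketch (the field names are illustrative, not part of either framework):

```python
def build_gotcha_prompt(atlas: dict[str, str]) -> str:
    """Render a filled-in ATLAS checklist as a GOTCHA prompt.

    Expects keys: architect, trace, link_tools, link_context,
    assemble, stress_test -- all free-form text.
    """
    sections = [
        ("GOALS (from Architect)", atlas["architect"]),
        ("ORCHESTRATION (from Trace)", atlas["trace"]),
        ("TOOLS (from Link)", atlas["link_tools"]),
        ("CONTEXT (from Link)", atlas["link_context"]),
        ("HEURISTICS (from Assemble)", atlas["assemble"]),
        ("ARGS (from Stress-test)", atlas["stress_test"]),
    ]
    return "\n\n".join(f"=== {title} ===\n{body}" for title, body in sections)
```

The function enforces the mapping mechanically: six sections, each sourced from exactly one ATLAS phase, with Link feeding two.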

Challenge

Before Article 5, do this: take a real project — not a tutorial, a real one you’re building or maintaining — and run it through the full ATLAS → GOTCHA workflow. Fill in all five ATLAS phases. Map them to all six GOTCHA layers. Write the complete prompt.

Then send that prompt to your AI tool. Compare the result to what you’d get with a one-line prompt. The difference will be obvious.

In Article 5, we stop talking about frameworks and start building. We’ll take the ATLAS + GOTCHA workflow and build a complete users API — PostgreSQL, JWT authentication, .NET, deployed to Kubernetes. Every phase, every layer, every line of code. From checklist to production.

If this series helps you, consider buying me a coffee.
