ATLAS + GOTCHA -- Part 3

GOTCHA: The 6 Layers That Make AI Think

#ai #gotcha #prompts #framework

The Problem

In Article 2, you learned ATLAS — the checklist that structures your thinking before you talk to the AI. You filled in the five phases. You know what you’re building, how data flows, how components connect, in what order to build, and how to validate.

Now what?

You have a notebook full of decisions. But the AI doesn’t read notebooks. It reads prompts. And here’s the gap: most developers take their ATLAS thinking and compress it back into a single paragraph. “Build a notification service that listens to Service Bus events and sends emails and SMS using Azure Communication Services, with retry logic and idempotency.”

That’s better than “build a notification service.” But it’s still one block of text. The AI has to figure out what’s a goal, what’s a constraint, what’s a tool, and what’s a rule. It has to guess priorities. And guessing is where things go wrong.

GOTCHA solves this. It takes your ATLAS decisions and organizes them into six layers that match how AI models process information. Each layer answers a different question. Each layer gives the AI a specific type of instruction.

Think of it this way: ATLAS is the homework you do. GOTCHA is the format you use to hand it in.

The Solution

GOTCHA has six layers. The order follows the acronym, and each layer builds on the previous one.

G — Goals

What the AI must achieve.

Goals is the mission statement. It tells the AI what success looks like. Not how to get there — just where to arrive.

A good Goals section is short, specific, and measurable. The AI should be able to read it and answer: “Do I know what done looks like?” If the answer is no, your Goals section needs work.

Bad Goals:

Build a notification service.

Too vague. What kind of notifications? What triggers them? What does “done” mean?

Good Goals:

Build a .NET 10 worker service that:
- Listens for OrderConfirmed and ShipmentDispatched events from Azure Service Bus
- Sends email and SMS notifications to users via Azure Communication Services
- Sends push notifications via Azure Notification Hubs
- Processes each event within 30 seconds of receipt
- Guarantees at-least-once delivery (no lost notifications)

Five bullet points. Each one is testable. The AI knows exactly what “done” means.

The key rule for Goals: don’t put implementation details here. “Use the repository pattern” is not a goal — it’s a heuristic. “Send notifications within 30 seconds” is a goal. Keep them separate.

O — Orchestration

How components coordinate and in what order.

Orchestration tells the AI the sequence. What happens first, what happens next, what depends on what. Without this, the AI picks its own order — and it might build the roof before the foundation.

Orchestration is where your ATLAS Assemble phase becomes instructions:

Build in this order:

1. Event schemas (shared contract library)
2. Service Bus consumer (event listener + deserialization)
3. User preference service (PostgreSQL lookup)
4. Template engine (render notification content)
5. Channel dispatcher (route to email, SMS, or push provider)
6. Provider implementations:
   a. Azure Communication Services Email
   b. Azure Communication Services SMS
   c. Azure Notification Hubs (can return mock response initially)
7. Retry logic with exponential backoff
8. Idempotency guard (check notification_log before sending)
9. Dead letter handling
10. Health check endpoint and Prometheus metrics

Ten steps. Each one is clear. The AI won’t try to build the retry logic before the providers exist. It won’t wire up metrics before the core service works.

The key rule for Orchestration: be explicit about dependencies. If step 5 depends on step 3, say so. The AI respects sequence when you give it sequence.
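Step 7 is a good example of why sequence matters: “retry logic with exponential backoff” is something the AI can implement a dozen ways, and it can only wrap the providers after they exist. A minimal sketch of the pattern — in Python for readability, since the service itself is C#, and names like `send_with_retry` and `TransientError` are illustrative, not part of the article’s stack:

```python
import time


class TransientError(Exception):
    """Stands in for a provider timeout or throttling response."""


def send_with_retry(send, payload, attempts=3, base_delay=1.0):
    """Retry a provider call with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(attempts):
        try:
            return send(payload)
        except TransientError:
            if attempt == attempts - 1:
                raise  # attempts exhausted; caller dead-letters the message
            time.sleep(base_delay * 2 ** attempt)
```

Note the final attempt re-raises instead of sleeping — that hand-off to dead letter handling is exactly the kind of dependency between steps 7 and 9 the Orchestration layer should spell out.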

T — Tools

What the AI can use.

Tools is the shopping list. It tells the AI which frameworks, libraries, APIs, and platforms to use. Without it, the AI picks whatever it was trained on most — which might be the wrong version, the wrong library, or something you don’t want in your project.

- .NET 10 (worker service template)
- Azure.Messaging.ServiceBus (NuGet)
- Azure.Communication.Email (NuGet)
- Azure.Communication.Sms (NuGet)
- Microsoft.Azure.NotificationHubs (NuGet)
- Entity Framework Core 10 with Npgsql provider
- PostgreSQL 16
- xUnit + Moq for testing
- Docker for local development
- Kubernetes for deployment

Be specific about versions when it matters. “.NET 10” not just “.NET”. “PostgreSQL 16” not just “PostgreSQL”. The AI knows all of these tools, but by listing them explicitly you make sure it doesn’t substitute something else.

The key rule for Tools: if you don’t list it, the AI might pick something you don’t want. If you list something that contradicts another tool, the AI will get confused. Be complete and consistent.

C — Context

Background knowledge, constraints, and existing patterns.

Context is everything the AI needs to know about your environment but can’t guess. Your existing code structure. Your naming conventions. Your deployment target. Your team’s rules.

This is the most underrated layer. Most developers skip it because they think the AI “just knows.” But the AI doesn’t know your project. It doesn’t know you deploy to Azure Kubernetes Service. It doesn’t know your team uses a specific folder structure. It doesn’t know your company requires all services to log to a central platform.

- This service is part of an e-commerce microservices platform
- Other services: Order Service, Inventory Service, Payment Service, Shipment Service
- All services communicate via Azure Service Bus topics
- All services deploy to AKS (Azure Kubernetes Service) in West Europe
- Database per service pattern: this service owns user_preferences and notification_log tables
- Existing naming convention: PascalCase for C# classes, kebab-case for K8s resources
- All services use structured logging with Serilog → Azure Monitor
- Authentication between services uses managed identities, not connection strings
- Current environment: Development (local Docker Compose) and Production (AKS)

Nine lines. Each one prevents a mistake. Without “managed identities, not connection strings,” the AI would generate code with hardcoded connection strings. Without “Serilog → Azure Monitor,” it would use Console.WriteLine.

The key rule for Context: include anything that’s specific to YOUR project. If another developer joining your team would need to know it, the AI needs to know it too.

H — Heuristics

Behavioral guardrails, rules-of-thumb, and quality gates.

Heuristics are the rules the AI must follow while building. Not what to build (Goals), not in what order (Orchestration), but HOW to build it. Code style, patterns, error handling strategies, things to always do and things to never do.

DO:
- Use the repository pattern for all database access
- Use async/await for all I/O operations
- Return structured error responses (not exceptions to the caller)
- Log every notification attempt with correlation ID
- Make all provider calls idempotent (check notification_log first)
- Use dependency injection for all services
- Write unit tests for service layer, integration tests for repositories

DON'T:
- Don't put business logic in the event consumer (delegate to services)
- Don't catch and swallow exceptions silently
- Don't use static methods for anything that touches I/O
- Don't store secrets in code or config files (use Azure Key Vault)
- Don't create synchronous wrappers for async methods

The DO/DON’T format works well for Heuristics. It’s clear, scannable, and leaves no room for interpretation.

Heuristics are where your team’s coding standards become AI instructions. Every “we always do X” and “we never do Y” goes here. The more specific, the better. “Write clean code” is useless. “Use dependency injection for all services” is actionable.

The key rule for Heuristics: be specific enough that the AI can check its own work. Each heuristic should be a yes/no question: “Did I use the repository pattern?” Yes or no. “Is the code clean?” That’s not checkable.
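To see what “checkable” means, take the idempotency heuristic. “Check notification_log before sending” reduces to a guard the AI either wrote or didn’t — a Python sketch, where `sent_log` is a plain set standing in for the notification_log table and the names are illustrative:

```python
def dispatch_once(notification_id, sent_log, send):
    """Idempotency guard: send only if this notification is not already logged.

    At-least-once delivery means the same event can arrive twice; the guard
    turns the duplicate into a no-op instead of a duplicate email.
    """
    if notification_id in sent_log:
        return False  # duplicate delivery: already handled, skip
    send()
    sent_log.add(notification_id)
    return True
```

“Did every send path go through this guard?” is a yes/no question. That’s the bar every heuristic should clear.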

A — Args

Parameters, inputs, and configuration for this specific execution.

Args is the concrete data. Everything else in GOTCHA is structural — it could apply to many projects. Args is what makes it specific to THIS run, THIS environment, THIS deployment.

Service Bus:
  Connection: from environment variable SERVICEBUS_CONNECTION
  Topic: order-events
  Subscription: notification-service
  Max concurrent calls: 10

PostgreSQL:
  Connection: from environment variable DB_CONNECTION
  Schema: notifications
  Tables: user_preferences, notification_log

Azure Communication Services:
  Connection: from environment variable ACS_CONNECTION
  Email sender: notifications@contoso.com

Azure Notification Hubs:
  Connection: from environment variable NH_CONNECTION
  Hub name: ecommerce-notifications

Kubernetes:
  Namespace: ecommerce
  Replicas: 2 (minimum)
  CPU request: 250m, limit: 500m
  Memory request: 256Mi, limit: 512Mi
  Health check: /healthz on port 8080

These are the values the AI plugs into the code it generates. Without them, it invents connection strings, guesses port numbers, and uses placeholder values you’ll have to replace later.

The key rule for Args: every value the AI needs to generate working code should be here. If you want the AI to use environment variables instead of hardcoded values, say so explicitly.
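One way to honor “environment variables, not hardcoded values” is a fail-fast lookup at startup, so a missing setting crashes the service immediately instead of surfacing as a mystery at first send. A sketch in Python — `require_env` is a hypothetical helper for illustration, not part of the article’s stack:

```python
import os


def require_env(name):
    """Read a required setting from the environment; fail fast if it's absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value


# At startup, one line per Args entry:
# servicebus_connection = require_env("SERVICEBUS_CONNECTION")
# db_connection = require_env("DB_CONNECTION")
```

If your Args section names the variable (`SERVICEBUS_CONNECTION`) and says “from environment variable,” the AI has no reason to invent a placeholder connection string.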

Execute

Let’s see all six layers working together. Here’s the complete GOTCHA prompt for the notification service we designed with ATLAS in Article 2.

=== GOALS ===
Build a .NET 10 worker service that:
- Listens for OrderConfirmed and ShipmentDispatched events from Azure Service Bus
- Sends email and SMS notifications via Azure Communication Services
- Sends push notifications via Azure Notification Hubs
- Processes each event within 30 seconds of receipt
- Guarantees at-least-once delivery (no lost notifications)
- Respects user notification preferences (opt-out per channel)

=== ORCHESTRATION ===
Build in this order:
1. Event schemas (shared contract library, NuGet package)
2. Service Bus consumer (listener + JSON deserialization)
3. User preference service (EF Core repository → PostgreSQL)
4. Template engine (load + render per event type and channel)
5. Channel dispatcher (routes to correct provider based on preference)
6. Providers: ACS Email, ACS SMS, Notification Hubs
7. Retry logic (exponential backoff, 3 attempts per provider)
8. Idempotency guard (check notification_log before each send)
9. Dead letter handler (move poison messages, alert via log)
10. Health endpoint (/healthz) and Prometheus metrics

=== TOOLS ===
- .NET 10 worker service
- Azure.Messaging.ServiceBus
- Azure.Communication.Email + Azure.Communication.Sms
- Microsoft.Azure.NotificationHubs
- Entity Framework Core 10 + Npgsql
- PostgreSQL 16
- Serilog + Serilog.Sinks.AzureMonitor
- xUnit + Moq
- Docker + Kubernetes

=== CONTEXT ===
- Part of an e-commerce microservices platform on AKS (West Europe)
- Other services: Order, Inventory, Payment, Shipment
- All inter-service communication via Service Bus topics
- Database per service pattern (this service owns its tables)
- Managed identities for Azure resource access (no connection strings in code)
- Structured logging to Azure Monitor via Serilog
- Naming: PascalCase for C# classes, kebab-case for K8s resources

=== HEURISTICS ===
DO:
- Repository pattern for all database access
- Async/await for all I/O
- Dependency injection for all services
- Correlation ID on every log entry
- Check notification_log before sending (idempotency)
- Unit tests for services, integration tests for repositories

DON'T:
- No business logic in event consumer
- No silent exception swallowing
- No static I/O methods
- No secrets in code (Azure Key Vault only)
- No synchronous wrappers for async methods

=== ARGS ===
Service Bus: env SERVICEBUS_CONNECTION, topic order-events, sub notification-service
PostgreSQL: env DB_CONNECTION, schema notifications
ACS: env ACS_CONNECTION, sender notifications@contoso.com
Notification Hubs: env NH_CONNECTION, hub ecommerce-notifications
K8s: namespace ecommerce, 2 replicas, 250m/500m CPU, 256Mi/512Mi memory
Health: /healthz on port 8080

That’s one prompt. Every decision you made in ATLAS is now structured for the AI. Goals come from Architect. Orchestration comes from Assemble. Tools come from Link. Context comes from Trace and Link. Heuristics come from Assemble. Args come from Stress-test.

No guessing. No “I assumed you wanted…” No wrong framework. No missing retry logic.

Template

Here’s the GOTCHA template you can copy and fill in for any prompt:

=== GOTCHA PROMPT TEMPLATE ===

=== GOALS ===
(What must the AI achieve? Measurable outcomes only.)

=== ORCHESTRATION ===
(In what order? What depends on what?)

=== TOOLS ===
(Frameworks, libraries, APIs, platforms. Be specific about versions.)

=== CONTEXT ===
(Your project's environment, conventions, constraints.)

=== HEURISTICS ===
DO:
- (rules to follow)

DON'T:
- (rules to avoid)

=== ARGS ===
(Concrete values: connection strings, ports, namespaces, limits.)

Use it together with the ATLAS checklist from Article 2. ATLAS fills your brain. GOTCHA fills the prompt.

Challenge

Before Article 4, try this: take the ATLAS checklist you filled in from the Article 2 challenge and translate it into a GOTCHA prompt. Use the template above. Map each ATLAS phase to its GOTCHA layer.

Don’t worry if some layers feel thin. That’s normal. Goals and Heuristics are usually the longest. Args can be short if your project is simple. The important thing is to separate the layers — don’t mix goals with heuristics, don’t put tools in the context.

In Article 4, we’ll put ATLAS and GOTCHA side by side and show the exact mapping. You’ll see how every human decision has a place in the AI’s instructions — and we’ll build a master prompt template you can use on any project.

If this series helps you, consider buying me a coffee.
