ATLAS + GOTCHA -- Part 1

Why AI Prompts Fail and What to Do About It

#ai #atlas #gotcha #prompts

The Problem

As developers, we are trained to think in deterministic terms. You write code, you compile it, you run the tests. Same input, same output. Every time. The compiler doesn’t “feel creative” on a Tuesday morning and decide to generate different bytecode. Your test suite doesn’t pass today and fail tomorrow with the same code. Determinism is the foundation of everything we build.

Then we sit in front of an AI model and expect the same thing.

But AI language models are not deterministic systems. They are probabilistic. Every response is sampled from a probability distribution over an enormous space of possible outputs. Temperature settings, the token sampling strategy, the contents of the context window — all of these introduce variance. The same prompt, sent twice, can produce two different results. This is not a bug. This is how they work. And that mismatch — expecting deterministic results from a probabilistic system — is the root cause of most prompt frustration.
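To make that variance concrete, here is a minimal sketch of temperature-scaled sampling, the mechanism behind the temperature setting. It uses plain Python, no AI libraries; the vocabulary and the scores are made up purely for illustration:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from temperature-scaled softmax probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a random number and walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical next-token scores for a vague prompt like
# "create an API for user management": several frameworks are plausible.
vocab = ["Flask", "FastAPI", "Express", "ASP.NET"]
logits = [2.0, 1.8, 1.5, 1.4]

# The same prompt, "sent" five times: the winner can differ run to run.
picks = [vocab[sample_token(logits, temperature=1.0)] for _ in range(5)]
print(picks)
```

Lowering the temperature toward zero makes the top-scoring option dominate. A precise prompt works on the other side of the equation: it reshapes the scores themselves, so fewer wrong answers are plausible in the first place.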

So what do developers do? They write vague prompts and hope for the best. You open your AI tool. You type “create an API for user management”. You get… something. Maybe it works. Maybe it doesn’t. Maybe it uses a framework you don’t want. Maybe the database schema makes no sense for your project.

This happens every day. Developers write prompts like wishes — vague, hopeful, and disconnected from their real architecture. The AI does its best, but “its best” without context is just a guess. You can’t eliminate the probabilistic nature of the model, but you can drastically reduce the space of possible answers by being precise about what you need.

I’ve been there. I’m a cloud architect working with Azure, Scaleway, Sovereign Cloud infrastructure, and .NET in enterprise environments. When I started using AI for development, my prompts looked like everyone else’s: short, generic, and frustrating.

Then I found a pattern. Actually, two patterns that work together.

The Solution

ATLAS is a human checklist. It tells YOU what to think about before you even talk to the AI:

  • Architect — Define the problem and solution boundaries
  • Trace — Map data flows and dependencies
  • Link — Connect components and integrations
  • Assemble — Build and wire everything together
  • Stress-test — Validate under real conditions

GOTCHA is an AI instruction framework. It tells the AI HOW to process your request:

  • Goals — What the AI must achieve
  • Orchestration — How components coordinate
  • Tools — What the AI can use
  • Context — Background knowledge and constraints
  • Heuristics — Behavioral guardrails and quality gates
  • Args — Parameters and specific inputs

Together they cover both sides of the conversation: ATLAS makes sure you’ve done your homework, and GOTCHA makes sure the AI understands the assignment.

Execute

Here’s a real before/after. Same task: create a users API.

Before (typical prompt):

Create a users API with CRUD operations

That’s it. No boundaries, no data flows, no constraints. The AI will guess everything.

After — Step 1: YOU think with ATLAS

First, you go through the ATLAS checklist. This is your homework, before you even open the AI tool:

For each ATLAS phase, here is what you decide:

  • Architect — REST API for user management. JWT auth. Must handle 1000 concurrent users. PostgreSQL as database.
  • Trace — Request comes from API Gateway -> Controller -> Service -> Repository -> PostgreSQL. JWT validated in middleware.
  • Link — Part of a microservices architecture on Kubernetes. Other services talk via message queue. API sits behind a gateway.
  • Assemble — Build order: database schema first, then repository layer, service layer, controllers, middleware, and tests last.
  • Stress-test — Validate with 1000 concurrent connections. Check JWT expiration edge cases. Test that error responses follow RFC 7807 problem details.
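As a concrete target for that last check, an RFC 7807 problem-details error body looks like this (the field values here are illustrative, not from a real service):

```json
{
  "type": "https://example.com/errors/validation",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "detail": "The 'email' field must be a valid email address.",
  "instance": "/users"
}
```

Writing this expectation down during Stress-test is exactly the kind of decision that later becomes a one-line heuristic in your prompt.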

Now you know exactly what you need. You’ve made the decisions. The AI doesn’t have to guess.

After — Step 2: Map ATLAS to GOTCHA for the AI

Each ATLAS decision maps to a GOTCHA layer. This is how you translate your thinking into a prompt the AI can process:

=== GOALS (from Architect) ===
Build a REST API for user management with JWT authentication.
Must handle 1000 concurrent users. PostgreSQL backend.

=== ORCHESTRATION (from Assemble) ===
Build in this order: database schema, repository layer,
service layer, controllers, middleware, tests.

=== TOOLS (from Link) ===
.NET 10 Web API, Entity Framework Core, PostgreSQL,
FluentValidation, xUnit.

=== CONTEXT (from Trace + Link) ===
This is part of a microservices architecture on Kubernetes.
The API will be behind an API Gateway. Other services
communicate via message queue.

=== HEURISTICS (from Assemble) ===
- Follow repository pattern
- All endpoints return standard problem details on error
- Use async/await everywhere
- No business logic in controllers

=== ARGS (from Stress-test) ===
Connection string: from environment variable DB_CONNECTION
JWT secret: from Kubernetes secret
Port: 8080
Target: 1000 concurrent users

See what happened? ATLAS made you think. GOTCHA organized that thinking for the AI. Every GOTCHA section traces directly back to one or two ATLAS decisions.

The tools in that prompt — Entity Framework Core, PostgreSQL, FluentValidation — are things the AI already knows. But by naming them explicitly in the TOOLS section, you make sure it doesn’t pick something else.

The result: the AI gets everything it needs. No guessing. No wrong framework. No missing context.
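Because the six sections are fixed, the structure is easy to mechanize. Here is a hypothetical helper (the dataclass and field names are just one way to organize it; only the section names come from GOTCHA itself) that assembles the sections into a single prompt string:

```python
from dataclasses import dataclass, fields

@dataclass
class GotchaPrompt:
    """One field per GOTCHA layer, in framework order."""
    goals: str
    orchestration: str
    tools: str
    context: str
    heuristics: str
    args: str

    def render(self) -> str:
        # Emit each section with a "=== NAME ===" header, in declaration order.
        parts = []
        for f in fields(self):
            header = f"=== {f.name.upper()} ==="
            parts.append(f"{header}\n{getattr(self, f.name).strip()}")
        return "\n\n".join(parts)

prompt = GotchaPrompt(
    goals="Build a REST API for user management with JWT authentication.",
    orchestration="Database schema, repository, service, controllers, tests.",
    tools=".NET 10 Web API, Entity Framework Core, PostgreSQL.",
    context="Microservices on Kubernetes, behind an API Gateway.",
    heuristics="Repository pattern; async/await; no logic in controllers.",
    args="Port 8080; target 1000 concurrent users.",
)
print(prompt.render())
```

Keeping the prompt in code like this also means you can version it, diff it, and reuse it across projects, which is where the template in the next articles comes in.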

Template

In the next articles, we’ll go deep into each ATLAS phase and each GOTCHA layer. By the end of this series, you’ll have:

  • A complete ATLAS checklist you can use on any project
  • A GOTCHA prompt template that works with any AI tool
  • Real code examples from enterprise projects
  • A library of 20+ pre-built prompts for common tasks

Challenge

Before the next article, try this: take your last AI prompt and rewrite it using the GOTCHA structure above. Just the 6 sections. Don’t worry about getting it perfect — just separate your request into Goals, Orchestration, Tools, Context, Heuristics, and Args.

See the difference? That’s what we’ll build on in Article 2.
