ATLAS + GOTCHA -- Part 10

A Scaffold Generator for Claude — Built on Gartner's Seven Invariants

#claude-code #scaffolding #security #governance #gartner #atlas #gotcha #hooks

The Problem

You start a new project with Claude. You open a folder, type claude, and ask it to build something. Claude is happy to help. Too happy.

In the first 20 minutes you have:

  • A package.json created with npm init (you wanted pnpm)
  • A terraform apply run against your dev subscription (you didn’t authorize that)
  • A commit signed with “Co-Authored-By: Claude” (your company forbids this)
  • An API key hardcoded in appsettings.json because “it’s just for testing”
  • A storage account with allow_blob_public_access = true because the AI saw it in a Stack Overflow answer from 2019
  • No tests, because you didn’t ask for them

Each of these is a small thing. Together, they are the reason your security team blocks AI tools.

The fix is not “be careful”. The fix is to make it impossible. Not for Claude, but for the project, from the very first file.

The Solution

I built scaffold-generator, a public repo on GitHub that does one thing: it generates a new project structure with all the guardrails already in place.

Not a template. Not a starter kit. A deterministic interview. You open the repo with Claude, and Claude asks you questions — one at a time — about what you want to build. Tech stack, architecture, requirements, CI/CD platform, ATLAS+GOTCHA methodology. Then, and only then, it generates the scaffolding.

The scaffolding includes:

  • A CLAUDE.md with the rules of your specific project
  • 14 hooks in .claude/hooks/ that block dangerous actions before they reach disk
  • A governance.md document with the seven invariants every artifact must pass
  • CI/CD pipelines for your platform (GitHub Actions, GitLab CI, or Azure DevOps)
  • A complete .NET Clean Architecture structure (or React, or Terraform, depending on your answers)
  • Real tests scaffolded — not stubs

The interview matters. Claude does not assume things. It asks. If you don’t know an answer, it gives you options. If you give a vague answer, it asks for more detail. The whole point is: no design decisions made by the AI without your consent.

Where Gartner fits in

This is not made up. The architecture follows the Gartner research note How to Govern Anthropic’s Claude Code at Scale (G00850426, March 2026). Gartner defines seven invariants that every AI-generated artifact must satisfy:

| # | Invariant | What it means |
|---|-----------|---------------|
| 1 | Functional | Implements the spec, passes static analysis and contract tests |
| 2 | Tested | Unit + integration + E2E tests, ≥75% coverage |
| 3 | Secure | No secrets, no CVEs, no SAST findings, no insecure flows |
| 4 | Scalable | No N+1 queries, async I/O, idempotent handlers |
| 5 | Performant | Meets P95 latency budget, no regressions |
| 6 | Observable | Structured logs, metrics, traces from day one |
| 7 | Auditable | Conventional commits, SBOM, signed artifacts, prompt lineage |

And three checkpoints where these invariants must be enforced:

  1. Generation — Claude hooks block bad actions as the file is being written
  2. Pre-commit / pre-push — Git hooks run the same checks before code leaves your machine
  3. CI/CD pipeline — The pipeline runs them again, with the same configuration

The same tools at every checkpoint. Skipping any one creates an exploitable gap.

Most teams stop at the CI pipeline. By the time the pipeline catches a hardcoded secret, the secret is already in git history. The whole point of generation-time enforcement is to never let the bad code exist in the first place.
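As a sketch of checkpoint 2, the same secret scan can be packaged as a function and called from a Git pre-commit hook against the staged files. This is illustrative, not the repo's actual script, and the pattern list is trimmed to two entries:

```shell
#!/usr/bin/env bash
# Sketch of the shared secret scan reused at every checkpoint
# (illustrative patterns; the repo's block-secrets.sh has a longer list).
scan_secrets() {
    local patterns=(
        'AKIA[0-9A-Z]{16}'       # AWS Access Key
        'ghp_[A-Za-z0-9]{36}'    # GitHub PAT
    )
    local file pattern status=0
    for file in "$@"; do
        for pattern in "${patterns[@]}"; do
            if grep -Eq "$pattern" "$file"; then
                echo "BLOCKED: possible secret in $file" >&2
                status=1
            fi
        done
    done
    return $status
}

# Checkpoint 2 (pre-commit): run the scan on staged files only, e.g.
#   scan_secrets $(git diff --cached --name-only --diff-filter=ACM) || exit 1
```

Because the function is self-contained, the same code can back the Claude hook, the pre-commit hook, and a CI step — one implementation, three checkpoints.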

Execute

Pre-flight checklist before takeoff — every question answered before the project starts

Let’s see what’s actually inside the repo.

The interview flow

When you open the repo and ask Claude to scaffold a project, it follows goals/scaffold.md. The first thing it does is detect your language. If you write in Spanish, the entire interview is in Spanish. If you switch to English mid-interview, it switches too.

Then it asks, in order:

Phase 1 — Project Identity
  Q1: Project name (PascalCase)
  Q2: Description (1-2 sentences)

Phase 2 — Requirements
  Q3: Architecture description
  Q4: Functional requirements
  Q5: Non-functional requirements

Phase 3 — Project Type
  Q6: Web, mobile, infra, API, or combinations?

Phase 4 — DevOps
  Q7: GitHub, GitLab, or Azure DevOps?
  Q8: Issue tracking platform

Phase 5 — Methodology
  Q9: Apply ATLAS + GOTCHA + SDD?

Only after you confirm with “SÍ” / “YES” does generation start. The generation order is also deterministic — files are created in a specific sequence so dependencies always exist when something references them.

The hooks that protect you

Hooks catching dangerous actions before they hit the disk

Inside .claude/hooks/ there are 14 shell scripts. Each one fires on specific Claude tool calls (Write, Edit, Bash) and inspects what’s about to happen. If something looks wrong, the hook blocks the action and tells Claude why.

Here’s a sample:

# block-secrets.sh — fires on Write/Edit
# Blocks AWS keys, GitHub PATs, Azure connection strings,
# private keys, hardcoded passwords, JWTs in source files

PATTERNS=(
    'AKIA[0-9A-Z]{16}'                          # AWS Access Key
    'ghp_[A-Za-z0-9]{36}'                       # GitHub PAT
    'AccountKey=[A-Za-z0-9+/=]{40,}'            # Azure storage
    '-----BEGIN (RSA |EC |OPENSSH )PRIVATE KEY-----'
    '(password|passwd|pwd)[[:space:]]*[:=][[:space:]]*["\x27][^"\x27]{6,}["\x27]'
)

Or this one, which blocks the most expensive Terraform mistake — a public storage account:

# block-tf-public-exposure.sh — fires on Write/Edit for *.tf files
declare -a PATTERNS=(
    '0\.0\.0\.0/0'
        "Open CIDR (0.0.0.0/0). Restrict to known ranges."
    'public_network_access_enabled[[:space:]]*=[[:space:]]*true'
        "Use private endpoints instead."
    'allow_blob_public_access[[:space:]]*=[[:space:]]*true'
        "Storage must be private."
    'min_tls_version[[:space:]]*=[[:space:]]*"TLS1_0"'
        "TLS1_0 is deprecated. Use TLS1_2 minimum."
    'skip_final_snapshot[[:space:]]*=[[:space:]]*true'
        "Production data must be snapshotted."
)
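That array alternates a regex with the message shown when it matches. A loop like the following could consume those pairs — a sketch with a shortened pattern list; the repo's actual loop may differ:

```shell
#!/usr/bin/env bash
# Sketch: walk the patterns array two entries at a time (regex, then message).
check_tf_file() {
    local patterns=(
        '0\.0\.0\.0/0'
            "Open CIDR (0.0.0.0/0). Restrict to known ranges."
        'allow_blob_public_access[[:space:]]*=[[:space:]]*true'
            "Storage must be private."
    )
    local i
    for ((i = 0; i < ${#patterns[@]}; i += 2)); do
        if grep -Eq "${patterns[i]}" "$1"; then
            echo "BLOCKED: ${patterns[i+1]}" >&2
            return 2   # non-zero: the hook rejects the Write/Edit
        fi
    done
    return 0
}
```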

The complete list of hooks:

| Hook | What it blocks |
|------|----------------|
| block-npm.sh | Any npm install / npm run — pnpm only |
| block-git-push.sh | git push without explicit user authorization |
| block-terraform-apply.sh | terraform apply from your laptop — pipelines only |
| block-no-verify.sh | git commit --no-verify — fix the hook, don’t skip it |
| block-claude-attribution.sh | Co-Authored-By: Claude in commits |
| block-destructive-actions.sh | rm -rf, destructive az CLI commands |
| block-secrets.sh | Hardcoded credentials in any file |
| block-tf-public-exposure.sh | Public storage, open CIDRs, weak TLS in Terraform |
| enforce-invariants.sh | Files that violate the seven invariants |
| enforce-tf-policy.sh | Terraform without required tags, naming, encryption |
| provenance-stamp.sh | Stamps every generation with prompt lineage |
| require-tests.sh | Production code in Domain/Application/Api needs tests |
| require-tf-module-tests.sh | Terraform modules need a tests/ folder |
| require-tf-tags.sh | All Azure resources need tags |

These are not suggestions. They run automatically. If Claude tries to write a file that violates any of them, the hook exits with a non-zero code and Claude sees the error message and tries again.
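Mechanically, a Claude Code PreToolUse hook receives a JSON payload on stdin describing the pending tool call, and an exit code of 2 blocks the call while feeding stderr back to Claude. A minimal sketch of that plumbing, assuming jq is available (the payload field names follow Claude Code's hook interface; the check itself is a toy):

```shell
#!/usr/bin/env bash
# Minimal PreToolUse hook sketch: inspect the pending Write/Edit and
# reject it if the new content contains an AWS-key-shaped string.
pretooluse_hook() {
    local payload file content
    payload=$(cat)    # JSON from Claude Code on stdin
    file=$(jq -r '.tool_input.file_path // empty' <<< "$payload")
    content=$(jq -r '.tool_input.content // empty' <<< "$payload")
    if grep -Eq 'AKIA[0-9A-Z]{16}' <<< "$content"; then
        echo "Blocked: $file appears to contain an AWS access key." >&2
        return 2
    fi
    return 0
}

# A real hook script would end with:  pretooluse_hook; exit $?
```

Because the error message goes to stderr, Claude reads the reason for the rejection and can retry with compliant content instead of silently failing.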

What the generated CLAUDE.md looks like

After the interview, the generated project has its own CLAUDE.md. It includes all the guardrails plus the specific decisions you made:

# CLAUDE.md — QuantumApi

## What This Project Is
A REST API for managing post-quantum encryption keys.
Built with .NET 10 Clean Architecture, PostgreSQL, deployed
to Azure via GitHub Actions.

## Tech Stack
- Backend: .NET 10 (Domain / Application / Infrastructure / Api)
- Database: PostgreSQL 16
- Auth: Entra ID
- Hosting: Azure Container Apps
- IaC: Terraform
- CI/CD: GitHub Actions

## Methodology
ATLAS + GOTCHA + SDD applied to all features

## Coverage
75% minimum, 80% target — enforced in CI

## Guardrails (enforced by hooks)
- pnpm only (block-npm.sh)
- No git push without authorization (block-git-push.sh)
- No terraform apply locally (block-terraform-apply.sh)
- No Claude attribution (block-claude-attribution.sh)
- ... (the full list)

This file is the first thing Claude reads in any new conversation. So every future session in that project starts with the rules already loaded.

The prompt lineage ledger

This is the part I am most proud of. Every time Claude generates a file, the provenance-stamp.sh hook writes an entry to .claude/provenance/<date>.jsonl. The entry contains:

  • The exact prompt that triggered the generation
  • The model and version used
  • The file path that was created or modified
  • A SHA-256 hash of the content
  • The timestamp

This is your audit trail. Six months from now, when someone asks “why does this function exist?”, you can answer: “because on April 9, the user asked X, and Claude generated this file in response, and here’s the hash to prove it hasn’t changed since.”

This is invariant number 7 — Auditable — and it’s the one most teams forget about.

Template

Here’s the minimum you need to use this on your next project. No cloning, no git tricks, no setup. Just open the project folder with Claude and ask:

# 1. Open the scaffold-generator folder
cd path/to/scaffold-generator

# 2. Start Claude
claude

Then say, in any language: “I want to create a new project.”

Claude will detect your language, announce the interview, and start asking questions. Don’t skip ahead. Don’t say “use sensible defaults”. The whole point is that you make the decisions, and Claude documents them.

When the interview ends, Claude shows you a summary. If everything looks right, you reply with “SÍ” or “YES” and generation starts. The hooks are already active — they were active from the moment you opened Claude in this folder, because .claude/settings.json registered them.
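For reference, hook registration in .claude/settings.json looks roughly like this — a trimmed sketch; the matchers and hook lists in the actual repo will be longer:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/block-secrets.sh" }
        ]
      },
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/block-npm.sh" }
        ]
      }
    ]
  }
}
```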

If you want to verify the hooks are working before generating anything, ask Claude to write a file with a fake AWS key. It will refuse — and the error message will come from block-secrets.sh, not from Claude’s good intentions.

Challenge

Try this on a real project:

  1. Clone the scaffold-generator repo
  2. Pick a small idea — a TODO API, a personal expense tracker, a script that reads RSS feeds
  3. Run the interview from start to finish (yes, all the questions)
  4. Look at the generated project and count: how many decisions did Claude make on its own?

The answer should be zero. Every folder name, every package version, every CI step, every Terraform tag — all of it came from your answers. That is what governable AI development looks like.

And then test the hooks. Try to commit a hardcoded password. Try to run npm install. Try to write a Terraform file with 0.0.0.0/0. See what happens.

The repo is public, MIT licensed. Use it, fork it, send PRs. If your team has a guardrail I missed, open an issue — I want to add it.


If this article helps you, consider sponsoring me on GitHub or buying me a coffee.

This is a standalone article. If you want to learn the prompt structure that ATLAS + GOTCHA + SDD use to drive these generations, start with the ATLAS + GOTCHA series. If you want to see how the same governance model applies to a full Internal Developer Platform, the AI-Native IDP series walks through it end to end.
