The Infrastructure Hub -- Part 1
Your Infrastructure Has No Catalog
The Problem
Open your organization’s GitHub. Search for “terraform”. Count the repositories.
You’ll find something like this: terraform-azure-networking, infra-modules, tf-modules-v2, legacy-terraform, platform-terraform-new. Five repos. Three naming conventions. Two of them have a README that says “WIP”. One hasn’t been updated in 14 months.
Now try to answer these questions:
- Which module should I use to create a Virtual Network?
- Is `tf-modules-v2` the latest version, or did someone create `platform-terraform-new` to replace it?
- Who owns the storage account module? Can I change it, or will I break someone’s environment?
- Does the AKS module work with Azure Provider 4.x, or is it still on 3.x?
- Is there a module for Scaleway, or do we only have Azure?
Nobody knows. Not even the person who wrote half of those modules. Because there’s no catalog. There’s no single place where you can see all the infrastructure modules your organization has, who owns them, what version they’re on, which cloud they target, and whether they’re still maintained.
This is the same problem the AI-Native IDP series solved for services. A service catalog that goes stale is useless. An infrastructure catalog that doesn’t exist is worse — because infrastructure mistakes are expensive and slow to fix.
And if you work in a managed services company — you manage infrastructure for 10, 20, 50 clients — multiply the problem by the number of clients. Each client has different modules, different conventions, different approval processes. Keeping track of all of it in your head doesn’t scale.
The Solution
Backstage already has a Software Catalog. We use it for services, APIs, and libraries. But a Backstage entity can be anything — including a Terraform module.
The idea: register every infrastructure module as a Component in the catalog with type: terraform-module. Add metadata: which cloud provider it targets, which Terraform provider version it needs, who owns it, what inputs it expects. Then use Backstage’s built-in features — search, filtering, ownership, TechDocs — to make the catalog useful.
For a single DevOps team, this is a searchable index of all your modules. For a managed services provider (MSP), this becomes a multi-tenant catalog: each client’s infrastructure is visible, organized, and traceable.
The catalog entry for a Terraform module looks like this:
```yaml
# catalog-info.yaml (in the module repo)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: tf-azurerm-vnet
  title: Azure Virtual Network Module
  description: "Creates a VNet with subnets, NSGs, and optional peering. Supports hub-spoke topology."
  tags:
    - terraform
    - azure
    - networking
  annotations:
    github.com/project-slug: victorZKov/tf-azurerm-vnet
    backstage.io/techdocs-ref: dir:.
  links:
    - url: https://registry.terraform.io/providers/hashicorp/azurerm/latest
      title: Azure Provider
spec:
  type: terraform-module
  lifecycle: production
  owner: team-platform
  system: infrastructure
  providesApis:
    - tf-azurerm-vnet-api
```
And the API entity documents the module’s interface — inputs and outputs:
```yaml
apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: tf-azurerm-vnet-api
  title: tf-azurerm-vnet Interface
  description: "Inputs and outputs for the Azure VNet module"
spec:
  type: terraform
  lifecycle: production
  owner: team-platform
  definition: |
    inputs:
      - name: resource_group_name (string, required)
      - name: location (string, default: "westeurope")
      - name: address_space (list(string), required)
      - name: subnets (map(object), required)
      - name: tags (map(string), required)
    outputs:
      - name: vnet_id (string)
      - name: subnet_ids (map(string))
```
Execute
We build on the Backstage instance from the IDP series. Same Forge project, same AI service, same authentication. We add infrastructure entities to the catalog.
Step 1: Define the infrastructure system
Create a system entity that groups all infrastructure modules:
```yaml
# catalog/infrastructure.yaml
apiVersion: backstage.io/v1alpha1
kind: System
metadata:
  name: infrastructure
  title: Infrastructure Modules
  description: "Terraform modules, deployment patterns, and environment definitions"
  tags:
    - terraform
    - infrastructure
spec:
  owner: team-platform
```
Step 2: Create a Terraform module with catalog metadata
Let’s create a real module. A simple Azure resource group module — basic, but it shows the pattern.
```hcl
# modules/tf-azurerm-resource-group/main.tf
terraform {
  required_version = ">= 1.8"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

variable "name" {
  type        = string
  description = "Resource group name"
}

variable "location" {
  type        = string
  default     = "westeurope"
  description = "Azure region"
}

variable "tags" {
  type        = map(string)
  description = "Resource tags"
}

resource "azurerm_resource_group" "this" {
  name     = var.name
  location = var.location
  tags     = var.tags
}

output "name" {
  value = azurerm_resource_group.this.name
}

output "id" {
  value = azurerm_resource_group.this.id
}

output "location" {
  value = azurerm_resource_group.this.location
}
```
Now the catalog entry:
```yaml
# modules/tf-azurerm-resource-group/catalog-info.yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: tf-azurerm-resource-group
  title: Azure Resource Group Module
  description: "Creates an Azure resource group with standard tags"
  tags:
    - terraform
    - azure
    - foundation
  annotations:
    github.com/project-slug: victorZKov/forge
    backstage.io/techdocs-ref: dir:.
spec:
  type: terraform-module
  lifecycle: production
  owner: team-platform
  system: infrastructure
```
Step 3: Register modules in Backstage
Add the module locations to app-config.yaml:
```yaml
catalog:
  locations:
    # Infrastructure modules
    - type: file
      target: ../../modules/tf-azurerm-resource-group/catalog-info.yaml
      rules:
        - allow: [Component, API]
    # Infrastructure system
    - type: file
      target: ../../catalog/infrastructure.yaml
      rules:
        - allow: [System]
```
When Backstage starts, the modules appear in the catalog alongside your services. Filter by type: terraform-module to see only infrastructure components.
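File locations suit a monorepo. For modules that live in their own repositories, Backstage's standard `url` location type does the same job; the repository URL below is illustrative:

```yaml
catalog:
  locations:
    # Module in its own GitHub repo — example URL, adjust to your org
    - type: url
      target: https://github.com/victorZKov/tf-azurerm-vnet/blob/main/catalog-info.yaml
      rules:
        - allow: [Component, API]
```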
Step 4: A Scaffolder template for new modules
Every new Terraform module should start with the same structure: main.tf, variables.tf, outputs.tf, catalog-info.yaml, README.md, and a basic test. We create a Backstage template for this:
```yaml
# templates/terraform-module/template.yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: terraform-module
  title: New Terraform Module
  description: Create a new Terraform module with standard structure, catalog metadata, and documentation.
  tags:
    - terraform
    - infrastructure
    - recommended
spec:
  owner: team-platform
  type: terraform-module
  parameters:
    - title: Module Details
      required:
        - name
        - cloud
        - description
        - owner
      properties:
        name:
          title: Module Name
          type: string
          pattern: "^tf-[a-z]+-[a-z-]+$"
          ui:placeholder: "tf-azurerm-storage-account"
          ui:help: "Format: tf-{provider}-{resource}"
        cloud:
          title: Cloud Provider
          type: string
          enum:
            - azurerm
            - aws
            - scaleway
            - google
          enumNames:
            - Azure
            - AWS
            - Scaleway
            - Google Cloud
        description:
          title: Description
          type: string
          ui:widget: textarea
        owner:
          title: Owner
          type: string
          ui:field: OwnerPicker
  steps:
    - id: fetch
      name: Generate Module Structure
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          cloud: ${{ parameters.cloud }}
          description: ${{ parameters.description }}
          owner: ${{ parameters.owner }}
    - id: publish
      name: Publish to GitHub
      action: publish:github
      input:
        allowedHosts: ["github.com"]
        repoUrl: github.com?owner=victorZKov&repo=${{ parameters.name }}
        description: ${{ parameters.description }}
        defaultBranch: main
    - id: register
      name: Register in Catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml
  output:
    links:
      - title: Repository
        url: ${{ steps.publish.output.remoteUrl }}
      - title: Open in Catalog
        icon: catalog
        entityRef: ${{ steps.register.output.entityRef }}
```
The skeleton directory has the standard module structure:
```
templates/terraform-module/skeleton/
├── main.tf            # Provider config + resources
├── variables.tf       # All input variables
├── outputs.tf         # All outputs
├── versions.tf        # Required providers and versions
├── catalog-info.yaml  # Backstage metadata
└── README.md          # Auto-generated from variables
```
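Inside the skeleton, `fetch:template` renders each file through its templater, so the catalog metadata picks up the collected parameters via `values`. A sketch of the skeleton's `catalog-info.yaml` (field names mirror the `values` block in the template above):

```yaml
# skeleton/catalog-info.yaml — rendered by fetch:template at creation time
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ values.name }}
  description: "${{ values.description }}"
  tags:
    - terraform
    - ${{ values.cloud }}
spec:
  type: terraform-module
  lifecycle: experimental   # new modules start experimental
  owner: ${{ values.owner }}
  system: infrastructure
```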
A developer goes to Backstage, clicks “Create”, picks “New Terraform Module”, fills in the name, cloud provider, and description. Gets a repo with the right structure, already registered in the catalog.
Step 5: Use the AI enricher for modules
The catalog enricher from article 2 already reads code and updates catalog metadata. It works for Terraform modules too — it reads main.tf, variables.tf, and outputs.tf, and generates accurate descriptions and tags.
The enricher already looks for these files:
```typescript
const targetFiles = tree.tree.filter(
  f =>
    f.path === 'Program.cs' ||
    f.path === 'package.json' ||
    f.path.endsWith('.csproj') ||
    f.path === 'Dockerfile' ||
    f.path === 'appsettings.json' ||
    f.path === 'app-config.yaml',
);
```
We extend it to also pick up Terraform files:
```typescript
const targetFiles = tree.tree.filter(
  f =>
    f.path === 'Program.cs' ||
    f.path === 'package.json' ||
    f.path.endsWith('.csproj') ||
    f.path === 'Dockerfile' ||
    f.path === 'appsettings.json' ||
    f.path === 'app-config.yaml' ||
    f.path === 'main.tf' ||
    f.path === 'variables.tf' ||
    f.path === 'outputs.tf' ||
    f.path === 'versions.tf',
);
```
Now the AI reads your Terraform code and keeps the catalog accurate. If someone adds a new variable or output, the enricher detects it and proposes an update.
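Detecting a new variable or output is cheaper if the enricher extracts declared names before sending files to the model. A minimal sketch — the function name and regex are assumptions, not part of the existing enricher; a proper HCL parser would be more robust, but this keeps the pre-processing dependency-free:

```typescript
// Extract top-level `variable "x"` and `output "y"` names from HCL source.
function extractTerraformNames(hcl: string): { variables: string[]; outputs: string[] } {
  const variables: string[] = [];
  const outputs: string[] = [];
  // Match block headers at the start of a line, e.g. `variable "name" {`
  const blockPattern = /^(variable|output)\s+"([^"]+)"/gm;
  let match: RegExpExecArray | null;
  while ((match = blockPattern.exec(hcl)) !== null) {
    (match[1] === 'variable' ? variables : outputs).push(match[2]);
  }
  return { variables, outputs };
}

// Usage with a fragment of the resource group module:
const sample = `
variable "name" {
  type = string
}
variable "location" {
  type    = string
  default = "westeurope"
}
output "id" {
  value = azurerm_resource_group.this.id
}
`;
const names = extractTerraformNames(sample);
// names.variables → ['name', 'location'], names.outputs → ['id']
```

Comparing these names against the catalog's API entity is enough to decide whether an enrichment run is even needed.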
What it looks like
Open Backstage. Go to the catalog. Filter by type: terraform-module. You see:
| Module | Cloud | Owner | Lifecycle |
|---|---|---|---|
| tf-azurerm-resource-group | Azure | team-platform | production |
| tf-azurerm-vnet | Azure | team-platform | production |
| tf-azurerm-aks | Azure | team-platform | experimental |
| tf-scaleway-kapsule | Scaleway | team-platform | production |
| tf-aws-vpc | AWS | team-legacy | deprecated |
Click on any module. You see the description, inputs, outputs, who owns it, when it was last updated, and links to the source code. If TechDocs is configured, you see the rendered documentation right in Backstage.
No more searching GitHub repos. No more guessing which module is the right one. No more “is this still maintained?” — the lifecycle field tells you.
For an MSP managing multiple clients, add a client tag to each module. Filter by client. See exactly what infrastructure each client uses.
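For example, a per-client tag (the client name here is illustrative) in the module's metadata makes those filters possible:

```yaml
metadata:
  tags:
    - terraform
    - azurerm
    - client-contoso   # hypothetical client tag; one per tenant
```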
Template
Here’s the catalog-info.yaml template for any Terraform module:
```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: ${{ name }}
  title: ${{ title }}
  description: "${{ description }}"
  tags:
    - terraform
    - ${{ cloud }}
    - ${{ category }}   # networking, compute, storage, database, security
  annotations:
    github.com/project-slug: ${{ org }}/${{ name }}
    backstage.io/techdocs-ref: dir:.
spec:
  type: terraform-module
  lifecycle: ${{ lifecycle }}   # experimental, production, deprecated
  owner: ${{ owner }}
  system: infrastructure
```
Challenge
Before the next article:
- Pick 3 Terraform modules from your organization
- Create a `catalog-info.yaml` for each one
- Register them in Backstage (or a local instance)
- Try filtering by cloud provider, by owner, by lifecycle
In the next article, we build Golden Path Terraform Modules — standard module templates for Azure, Scaleway, AWS, and GCP with built-in testing, documentation, and versioning. The scaffolder generates them from Backstage with the right structure from day one.
If this series helps you, consider buying me a coffee.
This is article 1 of the Infrastructure Hub series. Next: Golden Path Terraform Modules — standard templates for every cloud.