Backstage Integration

Signals Provided
  • Ownership - Team ownership context for services, used in causal analysis and remediation
  • Service Discovery - Automatic discovery of services, workloads, and infrastructure components

Overview

Causely can ingest your Backstage software catalog and stitch Backstage components and owners (groups and users) to Kubernetes services. This enriches the topology with team ownership, component metadata (type, lifecycle, system), and enables causal analysis and remediation with clear ownership context.

The integration:

  • Fetches catalog entities from the Backstage Catalog API (components, groups, users)
  • Stitches Backstage components to Kubernetes services by direct lookup from component-derived name and namespace (see below)
  • Creates or updates ServiceOwner entities from Backstage groups/users and links them to services
  • Applies Backstage-derived labels to services and owners for filtering and display
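The first step, fetching catalog entities, can be sketched against Backstage's documented Catalog REST API (GET /api/catalog/entities with kind filters). This is an illustrative sketch, not Causely's actual implementation; the base URL and token are placeholders:

```python
import json
import urllib.parse
import urllib.request


def build_catalog_request(endpoint, token=None,
                          kinds=("component", "group", "user")):
    """Build a request for Backstage catalog entities of the given kinds.

    `endpoint` is the Backstage base URL without a trailing slash.
    Multiple `filter` parameters are OR'ed by the Backstage Catalog API.
    """
    query = urllib.parse.urlencode(
        [("filter", f"kind={kind}") for kind in kinds]
    )
    url = f"{endpoint}/api/catalog/entities?{query}"
    headers = {"Accept": "application/json"}
    if token:
        # Optional bearer token, as stored in the secret's `token` key
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, headers=headers)


def fetch_entities(endpoint, token=None):
    """Fetch components, groups, and users from the Backstage catalog."""
    req = build_catalog_request(endpoint, token)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Only components are later stitched to Kubernetes services; groups and users feed the owner catalog.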

How Backstage components are stitched to Kubernetes services

Causely links Backstage components to Kubernetes Services by direct lookup: it reads a service name and optionally a namespace from the component, then looks up the Kubernetes Service with that name (and in that namespace when provided). When namespace is not provided, the service is resolved by name only and must be unique cluster-wide. Ownership and component metadata are then applied to the matched service.

Defaults: Causely reads the service name from the backstage.io/kubernetes-id annotation and the namespace from backstage.io/kubernetes-namespace on the Backstage component.
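For example, a Backstage component using the default annotations might look like this (the component name, service name, namespace, and owner below are illustrative):

```yaml
# catalog-info.yaml (illustrative)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: checkout
  annotations:
    # Read by Causely as the Kubernetes Service name
    backstage.io/kubernetes-id: checkout
    # Read by Causely as the Kubernetes Service namespace
    backstage.io/kubernetes-namespace: shop
spec:
  type: service
  lifecycle: production
  owner: team-payments
```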

Owner: The owner is taken from the Backstage component only, for example from spec.owner or the backstage.io/owner annotation, depending on your label_map. If no owner can be resolved from the component, no owner link is created for that service.

If your Backstage components use different fields for the service name or namespace, you can override the mapping in Helm values so Causely reads the right fields and resolves to the correct Kubernetes service.
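The lookup rule above can be sketched in Python; the function and data shapes are illustrative, not Causely internals. Given the name (and optional namespace) read from the component, resolution works roughly like this:

```python
def resolve_service(services, name, namespace=None):
    """Resolve a Kubernetes Service by direct lookup.

    `services` is a collection of (namespace, name) tuples representing
    the Services known to the cluster. Returns the single match, or None
    when the service is missing or the name is ambiguous cluster-wide.
    """
    if namespace is not None:
        # Namespace provided: exact lookup in that namespace.
        key = (namespace, name)
        return key if key in services else None
    # No namespace: resolve by name only; it must be unique cluster-wide.
    matches = [svc for svc in services if svc[1] == name]
    return matches[0] if len(matches) == 1 else None
```

A name that exists in two namespaces resolves only when the component also supplies the namespace, which is why the backstage.io/kubernetes-namespace annotation matters in multi-namespace clusters.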

Configuration

Add data source from the UI

You can add Backstage as a data source from the Causely UI. Go to Integrations, add the Backstage integration, and select the cluster to which the configuration will be pushed.

Basic setup (ops as code)

If you prefer to manage configuration as code, enable the Backstage scraper and point it at a Kubernetes Secret that holds your Backstage API endpoint and token:

causely-values.yaml

scrapers:
  backstage:
    enabled: true
    instances:
      - secretName: 'backstage-credentials'
        namespace: 'causely'

Create a Kubernetes Secret with the Backstage URL and (optionally) token and TLS options, and label it so the scraper can discover it:

kubectl create secret generic backstage-credentials \
  --namespace causely \
  --from-literal=endpoint='https://your-backstage.example.com' \
  --from-literal=token='your-backstage-api-token'

kubectl label secret backstage-credentials \
  --namespace causely \
  causely.ai/scraper=Backstage

The secret must have the label causely.ai/scraper: Backstage so the Backstage scraper can discover it.

Secret keys:

  • endpoint (required): Backstage base URL, for example https://backstage.example.com, without a trailing slash.
  • token (optional): Bearer token for Backstage API authentication.
  • insecure_skip_verify (optional): Set to true to skip TLS verification, for example for dev; not recommended for production.
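Equivalently, the secret can be declared as a manifest. The endpoint and token values below are placeholders, and insecure_skip_verify is commented out to illustrate the key without enabling it:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backstage-credentials
  namespace: causely
  labels:
    # Required so the Backstage scraper can discover the secret
    causely.ai/scraper: Backstage
type: Opaque
stringData:
  endpoint: 'https://your-backstage.example.com'  # no trailing slash
  token: 'your-backstage-api-token'               # optional
  # insecure_skip_verify: 'true'                  # optional; dev only
```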

Customize the mapping to your Backstage catalog (optional)

If your Backstage components store the Kubernetes service name or namespace in different fields than the defaults, you can override the mapping in Helm so Causely resolves to the right Kubernetes service. Under each Backstage instance, set label_map in your values to specify which component fields to use for:

  • component_kubernetes_id: where to read the name of the Kubernetes Service from the Backstage component
  • component_kubernetes_namespace: where to read the namespace of the Kubernetes Service from the Backstage component
  • component_owner: configurable fields on the Backstage component used to resolve the owner (if none can be resolved, the owner link is skipped for that service)

Example with the default mappings and comments:

causely-values.yaml

scrapers:
  backstage:
    enabled: true
    instances:
      - secretName: 'backstage-credentials'
        namespace: 'causely'
        # Override which Backstage component fields are used to resolve
        # the K8s service (name + namespace)
        label_map:
          # Backstage field(s) used for the Kubernetes Service name
          component_kubernetes_id:
            - "annotations.backstage.io/kubernetes-id"
          # Backstage field(s) used for the Kubernetes Service namespace
          component_kubernetes_namespace:
            - "annotations.backstage.io/kubernetes-namespace"
          # Backstage field(s) used to resolve the component owner
          # (no owner link if unresolved)
          component_owner:
            - "spec.owner"
            - "annotations.backstage.io/owner"

This way, the name and namespace used for the lookup match how your Backstage components are actually defined.
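As a concrete illustration, suppose your components store the service name and namespace in custom annotations (the mycompany.io annotation names below are hypothetical). The mapping could then be overridden like this:

```yaml
scrapers:
  backstage:
    enabled: true
    instances:
      - secretName: 'backstage-credentials'
        namespace: 'causely'
        label_map:
          # Hypothetical custom annotations on your Backstage components
          component_kubernetes_id:
            - "annotations.mycompany.io/service-name"
          component_kubernetes_namespace:
            - "annotations.mycompany.io/service-namespace"
          component_owner:
            - "spec.owner"
```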

What data is collected

  • ServiceOwner entities: Created or updated from Backstage groups and users, with display name and email when available.
  • Service–owner links: Each stitched Kubernetes service is linked to its owner when that owner can be resolved from the Backstage component.
  • Labels on services: Backstage scraper name, team (owner display name), component name/type/lifecycle/system, owner ref/kind/source, and the Kubernetes ID/namespace used for stitching.
  • Labels on owners: Owner ref, kind, display name, email, and scraper identification.

Only Backstage catalog entities of kind component are considered for stitching to Kubernetes services; group and user entities are used to build the owner catalog.