Alertmanager
Overview
You can forward alerts from Prometheus Alertmanager to Causely by adding a webhook receiver. Once connected, Causely ingests your existing alerts, links them to the right services, and applies causal analysis to surface the real underlying issues.
This means you can:
- Keep using your current alerting rules.
- See alerts mapped to services and dependencies inside Causely.
- Automatically identify which alerts are symptoms and which point to the actual causes.
Add Causely as a webhook receiver in Alertmanager
If you are using the Prometheus Operator, add the following AlertmanagerConfig to forward alerts to Causely’s mediator.
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: causely-mediator-webhook # example name
spec:
  route:
    groupBy: ['alertname', 'pod', 'namespace'] # how alerts are grouped
    repeatInterval: 3h                         # how often repeated alerts are sent
    receiver: 'causely-mediator-webhook'       # send all alerts here
  receivers:
    - name: 'causely-mediator-webhook'
      webhookConfigs:
        - url: 'http://mediator.causely:9093/api/v1/alerts' # mediator endpoint
          sendResolved: true                                 # send firing + resolved states
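If you manage the Alertmanager configuration file directly instead of using the Operator, the equivalent receiver in alertmanager.yml would look roughly like the sketch below (it mirrors the route settings and mediator URL from the example above; adjust both to your setup):

route:
  group_by: ['alertname', 'pod', 'namespace']
  repeat_interval: 3h
  receiver: 'causely-mediator-webhook'

receivers:
  - name: 'causely-mediator-webhook'
    webhook_configs:
      - url: 'http://mediator.causely:9093/api/v1/alerts' # mediator endpoint
        send_resolved: true                               # send firing + resolved states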
Enabling Alertmanager as a Data Source
Add the following to your values.yaml to enable Alertmanager ingestion in the mediator:
mediator:
  alertmanager:
    enabled: true
    port: 9093
    alert_mappings: [] # optional; see examples below
Optional: Map Alerts to Entities and Attributes
Use alert_mappings to map Alertmanager alerts to Causely symptoms. For example, the following maps the Error Rate High alert to the RequestErrorRate_High symptom of the Service Entity:
alert_mappings:
  - alert_name: 'Error Rate High'
    symptom: 'RequestErrorRate_High'
    entity:
      service: {}
    discovery:
      # resolve the Kubernetes pod from the alert's 'namespace' and 'pod' labels
      - kubernetes_pod:
          namespace: 'namespace'
          pod_name: 'pod'
    conditions:
      # ignore alerts whose 'pod' label matches an exporter pod
      - regex_not_match:
          label: 'pod'
          regex: '^.*-exporter.*$'
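In your Helm values, these mappings go under mediator.alertmanager.alert_mappings from the earlier values.yaml snippet, for example:

mediator:
  alertmanager:
    enabled: true
    port: 9093
    alert_mappings:
      - alert_name: 'Error Rate High'
        symptom: 'RequestErrorRate_High'
        # ... remaining fields as in the example above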
Without an alert_mappings entry, Causely automatically maps error- and latency-type alerts to the corresponding Causely Service symptoms based on keyword matching.
Example Alert Payload
An example alert from Alertmanager that Causely can ingest:
[
  {
    "labels": {
      "alertname": "HighErrorRate",
      "severity": "warning",
      "service": "causely-analysis",
      "namespace": "production"
    },
    "annotations": {
      "summary": "Error rate above 5% for 10m"
    },
    "startsAt": "2025-08-11T13:30:56Z",
    "endsAt": "0001-01-01T00:00:00Z",
    "generatorURL": "http://prometheus/graph?g0.expr=...",
    "fingerprint": "047235cd648d8d01"
  }
]
This alert would be mapped automatically to the high error rate symptom of the causely-analysis service.
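To sanity-check ingestion end to end, one option is to save the payload above to a file and POST it to the mediator's Alertmanager endpoint yourself. The service and namespace names below are taken from the webhook URL in the earlier example, and posting directly like this (rather than letting Alertmanager deliver the alert) is only a testing shortcut, not the normal flow:

# Forward the mediator's Alertmanager port to your workstation
kubectl -n causely port-forward svc/mediator 9093:9093

# In another terminal, send the example payload (saved as example-alert.json)
curl -sS -X POST http://localhost:9093/api/v1/alerts \
  -H 'Content-Type: application/json' \
  -d @example-alert.json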