Integrate OpenTelemetry Traces

Signals Provided
  • Metrics - Performance metrics from applications and infrastructure
  • Symptoms - Automatic symptom detection from metrics, traces, and external monitoring systems
  • Traces - Distributed traces for service dependency discovery and communication monitoring

What is OpenTelemetry?

OpenTelemetry is an open source, vendor- and tool-neutral project that provides a comprehensive observability framework for generating, collecting, and exporting telemetry data such as traces, metrics, and logs. Visit the OpenTelemetry website for more information.

Integrating OpenTelemetry with Causely

You can export traces from an application or an existing OpenTelemetry Collector to the mediator that was installed as part of the Causely agent. The mediator listens for traces on port 4317 using the OpenTelemetry Protocol (OTLP).

We recommend using OpenTelemetry with Causely because traces enable automatic discovery of service dependencies and monitoring of both synchronous and asynchronous communication signals.
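
If your application is already instrumented with an OpenTelemetry SDK, one option is to export traces directly to the mediator using the standard OTEL_* environment variables. The following is a minimal sketch of a Kubernetes container spec, assuming the mediator Service is reachable at mediator.causely:4317 as described above; the container name and image are placeholders:

# Minimal sketch: point an OTel-instrumented container at the Causely mediator.
# The OTEL_* variables are standard OpenTelemetry SDK configuration;
# "my-service" and its image are placeholders.
containers:
  - name: my-service
    image: my-service:latest
    env:
      - name: OTEL_SERVICE_NAME
        value: my-service
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://mediator.causely:4317
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: grpc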

info

By default, Causely will use OpenTelemetry eBPF Instrumentation for automatic instrumentation of your applications.

How it works

The following diagram shows how OpenTelemetry traces and metrics are collected and forwarded to the Causely mediator. For more information on how Causely works, see the How Causely Works page.

(Diagram: OpenTelemetry traces and metrics flowing from applications and the Collector to the Causely mediator)

Good to know

No raw data is sent from the mediator to the Causely engine; only distilled insights leave your environment. This keeps your data secure and ensures that no sensitive data is sent to us.

Quick Start Guide

If you don't have an OpenTelemetry Collector running in your Kubernetes cluster, you can use the following commands to install one with Helm:

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install opentelemetry-collector open-telemetry/opentelemetry-collector --values ./opentelemetry-values.yaml

For the contents of opentelemetry-values.yaml, see the OpenTelemetry Operator Configuration section below.

OpenTelemetry Collector Configuration

For an instance of the OpenTelemetry Collector running within your Kubernetes cluster, you can use the following configuration:

exporters:
  otlp/causely:
    endpoint: mediator.causely:4317
    compression: none
    tls:
      insecure: true

processors:
  batch:
    timeout: 1s
  k8sattributes:
    auth_type: 'serviceAccount'
    passthrough: false
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.container.name
        - k8s.deployment.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.pod.start_time
  # Optional: Filter out internal spans
  filter/ignore-internal:
    error_mode: ignore
    traces:
      span:
        - 'kind.string == "Internal"'

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus/metrics:
    # See a more detailed example in the Configuring Prometheus Metrics Relabeling section below

service:
  pipelines:
    metrics:
      exporters: [otlp/causely]
      processors: [k8sattributes, batch]
      receivers: [otlp, prometheus/metrics]
    traces:
      exporters: [otlp/causely]
      processors: [filter/ignore-internal, k8sattributes, batch]
      receivers: [otlp]

Configuration Breakdown

This configuration will:

  • Receive traces and metrics over OTLP on ports 4317 (gRPC) and 4318 (HTTP), and optionally scrape Prometheus metrics
  • Enrich telemetry with Kubernetes metadata (pod, container, deployment, namespace, and node) using the k8sattributes processor
  • Drop internal spans from the traces pipeline via the optional filter/ignore-internal processor
  • Batch the data and export it to the Causely mediator at mediator.causely:4317 over OTLP

Configuring Prometheus Metrics Relabeling

When using the prometheus/metrics receiver in the OpenTelemetry Collector, you need to configure relabeling to extract Kubernetes metadata labels. Add the following relabel_configs to your prometheus/metrics receiver configuration:

prometheus/metrics:
  config:
    scrape_configs:
      - job_name: 'prometheus-metrics'
        # Discover pods via the Kubernetes API so that the
        # __meta_kubernetes_* labels used below are available
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # Filter by pod annotations
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          # Configure metrics path
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          # Configure metrics port
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          # Map pod labels
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)

          # Extract metadata attributes
          # Pod name
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: k8s.pod.name
          # Pod UID
          - source_labels: [__meta_kubernetes_pod_uid]
            action: replace
            target_label: k8s.pod.uid
          # Container name (set both container_name and k8s.container.name)
          - source_labels: [__meta_kubernetes_pod_container_name]
            action: replace
            target_label: container_name
          - source_labels: [__meta_kubernetes_pod_container_name]
            action: replace
            target_label: k8s.container.name
          # Deployment name (from controller)
          - source_labels: [__meta_kubernetes_pod_controller_name]
            action: replace
            target_label: k8s.deployment.name
          # Namespace
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: k8s.namespace.name
          # Node name
          - source_labels: [__meta_kubernetes_pod_node_name]
            action: replace
            target_label: k8s.node.name
          # Pod start time (creation timestamp)
          - source_labels: [__meta_kubernetes_pod_creation_timestamp]
            action: replace
            target_label: k8s.pod.start_time

This relabeling configuration maps Kubernetes metadata to metric labels:

  • __meta_kubernetes_namespace → k8s.namespace.name
  • __meta_kubernetes_pod_name → k8s.pod.name
  • __meta_kubernetes_pod_container_name → k8s.container.name
  • __meta_kubernetes_pod_controller_name → k8s.deployment.name

Important: OpenTelemetry uses the k8s.* label format (for example k8s.namespace.name, k8s.pod.name), while Prometheus typically uses simpler label names (for example namespace, pod). When configuring Causely discovery for metrics from the OpenTelemetry Collector, ensure your discovery configuration matches the actual label names in your metrics (for example use namespace: "k8s.namespace.name" and pod_name: "k8s.pod.name").
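
As a rough illustration, a discovery mapping for metrics coming through the collector above might look like the sketch below. Only the two label mappings are taken from this page; the surrounding keys are hypothetical placeholders, so consult your Causely discovery configuration reference for the exact schema:

# Hypothetical sketch: align discovery label names with the k8s.*
# labels produced by the relabeling configuration above.
# Only the two mappings come from this page; the structure around
# them is illustrative.
discovery:
  labels:
    namespace: "k8s.namespace.name"
    pod_name: "k8s.pod.name"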

OpenTelemetry Operator Configuration

If you are deploying the Collector with the OpenTelemetry Operator for Kubernetes, you can use the following Helm values (the opentelemetry-values.yaml file referenced in the Quick Start Guide above):

# Valid values are "daemonset" and "deployment".
# If set, agentCollector and standaloneCollector are ignored.
mode: 'deployment'

config:
  exporters:
    otlp/causely:
      endpoint: mediator.causely:4317
      compression: none
      tls:
        insecure: true

  processors:
    batch:
      timeout: 1s
    k8sattributes:
      auth_type: 'serviceAccount'
      passthrough: false
      extract:
        metadata:
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.container.name
          - k8s.deployment.name
          - k8s.namespace.name
          - k8s.node.name
          - k8s.pod.start_time

    # Optional: Filter out internal spans
    filter/ignore-internal:
      error_mode: ignore
      traces:
        span:
          - 'kind.string == "Internal"'

  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
    prometheus/metrics:
      config:
        scrape_configs:
          - job_name: 'prometheus-metrics'
            relabel_configs:
              # See the Configuring Prometheus Metrics Relabeling section above

  service:
    pipelines:
      metrics:
        exporters: [otlp/causely]
        processors: [k8sattributes, batch]
        receivers: [otlp, prometheus/metrics]
      traces:
        exporters: [otlp/causely]
        processors: [filter/ignore-internal, k8sattributes, batch]
        receivers: [otlp]

presets:
  kubernetesAttributes:
    enabled: true

podLabels:
  # Istio's injection label expects "true" or "false"
  sidecar.istio.io/inject: 'false'

For the complete relabel_configs configuration, see the Configuring Prometheus Metrics Relabeling section above.