Integrate OpenTelemetry Traces
- Metrics - Performance metrics from applications and infrastructure
- Symptoms - Automatic symptom detection from metrics, traces, and external monitoring systems
- Traces - Distributed traces for service dependency discovery and communication monitoring
What is OpenTelemetry?
OpenTelemetry is an open source, vendor- and tool-neutral project that provides a comprehensive observability framework for generating, exporting and collecting telemetry data, such as traces, metrics and logs. Visit the OpenTelemetry website for more information.
Integrating OpenTelemetry with Causely
You can export traces from an application or an existing OpenTelemetry Collector to the mediator that was installed as part of the Causely agent. The mediator listens for traces on port 4317 using the OpenTelemetry Protocol (OTLP).
We recommend using OpenTelemetry with Causely because traces enable automatic discovery of service dependencies and monitoring of both synchronous and asynchronous communication signals.
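If your services are already instrumented with an OpenTelemetry SDK, they can also export traces straight to the mediator instead of (or in addition to) going through a collector. The sketch below assumes the mediator Service is reachable in-cluster at mediator.causely:4317, as in the collector configuration later on this page; the application name and image are placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:latest   # placeholder image
          env:
            # Standard OpenTelemetry SDK environment variables
            - name: OTEL_SERVICE_NAME
              value: my-app
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: grpc
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://mediator.causely:4317     # Causely mediator OTLP/gRPC endpoint
```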
By default, Causely will use OpenTelemetry eBPF Instrumentation for automatic instrumentation of your applications.
How it works
The following diagram shows how OpenTelemetry traces are collected and forwarded to the Causely mediator. For more information on how Causely works, see the How Causely Works page.
The mediator sends only distilled insights to the Causely engine, never raw telemetry. Your data stays in your environment, and no sensitive information is sent to us.
Quick Start Guide
If you don't have an OpenTelemetry Collector running in your Kubernetes cluster, you can install one with the official Helm chart:

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install opentelemetry-collector open-telemetry/opentelemetry-collector --values ./opentelemetry-values.yaml
```

For the contents of opentelemetry-values.yaml, see the OpenTelemetry Operator Configuration section below.
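Once the chart is installed, you can check that the collector is running and that its pipelines started cleanly. The resource names below assume the release name opentelemetry-collector used above; adjust them if your release or chart naming differs.

```shell
# Collector pods should be Running
kubectl get pods -l app.kubernetes.io/name=opentelemetry-collector

# Watch the collector logs for receiver or exporter errors
kubectl logs deploy/opentelemetry-collector --tail=100 -f
```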
OpenTelemetry Collector Configuration
For an instance of the OpenTelemetry Collector running within your Kubernetes cluster, you can use the following configuration:
```yaml
exporters:
  otlp/causely:
    endpoint: mediator.causely:4317
    compression: none
    tls:
      insecure: true

processors:
  batch:
    timeout: 1s
  k8sattributes:
    auth_type: 'serviceAccount'
    passthrough: false
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.container.name
        - k8s.deployment.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.pod.start_time
  # Optional: Filter out internal spans
  filter/ignore-internal:
    error_mode: ignore
    traces:
      span:
        - 'kind.string == "Internal"'

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus/metrics:
    config:
      scrape_configs:
        - job_name: 'prometheus-metrics'
          # Discover pods via the Kubernetes API; required for the
          # __meta_kubernetes_pod_* labels used below
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            # Filter by pod annotations
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            # Configure metrics path
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
            # Configure metrics port from the prometheus.io/port annotation (e.g. 7778)
            - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
              action: replace
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $1:$2
              target_label: __address__
            # Map pod labels
            - action: labelmap
              regex: __meta_kubernetes_pod_label_(.+)
            # Extract metadata attributes
            # Pod name
            - source_labels: [__meta_kubernetes_pod_name]
              action: replace
              target_label: k8s.pod.name
            # Pod UID
            - source_labels: [__meta_kubernetes_pod_uid]
              action: replace
              target_label: k8s.pod.uid
            # Container name (set both container_name and k8s.container.name)
            - source_labels: [__meta_kubernetes_pod_container_name]
              action: replace
              target_label: container_name
            - source_labels: [__meta_kubernetes_pod_container_name]
              action: replace
              target_label: k8s.container.name
            # Deployment name (from controller)
            - source_labels: [__meta_kubernetes_pod_controller_name]
              action: replace
              target_label: k8s.deployment.name
            # Namespace
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: k8s.namespace.name
            # Node name
            - source_labels: [__meta_kubernetes_pod_node_name]
              action: replace
              target_label: k8s.node.name
            # Pod start time (creation timestamp)
            - source_labels: [__meta_kubernetes_pod_creation_timestamp]
              action: replace
              target_label: k8s.pod.start_time

service:
  pipelines:
    metrics:
      exporters: [otlp/causely]
      processors: [k8sattributes, batch]
      receivers: [otlp, prometheus/metrics]
    traces:
      exporters: [otlp/causely]
      processors: [filter/ignore-internal, k8sattributes, batch]
      receivers: [otlp]
```
Configuration Breakdown
This configuration will:
- Export metrics and traces to the Causely mediator using the OTLP gRPC Exporter
- Scrape Prometheus metrics from pods that carry the standard prometheus.io scrape annotations (see the example below)
- Filter out internal spans using the Filter Processor
- Add Kubernetes attributes to spans and metrics using the Kubernetes Attributes Processor
- Batch spans and metrics using the Batch Processor
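For the prometheus/metrics receiver to scrape a workload, its pods must carry the prometheus.io annotations that the relabel rules above key on. A minimal sketch, with a placeholder pod name, image, and port:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                       # placeholder pod name
  annotations:
    prometheus.io/scrape: 'true'     # required: matched by the keep rule above
    prometheus.io/path: '/metrics'   # optional: path serving Prometheus metrics
    prometheus.io/port: '7778'       # port serving Prometheus metrics
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:latest   # placeholder image
      ports:
        - containerPort: 7778
```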
OpenTelemetry Operator Configuration
If you are using the OpenTelemetry Operator for Kubernetes, you can use the following configuration:
```yaml
# Valid values are "daemonset" and "deployment".
# If set, agentCollector and standaloneCollector are ignored.
mode: 'deployment'

config:
  exporters:
    otlp/causely:
      endpoint: mediator.causely:4317
      compression: none
      tls:
        insecure: true

  processors:
    batch:
      timeout: 1s
    k8sattributes:
      auth_type: 'serviceAccount'
      passthrough: false
      extract:
        metadata:
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.container.name
          - k8s.deployment.name
          - k8s.namespace.name
          - k8s.node.name
          - k8s.pod.start_time
    # Optional: Filter out internal spans
    filter/ignore-internal:
      error_mode: ignore
      traces:
        span:
          - 'kind.string == "Internal"'

  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
    prometheus/metrics:
      config:
        scrape_configs:
          - job_name: 'prometheus-metrics'
            # Discover pods via the Kubernetes API; required for the
            # __meta_kubernetes_pod_* labels used below
            kubernetes_sd_configs:
              - role: pod
            relabel_configs:
              # Filter by pod annotations
              - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                action: keep
                regex: true
              # Configure metrics path
              - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
                action: replace
                target_label: __metrics_path__
                regex: (.+)
              # Configure metrics port from the prometheus.io/port annotation (e.g. 7778)
              - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
                action: replace
                regex: ([^:]+)(?::\d+)?;(\d+)
                replacement: $1:$2
                target_label: __address__
              # Map pod labels
              - action: labelmap
                regex: __meta_kubernetes_pod_label_(.+)
              # Extract metadata attributes
              # Pod name
              - source_labels: [__meta_kubernetes_pod_name]
                action: replace
                target_label: k8s.pod.name
              # Pod UID
              - source_labels: [__meta_kubernetes_pod_uid]
                action: replace
                target_label: k8s.pod.uid
              # Container name (set both container_name and k8s.container.name)
              - source_labels: [__meta_kubernetes_pod_container_name]
                action: replace
                target_label: container_name
              - source_labels: [__meta_kubernetes_pod_container_name]
                action: replace
                target_label: k8s.container.name
              # Deployment name (from controller)
              - source_labels: [__meta_kubernetes_pod_controller_name]
                action: replace
                target_label: k8s.deployment.name
              # Namespace
              - source_labels: [__meta_kubernetes_namespace]
                action: replace
                target_label: k8s.namespace.name
              # Node name
              - source_labels: [__meta_kubernetes_pod_node_name]
                action: replace
                target_label: k8s.node.name
              # Pod start time (creation timestamp)
              - source_labels: [__meta_kubernetes_pod_creation_timestamp]
                action: replace
                target_label: k8s.pod.start_time

  service:
    pipelines:
      metrics:
        exporters: [otlp/causely]
        processors: [k8sattributes, batch]
        receivers: [otlp, prometheus/metrics]
      traces:
        exporters: [otlp/causely]
        processors: [filter/ignore-internal, k8sattributes, batch]
        receivers: [otlp]

presets:
  kubernetesAttributes:
    enabled: true

podLabels:
  sidecar.istio.io/inject: 'disabled'
```
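Once this values file is saved as opentelemetry-values.yaml, the same Helm command from the Quick Start Guide applies it; using helm upgrade --install also works for updating an existing release:

```shell
helm upgrade --install opentelemetry-collector open-telemetry/opentelemetry-collector \
  --values ./opentelemetry-values.yaml
```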