Confluent

Overview

Causely provides native integration with Confluent Cloud to help you identify and resolve streaming platform performance issues before they impact your users.

Instead of just monitoring symptoms, Causely analyzes real-time signals from Confluent's Telemetry API to surface the actual root causes of streaming problems.

The integration supports Confluent Cloud environments and clusters, with comprehensive monitoring of Kafka topics, consumer groups, and cluster health metrics, and uses these signals to pinpoint the root causes of streaming issues.

Setup Guide

Step 1: Create API Key and Secret

Create an API key and secret for your Confluent Cloud account with the necessary permissions:

  1. Log in to the Confluent Cloud Console
  2. Navigate to API Keys in the Administration section
  3. Click Create key
  4. Select Cloud resource management as the scope
  5. Choose My account as the owner
  6. Generate and securely store the API key and secret

Required Permissions:

  • Read access to clusters, topics, and metrics
  • Access to Telemetry API for metrics collection
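
Before wiring the credentials into Kubernetes, you can sanity-check them against the Telemetry API. The sketch below assumes the current public Metrics API v2 layout (dataset cloud, metric descriptors endpoint); a successful response confirms the key has the metrics access listed above.

# Placeholders: substitute the key and secret generated above
export CONFLUENT_CLOUD_KEY="<your-api-key>"
export CONFLUENT_CLOUD_SECRET="<your-api-secret>"

# List the available metric descriptors; an HTTP 200 with a JSON body
# confirms the credentials can reach the Telemetry API
curl --silent --fail \
  --user "$CONFLUENT_CLOUD_KEY:$CONFLUENT_CLOUD_SECRET" \
  "https://api.telemetry.confluent.cloud/v2/metrics/cloud/descriptors/metrics"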

Step 2: Create a Kubernetes Secret

Create a Kubernetes secret containing your Confluent Cloud credentials:

kubectl create secret generic confluent-secret \
  --namespace causely \
  --from-literal=CONFLUENT_CLOUD_KEY="<your-api-key>" \
  --from-literal=CONFLUENT_CLOUD_SECRET="<your-api-secret>"

Alternatively, you can create the secret using a YAML manifest:

apiVersion: v1
kind: Secret
metadata:
  name: confluent-secret
  namespace: causely
type: Opaque
stringData:
  CONFLUENT_CLOUD_KEY: '<your-api-key>'
  CONFLUENT_CLOUD_SECRET: '<your-api-secret>'
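
Whichever method you use, you can confirm the secret exists and contains both keys before continuing. describe prints the key names and sizes without revealing the values:

kubectl --namespace causely describe secret confluent-secret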

Step 3: Update Causely Configuration

Enable the Confluent scraper in your Causely configuration by adding the following to your values.yaml file:

Basic Configuration

scrapers:
  confluent:
    enabled: true
    secretName: confluent-secret
    environments:
      - dev
      - prod
    clusters:
      - dev-cluster
      - prod-cluster
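
Apply the updated values to your Causely installation for the scraper to take effect. If you installed Causely with Helm, an upgrade along the following lines would pick up the change; the release name and chart reference here are placeholders for whatever you used at install time.

# <causely-chart> is a placeholder for the chart reference used when
# Causely was originally installed
helm upgrade causely <causely-chart> \
  --namespace causely \
  --values values.yaml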

Alternative: Enable Credentials Autodiscovery

Causely also supports credentials autodiscovery. This feature allows you to add new scraping targets without updating the Causely configuration. Label the Kubernetes secret to enable autodiscovery for the corresponding scraper.

kubectl --namespace causely label secret confluent-secret "causely.ai/scraper=Confluent"
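
For example, credentials for an additional Confluent Cloud account can be added later by creating and labeling another secret; the secret name below is only illustrative, and Causely discovers it without a values.yaml change.

# Create a second set of credentials (name is illustrative)
kubectl create secret generic confluent-secret-staging \
  --namespace causely \
  --from-literal=CONFLUENT_CLOUD_KEY="<staging-api-key>" \
  --from-literal=CONFLUENT_CLOUD_SECRET="<staging-api-secret>"

# Label it so the Confluent scraper picks it up automatically
kubectl --namespace causely label secret confluent-secret-staging \
  "causely.ai/scraper=Confluent"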

What Data is Collected

The Confluent scraper collects comprehensive metadata and performance information from your Confluent Cloud deployment, including:

  • Cluster entities with display names, environment labels, and Kafka bootstrap endpoints (port 9092)
  • Topic entities with cluster relationships and labels (Environment, ClusterId, ClusterName, Topic)
  • Consumer lag metrics (io.confluent.kafka.server/consumer_lag_offsets) collected from the Telemetry API (see the example query after this list)
  • TopicAccess entity updates with real-time lag attributes for existing consumer group mappings
  • Environment and cluster discovery based on configuration filters
  • Network endpoint mapping for Kafka bootstrap server connectivity
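
To see the consumer lag signal referenced above outside of Causely, you can query the same metric from the Telemetry API yourself. The request below is a sketch that follows the public Metrics API v2 query format; the cluster ID (lkc-xxxxx) and time interval are placeholders, and the exact label names may vary between API versions.

# Sum consumer lag per consumer group for one cluster over one hour;
# lkc-xxxxx and the interval are placeholders
curl --silent \
  --user "$CONFLUENT_CLOUD_KEY:$CONFLUENT_CLOUD_SECRET" \
  --header "Content-Type: application/json" \
  --request POST \
  --data '{
    "aggregations": [
      { "metric": "io.confluent.kafka.server/consumer_lag_offsets", "agg": "SUM" }
    ],
    "filter": { "field": "resource.kafka.id", "op": "EQ", "value": "lkc-xxxxx" },
    "group_by": ["metric.consumer_group_id"],
    "granularity": "PT1M",
    "intervals": ["2024-01-01T00:00:00Z/PT1H"]
  }' \
  "https://api.telemetry.confluent.cloud/v2/metrics/cloud/query"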