Customize your installation

After you have installed the Causely agent, you can customize your installation to meet your needs.

Use a Custom Values File

To specify additional configuration, such as a specific image tag, integrations, or webhook notifications, create a causely-values.yaml file. Below is an example of common values used for installation.

global:
  cluster_name: <my_cluster_name>
  image:
    tag: <version> # Locate latest version in the Causely Portal > Gear Icon > Integrations > Agents
mediator:
  gateway:
    token: <my_token> # Locate your token in the Causely Portal > Gear Icon > Integrations > Details

Now add the --values parameter to the command you used to install the Causely agent.

helm upgrade --install causely \
--create-namespace oci://us-docker.pkg.dev/public-causely/public/causely \
--version <version> \
--namespace=causely \
--values </path/to/causely-values.yaml>

Using a Kubernetes Secret for the Access Token

Instead of passing your access token directly in the Helm values, you can store it in a Kubernetes secret and reference it using mediator.secretName. This approach is recommended for production environments and GitOps workflows.

Step 1: Create the Kubernetes Secret

Create a secret containing your Causely access token:

kubectl create secret --namespace causely generic causely-token \
--from-literal=gateway-token=<your-access-token>
Note: The secret must contain a key named gateway-token with your access token value.
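For GitOps workflows you may prefer to declare the secret as a manifest rather than creating it imperatively. Kubernetes stores Secret data base64-encoded; the sketch below uses a made-up placeholder token to show the encoding step:

```shell
# Hypothetical token for illustration only -- substitute your real Causely access token
TOKEN='example-token-123'

# Kubernetes Secret manifests carry values base64-encoded under .data
echo -n "$TOKEN" | base64

# The encoded value goes into a manifest like:
#
#   apiVersion: v1
#   kind: Secret
#   metadata:
#     name: causely-token
#     namespace: causely
#   type: Opaque
#   data:
#     gateway-token: ZXhhbXBsZS10b2tlbi0xMjM=
```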

Step 2: Reference the Secret in Your Values File

Update your causely-values.yaml to reference the secret instead of the token directly:

mediator:
  secretName: causely-token # Name of the secret created above

When mediator.secretName is set, Causely will automatically read the token from the referenced secret instead of expecting it in mediator.gateway.token.
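Putting it together, a values file that uses the secret might look like the sketch below (cluster name and version are the same placeholders as above; note that mediator.gateway.token is omitted):

```yaml
global:
  cluster_name: <my_cluster_name>
  image:
    tag: <version>
mediator:
  secretName: causely-token # token is read from this secret; no gateway.token needed
```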

Connect Additional Telemetry Sources

You can connect additional data sources to Causely to expand its causality map and help identify root causes. Some data sources require updates to your causely-values.yaml file. For more details, see Telemetry Sources.

Push Insights into Your Workflows

Causely can automatically send identified root causes to your existing notification and observability tools. To explore integrations with Slack, Grafana, Opsgenie, and more, visit our Workflow Integrations page. You'll find setup instructions for webhooks (via causely-values.yaml) and details on our Grafana plugin, with additional native integrations coming soon.

Custom Labels for Scope configuration

You can scope the Causely interface to specific components out-of-the-box (clusters, namespaces, services).

Causely can also use labels to provide additional scopes. To enable this, add the following to your causely-values.yaml:

label_semconv:
  scopes:
    geography:
      - 'app.kubernetes.io/geography'
    environment:
      - 'app.kubernetes.io/environment'
    customer:
      - 'app.kubernetes.io/customer'
    team:
      - 'app.kubernetes.io/team'
    product:
      - 'app.kubernetes.io/product'
    project:
      - 'app.kubernetes.io/project'
    service:
      - 'app.kubernetes.io/service'

When configured, Causely will automatically detect entities with these labels and make them available as scope options in the UI.
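For example, a workload carrying these labels (the names and values here are illustrative) would surface under the team and environment scopes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout # illustrative workload name
  labels:
    app.kubernetes.io/team: payments
    app.kubernetes.io/environment: production
```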

Installing in a large Kubernetes cluster (>1000 nodes)

If you install Causely in a large Kubernetes cluster, you can enable centralized caching of API server responses by adding the following to the causely-values.yaml file:

k8sCache:
  enabled: true

Connecting to a remote Kubernetes cluster

If you are connecting to a remote Kubernetes API server, you need to create a secret with the kubeconfig credentials:

kubectl create secret --namespace causely generic kubeconfig --from-file=kubeconfig=path_to_kubeconfig

and include the following in the causely-values.yaml file:

scrapers:
  kubernetes:
    # remote k8s cluster credentials
    secretName: kubeconfig
agent:
  enabled: false

Deploying on OpenShift

If you are deploying on OpenShift, you need to use the group ID from the uid-range assigned to the project:

oc new-project causely
oc get ns causely -o yaml | grep uid-range
openshift.io/sa.scc.uid-range: 1000630000/10000

The range is formatted as <start>/<size>; use the first value as the group ID and include it in the causely-values.yaml file:

global:
  securityContext:
    fsGroup: 1000630000

Alternatively, you can grant the 'anyuid' SCC to the project's service accounts:

oc adm policy add-scc-to-group anyuid system:serviceaccounts:causely

In both cases, you need to assign the privileged SCC to the causely-agent service account used by the Causely agents:

oc adm policy add-scc-to-user privileged -z causely-agent -n causely

Using a custom StorageClass instead of the default one

If you are deploying into a cluster where no default StorageClass is defined, you can specify the StorageClass to use for persistent volumes:

global:
  storageClass: ocs-storagecluster-ceph-rbd

Alternatively, you can annotate an existing StorageClass to make it the default:

kubectl patch storageclass ocs-storagecluster-ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'