
Checkly

Overview

Causely provides native integration with Checkly to help you identify and resolve external monitoring and synthetic testing issues before they impact your users.

Instead of just monitoring symptoms, Causely analyzes real-time signals from Checkly's global monitoring network to surface the actual root causes of external availability and performance problems.

This integration helps you identify common root causes of external availability and performance problems.

The integration supports Checkly's comprehensive monitoring platform, including API checks, browser checks, and global monitoring locations.

Setup Guide

Step 1: Create an API Key in Checkly

Create an API key for your Checkly account with the following permissions:

  • Read access to checks: Required to access check configurations and metadata
  • Read access to check results: Required to monitor check execution results and performance
  • Read access to locations: Required to access monitoring location information
  • Read access to alerts: Required to monitor alert configurations and status

For more information, see Checkly's documentation on API authentication.
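
Before adding the key to Causely, you can verify that it has the required read access. The example below is a quick sanity check, assuming Checkly's public API conventions (a bearer token plus an X-Checkly-Account header); adjust the values for your account:

export CHECKLY_API_KEY='cu_xxxxxxxxxxxx'
export CHECKLY_ACCOUNT_ID='aaaabbbb-cccc-dddd-eeee-ffffffffffff'

# A 200 response listing your checks confirms the key can read check data
curl -s \
  -H "Authorization: Bearer ${CHECKLY_API_KEY}" \
  -H "X-Checkly-Account: ${CHECKLY_ACCOUNT_ID}" \
  https://api.checklyhq.com/v1/checks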

Step 2: Configure Monitoring Locations

Ensure your Checkly checks are configured to run from multiple monitoring locations to provide comprehensive coverage:

  • Public locations: Checkly's global network of monitoring locations
  • Private locations: Your own monitoring infrastructure for internal services
  • Geographic distribution: Multiple regions for comprehensive coverage
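
To see which public location codes are available when spreading your checks across regions, you can query Checkly's API with the same credentials. The locations endpoint below is taken from Checkly's public API reference; verify it against the current docs:

# Lists Checkly's public monitoring locations (region codes and names)
curl -s \
  -H "Authorization: Bearer ${CHECKLY_API_KEY}" \
  -H "X-Checkly-Account: ${CHECKLY_ACCOUNT_ID}" \
  https://api.checklyhq.com/v1/locations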

Step 3: Update Causely Configuration

Once the API key is created, update the Causely configuration to enable scraping for Checkly checks. Below is an example configuration:

scrapers:
  checkly:
    enabled: true
    accounts:
      - account_id: 'aaaabbbb-cccc-dddd-eeee-ffffffffffff'
        token: 'xz_1234'

Alternative: Enable Credentials Autodiscovery

Causely also supports credentials autodiscovery, which lets you add new scraping targets without updating the Causely configuration. Label the Kubernetes secret that contains your Checkly credentials to enable autodiscovery for the corresponding scraper:

kubectl --namespace causely label secret checkly-credentials "causely.ai/scraper=Checkly"
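
The secret itself must exist before it can be labeled. A minimal sketch of creating it is shown below; the key names account_id and token are assumed to mirror the scraper configuration above, so confirm the expected format in the Causely configuration reference:

kubectl --namespace causely create secret generic checkly-credentials \
  --from-literal=account_id='aaaabbbb-cccc-dddd-eeee-ffffffffffff' \
  --from-literal=token='xz_1234'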

What Data is Collected

The Checkly scraper collects comprehensive synthetic monitoring and testing data from your Checkly checks, including:

Check configuration

  • Check information: ID, name, type (API checks), frequency, and locations
  • Check settings: Alert configurations, retry strategies, and timeout settings
  • Request details: HTTP method, URL, headers, body, and query parameters
  • Assertions and validations: Response validation rules and performance thresholds
  • Environment variables: Check-specific configuration and test data

Check results

  • Response times: Individual check response times and performance data
  • Success/failure rates: Track successful vs failed check executions
  • Error information: Detailed error messages and failure reasons
  • Location-based results: Performance data per monitoring location
  • Recent check history: Last hour of check results for trend analysis

Monitoring locations

  • Geographic distribution: Different monitoring locations and regions
  • Location metadata: Region names and availability zones
  • Multi-location testing: Results from various geographic points
  • Public vs private locations: Coverage from Checkly's network and your infrastructure

Service topology and causal analysis

  • Alert symptoms: RequestErrorRate_High and RequestDuration_High for proactive issue detection
  • Service mapping: Links Checkly checks to target services
  • Workload relationships: Associates monitoring locations with workloads
  • Network endpoint tracking: Maps checks to specific service endpoints
  • Access data collection: Creates network dependency mappings for causal analysis

Detailed request and response data

  • HTTP response details: Status codes, response headers, and body content
  • Request timing phases: Detailed breakdown of request/response timing
  • Assertion results: Validation rule pass/fail status and actual values
  • Error categorization: Network errors, timeout issues, and validation failures
  • Performance thresholds: Degraded and maximum response time monitoring

Performance and availability analysis

  • Performance trend analysis: Identifies gradual performance degradation
  • Threshold-based alerting: Proactive alerts based on configurable thresholds
  • Location-specific performance: Identifies regional performance differences
  • Global availability monitoring: Tracks service availability across regions
  • Regional outage detection: Identifies location-specific service issues
  • Performance correlation: Links geographic performance to infrastructure issues

Check health and status

  • Current check status: Real-time monitoring of check health
  • Status transitions: Tracks changes from healthy to degraded/failed states
  • Alert state management: Updates external alerts based on current performance
  • Trend analysis: Identifies patterns in check performance over time
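
Much of the per-check result data listed above mirrors what Checkly exposes through its API. If you want to inspect it directly, the hedged example below uses the check results endpoint from Checkly's public API reference (substitute a real check ID):

# Fetch the most recent results for one check across all of its locations
curl -s \
  -H "Authorization: Bearer ${CHECKLY_API_KEY}" \
  -H "X-Checkly-Account: ${CHECKLY_ACCOUNT_ID}" \
  "https://api.checklyhq.com/v1/check-results/<check-id>?limit=10"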