Understanding Correlation

Cased uses a correlation system to connect anomalies with deployments, helping you quickly identify the root cause of issues. The correlation engine weighs multiple factors to determine how likely it is that an anomaly is related to a specific deployment.

Correlation Strength Explained

Very Strong (90-100% Confidence)

When we see:

  • Errors occurring in recently modified code
  • Issues happening during the deployment process
  • Clear stack trace matches with changed files

Example:

🔴 Very Strong Correlation
- Error in: users/authentication.rb:45
- File changed in deployment: users/authentication.rb
- Time: During deployment

Strong (80-89% Confidence)

When we see:

  • Errors in modified code shortly after deployment
  • Clear pattern of issues following deployment
  • Strong timing correlation

Example:

🟠 Strong Correlation
- Error in modified code
- Occurred 30 minutes after deployment
- Multiple instances of the same error

Moderate (60-79% Confidence)

When we see:

  • Issues during deployment window
  • Indirect code relationships
  • Timing suggests possible connection

Example:

🟡 Moderate Correlation
- Error in dependent code
- During deployment window
- No direct code match

Weak (Below 60% Confidence)

When we see:

  • Errors in unmodified code
  • Significant time gap after deployment
  • No clear code relationship

Example:

⚪ Weak Correlation
- Unrelated code area
- Hours after deployment
- Common error pattern
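
The four tiers above amount to a threshold mapping over a 0-100 confidence score. As a minimal sketch (the function name is illustrative, not part of Cased's API; the thresholds simply restate the documented ranges):

```python
def correlation_tier(confidence: float) -> str:
    """Map a 0-100 confidence score to the documented correlation tiers."""
    if confidence >= 90:
        return "Very Strong"   # 90-100%
    if confidence >= 80:
        return "Strong"        # 80-89%
    if confidence >= 60:
        return "Moderate"      # 60-79%
    return "Weak"              # below 60%
```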

Correlation Factors

1. Code Analysis

We analyze:

  • Modified files in deployment
  • Error stack traces
  • Function and method calls
  • Dependencies between files

Example of code matching:

Deployment Changes:
  - app/models/user.rb
  - app/controllers/auth_controller.rb

Error Stack:
  1. app/controllers/auth_controller.rb:67
  2. app/models/user.rb:123

Result: 🔴 Direct code match in changed files
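
The matching step above can be sketched as a set intersection between the deployment's changed files and the files referenced in the error stack. This is an illustration of the idea, assuming stack frames in the `path:line` form shown in the example, not Cased's actual implementation:

```python
def direct_code_match(changed_files: list[str], stack_frames: list[str]) -> list[str]:
    """Return changed files that appear directly in the error stack.

    Stack frames look like "app/models/user.rb:123"; strip the trailing
    line number before comparing paths.
    """
    frame_files = {frame.rsplit(":", 1)[0] for frame in stack_frames}
    return sorted(set(changed_files) & frame_files)
```

Applied to the example above, both `app/controllers/auth_controller.rb` and `app/models/user.rb` match, which is what drives the 🔴 result.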

2. Timing Analysis

We consider:

  • Deployment timestamp
  • Error occurrence time
  • Pattern of similar errors
  • Historical error rates

Example timeline:

2:00 PM - Deployment started
2:05 PM - First error occurred
2:10 PM - Error rate increased
2:15 PM - Deployment completed

Result: 🔴 Strong temporal correlation
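
One simple way to model the timing signal is a score that is highest during the deployment window and decays as the gap grows. The sketch below assumes a linear decay over a fixed window; the function and its `decay` parameter are hypothetical, shown only to make the idea concrete:

```python
from datetime import datetime, timedelta

def temporal_score(deploy_start: datetime, deploy_end: datetime,
                   error_time: datetime,
                   decay: timedelta = timedelta(hours=1)) -> float:
    """Return 1.0 for errors during the deployment window, decaying
    linearly to 0.0 over `decay` after the deployment completes."""
    if deploy_start <= error_time <= deploy_end:
        return 1.0
    if error_time < deploy_start:
        return 0.0  # errors before the deploy are not attributed to it
    elapsed = error_time - deploy_end
    return max(0.0, 1.0 - elapsed / decay)
```

In the timeline above, errors at 2:05 PM fall inside the 2:00-2:15 PM window and score 1.0, consistent with the 🔴 result.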

3. Error Volume

We track:

  • Normal error rates
  • Post-deployment spikes
  • Error frequency patterns
  • Error types and categories

Example analysis:

Baseline: 2 errors/hour
Post-deploy: 20 errors/hour
New Error Types: 2

Result: 🔴 Significant anomaly
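
A spike like the one above can be detected by comparing the post-deploy rate against a multiple of the baseline. This is a minimal sketch with an assumed 3x threshold; the function name and threshold are illustrative, not Cased's actual detection logic:

```python
def volume_anomaly(baseline_rate: float, observed_rate: float,
                   threshold: float = 3.0) -> bool:
    """Flag a spike when the observed error rate exceeds
    `threshold` times the baseline rate."""
    if baseline_rate <= 0:
        # Any errors against a zero baseline count as anomalous.
        return observed_rate > 0
    return observed_rate / baseline_rate >= threshold
```

With the example numbers, 20 errors/hour against a 2 errors/hour baseline is a 10x increase, well past the threshold, matching the 🔴 result.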

Custom Rules

Contact support to set up custom correlation rules tailored to your specific use case.