
Traces

Distributed tracing to analyze latency, detect slow spans, and find N+1 queries

Cased Telemetry captures distributed traces via the Sentry SDK or OpenTelemetry (through cased-agent). Analyze latency, find slow spans, detect N+1 query patterns, and track performance regressions.
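If your services export spans through OpenTelemetry, they typically point the SDK's OTLP exporter at cased-agent. A minimal sketch, assuming cased-agent is listening for OTLP locally on the standard gRPC port (4317) and that `./start-server.sh` is a placeholder for your own application start command:

# Standard OpenTelemetry SDK environment variables; the endpoint below
# assumes a locally running cased-agent accepting OTLP on port 4317 --
# check your cased-agent configuration for the actual address.
export OTEL_SERVICE_NAME="api"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"

# Start the instrumented application as usual (placeholder command);
# the SDK reads these variables and ships spans to cased-agent.
./start-server.sh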

# Find slow spans (>500ms)
cased perf slow --since 1h
# View latency percentiles
cased perf latency --since 1h --service api
# Detect N+1 query patterns
cased perf n1 --since 1h
# Check for regressions
cased perf regression --service api

Identify spans that exceed a latency threshold:

# Find spans slower than 500ms (default)
cased perf slow --since 1h
# Custom threshold (1 second)
cased perf slow --since 24h --threshold 1000
# Filter by service
cased perf slow --service api --since 1h

Options:

Option        Description                   Default
───────────────────────────────────────────────────
--since       Time range (1h, 24h, 7d)      1h
--threshold   Minimum duration in ms        500
--service     Filter by service name        -
--limit       Max results                   50

View p50, p95, p99 latencies grouped by service or endpoint:

# Overall latency by service
cased perf latency --since 1h
# Group by endpoint
cased perf latency --service api --group-by endpoint
# Both service and endpoint
cased perf latency --since 24h --group-by both

Output:

Service    p50      p95      p99      Count
─────────────────────────────────────────────────
api        45ms     120ms    350ms    12,450
worker     230ms    890ms    1.2s     3,200
gateway    12ms     35ms     78ms     45,000

Options:

Option        Description                   Default
───────────────────────────────────────────────────
--since       Time range                    1h
--service     Filter by service             -
--group-by    service, endpoint, or both    service

Find repeated similar operations that indicate N+1 query patterns:

# Find N+1 patterns
cased perf n1 --since 1h
# Require more repetitions to flag
cased perf n1 --min-count 10

Output:

Pattern                                    Count    Trace Example
────────────────────────────────────────────────────────────
SELECT * FROM users WHERE id = ?           47       abc123
SELECT * FROM orders WHERE user_id = ?     23       def456

Options:

Option        Description                   Default
───────────────────────────────────────────────────
--since       Time range                    1h
--min-count   Minimum repetitions to flag   5
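Each flagged pattern includes an example trace, so you can jump straight to the service breakdown command described below using the trace ID from the output (abc123 in the sample above):

cased perf breakdown abc123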

Get a detailed service breakdown for a specific trace:

cased perf breakdown <trace_id>

Output:

Trace: abc123def456
Total Duration: 1.2s
Service        Duration    % of Total    Spans
──────────────────────────────────────────────────
database       680ms       56.7%         12
api            320ms       26.7%         3
redis          150ms       12.5%         8
external-api   50ms        4.2%          1

Compare recent performance against a baseline period:

# Compare last day to last week
cased perf regression --service api
# Custom periods
cased perf regression --service api --baseline 7d --compare 1d
# Filter by endpoint
cased perf regression --service api --endpoint /api/users

Output:

Endpoint       Baseline p95    Current p95    Change
────────────────────────────────────────────────────────────
/api/users     45ms            120ms          +167% ⚠️
/api/orders    230ms           245ms          +7%
/api/health    5ms             5ms            0%

Options:

Option        Description                      Default
──────────────────────────────────────────────────────
--service     Service to analyze (required)    -
--endpoint    Filter by endpoint               -
--baseline    Baseline period                  7d
--compare     Comparison period                1d
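Because regressions are flagged directly in the output (the ⚠️ marker above), the command can also back a simple pre-deploy gate. A minimal sketch, assuming the plain-text output includes that marker for regressed endpoints, as in the example:

# Block a deploy when any endpoint of the api service shows a flagged regression.
# Assumes the CLI prints the ⚠️ marker shown above for regressed endpoints;
# adjust the match if your output format differs.
if cased perf regression --service api | grep -q "⚠️"; then
  echo "Latency regression detected for api -- blocking deploy"
  exit 1
fi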

Get an overall summary of system performance:

cased perf summary --since 1h

Output:

Performance Summary (last 1h)
─────────────────────────────
Total Traces: 45,230
Total Spans:  234,500
Error Rate:   0.3%
Latency (all services):
  p50: 34ms
  p95: 180ms
  p99: 450ms
Slowest Services:
  1. database      avg: 89ms
  2. external-api  avg: 67ms
  3. worker        avg: 45ms

All performance data is also available via the REST API:

# Slow spans
curl -H "Authorization: Token YOUR_API_KEY" \
  "https://app.cased.com/api/v1/telemetry/traces/slow?since=1h&threshold=500"

# Latency percentiles
curl -H "Authorization: Token YOUR_API_KEY" \
  "https://app.cased.com/api/v1/telemetry/traces/latency?since=1h&group_by=service"

# N+1 patterns
curl -H "Authorization: Token YOUR_API_KEY" \
  "https://app.cased.com/api/v1/telemetry/traces/n1?since=1h&min_count=5"

A typical performance investigation workflow:

  1. Start with summary to get an overview:

     cased perf summary --since 1h

  2. Check for regressions if latency increased:

     cased perf regression --service api

  3. Find slow spans to identify bottlenecks:

     cased perf slow --service api --since 1h

  4. Check for N+1 queries if database is slow:

     cased perf n1 --since 1h

  5. Drill into a specific trace for details:

     cased perf breakdown <trace_id>

Set up alerts based on performance thresholds using Cased workflows.
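For a lightweight complement to workflows, the REST endpoints above can also back a simple scheduled threshold check (cron, CI, etc.). A minimal sketch; the `services`, `service`, and `p95` fields are assumptions about the latency response shape, and the notification step is left as a placeholder:

#!/usr/bin/env bash
# Alert when the api service's p95 latency over the last hour exceeds 500ms.
# The .services[].service and .p95 fields are assumed names -- verify them
# against an actual response before relying on this check.
P95=$(curl -s -H "Authorization: Token YOUR_API_KEY" \
  "https://app.cased.com/api/v1/telemetry/traces/latency?since=1h&group_by=service" \
  | jq -r '.services[] | select(.service == "api") | .p95')

if [ "${P95%.*}" -gt 500 ] 2>/dev/null; then
  echo "api p95 latency is ${P95}ms over the last hour (threshold: 500ms)"
  # send this message to your alerting channel of choice
fi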