# Traces
Distributed tracing to analyze latency, detect slow spans, and find N+1 queries
Cased Telemetry captures distributed traces via the Sentry SDK or OpenTelemetry (through cased-agent). Use the `cased perf` commands to analyze latency, find slow spans, detect N+1 query patterns, and track performance regressions.
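If a service is not instrumented yet, the sketch below shows a minimal OpenTelemetry setup in Python. This is a generic OTel example rather than Cased-specific configuration; the service name and the OTLP endpoint (assumed here to be a cased-agent or other collector listening on `localhost:4317`) are placeholders to replace with your own values.

```python
# Minimal OpenTelemetry tracing setup (generic OTel SDK usage, not Cased-specific).
# Assumption: an OTLP collector such as cased-agent is listening on localhost:4317.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider(resource=Resource.create({"service.name": "api"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Spans created here are exported to the collector and become visible to `cased perf`.
with tracer.start_as_current_span("load-users"):
    ...  # application code
```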
## Quick Start

```bash
# Find slow spans (>500ms)
cased perf slow --since 1h

# View latency percentiles
cased perf latency --since 1h --service api

# Detect N+1 query patterns
cased perf n1 --since 1h

# Check for regressions
cased perf regression --service api
```

## Commands

### Find Slow Spans
Identify spans that exceed a latency threshold:

```bash
# Find spans slower than 500ms (default)
cased perf slow --since 1h

# Custom threshold (1 second)
cased perf slow --since 24h --threshold 1000

# Filter by service
cased perf slow --service api --since 1h
```

Options:
| Option | Description | Default |
|---|---|---|
| --since | Time range (1h, 24h, 7d) | 1h |
| --threshold | Minimum duration in ms | 500 |
| --service | Filter by service name | - |
| --limit | Max results | 50 |
### Latency Percentiles

View p50, p95, p99 latencies grouped by service or endpoint:

```bash
# Overall latency by service
cased perf latency --since 1h

# Group by endpoint
cased perf latency --service api --group-by endpoint

# Both service and endpoint
cased perf latency --since 24h --group-by both
```

Output:

```
Service      p50      p95      p99      Count
─────────────────────────────────────────────────
api          45ms     120ms    350ms    12,450
worker       230ms    890ms    1.2s     3,200
gateway      12ms     35ms     78ms     45,000
```

Options:
| Option | Description | Default |
|---|---|---|
| --since | Time range | 1h |
| --service | Filter by service | - |
| --group-by | service, endpoint, or both | service |
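As a reminder of what the columns mean: p95 is the duration below which 95% of spans in the window completed. A rough nearest-rank calculation over raw durations (just for intuition; not how Cased aggregates) looks like this:

```python
# Nearest-rank percentiles over a list of span durations in milliseconds.
import math

durations_ms = [12, 18, 25, 40, 45, 52, 80, 120, 200, 350]  # example data

def percentile(values, p):
    """Smallest value that is >= p percent of the observations."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

for p in (50, 95, 99):
    print(f"p{p}: {percentile(durations_ms, p)}ms")
```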
### Detect N+1 Queries

Find repeated similar operations that indicate N+1 query patterns:

```bash
# Find N+1 patterns
cased perf n1 --since 1h

# Require more repetitions to flag
cased perf n1 --min-count 10
```

Output:

```
Pattern                                 Count    Trace Example
────────────────────────────────────────────────────────────
SELECT * FROM users WHERE id = ?        47       abc123
SELECT * FROM orders WHERE user_id      23       def456
```

Options:
| Option | Description | Default |
|---|---|---|
| --since | Time range | 1h |
| --min-count | Minimum repetitions to flag | 5 |
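For context, the detector flags the classic pattern of issuing one query per item in a loop instead of a single batched query. Below is a self-contained sketch using an in-memory SQLite database as a stand-in for your datastore (illustrative only; Cased detects this from repeated span patterns, not by reading your code):

```python
# Illustration of the access pattern an N+1 detector flags, using in-memory SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users (id) VALUES (1), (2), (3);
    INSERT INTO orders (user_id) VALUES (1), (1), (2), (3);
""")

# N+1: one query for the users, then one nearly identical query per user.
# Each loop iteration emits another "SELECT * FROM orders WHERE user_id = ?" span.
users = conn.execute("SELECT id FROM users").fetchall()
for (user_id,) in users:
    conn.execute("SELECT * FROM orders WHERE user_id = ?", (user_id,)).fetchall()

# Batched alternative: one query fetches every user's orders at once.
placeholders = ",".join("?" * len(users))
all_orders = conn.execute(
    f"SELECT * FROM orders WHERE user_id IN ({placeholders})",
    [user_id for (user_id,) in users],
).fetchall()
```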
### Trace Breakdown

Get a detailed service breakdown for a specific trace:

```bash
cased perf breakdown <trace_id>
```

Output:

```
Trace: abc123def456
Total Duration: 1.2s

Service        Duration    % of Total    Spans
──────────────────────────────────────────────────
database       680ms       56.7%         12
api            320ms       26.7%         3
redis          150ms       12.5%         8
external-api   50ms        4.2%          1
```

### Detect Regressions
Compare recent performance against a baseline period:

```bash
# Compare last day to last week
cased perf regression --service api

# Custom periods
cased perf regression --service api --baseline 7d --compare 1d

# Filter by endpoint
cased perf regression --service api --endpoint /api/users
```

Output:

```
Endpoint        Baseline p95    Current p95    Change
────────────────────────────────────────────────────────────
/api/users      45ms            120ms          +167% ⚠️
/api/orders     230ms           245ms          +7%
/api/health     5ms             5ms            0%
```

Options:
| Option | Description | Default |
|---|---|---|
| --service | Service to analyze (required) | - |
| --endpoint | Filter by endpoint | - |
| --baseline | Baseline period | 7d |
| --compare | Comparison period | 1d |
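The Change column appears to be the relative change in p95 versus the baseline period, i.e. `(current - baseline) / baseline`; for example, the 45ms → 120ms row above works out to +167%:

```python
# Relative p95 change versus the baseline, matching the sample row above.
baseline_p95_ms = 45
current_p95_ms = 120

change_pct = (current_p95_ms - baseline_p95_ms) / baseline_p95_ms * 100
print(f"{change_pct:+.0f}%")  # +167%
```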
### Performance Summary

Get an overall summary of system performance:

```bash
cased perf summary --since 1h
```

Output:

```
Performance Summary (last 1h)
─────────────────────────────
Total Traces: 45,230
Total Spans:  234,500
Error Rate:   0.3%

Latency (all services):
  p50: 34ms
  p95: 180ms
  p99: 450ms

Slowest Services:
  1. database      avg: 89ms
  2. external-api  avg: 67ms
  3. worker        avg: 45ms
```

## API Endpoints
All performance data is also available via REST API:

### Slow Spans

```bash
curl -H "Authorization: Token YOUR_API_KEY" \
  "https://app.cased.com/api/v1/telemetry/traces/slow?since=1h&threshold=500"
```

### Latency Percentiles

```bash
curl -H "Authorization: Token YOUR_API_KEY" \
  "https://app.cased.com/api/v1/telemetry/traces/latency?since=1h&group_by=service"
```

### N+1 Detection

```bash
curl -H "Authorization: Token YOUR_API_KEY" \
  "https://app.cased.com/api/v1/telemetry/traces/n1?since=1h&min_count=5"
```
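If you are calling these endpoints from code rather than curl, a minimal client might look like the sketch below. The URL, auth header, and query parameters mirror the curl examples above; the exact JSON response shape is not documented here, so the sketch only prints the payload.

```python
# Minimal sketch of calling the slow-spans endpoint programmatically.
# URL, header, and parameters mirror the curl examples above; nothing is
# assumed about the response beyond it being valid JSON.
import os
import requests

API_KEY = os.environ["CASED_API_KEY"]  # env var name chosen for illustration
BASE_URL = "https://app.cased.com/api/v1/telemetry"

response = requests.get(
    f"{BASE_URL}/traces/slow",
    headers={"Authorization": f"Token {API_KEY}"},
    params={"since": "1h", "threshold": 500},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # inspect the payload before depending on specific fields
```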
## Investigation Workflow

A typical performance investigation workflow:

1. Start with `summary` to get an overview:

   ```bash
   cased perf summary --since 1h
   ```

2. Check for regressions if latency increased:

   ```bash
   cased perf regression --service api
   ```

3. Find slow spans to identify bottlenecks:

   ```bash
   cased perf slow --service api --since 1h
   ```

4. Check for N+1 queries if the database is slow:

   ```bash
   cased perf n1 --since 1h
   ```

5. Drill into a specific trace for details:

   ```bash
   cased perf breakdown <trace_id>
   ```
## Alerting

Set up alerts based on performance thresholds using Cased workflows.