# Cased CLI

Query telemetry, monitor LLM usage, and analyze performance from your terminal.

The Cased CLI provides command-line access to your observability data. Query errors, traces, metrics, and LLM usage directly from your terminal.

## Installation

```sh
uv pip install cased-cli
```

## Authentication

The easiest way to authenticate is via your browser:

```sh
cased configure
```

This will:

1. Open your browser to the Cased login page
2. Prompt you to click "Authorize" after logging in
3. Save your token to `~/.config/cased/config.json`

This works in SSH sessions too: copy the URL printed in the terminal into a local browser.

Alternatively, set your API key as an environment variable:

```sh
export CASED_API_KEY=your_api_key_here
```

Get your API key from Settings.

Optionally set a custom API URL (defaults to https://app.cased.com):

```sh
export CASED_API_URL=https://app.cased.com
```
To check or reset your authentication:

```sh
# Check current auth status
cased configure

# Re-authenticate (get a new token)
cased configure --force

# Log out and remove saved credentials
cased logout
```
## Quick Examples

```sh
# Follow logs in real-time
cased logs -f

# View recent errors
cased errors --since 1h

# Check trace performance
cased traces --service api --status error

# Monitor LLM costs
cased llm cost --since 24h

# Find slow spans
cased perf slow --threshold 500

# Get overall stats
cased stats
```
## Commands

| Command | Description |
| --- | --- |
| `cased logs` | Query application logs |
| `cased errors` | Query error events |
| `cased traces` | Query distributed traces |
| `cased metrics` | Query container metrics |
| `cased stats` | Overall telemetry statistics |
| `cased clusters` | List clusters with telemetry |
| `cased sessions` | List AI agent sessions |
| `cased session` | Get session details |
| `cased sourcemaps` | Manage source maps |
| `cased perf` | Performance analysis |
| `cased llm` | LLM monitoring |

All commands support:

| Option | Description |
| --- | --- |
| `--json` | Output as JSON instead of formatted tables |
| `--help` | Show help for a command |

### cased logs

Query and tail application logs in real time.

```sh
# Last 100 logs (default)
cased logs

# Last 50 logs
cased logs --tail 50

# Follow logs in real-time (like docker logs -f)
cased logs -f

# Follow with initial context
cased logs -f --tail 20

# Filter by level
cased logs -l error
cased logs -l warn

# Filter by service
cased logs -s api-server

# Search in messages
cased logs -q "timeout"
cased logs -q "connection refused" --since 1h

# Show full timestamps
cased logs -t

# Combine filters
cased logs -f -l error -s api-server
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--tail, -n` | 100 | Number of lines to show |
| `--follow, -f` | - | Follow logs in real-time |
| `--timestamps, -t` | - | Show full timestamps |
| `--since` | 1h | Time range (1h, 24h, 7d) |
| `--level, -l` | - | Filter by level (trace, debug, info, warn, error, fatal) |
| `--service, -s` | - | Filter by service name |
| `--search, -q` | - | Search in log messages |

### cased errors

Query error events from your applications.

```sh
# Recent errors (last 24h)
cased errors

# Errors in the last hour
cased errors --since 1h

# Filter by severity
cased errors --level error
cased errors --level fatal

# Search in exception type/value
cased errors --search "KeyError"
cased errors --search "connection refused"

# Filter by project
cased errors --project my-project-id

# Output as JSON
cased errors --since 1h --json
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 24h | Time range (1h, 24h, 7d) |
| `--level` | - | Filter by level (error, warning, fatal) |
| `--search` | - | Search in exception type/value |
| `--project` | - | Filter by project ID |
| `--limit` | 50 | Max results |

### cased traces

Query distributed traces and spans.

```sh
# Recent traces
cased traces --since 1h

# Filter by service
cased traces --service api
cased traces --service worker

# Filter by status
cased traces --status error
cased traces --status ok

# Get all spans for a specific trace
cased traces --trace-id abc123def456

# Filter by cluster
cased traces --cluster prod-us-east
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 1h | Time range |
| `--service` | - | Filter by service name |
| `--status` | - | Filter by status (ok, error) |
| `--trace-id` | - | Get all spans for a trace |
| `--cluster` | - | Filter by cluster ID |
| `--limit` | 50 | Max results |

### cased metrics

Query container metrics from your clusters.

```sh
# Recent metrics
cased metrics --since 1h

# Filter by pod
cased metrics --pod my-pod-abc123

# Filter by metric type
cased metrics --metric cpu_percent
cased metrics --metric memory_bytes

# Filter by namespace
cased metrics --namespace production
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 1h | Time range |
| `--cluster` | - | Filter by cluster ID |
| `--namespace` | - | Filter by namespace |
| `--pod` | - | Filter by pod name |
| `--metric` | - | Filter by metric name |
| `--limit` | 50 | Max results |

### cased stats

Get overall telemetry statistics.

```sh
cased stats
cased stats --json
```

### cased clusters

List clusters sending telemetry data.

```sh
cased clusters
cased clusters --json
```

### cased sessions

List AI agent sessions.

```sh
# Recent sessions
cased sessions --since 24h

# Filter by status
cased sessions --status completed
cased sessions --status failed
cased sessions --status agent_running

# Filter by type
cased sessions --type root_cause_analysis
cased sessions --type deploy_monitor
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 24h | Time range |
| `--status` | - | Filter by status |
| `--type` | - | Filter by session type |
| `--limit` | 20 | Max results |

### cased session

Get details for a specific session.

```sh
# Basic details
cased session abc123

# Include execution logs
cased session abc123 --logs

# Include conversation history
cased session abc123 --conversation

# Full JSON output
cased session abc123 --json
```

### cased sourcemaps

Manage source maps for JavaScript error de-minification.

#### cased sourcemaps upload

Upload source maps for a release.

```sh
# Upload source maps
cased sourcemaps upload -p my-app -r v1.2.3 dist/*.map

# With URL prefix
cased sourcemaps upload -p my-app -r v1.2.3 --url-prefix "~/" build/*.map

# Using git SHA as release
cased sourcemaps upload -p my-app -r $GIT_SHA dist/*.map
```

#### cased sourcemaps list

List uploaded source maps.

```sh
# All source maps for a project
cased sourcemaps list -p my-app

# Filter by release
cased sourcemaps list -p my-app -r v1.2.3
```

#### cased sourcemaps delete

Delete source maps for a release.

```sh
# Interactive confirmation
cased sourcemaps delete -p my-app -r v1.2.3

# Skip confirmation
cased sourcemaps delete -p my-app -r v1.2.3 -y
```

### cased perf

Analyze trace performance, detect bottlenecks, and find regressions.

#### cased perf slow

Find spans exceeding a duration threshold.

```sh
# Find slow spans (>500ms default)
cased perf slow --since 1h

# Custom threshold (1 second)
cased perf slow --threshold 1000

# Filter by service
cased perf slow --service api --since 24h
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 1h | Time range |
| `--threshold` | 500 | Minimum duration in ms |
| `--service` | - | Filter by service |
| `--limit` | 50 | Max results |

#### cased perf latency

View latency percentiles (p50, p95, p99).

```sh
# By service
cased perf latency --since 1h

# By endpoint
cased perf latency --service api --group-by endpoint

# Both service and endpoint
cased perf latency --group-by both
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 1h | Time range |
| `--service` | - | Filter by service |
| `--group-by` | service | Group by: service, endpoint, both |

#### cased perf n1

Detect N+1 query patterns.

```sh
# Find N+1 patterns
cased perf n1 --since 1h

# Require more repetitions
cased perf n1 --min-count 10
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 1h | Time range |
| `--min-count` | 5 | Minimum repetitions to flag |
| `--limit` | 50 | Max results |

#### cased perf breakdown

Get service time breakdown for a trace.

```sh
cased perf breakdown <trace_id>
```

Shows where time was spent across different services in a trace.

#### cased perf regression

Detect performance regressions by comparing time periods.

```sh
# Compare last day to last week
cased perf regression --service api

# Custom periods
cased perf regression --service api --baseline 7d --compare 1d

# Filter by endpoint
cased perf regression --service api --endpoint /api/users
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--service` | (required) | Service to analyze |
| `--endpoint` | - | Filter by endpoint |
| `--baseline` | 7d | Baseline period |
| `--compare` | 1d | Comparison period |

#### cased perf summary

Get overall performance summary.

```sh
cased perf summary --since 1h
cased perf summary --since 24h
```

### cased llm

Track LLM usage, costs, latency, and errors.

#### cased llm usage

View token usage statistics.

```sh
# Usage by model
cased llm usage --since 24h

# Group by provider
cased llm usage --group-by provider

# Filter by model
cased llm usage --model gpt-4o
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 24h | Time range |
| `--model` | - | Filter by model |
| `--provider` | - | Filter by provider |
| `--group-by` | model | Group by: model, provider, both |
| `--limit` | 50 | Max results |

#### cased llm cost

View estimated LLM costs.

```sh
# Cost by model
cased llm cost --since 24h

# Filter by model
cased llm cost --model claude-sonnet-4

# Group by provider
cased llm cost --group-by provider
```

#### cased llm latency

View LLM latency percentiles.

```sh
cased llm latency --since 24h
cased llm latency --model gpt-4o
```

#### cased llm errors

View LLM error statistics.

```sh
# Error rates
cased llm errors --since 24h

# Filter by model
cased llm errors --model gpt-4o
```

#### cased llm summary

Get overall LLM usage summary.

```sh
cased llm summary --since 24h
cased llm summary --since 7d
```

Shows total calls, tokens, estimated costs, latency, and error rates.

#### cased llm sessions

View per-session LLM usage for multi-turn conversations.

```sh
# Sessions by cost (most expensive first)
cased llm sessions --sort-by cost

# Sessions by token usage
cased llm sessions --sort-by tokens

# Recent sessions
cased llm sessions --sort-by created --limit 10
```

Options:

| Option | Default | Description |
| --- | --- | --- |
| `--since` | 24h | Time range |
| `--sort-by` | created | Sort by: cost, calls, tokens, created |
| `--limit` | 20 | Max sessions |

## JSON Output

All commands support --json for machine-readable output:

```sh
# Pipe to jq for filtering
cased errors --since 1h --json | jq '.events[] | select(.level == "fatal")'

# Save to file
cased llm cost --since 7d --json > weekly-costs.json

# Use in scripts
COST=$(cased llm summary --since 7d --json | jq -r '.estimated_cost_usd')
echo "Weekly LLM cost: \$${COST}"
```
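Building on the scripting examples above, here is a sketch of a simple spend alert. The `LIMIT` threshold and the hard-coded `cost` value are illustrative; in a real script `cost` would come from `cased llm summary --since 7d --json | jq -r '.estimated_cost_usd'`, and a fixed number stands in here so the logic runs without the CLI.

```sh
# Sketch: alert when weekly LLM spend crosses a threshold.
# In practice the first line would be:
#   cost=$(cased llm summary --since 7d --json | jq -r '.estimated_cost_usd')
cost=42.75        # sample value standing in for live output
LIMIT=25          # hypothetical threshold in USD

# Compare as floats via awk; plain sh arithmetic is integer-only.
if awk -v c="$cost" -v l="$LIMIT" 'BEGIN { exit !(c > l) }'; then
  echo "ALERT: weekly LLM spend \$${cost} exceeds \$${LIMIT}"
fi
```

A script like this exits cleanly either way, so it is safe to run on a schedule and wire the echoed line into whatever notifier you already use.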

## Time Ranges

All --since options accept:

| Format | Example | Description |
| --- | --- | --- |
| Hours | 1h, 6h, 24h | Last N hours |
| Days | 1d, 7d, 30d | Last N days |
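Scripts that post-process query results sometimes need these ranges as absolute durations. A small helper can translate the shorthand into seconds; `since_to_seconds` is a hypothetical name, not part of the CLI, and only the hour and day units from the table are handled.

```sh
# Hypothetical helper: convert the 1h/24h/7d shorthand used by --since
# into seconds, e.g. for computing cutoff timestamps in scripts.
since_to_seconds() {
  n=${1%?}           # numeric part: "24h" -> "24"
  unit=${1#"$n"}     # unit suffix:  "24h" -> "h"
  case $unit in
    h) echo $((n * 3600)) ;;
    d) echo $((n * 86400)) ;;
    *) echo "unsupported unit: $unit" >&2; return 1 ;;
  esac
}

since_to_seconds 24h   # -> 86400
since_to_seconds 7d    # -> 604800
```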
## Exit Codes

| Code | Description |
| --- | --- |
| 0 | Success |
| 1 | Error (API error, connection error, invalid input) |
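Because the CLI uses conventional exit codes, shell scripts and CI steps can branch on them directly. The sketch below stubs `cased` with a shell function that always fails so the pattern can be run anywhere; drop the stub when the real binary is installed.

```sh
# Branch on the CLI's exit status (0 = success, 1 = error).
cased() { return 1; }   # stub: simulates an API error; remove in real use

if cased errors --since 1h --json > errors.json; then
  echo "errors fetched"
else
  echo "cased exited non-zero; skipping error report" >&2
fi
```

In a CI pipeline you would typically let the non-zero status fail the step instead of swallowing it, unless the telemetry check is advisory.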