Time-Travel Replay
Time-Travel Replay lets you step through a trace's execution after the fact. Instead of reproducing a bug manually, you can replay the exact sequence of operations that led to it - and modify inputs along the way to test fixes without redeploying your application.
How Replay Works
When you open a replay session, Glassbrain reconstructs the execution state at each point in the trace. It uses the captured inputs, outputs, and intermediate state from each span to create a step-by-step view of what happened.
You navigate through the replay using forward and backward controls. At each step, you see the current state of the execution: what inputs were received, what processing occurred, and what output was produced. For error traces, the replay highlights the exact step where the failure occurred.
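The stepping model described above can be sketched locally. This is an illustrative simulation, not Glassbrain's implementation; the span fields (`inputs`, `output`, `error`) and the `ReplaySession` class are assumptions made for the sketch.

```python
# Illustrative sketch of replay navigation: each step exposes a span's
# captured inputs and output, and flags the step where a failure occurred.
# Not the Glassbrain implementation; field names are assumptions.

class ReplaySession:
    def __init__(self, spans):
        self.spans = spans          # captured spans, in execution order
        self.current_step = 0

    def forward(self):
        if self.current_step < len(self.spans) - 1:
            self.current_step += 1
        return self.state()

    def backward(self):
        if self.current_step > 0:
            self.current_step -= 1
        return self.state()

    def state(self):
        span = self.spans[self.current_step]
        return {
            "step": self.current_step,
            "inputs": span["inputs"],
            "output": span.get("output"),
            "failed": span.get("error") is not None,
        }

spans = [
    {"inputs": {"user_id": "usr_456"}, "output": {"ok": True}, "error": None},
    {"inputs": {"query": "..."}, "output": None, "error": "timeout"},
]
session = ReplaySession(spans)
print(session.forward())  # the second step is where the failure occurred
```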
Replay sessions are created from existing traces. You can start a replay from the trace detail view in the dashboard or programmatically through the API.
Replay Modes
Glassbrain supports two replay modes, each suited to different debugging workflows.
Snapshot Mode
Snapshot mode replays the trace using only the data that was captured at the time of the original execution. No live calls are made to your application or external services. This mode is completely safe - it cannot trigger side effects or modify any data.
Use snapshot mode when you want to understand exactly what happened during the original execution without any risk of affecting your production systems. This is the default mode.
Live Mode
Live mode re-executes the trace against your actual application endpoints. When you modify inputs and step forward, Glassbrain sends the modified request to your application and captures the new response. This lets you test whether a fix would resolve the issue.
Important: Live mode makes real requests to your application. Only use live mode against development or staging environments. Glassbrain will display a confirmation dialog before starting a live replay session.
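One way to enforce the staging-only rule in automated workflows is a guard that rejects live replays against unrecognized hosts. This is a hypothetical safeguard on the caller's side, not part of the Glassbrain API; the allowlist contents and function name are assumptions.

```python
# Hypothetical caller-side guard before starting a live replay session:
# only allow live mode against known development/staging hosts.
from urllib.parse import urlparse

SAFE_HOSTS = {"staging.example.com", "localhost", "127.0.0.1"}  # assumed allowlist

def assert_safe_live_target(base_url: str) -> None:
    host = urlparse(base_url).hostname
    if host not in SAFE_HOSTS:
        raise ValueError(
            f"refusing live replay against {host!r}: not a dev/staging host"
        )

assert_safe_live_target("https://staging.example.com/api")  # passes silently
```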
What You Can Modify
During a replay session, you can modify several aspects of the trace execution to test alternative scenarios:
Request Inputs
Change the request body, query parameters, or headers that were sent to an API endpoint. This lets you test whether different input values would have produced a successful result.
Function Arguments
Modify the arguments passed to a specific function in the call chain. Useful for testing edge cases like null values, empty arrays, or values outside expected ranges.
Environment Variables
Override environment variables for the duration of the replay. This helps test configuration-dependent behavior without changing your actual environment.
Mock Responses
Replace the response from an external service with a custom value. In live mode, this intercepts the real response. In snapshot mode, this replaces the recorded response data.
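The four modification categories above can be combined in a single `modifications` object. In this sketch, the `inputs` and `mock_responses` keys follow the Replay API example on this page; the `function_args` and `env_vars` keys, and all span IDs and values, are assumptions shown only to illustrate the shape.

```python
import json

# Sketch of a combined modifications payload. `inputs` and `mock_responses`
# match the documented API example; `function_args` and `env_vars` are
# assumed key names for illustration.
modifications = {
    "inputs": {
        "request_body": {"user_id": "usr_456", "include_deleted": False},
        "headers": {"x-feature-flag": "new-path"},
    },
    "function_args": {"span_id_parse_user": {"args": [None]}},  # edge case: null arg
    "env_vars": {"FEATURE_CACHE": "off"},                       # config override
    "mock_responses": {
        "span_id_database_query": {"rows": []}                  # empty result set
    },
}

payload = {
    "trace_id": "trc_a1b2c3d4e5f6",
    "mode": "snapshot",
    "modifications": modifications,
}
print(json.dumps(payload, indent=2))
```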
Replay API
You can create replay sessions programmatically using the Glassbrain API. This is useful for automated testing workflows where you want to replay traces with different inputs as part of a CI/CD pipeline.
curl -X POST https://glassbrain.dev/api/v1/replay \
  -H "Content-Type: application/json" \
  -H "x-api-key: your_api_key_here" \
  -d '{
    "trace_id": "trc_a1b2c3d4e5f6",
    "mode": "snapshot",
    "modifications": {
      "inputs": {
        "request_body": {
          "user_id": "usr_456",
          "include_deleted": false
        }
      },
      "mock_responses": {
        "span_id_database_query": {
          "rows": [
            { "id": 1, "name": "Alice", "active": true }
          ]
        }
      }
    }
  }'

The API returns a replay session object with a unique session ID. You can then step through the replay, inspect state at each step, and view the results:
{
  "id": "rpl_f1e2d3c4b5a6",
  "trace_id": "trc_a1b2c3d4e5f6",
  "mode": "snapshot",
  "status": "ready",
  "total_steps": 8,
  "current_step": 0,
  "modifications_applied": 2,
  "created_at": "2026-04-03T14:30:00.000Z",
  "expires_at": "2026-04-03T15:30:00.000Z"
}

Replay sessions expire after 1 hour. See the API Reference for the complete replay endpoint documentation.
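A client consuming this response might check the session status and compute its remaining lifetime before stepping through it. The field names below match the response shown above; the `ttl_seconds` helper is an illustrative assumption, not part of any Glassbrain SDK.

```python
from datetime import datetime

# Parse the session object returned by POST /api/v1/replay and compute
# its lifetime. `ttl_seconds` is an illustrative helper, not an SDK call.
session = {
    "id": "rpl_f1e2d3c4b5a6",
    "status": "ready",
    "total_steps": 8,
    "current_step": 0,
    "created_at": "2026-04-03T14:30:00.000Z",
    "expires_at": "2026-04-03T15:30:00.000Z",
}

def ttl_seconds(s: dict) -> float:
    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"
    created = datetime.strptime(s["created_at"].replace("Z", "+0000"), fmt)
    expires = datetime.strptime(s["expires_at"].replace("Z", "+0000"), fmt)
    return (expires - created).total_seconds()

print(ttl_seconds(session))  # 3600.0: sessions expire after 1 hour
```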
Use Cases
Debugging Production Errors
When a user reports a bug, find the corresponding trace in your dashboard and start a snapshot replay. Step through the execution to see exactly where things went wrong. You can inspect the state at each step without needing to reproduce the bug locally or add additional logging.
Testing Alternative Inputs
When you suspect that a specific input value caused a failure, modify the input in the replay and step through to see if the alternative value produces a different result. This is faster than writing a test case, deploying, and verifying - especially for complex, multi-step operations.
Verifying Fixes Before Deployment
After implementing a fix in your staging environment, use live mode to replay the original failing trace against your updated code. If the replay completes without error, you have confidence that the fix addresses the original issue.
Automated Regression Testing
Use the replay API in your CI/CD pipeline to automatically replay known failure traces against new code. If a previously failing trace now succeeds, the regression is fixed. If a previously successful trace now fails, you have caught a new regression.
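A regression check along these lines can be expressed as a small function that replays each known trace and sorts the results. The replay call is injected here so the control flow can be shown without a live server; the `"completed"` status value and the stubbed responses are assumptions for illustration.

```python
# Sketch of a CI regression check over known failure traces. `run_replay`
# stands in for a call to the Replay API; the "completed" status value
# is an assumption, not a documented field value.

def check_regressions(trace_ids, run_replay):
    """Replay each trace; return (fixed, still_failing) trace-id lists."""
    fixed, still_failing = [], []
    for trace_id in trace_ids:
        result = run_replay(trace_id)  # e.g. create a session, then step to the end
        if result["status"] == "completed":
            fixed.append(trace_id)
        else:
            still_failing.append(trace_id)
    return fixed, still_failing

# Stubbed replay outcomes standing in for real API responses
outcomes = {"trc_aaa": "completed", "trc_bbb": "error"}
fixed, failing = check_regressions(
    ["trc_aaa", "trc_bbb"], lambda t: {"status": outcomes[t]}
)
print(fixed, failing)  # ['trc_aaa'] ['trc_bbb']
```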