Getting Started

Glassbrain is a visual debugging engine for AI-powered applications. It captures every LLM call, tool invocation, and chain step in your app, then lets you replay and inspect them in a rich trace tree - so you can find and fix AI bugs in seconds instead of hours.

Overview

When your AI gives a wrong answer, the hardest part is figuring out why. Was it a bad prompt? A hallucinated tool call? A retrieval step that returned irrelevant context? Glassbrain answers that question instantly by giving you a full visual trace of every step your AI took.

With Glassbrain, you get:

  • Visual trace trees - see every LLM call, tool use, and chain step in a collapsible tree view
  • Time-travel replay - step through your AI execution frame by frame, forward and backward
  • AI fix suggestions - get automatic recommendations for prompt and configuration changes
  • Diff view - compare two traces side by side to see exactly what changed
  • One-line integration - works with OpenAI, Anthropic, LangChain, LlamaIndex, and any OpenTelemetry-compatible system

Quick Start

This guide walks you through the entire setup in under 5 minutes. By the end, you will have captured your first trace and viewed it in the Glassbrain dashboard.

Before you begin, you will need:

  1. A Glassbrain account - create one for free
  2. An API key from your Glassbrain project settings
  3. Node.js 18+ or Python 3.9+

Install the SDK

Glassbrain provides official SDKs for JavaScript and Python. Pick the one that matches your stack.

JavaScript / TypeScript

Terminal
npm install @glassbrain/js

Also available via yarn and pnpm: yarn add @glassbrain/js or pnpm add @glassbrain/js

Python

Terminal
pip install glassbrain

Compatible with Python 3.9 and above. We recommend using a virtual environment.
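One way to set up that virtual environment (a sketch; the directory name .venv is just a common convention) and confirm your interpreter meets the 3.9+ requirement before running the pip command above inside it:

```shell
# Create an isolated environment, activate it, and verify the Python version.
python3 -m venv .venv
. .venv/bin/activate
python -c 'import sys; assert sys.version_info >= (3, 9), "Python 3.9+ required"'
```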

Initialize

Initialize the Glassbrain client at the entry point of your application. The SDK will automatically instrument supported LLM providers and capture traces.
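The initialization snippets below read credentials from environment variables. For local development, you can export them in your shell before starting your app (the values shown are placeholders; your real key and project ID come from your Glassbrain project settings):

```shell
export GLASSBRAIN_API_KEY="your-api-key"
export GLASSBRAIN_PROJECT_ID="your-project-id"
```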

JavaScript / TypeScript

src/index.ts
import { Glassbrain } from "@glassbrain/js";

Glassbrain.init({
  apiKey: process.env.GLASSBRAIN_API_KEY,
  projectId: process.env.GLASSBRAIN_PROJECT_ID,
});

Python

main.py
import os
import glassbrain

glassbrain.init(
    api_key=os.environ["GLASSBRAIN_API_KEY"],
    project_id=os.environ["GLASSBRAIN_PROJECT_ID"],
)

Call init() once, as early as possible. The SDK patches supported LLM client libraries at import time, so it must be initialized before you create any OpenAI or Anthropic client instances.
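To see what "patching a client library" means in practice, here is a minimal, self-contained sketch of the wrap-and-record pattern such an SDK might use. Everything in it (FakeClient, captured, the init function) is hypothetical and stands in for the real SDK internals:

```python
import functools

# Stand-in for an LLM client library (hypothetical, not the real SDK).
class FakeClient:
    def complete(self, prompt):
        return f"answer to {prompt!r}"

captured = []  # recorded trace spans

def init():
    """Wrap FakeClient.complete so every call is recorded before returning."""
    original = FakeClient.complete

    @functools.wraps(original)
    def traced(self, prompt):
        result = original(self, prompt)
        captured.append({"input": prompt, "output": result})
        return result

    FakeClient.complete = traced

init()  # patch before the client is used
client = FakeClient()
client.complete("hi")
print(len(captured))  # → 1: the call was captured
```

Calls made before init() runs would go through the unwrapped method and never be recorded, which is why initialization belongs at the very top of your entry point.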

Capture Your First Trace

After initialization, every LLM call in your application is automatically captured. Run the following example to generate a test trace.

JavaScript / TypeScript

test-trace.ts
import { Glassbrain } from "@glassbrain/js";
import OpenAI from "openai";

Glassbrain.init({
  apiKey: process.env.GLASSBRAIN_API_KEY,
  projectId: process.env.GLASSBRAIN_PROJECT_ID,
});

const openai = new OpenAI();

async function main() {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Explain quantum computing in one sentence." },
    ],
  });

  console.log(response.choices[0].message.content);
}

main();

Python

test_trace.py
import os
import glassbrain
from openai import OpenAI

glassbrain.init(
    api_key=os.environ["GLASSBRAIN_API_KEY"],
    project_id=os.environ["GLASSBRAIN_PROJECT_ID"],
)

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in one sentence."},
    ],
)

print(response.choices[0].message.content)

View in Dashboard

After running your test script, open the Glassbrain dashboard. Your trace will appear in the trace list within a few seconds.

Click on a trace to open the visual trace tree. From here, you can:

  • Expand and collapse individual nodes to inspect inputs and outputs
  • Use time-travel replay to step through execution frame by frame
  • View token counts, latency, and cost for each LLM call
  • Share a replay link with teammates
  • Get AI-powered fix suggestions if the trace contains errors

Next Steps

Now that you have Glassbrain running, explore these guides to go deeper: