Installation

Glassbrain provides official SDKs for JavaScript/TypeScript and Python. Both SDKs auto-instrument popular LLM providers so you can start capturing traces with a single function call.

JavaScript SDK

The JavaScript SDK supports Node.js 18+, Deno, and Bun. It works with both CommonJS and ES module projects.

Install

Terminal
# npm
npm install @glassbrain/js

# yarn
yarn add @glassbrain/js

# pnpm
pnpm add @glassbrain/js

Initialize

Import and call Glassbrain.init() at the very top of your application entry point, before importing any LLM client libraries. This lets the SDK monkey-patch supported providers.

src/index.ts
import { Glassbrain } from "@glassbrain/js";

Glassbrain.init({
  apiKey: process.env.GLASSBRAIN_API_KEY,
  projectId: process.env.GLASSBRAIN_PROJECT_ID,

  // Optional configuration
  environment: "production",    // defaults to "development"
  enableConsoleLogging: false,  // log trace IDs to console
  batchSize: 10,                // flush traces in batches of N
  flushIntervalMs: 5000,        // flush interval in milliseconds
});

Verify installation

Run a quick test to confirm traces are being sent to Glassbrain.

verify.ts
import { Glassbrain } from "@glassbrain/js";

Glassbrain.init({
  apiKey: process.env.GLASSBRAIN_API_KEY,
  projectId: process.env.GLASSBRAIN_PROJECT_ID,
  enableConsoleLogging: true,
});

// This will log the trace ID to the console
const trace = Glassbrain.startTrace("test-trace");
trace.log("Installation verified");
await trace.end();

console.log("Check your Glassbrain dashboard for the trace.");

Python SDK

The Python SDK supports Python 3.9 and above. It has zero required dependencies beyond the standard library, though it will automatically detect and instrument supported LLM libraries if they are installed.
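The "detect and instrument" step described above can be sketched with the standard library: `importlib.util.find_spec` checks whether a module is importable without actually importing it, and instrumentation typically wraps a client method so each call is reported before the original result is returned. This is an illustrative sketch of the general pattern, not the SDK's actual implementation; `FakeLLM` and the `calls` list are stand-ins invented for the example.

```python
import functools
import importlib.util


def is_installed(module_name: str) -> bool:
    """Return True if a module can be imported, without importing it."""
    return importlib.util.find_spec(module_name) is not None


def instrument(obj, method_name: str, on_call):
    """Replace obj.method_name with a wrapper that reports each call."""
    original = getattr(obj, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        result = original(*args, **kwargs)
        on_call(method_name)  # report the call to the tracing layer
        return result

    setattr(obj, method_name, wrapper)


# Illustrative stand-in for an LLM client class.
class FakeLLM:
    def complete(self, prompt):
        return f"echo: {prompt}"


calls = []
client = FakeLLM()
instrument(client, "complete", calls.append)

print(is_installed("json"))   # stdlib module, so True
print(client.complete("hi"))  # wrapper still returns the original result
print(calls)                  # the wrapper recorded the call
```

Because the check never imports the target library, a missing optional dependency costs nothing at startup; only libraries that are actually present get patched.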

Install

Terminal
# pip
pip install glassbrain

# poetry
poetry add glassbrain

# uv
uv add glassbrain

Initialize

Call glassbrain.init() at the start of your application, before creating any LLM client instances.

main.py
import os
import glassbrain

glassbrain.init(
    api_key=os.environ["GLASSBRAIN_API_KEY"],
    project_id=os.environ["GLASSBRAIN_PROJECT_ID"],

    # Optional configuration
    environment="production",       # defaults to "development"
    enable_console_logging=False,   # log trace IDs to console
    batch_size=10,                  # flush traces in batches of N
    flush_interval_s=5.0,           # flush interval in seconds
)

Verify installation

verify.py
import os
import glassbrain

glassbrain.init(
    api_key=os.environ["GLASSBRAIN_API_KEY"],
    project_id=os.environ["GLASSBRAIN_PROJECT_ID"],
    enable_console_logging=True,
)

# This will print the trace ID to the console
with glassbrain.start_trace("test-trace") as trace:
    trace.log("Installation verified")

print("Check your Glassbrain dashboard for the trace.")
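The `with` form above implies a context manager: the trace is ended automatically when the block exits, even if the body raises. A minimal sketch of that shape, using only the standard library (the `Trace` class here is illustrative, not the SDK's class):

```python
import contextlib
import uuid


class Trace:
    """Illustrative trace object: collects log lines, ends exactly once."""

    def __init__(self, name):
        self.name = name
        self.trace_id = uuid.uuid4().hex
        self.logs = []
        self.ended = False

    def log(self, message):
        self.logs.append(message)

    def end(self):
        self.ended = True


@contextlib.contextmanager
def start_trace(name):
    trace = Trace(name)
    try:
        yield trace
    finally:
        trace.end()  # runs even if the body raises


with start_trace("test-trace") as trace:
    trace.log("Installation verified")

print(trace.ended)  # → True
print(trace.logs)   # → ['Installation verified']
```

The `finally` clause is the important part: an exception inside the block still produces a complete, ended trace rather than a dangling one.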

Environment Variables

Both SDKs read the following environment variables. When they are set, you can call init() without passing any arguments.

GLASSBRAIN_API_KEY (required): Your project API key. Found in project settings.
GLASSBRAIN_PROJECT_ID (required): The project to send traces to. Found in project settings.
GLASSBRAIN_ENVIRONMENT (optional): Environment label (e.g. production, staging). Defaults to development.
GLASSBRAIN_BASE_URL (optional): Override the API endpoint. Useful for self-hosted deployments.
GLASSBRAIN_DEBUG (optional): Set to true to enable verbose debug logging from the SDK.

Add these to your .env file:

.env
GLASSBRAIN_API_KEY=gb_sk_your_api_key_here
GLASSBRAIN_PROJECT_ID=proj_your_project_id_here
GLASSBRAIN_ENVIRONMENT=development

Security note: Never commit your API key to version control. Add .env to your .gitignore file.
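The reason a bare init() works is a fallback chain: an explicit argument wins, then the environment variable, then the documented default. That precedence can be sketched like this (`resolve_config` is an illustrative stand-in, not the SDK's function):

```python
import os


def resolve_config(api_key=None, project_id=None, environment=None):
    """Explicit arguments win; otherwise fall back to env vars, then defaults."""
    resolved = {
        "api_key": api_key or os.environ.get("GLASSBRAIN_API_KEY"),
        "project_id": project_id or os.environ.get("GLASSBRAIN_PROJECT_ID"),
        "environment": environment
            or os.environ.get("GLASSBRAIN_ENVIRONMENT", "development"),
    }
    # The two required values have no default; fail loudly if both sources are empty.
    missing = [k for k in ("api_key", "project_id") if not resolved[k]]
    if missing:
        raise ValueError(f"Missing required config: {', '.join(missing)}")
    return resolved


# Simulate a configured environment (values are placeholders).
os.environ["GLASSBRAIN_API_KEY"] = "gb_sk_example"
os.environ["GLASSBRAIN_PROJECT_ID"] = "proj_example"

config = resolve_config()
print(config["environment"])  # → development (the documented default)
```

Note that only the two required values raise when absent; the optional ones quietly fall back to their defaults.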

Framework Setup

Below are recommended patterns for integrating Glassbrain into popular frameworks. The core principle is always the same: initialize the SDK as early as possible, before any LLM client is instantiated.

React / Next.js

In Next.js, initialize Glassbrain in your instrumentation.ts file. This runs once when the server starts and ensures all API routes and server components are instrumented.

instrumentation.ts
import { Glassbrain } from "@glassbrain/js";

export function register() {
  Glassbrain.init({
    apiKey: process.env.GLASSBRAIN_API_KEY,
    projectId: process.env.GLASSBRAIN_PROJECT_ID,
    environment: process.env.NODE_ENV,
  });
}

Requires Next.js 14 or later. For older versions, initialize in a custom server file or at the top of your API route handlers.

Node.js / Express

Initialize at the top of your main server file, before any route definitions or middleware.

server.ts
import { Glassbrain } from "@glassbrain/js";
import express from "express";
import OpenAI from "openai";

// Initialize before anything else
Glassbrain.init({
  apiKey: process.env.GLASSBRAIN_API_KEY,
  projectId: process.env.GLASSBRAIN_PROJECT_ID,
});

const app = express();
const openai = new OpenAI();

app.use(express.json());

app.post("/api/chat", async (req, res) => {
  // All OpenAI calls in this handler are auto-traced
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: req.body.messages,
  });
  res.json(response);
});

app.listen(3000, () => console.log("Server running on port 3000"));

FastAPI

Initialize Glassbrain at module level in your main application file, or use the provided ASGI middleware for per-request trace context.

app/main.py
import os
import glassbrain
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

glassbrain.init(
    api_key=os.environ["GLASSBRAIN_API_KEY"],
    project_id=os.environ["GLASSBRAIN_PROJECT_ID"],
)

app = FastAPI()
client = OpenAI()

# Optional: add middleware for per-request trace context
app.add_middleware(glassbrain.FastAPIMiddleware)


class ChatRequest(BaseModel):
    messages: list[dict]


@app.post("/api/chat")
async def chat(request: ChatRequest):
    # All LLM calls in this handler are auto-traced
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=request.messages,
    )
    return response
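Per-request trace context in an async framework is typically carried with `contextvars`, so concurrently running requests each see their own trace ID rather than overwriting a shared global. This is a sketch of the underlying mechanism only, independent of the actual middleware:

```python
import asyncio
import contextvars

# Each running task gets its own copy of this context variable.
current_trace_id = contextvars.ContextVar("current_trace_id", default=None)


async def handle_request(trace_id):
    # What a tracing middleware would do at the start of a request.
    current_trace_id.set(trace_id)
    await asyncio.sleep(0)  # yield so the three "requests" interleave
    # Despite the interleaving, each task still sees its own trace ID.
    return current_trace_id.get()


async def main():
    return await asyncio.gather(
        *(handle_request(f"trace-{i}") for i in range(3))
    )


results = asyncio.run(main())
print(results)  # → ['trace-0', 'trace-1', 'trace-2']
```

`asyncio` copies the current context into every task it creates, so a `set()` inside one request handler never leaks into another; that is what makes a per-request trace ID safe under concurrency.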

Django

Add the Glassbrain middleware to your Django settings and initialize in your settings.py or wsgi.py.

myproject/settings.py
import os
import glassbrain

glassbrain.init(
    api_key=os.environ["GLASSBRAIN_API_KEY"],
    project_id=os.environ["GLASSBRAIN_PROJECT_ID"],
)

MIDDLEWARE = [
    "glassbrain.DjangoMiddleware",
    # ... other middleware
]

The Django middleware automatically creates a trace for each request that involves LLM calls, attaching request metadata such as the URL path and user ID.