OpenAI Integration
Trace every OpenAI API call with a single line of code. Glassbrain automatically captures model parameters, prompts, completions, token usage, latency, and errors - giving you full visibility into your OpenAI-powered application.
Installation
Install the Glassbrain SDK alongside the OpenAI client library for your language.
JavaScript / TypeScript
npm install @glassbrain/js openai

Python

pip install glassbrain openai

Quick Start
Wrap your OpenAI client with Glassbrain to start tracing. No other code changes are needed - all existing API calls will be traced automatically.
JavaScript / TypeScript
import OpenAI from "openai";
import { wrapOpenAI } from "@glassbrain/js";

// Initialize the OpenAI client as usual
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Wrap it with Glassbrain - that's it
const tracedOpenAI = wrapOpenAI(openai, {
  projectKey: process.env.GLASSBRAIN_PROJECT_KEY,
});

// Use tracedOpenAI exactly like the original client
const response = await tracedOpenAI.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in one sentence." },
  ],
  temperature: 0.7,
  max_tokens: 150,
});

console.log(response.choices[0].message.content);

Python
import os
from openai import OpenAI
from glassbrain import wrap_openai

# Initialize the OpenAI client as usual
openai = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Wrap it with Glassbrain
traced_openai = wrap_openai(openai, project_key=os.environ["GLASSBRAIN_PROJECT_KEY"])

# Use traced_openai exactly like the original client
response = traced_openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in one sentence."},
    ],
    temperature=0.7,
    max_tokens=150,
)

print(response.choices[0].message.content)

How It Works
The wrapOpenAI() function creates a proxy around the OpenAI client. When you call any API method on the wrapped client, Glassbrain intercepts the request and response to capture trace data. The wrapper does not modify any parameters or responses - your application logic remains unchanged.
Traces are sent to the Glassbrain backend asynchronously in the background. This means there is no meaningful latency added to your API calls. If the Glassbrain backend is unreachable, traces are buffered locally and retried automatically.
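The interception idea described above can be sketched with a plain JavaScript Proxy. This is an illustrative sketch, not Glassbrain's actual source: `wrapClient`, `spans`, and `fakeClient` are names invented for this example, and a real wrapper would also handle nested namespaces (like `chat.completions`) and record errors.

```javascript
// Minimal sketch of method interception with a Proxy.
// Captured spans go into a plain array here; the real SDK ships
// them to a backend asynchronously.
const spans = [];

function wrapClient(client) {
  return new Proxy(client, {
    get(target, prop) {
      const value = target[prop];
      if (typeof value !== "function") return value;
      return async (...args) => {
        const start = Date.now();
        const result = await value.apply(target, args); // call through unchanged
        spans.push({ operation: String(prop), duration_ms: Date.now() - start });
        return result; // response is returned untouched
      };
    },
  });
}

// A fake client standing in for the OpenAI SDK
const fakeClient = {
  async complete(prompt) {
    return { text: `echo: ${prompt}` };
  },
};

const traced = wrapClient(fakeClient);
```

Calling `traced.complete("hi")` returns exactly what `fakeClient.complete("hi")` would, while a span describing the call is recorded as a side effect.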
The following OpenAI methods are traced automatically:
chat.completions.create() - Chat completions (including streaming)
completions.create() - Legacy completions
embeddings.create() - Embedding generation
images.generate() - Image generation
audio.transcriptions.create() - Audio transcription
What Gets Traced
Each traced OpenAI call produces a span with the following data structure. You can inspect this data in the Glassbrain dashboard trace viewer.
{
"span_id": "sp_abc123",
"trace_id": "tr_xyz789",
"provider": "openai",
"operation": "chat.completions.create",
"timestamp": "2026-04-03T12:00:00.000Z",
"duration_ms": 1243,
"status": "success",
"model": {
"name": "gpt-4o",
"provider": "openai"
},
"input": {
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Explain quantum computing in one sentence." }
],
"temperature": 0.7,
"max_tokens": 150,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0
},
"output": {
"message": {
"role": "assistant",
"content": "Quantum computing uses quantum bits..."
},
"finish_reason": "stop"
},
"usage": {
"prompt_tokens": 28,
"completion_tokens": 42,
"total_tokens": 70
},
"cost": {
"prompt_cost_usd": 0.00014,
"completion_cost_usd": 0.00042,
"total_cost_usd": 0.00056
},
"error": null
}

When an error occurs, the error field contains the error type, message, and HTTP status code. Failed calls are highlighted in the dashboard for quick identification.
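For example, a failed call might carry an error object like the following. The exact field names and values here are illustrative, based on the description above:

```json
"error": {
  "type": "rate_limit_error",
  "message": "Rate limit exceeded",
  "status_code": 429
}
```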
Streaming Support
Glassbrain fully supports streaming responses. When you use stream: true, the wrapper buffers chunks in the background to reconstruct the full response for tracing, while forwarding each chunk to your application with zero delay.
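The buffer-while-forwarding approach can be sketched with a plain async generator. This is an illustrative sketch: `bufferedTee` is a name invented here, and the chunk shape is simplified to bare strings rather than full chat-completion deltas.

```javascript
// Forward chunks to the caller as they arrive, while accumulating
// them so the full response can be reported once the stream ends.
async function* bufferedTee(stream, onComplete) {
  const parts = [];
  for await (const chunk of stream) {
    parts.push(chunk); // buffer for the trace
    yield chunk;       // forward to the caller immediately
  }
  onComplete(parts.join("")); // reconstruct the full response
}

// A fake stream of string chunks standing in for streamed deltas
async function* fakeStream() {
  yield "Hello, ";
  yield "world";
}
```

Consuming `bufferedTee(fakeStream(), cb)` yields the same chunks in the same order as the underlying stream, and `cb` receives the reassembled text only after the last chunk has been delivered.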
const stream = await tracedOpenAI.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about debugging." }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(content);
}

// The full response is traced automatically when the stream completes

Advanced Configuration
You can customize the wrapper behavior with additional options.
const tracedOpenAI = wrapOpenAI(openai, {
  projectKey: process.env.GLASSBRAIN_PROJECT_KEY,

  // Add custom metadata to every trace
  metadata: {
    environment: "production",
    service: "chat-api",
    version: "1.2.0",
  },

  // Control what gets captured
  captureInput: true,   // Set to false to skip logging prompts
  captureOutput: true,  // Set to false to skip logging completions

  // Sampling rate (0.0 to 1.0) - useful for high-traffic production
  sampleRate: 1.0,

  // Custom base URL for self-hosted Glassbrain
  baseUrl: "https://glassbrain.dev/api/traces",
});

Troubleshooting
Traces are not appearing in the dashboard
Verify that your GLASSBRAIN_PROJECT_KEY is set correctly and matches a project in your Glassbrain account. You can find your project key in the dashboard under Project Settings. Also confirm that you are using the wrapped client (tracedOpenAI) and not the original openai instance for your API calls.
TypeScript type errors after wrapping
Make sure your openai and @glassbrain/js packages are on compatible versions. The Glassbrain SDK supports OpenAI SDK v4.x and above. Run npm ls openai to check your installed version.
Increased latency on API calls
Glassbrain sends traces asynchronously and should not add noticeable latency. If you observe slower responses, check your network connectivity to the Glassbrain endpoint. You can also reduce overhead in high-throughput scenarios by setting a sampleRate below 1.0 to trace only a percentage of calls.
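Client-side sampling of this kind typically reduces to a per-call random draw. A minimal sketch (Glassbrain's actual sampling logic may differ, e.g. by sampling per trace rather than per call):

```javascript
// Decide whether to trace a given call for a sampleRate in [0, 1].
// Math.random() returns a value in [0, 1), so 1.0 always traces
// and 0 never does.
function shouldTrace(sampleRate) {
  return Math.random() < sampleRate;
}
```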
Sensitive data in traces
If your prompts or completions contain sensitive information such as PII, set captureInput: false and captureOutput: false in the wrapper configuration. This will still trace metadata like model, tokens, latency, and errors without logging the actual content.
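For example, the same wrapper options shown under Advanced Configuration, with content capture disabled:

```javascript
const tracedOpenAI = wrapOpenAI(openai, {
  projectKey: process.env.GLASSBRAIN_PROJECT_KEY,
  captureInput: false,  // prompts are not sent to Glassbrain
  captureOutput: false, // completions are not sent to Glassbrain
});
```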