OpenTelemetry Integration
Send traces to Glassbrain using the OpenTelemetry protocol (OTLP). This integration gives you full control over instrumentation and works with any language or framework that supports OpenTelemetry. Use it when you need custom spans, attributes, and events beyond what the SDK wrappers provide.
Pro plan and above. The OpenTelemetry integration requires the Pro plan or higher. Upgrade your plan in the dashboard to enable it.
Installation
Install the OpenTelemetry SDK and OTLP exporter packages for your language.
JavaScript / TypeScript
npm install @opentelemetry/sdk-node @opentelemetry/api \
@opentelemetry/exporter-trace-otlp-http \
@opentelemetry/resources \
@opentelemetry/semantic-conventions
Python
opentelemetry-exporter-otlp-proto-httpConfigure the OTLP Exporter
Point the OTLP exporter to the Glassbrain ingestion endpoint and include your project key in the request headers. Glassbrain accepts traces over OTLP/HTTP with Protocol Buffers or JSON encoding.
JavaScript / TypeScript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import { ATTR_SERVICE_NAME } from "@opentelemetry/semantic-conventions";

const traceExporter = new OTLPTraceExporter({
  url: "https://otel.glassbrain.dev/v1/traces",
  headers: {
    "x-glassbrain-project-key": process.env.GLASSBRAIN_PROJECT_KEY!,
  },
});

const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: "my-ai-service",
  }),
  traceExporter,
});

// Start the SDK before your application code
sdk.start();

// Ensure traces are flushed on shutdown
process.on("SIGTERM", () => {
  sdk.shutdown().then(() => process.exit(0));
});
Python
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource

resource = Resource.create({"service.name": "my-ai-service"})

exporter = OTLPSpanExporter(
    endpoint="https://otel.glassbrain.dev/v1/traces",
    headers={
        "x-glassbrain-project-key": os.environ["GLASSBRAIN_PROJECT_KEY"],
    },
)

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
Environment Variables (Alternative)
You can also configure the exporter using standard OpenTelemetry environment variables instead of code.
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel.glassbrain.dev
OTEL_EXPORTER_OTLP_HEADERS=x-glassbrain-project-key=gb_proj_your_key_here
OTEL_SERVICE_NAME=my-ai-service
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
Manual Instrumentation
Create custom spans to trace specific parts of your AI pipeline. This gives you full control over what data is captured and how spans are organized.
JavaScript / TypeScript
import { trace, SpanStatusCode } from "@opentelemetry/api";
import OpenAI from "openai";

const tracer = trace.getTracer("my-ai-service", "1.0.0");
const openai = new OpenAI();

async function handleUserQuery(query: string) {
  // Create a parent span for the entire operation
  return tracer.startActiveSpan("handle-user-query", async (parentSpan) => {
    parentSpan.setAttribute("query", query);
    parentSpan.setAttribute("glassbrain.span_type", "workflow");
    try {
      // Child span for retrieval
      const context = await tracer.startActiveSpan("retrieve-context", async (span) => {
        span.setAttribute("glassbrain.span_type", "retrieval");
        span.setAttribute("retriever.top_k", 5);
        const results = await searchDocuments(query);
        span.setAttribute("retriever.results_count", results.length);
        span.end();
        return results;
      });

      // Child span for LLM call
      const response = await tracer.startActiveSpan("llm-call", async (span) => {
        span.setAttribute("glassbrain.span_type", "llm");
        span.setAttribute("llm.model", "gpt-4o");
        span.setAttribute("llm.provider", "openai");
        const completion = await openai.chat.completions.create({
          model: "gpt-4o",
          messages: [
            { role: "system", content: "Answer based on the context provided." },
            { role: "user", content: `Context: ${context.join("\n")}\n\nQuestion: ${query}` },
          ],
        });
        span.setAttribute("llm.prompt_tokens", completion.usage?.prompt_tokens ?? 0);
        span.setAttribute("llm.completion_tokens", completion.usage?.completion_tokens ?? 0);
        span.setAttribute("llm.total_tokens", completion.usage?.total_tokens ?? 0);
        span.end();
        return completion.choices[0].message.content;
      });

      parentSpan.setAttribute("response_length", response?.length ?? 0);
      parentSpan.setStatus({ code: SpanStatusCode.OK });
      parentSpan.end();
      return response;
    } catch (error) {
      parentSpan.setStatus({
        code: SpanStatusCode.ERROR,
        message: error instanceof Error ? error.message : "Unknown error",
      });
      parentSpan.recordException(error as Error);
      parentSpan.end();
      throw error;
    }
  });
}
Python
from opentelemetry import trace
from openai import OpenAI

tracer = trace.get_tracer("my-ai-service", "1.0.0")
openai = OpenAI()

def handle_user_query(query: str) -> str:
    # Create a parent span for the entire operation
    with tracer.start_as_current_span("handle-user-query") as parent_span:
        parent_span.set_attribute("query", query)
        parent_span.set_attribute("glassbrain.span_type", "workflow")

        # Child span for retrieval
        with tracer.start_as_current_span("retrieve-context") as retrieval_span:
            retrieval_span.set_attribute("glassbrain.span_type", "retrieval")
            retrieval_span.set_attribute("retriever.top_k", 5)
            results = search_documents(query)
            retrieval_span.set_attribute("retriever.results_count", len(results))

        # Child span for LLM call
        with tracer.start_as_current_span("llm-call") as llm_span:
            llm_span.set_attribute("glassbrain.span_type", "llm")
            llm_span.set_attribute("llm.model", "gpt-4o")
            llm_span.set_attribute("llm.provider", "openai")
            completion = openai.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": "Answer based on the context."},
                    {"role": "user", "content": f"Context: {results}\n\nQuestion: {query}"},
                ],
            )
            llm_span.set_attribute("llm.prompt_tokens", completion.usage.prompt_tokens)
            llm_span.set_attribute("llm.completion_tokens", completion.usage.completion_tokens)
            llm_span.set_attribute("llm.total_tokens", completion.usage.total_tokens)

        response = completion.choices[0].message.content
        parent_span.set_attribute("response_length", len(response))
        return response
What Gets Traced
With OpenTelemetry, you control exactly what gets traced. The data sent to Glassbrain follows the standard OTLP format. Here is the structure of a span as it appears in Glassbrain.
{
"trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
"span_id": "00f067aa0ba902b7",
"parent_span_id": "a3ce929d0e0e4736",
"name": "llm-call",
"kind": "INTERNAL",
"start_time": "2026-04-03T12:00:00.000000000Z",
"end_time": "2026-04-03T12:00:01.243000000Z",
"status": {
"code": "OK"
},
"attributes": {
"glassbrain.span_type": "llm",
"llm.model": "gpt-4o",
"llm.provider": "openai",
"llm.prompt_tokens": 128,
"llm.completion_tokens": 256,
"llm.total_tokens": 384,
"service.name": "my-ai-service"
},
"events": [
{
"name": "llm.prompt",
"timestamp": "2026-04-03T12:00:00.001000000Z",
"attributes": {
"content": "Answer based on the context provided..."
}
}
],
"resource": {
"service.name": "my-ai-service",
"telemetry.sdk.language": "nodejs",
"telemetry.sdk.version": "1.20.0"
}
}
Glassbrain recognizes standard OpenTelemetry semantic conventions and maps them to the appropriate fields in the dashboard. Custom attributes prefixed with "glassbrain." receive special treatment in the UI.
Semantic Conventions
Glassbrain recognizes the following attribute prefixes for enhanced visualization in the dashboard. Using these conventions is optional but recommended.
| Attribute | Type | Description |
|---|---|---|
| glassbrain.span_type | string | Span type: "llm", "retrieval", "tool", "workflow", "embedding" |
| llm.model | string | The model name (e.g., "gpt-4o", "claude-sonnet-4-20250514") |
| llm.provider | string | The provider name (e.g., "openai", "anthropic") |
| llm.prompt_tokens | int | Number of prompt/input tokens |
| llm.completion_tokens | int | Number of completion/output tokens |
| llm.total_tokens | int | Total token count |
| retriever.top_k | int | Number of results requested from retriever |
| retriever.results_count | int | Number of results returned by retriever |
Advanced Configuration
Batch Span Processor Tuning
The batch span processor collects spans and sends them in batches to reduce network overhead. You can tune the batch settings for your workload.
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";

const spanProcessor = new BatchSpanProcessor(traceExporter, {
  // Maximum number of spans per batch
  maxExportBatchSize: 512,
  // Maximum time (ms) to wait before sending a batch
  scheduledDelayMillis: 5000,
  // Maximum number of spans queued for export
  maxQueueSize: 2048,
  // Timeout (ms) for the export request
  exportTimeoutMillis: 30000,
});
Adding Span Events
Use span events to record point-in-time occurrences within a span, such as logging the prompt or completion content.
span.addEvent("llm.prompt", {
  "content": "You are a helpful assistant...",
  "role": "system",
});

span.addEvent("llm.completion", {
  "content": "The answer to your question is...",
  "role": "assistant",
  "finish_reason": "stop",
});

span.addEvent("error.retry", {
  "attempt": 2,
  "error": "Rate limit exceeded",
  "wait_seconds": 5,
});
Troubleshooting
Traces are not appearing in the dashboard
Verify the exporter endpoint is https://otel.glassbrain.dev/v1/traces and the x-glassbrain-project-key header is set correctly. Check that the TracerProvider is initialized before any spans are created. Also confirm your application does not exit before the batch processor flushes its buffer - use the shutdown hook as shown in the configuration example.
Spans appear but are missing attributes
Attributes must be set before the span ends. Calling span.setAttribute() after span.end() has no effect. Also verify that attribute values are of supported types: string, number, boolean, or arrays of these types. Objects and nested structures are not supported as attribute values - use span events for complex data.
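One way to guard against unsupported values is a small helper that coerces arbitrary data into attribute-safe types before calling set_attribute(). This is a sketch; the helper name to_otel_attribute is illustrative, not part of any SDK:

```python
import json

# Attribute values must be str, bool, int, float, or homogeneous
# sequences of those types; anything else is JSON-serialized.
SUPPORTED = (str, bool, int, float)

def to_otel_attribute(value):
    """Coerce a value into an OTLP-supported attribute type."""
    if isinstance(value, SUPPORTED):
        return value
    if isinstance(value, (list, tuple)) and all(isinstance(v, SUPPORTED) for v in value):
        return list(value)
    # Dicts and nested objects are not valid attribute values,
    # so fall back to a JSON string (or use a span event instead).
    return json.dumps(value, default=str)

print(to_otel_attribute({"model": "gpt-4o"}))  # {"model": "gpt-4o"}
```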
Feature not available error
The OpenTelemetry integration requires the Pro plan or above. Check your current plan in the Glassbrain dashboard under Account Settings. The OTLP endpoint will return HTTP 403 if your project is on the free plan.
Parent-child span relationships are broken
Make sure you are using startActiveSpan() (JS) or start_as_current_span() (Python) to create child spans within the context of a parent span. If you create spans with startSpan() without setting the parent context, they will appear as root spans. In async code, make sure context is propagated correctly across async boundaries.