LangChain Integration
Trace your entire LangChain execution graph - from chains and agents to individual LLM calls, tool invocations, and retriever queries. Glassbrain integrates via the LangChain callback system to give you full visibility into complex AI workflows.
The LangChain integration is available on the Pro plan and above. Upgrade your plan in the dashboard to enable it.
Installation
Install the Glassbrain SDK alongside LangChain for your language.
JavaScript / TypeScript
npm install @glassbrain/js langchain @langchain/core @langchain/openai

Python

pip install glassbrain langchain langchain-openai

Quick Start
Add the Glassbrain callback handler to your LangChain invocations. The handler hooks into LangChain's callback system to trace every step of execution.
JavaScript / TypeScript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { GlassbrainCallbackHandler } from "@glassbrain/js/langchain";

// Create the Glassbrain callback handler
const glassbrainHandler = new GlassbrainCallbackHandler({
  projectKey: process.env.GLASSBRAIN_PROJECT_KEY,
});

// Build a LangChain chain as usual
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that explains concepts simply."],
  ["human", "{input}"],
]);

const model = new ChatOpenAI({ model: "gpt-4o" });
const outputParser = new StringOutputParser();
const chain = prompt.pipe(model).pipe(outputParser);

// Pass the handler via the callbacks option
const result = await chain.invoke(
  { input: "What is retrieval-augmented generation?" },
  { callbacks: [glassbrainHandler] }
);

console.log(result);

Python
import os

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from glassbrain.langchain import GlassbrainCallbackHandler

# Create the Glassbrain callback handler
glassbrain_handler = GlassbrainCallbackHandler(
    project_key=os.environ["GLASSBRAIN_PROJECT_KEY"]
)

# Build a LangChain chain as usual
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that explains concepts simply."),
    ("human", "{input}"),
])

model = ChatOpenAI(model="gpt-4o")
output_parser = StrOutputParser()
chain = prompt | model | output_parser

# Pass the handler via the config
result = chain.invoke(
    {"input": "What is retrieval-augmented generation?"},
    config={"callbacks": [glassbrain_handler]}
)

print(result)

How It Works
The GlassbrainCallbackHandler implements LangChain's callback interface to receive events at every stage of chain execution. When a chain runs, LangChain fires events for chain start, LLM start, LLM end, tool start, tool end, retriever start, retriever end, and chain end. Glassbrain captures all of these events and organizes them into a hierarchical trace.
Each component in your chain (prompt template, LLM, output parser, tool, retriever) becomes a span in the trace. Parent-child relationships are preserved, so you can see exactly how data flows through your chain in the Glassbrain dashboard.
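To make the hierarchy concrete, here is an illustrative sketch (not Glassbrain's actual implementation) of how flat callback events carrying run IDs and parent run IDs fold into the kind of span tree shown in the dashboard. The event shape and field names here are assumptions chosen for the sketch:

```python
# Illustrative sketch: how flat callback events (run_id, parent_run_id)
# fold into a hierarchical span tree. This is NOT the Glassbrain
# implementation, just a model of the idea.

def build_span_tree(events):
    """Build a nested span tree from ordered start/end events."""
    spans = {}
    roots = []
    for event in events:
        if event["phase"] == "start":
            span = {"name": event["name"], "children": []}
            spans[event["run_id"]] = span
            parent = spans.get(event.get("parent_run_id"))
            if parent is not None:
                parent["children"].append(span)
            else:
                roots.append(span)
    return roots

# A simplified event stream for the quick-start chain above
events = [
    {"phase": "start", "run_id": 1, "parent_run_id": None, "name": "RunnableSequence"},
    {"phase": "start", "run_id": 2, "parent_run_id": 1, "name": "ChatPromptTemplate"},
    {"phase": "end",   "run_id": 2},
    {"phase": "start", "run_id": 3, "parent_run_id": 1, "name": "ChatOpenAI"},
    {"phase": "end",   "run_id": 3},
    {"phase": "end",   "run_id": 1},
]

tree = build_span_tree(events)
# tree[0] is the root RunnableSequence span with two child spans
```

Because every LangChain event carries its parent's run ID, nesting falls out of the event stream directly; no extra bookkeeping is required to reconstruct the chain structure.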
What Gets Traced
The LangChain integration traces multiple span types. Each type captures different data relevant to that component.
Chain span

{
"span_id": "sp_chain_001",
"trace_id": "tr_lc_789",
"type": "chain",
"name": "RunnableSequence",
"timestamp": "2026-04-03T12:00:00.000Z",
"duration_ms": 3420,
"status": "success",
"input": {
"input": "What is retrieval-augmented generation?"
},
"output": "Retrieval-augmented generation (RAG) is a technique...",
"children": ["sp_prompt_001", "sp_llm_001", "sp_parser_001"]
}

LLM span

{
"span_id": "sp_llm_001",
"parent_span_id": "sp_chain_001",
"type": "llm",
"name": "ChatOpenAI",
"model": "gpt-4o",
"duration_ms": 2890,
"input": {
"messages": [
{ "role": "system", "content": "You are a helpful assistant..." },
{ "role": "human", "content": "What is retrieval-augmented generation?" }
]
},
"output": {
"message": { "role": "assistant", "content": "..." },
"finish_reason": "stop"
},
"usage": {
"prompt_tokens": 32,
"completion_tokens": 156,
"total_tokens": 188
}
}

Tool span

{
"span_id": "sp_tool_001",
"parent_span_id": "sp_agent_001",
"type": "tool",
"name": "search_documents",
"duration_ms": 450,
"input": {
"query": "RAG architecture overview"
},
"output": "Retrieved 5 documents matching the query..."
}

Retriever span

{
"span_id": "sp_retriever_001",
"parent_span_id": "sp_chain_001",
"type": "retriever",
"name": "VectorStoreRetriever",
"duration_ms": 120,
"input": {
"query": "quantum computing basics"
},
"output": {
"documents": [
{ "page_content": "...", "metadata": { "source": "wiki", "page": 42 } }
]
}
}

Tracing Agents
LangChain agents involve multiple iterations of LLM reasoning and tool calling. Glassbrain captures the full agent loop, including every reasoning step, tool invocation, and observation.
import os

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from glassbrain.langchain import GlassbrainCallbackHandler

glassbrain_handler = GlassbrainCallbackHandler(
    project_key=os.environ["GLASSBRAIN_PROJECT_KEY"]
)

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))

@tool
def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, [calculate, search], prompt)
executor = AgentExecutor(agent=agent, tools=[calculate, search])

# All agent iterations, tool calls, and reasoning steps are traced
result = executor.invoke(
    {"input": "What is 42 * 17, and then search for that number?"},
    config={"callbacks": [glassbrain_handler]}
)

Advanced Configuration
Customize the callback handler with additional options.
const glassbrainHandler = new GlassbrainCallbackHandler({
  projectKey: process.env.GLASSBRAIN_PROJECT_KEY,

  // Add custom metadata to the trace
  metadata: {
    environment: "production",
    pipeline: "rag-v2",
    userId: "user_123",
  },

  // Name this trace for easier identification
  traceName: "rag-query",

  // Control what gets captured
  captureInput: true,
  captureOutput: true,

  // Sampling rate (0.0 to 1.0)
  sampleRate: 1.0,
});

You can also set the handler globally so it applies to all LangChain invocations without passing it each time:
import os

import langchain
from glassbrain.langchain import GlassbrainCallbackHandler

# Set the handler globally
glassbrain_handler = GlassbrainCallbackHandler(
    project_key=os.environ["GLASSBRAIN_PROJECT_KEY"]
)

# All subsequent LangChain invocations will be traced
langchain.callbacks.manager.configure(
    inheritable_callbacks=[glassbrain_handler]
)

Troubleshooting
Only the top-level chain span appears
Make sure the callback handler is passed in the callbacks array (not callback singular). In Python, pass it via the config parameter as shown in the quick start example. Also verify that your LangChain version is 0.1.0 or above - older versions have incomplete callback support.
Feature not available error
The LangChain integration requires the Pro plan or above. Check your current plan in the Glassbrain dashboard under Account Settings. If you recently upgraded, allow a few minutes for the change to propagate.
Retriever spans are missing document content
By default, Glassbrain captures the full document content returned by retrievers. If documents are very large, they may be truncated at 10,000 characters per document. If you are not seeing retriever spans at all, verify that your retriever is a LangChain-compatible retriever that fires the standard callback events.
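If you need to see the effect of the limit on your own documents before tracing, a minimal sketch like the following shows what per-document truncation at 10,000 characters does. The helper name and constant are illustrative, not part of the Glassbrain SDK:

```python
# Illustrative sketch of the per-document truncation described above.
# MAX_DOC_CHARS mirrors the documented 10,000-character limit; the
# function itself is hypothetical, not a Glassbrain SDK API.
MAX_DOC_CHARS = 10_000

def truncate_document(content: str, limit: int = MAX_DOC_CHARS) -> str:
    """Return content unchanged if within the limit, else cut it off."""
    if len(content) <= limit:
        return content
    return content[:limit]

print(len(truncate_document("x" * 25_000)))  # 10000
```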
Agent traces stop after first iteration
This usually happens when an error occurs during tool execution. Check the trace in the Glassbrain dashboard - failed spans are highlighted in red. The error message and stack trace are captured in the span details. Fix the underlying tool error and rerun.
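If you export trace data and want to locate failed spans programmatically rather than in the dashboard, a sketch like the following can walk a span tree. The span shape mirrors the JSON examples earlier on this page, but with child spans nested inline; the traversal helper itself is hypothetical, not an SDK function:

```python
# Hypothetical helper: walk a span tree (shaped like the JSON span
# examples above, with child spans nested as dicts) and collect the
# IDs of every span whose status is not "success".
def find_failed_spans(span):
    failed = []
    if span.get("status") not in (None, "success"):
        failed.append(span["span_id"])
    for child in span.get("children", []):
        if isinstance(child, dict):  # skip bare span-ID strings
            failed.extend(find_failed_spans(child))
    return failed

trace = {
    "span_id": "sp_agent_001",
    "status": "success",
    "children": [
        {"span_id": "sp_llm_001", "status": "success", "children": []},
        {"span_id": "sp_tool_001", "status": "error", "children": []},
    ],
}

print(find_failed_spans(trace))  # ['sp_tool_001']
```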