LangChain Integration
The AgentLensCallbackHandler plugs into LangChain's callback system to automatically trace chains, agents, LLM calls, and tool invocations without changing your existing code.
Installation
terminal
pip install vectry-agentlens langchain langchain-openai
Quick setup
main.py
import agentlens
from agentlens.integrations.langchain import AgentLensCallbackHandler
agentlens.init(
    api_key="your-api-key",
    endpoint="https://agentlens.vectry.tech",
)
handler = AgentLensCallbackHandler()
Using with chains
Pass the handler in the callbacks config:
chain_example.py
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}"),
])
chain = prompt | llm | StrOutputParser()
result = chain.invoke(
    {"input": "Explain recursion"},
    config={"callbacks": [handler]},
)
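The callbacks entry is standard LangChain config, so the same handler can also be passed to the other Runnable entry points such as stream, batch, and ainvoke; AgentLens should record those runs the same way. A minimal sketch reusing the chain and handler from above (the file name is illustrative):
streaming_example.py
# Sketch: the same callbacks config also applies to streaming calls.
for chunk in chain.stream(
    {"input": "Explain recursion"},
    config={"callbacks": [handler]},
):
    print(chunk, end="", flush=True)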
Using with agents
agent_example.py
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
@tool
def calculator(expression: str) -> str:
"""Evaluate a math expression."""
return str(eval(expression))
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful math assistant."),
    ("user", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(llm, [calculator], prompt)
executor = AgentExecutor(agent=agent, tools=[calculator])
result = executor.invoke(
    {"input": "What is 42 * 17 + 3?"},
    config={"callbacks": [handler]},
)
What gets captured
The callback handler maps LangChain events to AgentLens concepts:
| LangChain Event | AgentLens Type | Captured Data |
|---|---|---|
| Chain start/end | CHAIN span | Input/output, duration |
| LLM start/end | LLM_CALL span | Model, messages, tokens, cost, duration |
| Tool start/end | TOOL_CALL span | Tool name, input args, output, duration |
| Agent action | TOOL_SELECTION decision | Selected tool, reasoning |
| Retry | RETRY event | Error message, attempt count |
| Error | ERROR event | Exception type, message, traceback |
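For orientation, the sketch below shows which langchain_core.callbacks.BaseCallbackHandler hooks sit behind this mapping. It is an illustration only, not the actual AgentLensCallbackHandler implementation; it prints instead of creating spans, and the class and file names are made up.
mapping_sketch.py
from langchain_core.callbacks import BaseCallbackHandler

# Illustration of the LangChain hooks behind the table above; the real handler
# implements these methods to open and close AgentLens spans.
class MappingSketchHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        print("CHAIN span opens:", inputs)

    def on_chain_end(self, outputs, **kwargs):
        print("CHAIN span closes:", outputs)

    def on_chat_model_start(self, serialized, messages, **kwargs):
        print("LLM_CALL span opens:", messages)

    def on_llm_end(self, response, **kwargs):
        print("LLM_CALL span closes:", response.llm_output)

    def on_tool_start(self, serialized, input_str, **kwargs):
        print("TOOL_CALL span opens:", input_str)

    def on_tool_end(self, output, **kwargs):
        print("TOOL_CALL span closes:", output)

    def on_agent_action(self, action, **kwargs):
        print("TOOL_SELECTION decision:", action.tool, action.log)

    def on_retry(self, retry_state, **kwargs):
        print("RETRY event:", retry_state)

    def on_chain_error(self, error, **kwargs):
        print("ERROR event:", type(error).__name__, error)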
Global callbacks
To trace all LangChain operations without passing callbacks individually, set the handler globally:
global.py
from langchain.callbacks.manager import set_handler
set_handler(handler)
# Now all chains and agents are traced automatically
result = chain.invoke({"input": "Hello"})
# No need to pass config={"callbacks": [handler]}
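If you would rather scope tracing to a single runnable than register a global handler, you can bind the callbacks once with LangChain's standard with_config method; every later call then carries the handler automatically. A sketch reusing the chain and handler defined earlier:
scoped.py
# Sketch: bind the handler to one runnable instead of setting it globally.
traced_chain = chain.with_config(callbacks=[handler])
result = traced_chain.invoke({"input": "Hello"})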
Handler options
| Parameter | Type | Default | Description |
|---|---|---|---|
| trace_name | str \| None | None | Override the default trace name |
| tags | list[str] | [] | Tags to attach to all traces |
| capture_io | bool | True | Capture input/output payloads |
options.py
handler = AgentLensCallbackHandler(
    trace_name="my-langchain-app",
    tags=["production", "langchain"],
    capture_io=True,
)
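A configured handler is used exactly like the default one; for example, passing it to the agent executor from earlier (a sketch, with an illustrative file name):
options_usage.py
# Sketch: the configured handler goes through the same callbacks config.
result = executor.invoke(
    {"input": "What is 42 * 17 + 3?"},
    config={"callbacks": [handler]},
)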