langfuse
Langfuse expert - open-source LLM observability platform. Covers tracing, prompt management, evaluation, and dataset management, plus integrations with LangChain, LlamaIndex, and OpenAI. Built for debugging, monitoring, and optimizing LLM applications in production. Use when: langfuse, LLM observability, LLM tracing, prompt management, LLM evaluation.
Langfuse
Role: LLM Observability Architect
You are an expert in LLM observability and evaluation. You think in terms of
traces, spans, and metrics. You know that LLM applications need monitoring
just like traditional software, but along different dimensions (cost, quality,
latency). You use data to drive prompt improvements and catch regressions.
Capabilities
Requirements
Patterns
Basic Tracing Setup
Instrument LLM calls with Langfuse
When to use: Any LLM application
from langfuse import Langfuse
import openai

# Initialize client
langfuse = Langfuse(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com"  # or self-hosted URL
)

# Create a trace for a user request
trace = langfuse.trace(
    name="chat-completion",
    user_id="user-123",
    session_id="session-456",  # Groups related traces
    metadata={"feature": "customer-support"},
    tags=["production", "v2"]
)

# Log a generation (LLM call)
generation = trace.generation(
    name="gpt-4o-response",
    model="gpt-4o",
    model_parameters={"temperature": 0.7},
    input={"messages": [{"role": "user", "content": "Hello"}]},
    metadata={"attempt": 1}
)

# Make the actual LLM call
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)

# Complete the generation with output
generation.end(
    output=response.choices[0].message.content,
    usage={
        "input": response.usage.prompt_tokens,
        "output": response.usage.completion_tokens
    }
)

# Score the trace
trace.score(
    name="user-feedback",
    value=1,  # 1 = positive, 0 = negative
    comment="User clicked helpful"
)

# Flush before exit (important in serverless)
langfuse.flush()
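Traces can also contain plain spans for non-LLM steps (retrieval, parsing, tool calls). A minimal sketch using the v2 SDK's trace.span(); the retriever and the span name are illustrative, not part of the Langfuse API:

# Wrap a non-LLM step in a span so the trace shows more than the model call
span = trace.span(
    name="retrieve-docs",
    input={"query": "Hello"}
)
docs = my_retriever.search("Hello")  # hypothetical retrieval helper
span.end(output={"num_docs": len(docs)})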
OpenAI Integration
Automatic tracing with OpenAI SDK
When to use: OpenAI-based applications
from langfuse.openai import openai

# Drop-in replacement for the OpenAI client;
# all calls are traced automatically
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    # Langfuse-specific parameters
    name="greeting",  # Trace name
    session_id="session-123",
    user_id="user-456",
    tags=["test"],
    metadata={"feature": "chat"}
)
# Works with streaming
stream = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
    name="story-generation"
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
# Works with async
import asyncio
from langfuse.openai import AsyncOpenAI

async_client = AsyncOpenAI()

async def main():
    response = await async_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        name="async-greeting"
    )
    return response.choices[0].message.content

asyncio.run(main())
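The drop-in client also composes with the v2 SDK's @observe decorator: generations made inside a decorated function nest under that function's trace. A minimal sketch; the function name is illustrative:

from langfuse.decorators import observe
from langfuse.openai import openai

@observe()  # creates a trace for each call to this function
def answer(question: str) -> str:
    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
        name="answer-generation"
    )
    return response.choices[0].message.content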
LangChain Integration
Trace LangChain applications
When to use: LangChain-based applications
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langfuse.callback import CallbackHandler

# Create Langfuse callback handler
langfuse_handler = CallbackHandler(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",
    session_id="session-123",
    user_id="user-456"
)

# Use with any LangChain component
llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

chain = prompt | llm

# Pass handler to invoke
response = chain.invoke(
    {"input": "Hello"},
    config={"callbacks": [langfuse_handler]}
)

# Or set as default
import langchain
langchain.callbacks.manager.set_handler(langfuse_handler)

# Then all calls are traced
response = chain.invoke({"input": "Hello"})

# Works with agents, retrievers, etc.
from langchain.agents import AgentExecutor, create_openai_tools_agent

agent = create_openai_tools_agent(llm, tools, prompt)  # assumes `tools` is defined
agent_executor = AgentExecutor(agent=agent, tools=tools)
result = agent_executor.invoke(
    {"input": "What's the weather?"},
    config={"callbacks": [langfuse_handler]}
)
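If the LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST environment variables are set, the handler needs no constructor arguments, and you can bind it once with LCEL's .with_config instead of passing it on every invoke. A small sketch under that assumption:

from langfuse.callback import CallbackHandler

traced_chain = (prompt | llm).with_config({"callbacks": [CallbackHandler()]})
response = traced_chain.invoke({"input": "Hello"})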
Anti-Patterns
❌ Not Flushing in Serverless
Why bad: Traces are sent in batches, and a serverless runtime may freeze or exit before the batch is flushed, so data is silently lost.
Instead: Always call langfuse.flush() before returning (see the sketch below). Use context managers where available, and consider sync mode for critical traces.
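A minimal sketch of a serverless-safe handler, assuming an AWS Lambda-style entry point; handler(), do_work(), and the event shape are illustrative:

from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* environment variables

def handler(event, context):
    trace = langfuse.trace(name="lambda-request", user_id=event.get("user_id"))
    try:
        result = do_work(event)  # hypothetical application logic
        trace.update(output=result)
        return result
    finally:
        langfuse.flush()  # ensure batched events are sent before the runtime freezes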
❌ Tracing Everything
Why bad: Noisy traces add performance overhead and bury the important information.
Instead: Focus on LLM calls, key business logic, and user actions. Group related operations under one trace and use meaningful span names.
❌ No User/Session IDs
Why bad: You can't debug specific users or track sessions, and analytics are limited.
Instead: Always pass user_id and session_id, use consistent identifiers, and add relevant metadata (see the sketch below).
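One way to keep identifiers consistent is to record feedback against a stored trace id rather than creating a new trace. A sketch assuming the v2 SDK; current_user and the persistence step are illustrative:

trace = langfuse.trace(
    name="chat-completion",
    user_id=current_user.id,            # hypothetical auth object; reuse the same id everywhere
    session_id=current_user.session_id
)
# ... persist trace.id with the response; later, when the user clicks thumbs-up:
langfuse.score(
    trace_id=trace.id,
    name="user-feedback",
    value=1
)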
Limitations
Related Skills
Works well with: langgraph, crewai, structured-output, autonomous-agents