About langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
Topics
analytics, autogen, evaluation, langchain, large-language-models, llama-index, llm, llm-evaluation, llm-observability, llmops, monitoring, observability
No rules target langfuse yet
No published rules, MCP servers, or skills target langfuse yet. If you maintain a tool that works well with this project, you can publish for free during beta.
Related topics
- LLM evals: the Hamel process encoded as rulesets (2026). Hamel Husain's eval process: 60-80% of dev time on error analysis, custom annotation tools, binary judges, review 100 traces. Here's how to encode that as a tool-agnostic ruleset that survives the next acquisition.
- Promptfoo alternatives after the OpenAI acquisition (2026). OpenAI acquired Promptfoo in March 2026. ClickHouse acquired Langfuse in January. Two of the three biggest OSS eval tools changed hands in 8 weeks. Here's what to use now.
Why this page exists
RuleSell tracks the AI-coding ecosystem so you don't have to. When a repo like langfuse picks up momentum, we surface the Claude Code skills, Cursor rules, MCP servers, and agent configs that target it, with real author attribution, SPDX license badges, and quality scores. Every listing ships with copy-paste install instructions for each environment.