About lean-ctx
The context layer for AI coding agents

Reduce token waste in Cursor, Claude Code, Copilot, Windsurf, Codex, Gemini & more by 60–95% (up to 99% on cached reads)

Shell Hook + MCP Server · 58 tools · 10 read modes · 95+ patterns · Single Rust binary

Website · Docs · Install · Demo · Benchmarks · Cookbook · Security · Changelog · Discord

---

**lean-ctx** is a local-first context runtime that compresses file reads + shell output before they reach the LLM. Cached re-reads drop to **~13 tokens**.

See it in action:

- **Read + Shell** — map-mode reads + compressed CLI output
- **Gain (live)** — tokens + USD savings in real time
- **Benchmark proof** — measure compression by language + mode

All GIFs are generated from reproducible VHS tapes in `demo/`.

## What it does

**File reads (MCP)**: cached + mode-aware reads (full,
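The "cached re-reads drop to ~13 tokens" claim can be illustrated with a minimal sketch of the underlying idea, assuming a content-hash cache (all names here are hypothetical, not lean-ctx's actual API): the first read of a file returns its full content, while an unchanged re-read returns only a short stub for the agent to ingest.

```rust
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hypothetical sketch of a read cache keyed by content hash.
// First read (or a read after the file changed) returns the full
// content; an unchanged re-read returns a tiny stub instead.
struct ReadCache {
    seen: HashMap<String, u64>, // path -> last observed content hash
}

fn content_hash(content: &str) -> u64 {
    let mut h = std::collections::hash_map::DefaultHasher::new();
    content.hash(&mut h);
    h.finish()
}

impl ReadCache {
    fn new() -> Self {
        ReadCache { seen: HashMap::new() }
    }

    fn read(&mut self, path: &str, content: &str) -> String {
        let h = content_hash(content);
        // insert() returns the previous hash for this path, if any.
        match self.seen.insert(path.to_string(), h) {
            Some(prev) if prev == h => format!("[unchanged: {path}]"),
            _ => content.to_string(),
        }
    }
}

fn main() {
    let mut cache = ReadCache::new();
    let big = "fn main() { /* ...thousands of tokens of source... */ }";
    println!("{}", cache.read("src/main.rs", big)); // full content
    println!("{}", cache.read("src/main.rs", big)); // short stub
}
```

The stub costs a near-constant handful of tokens regardless of file size, which is where the "up to 99% on cached reads" figure comes from in principle; the real implementation adds mode-aware rendering on top of this.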
No rules target lean-ctx yet
No published rules, MCP servers, or skills target lean-ctx yet. If you maintain a tool that works well with this project, you can publish for free during beta.
Why this page exists
RuleSell tracks the AI-coding ecosystem so you don't have to. When a repo like lean-ctx picks up momentum, we surface the Claude Code skills, Cursor rules, MCP servers, and agent configs that target it — with real author attribution, SPDX license badges, and quality scores. Every listing ships with copy-paste install steps for each environment.