Trending repo

Claude Code & Cursor rules for promptfoo

by @promptfoo · 21,181 stars

View on GitHub →

About promptfoo

Test your prompts, agents, and RAGs. Red teaming/pentesting/vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration. Used by OpenAI and Anthropic.
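
The "simple declarative configs" mentioned above typically live in a `promptfooconfig.yaml` file. A minimal sketch (the prompt text, variable names, and provider model IDs here are illustrative assumptions, not taken from the repo):

```yaml
# promptfooconfig.yaml — hypothetical example comparing two providers
prompts:
  - "Summarize the following text in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini
  - anthropic:messages:claude-3-5-sonnet-20241022

tests:
  - vars:
      text: "Promptfoo is a tool for testing LLM prompts."
    assert:
      - type: contains
        value: "promptfoo"
```

Running `npx promptfoo@latest eval` against a config like this compares each provider's output side by side, which is what makes it straightforward to wire into CI/CD.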

Topics

ci · ci-cd · cicd · evaluation · evaluation-framework · llm · llm-eval · llm-evaluation · llm-evaluation-framework · llmops · pentesting · prompt-engineering

No rules target promptfoo yet

No published rules, MCP servers, or skills target promptfoo yet. If you maintain a tool that works well with this project, you can publish for free during beta.

Why this page exists

RuleSell tracks the AI-coding ecosystem so you don't have to. When a repo like promptfoo picks up momentum, we surface the Claude Code skills, Cursor rules, MCP servers, and agent configs that target it — with real author attribution, SPDX license badges, and quality scores. Every listing ships with copy-paste install for each environment.