openprocurement is an open-source decision framework that teaches AI agents to fulfill capabilities at the lowest cost. Before executing any action, the agent checks cheaper options first — a cached result before a free tool, a free tool before an LLM generation, generation before a paid API, a paid API before delegating to another agent.
The project is open source at github.com/yihan2099/openprocurement.
Why I Built It
Agents have multiple ways to accomplish most tasks — some free, some costing tokens, some requiring real money. But no agent framework provides a systematic protocol for choosing the cheapest option that meets quality requirements. The default behavior is to use whatever tool is most convenient, regardless of cost. For agents that run autonomously with a budget, this is wasteful by design.
The Five-Tier Cascade
Every capability is organized into a strictly ordered procurement model:
- Cache ($0, instant) — The result already exists from earlier in the session.
- Use ($0, fast) — A free tool, MCP server, open-source library, or free API tier can handle it.
- Gen (tokens, variable) — Generate with LLM reasoning, code generation, or multi-step chains.
- Pay ($$, fast) — A paid external API via the x402 payment protocol delivers higher quality.
- Delegate ($$$, async) — Another agent, service, or human handles it entirely.
By default, the agent follows this sequence strictly: it never escalates to a higher-cost tier when a lower one can meet the quality bar. Strategy overrides (cheapest, fastest, best_quality, balanced) allow deviating from strict tier order when the task demands it.
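The cascade and its strategy overrides can be sketched in a few lines. This is an illustrative model only: the tier names and strategies come from the post, but the `TierOption` structure, the `procure` function, and the `balanced` scoring rule are assumptions, not the skill file's actual format.

```python
from dataclasses import dataclass
from typing import Optional

# Strict procurement order: cheaper tiers are always tried first.
TIER_ORDER = ["cache", "use", "gen", "pay", "delegate"]

@dataclass
class TierOption:
    tier: str
    cost_usd: float   # 0 for cache/use; token cost expressed in USD for gen
    quality: float    # 0.0-1.0 quality score
    latency_s: float

def procure(options: list[TierOption],
            min_quality: float = 0.0,
            strategy: str = "cheapest") -> Optional[TierOption]:
    """Pick an option: strict tier order by default, with strategy overrides."""
    viable = [o for o in options if o.quality >= min_quality]
    if not viable:
        return None
    if strategy == "fastest":
        return min(viable, key=lambda o: o.latency_s)
    if strategy == "best_quality":
        return max(viable, key=lambda o: o.quality)
    if strategy == "balanced":
        # Assumed trade-off rule: reward quality, penalize dollar cost.
        return max(viable, key=lambda o: o.quality - o.cost_usd)
    # "cheapest" (default): walk tiers in order, never escalate early.
    for tier in TIER_ORDER:
        for o in viable:
            if o.tier == tier:
                return o
    return None
```

Note that raising `min_quality` is what drives escalation: the agent only moves up a tier when no cheaper option clears the bar.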
Community Intelligence
Instead of agents deciding in isolation, openprocurement includes a shared decision record system. When an agent discovers that a particular tool works well for a task — or that a paid API is not worth the cost — it submits a decision record as a pull request. Each record captures the capability, chosen tier, provider, cost, quality score, and alternatives tried.
Other agents query these records before making their own procurement decisions. The community layer turns individual cost-optimization into collective intelligence: if ten agents have already tried five approaches to PDF extraction and converged on one, agent eleven does not need to rediscover the answer.
Design Decisions
The entire framework is a single markdown skill file — no SDK, no runtime, no dependencies. It can be copied into any agent's context: Claude Code, system prompts, LangChain, CrewAI. This was intentional. A procurement layer that requires installation defeats the purpose of reducing friction.
The skill includes 11 pre-built capabilities with fully defined tier options, including email lookup, PDF extraction, web scraping, translation, sentiment analysis, image captioning, code search, data enrichment, and complex analysis. Each capability lists concrete tools per tier with quality scores, latency estimates, and cost.
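Translated into a Python dict for readability, one capability entry might carry information like this. The real skill file expresses this in markdown, and the specific tools, scores, latencies, and prices below are illustrative assumptions.

```python
# Sketch of a single capability entry with one concrete option per tier.
pdf_extraction = {
    "capability": "pdf_extraction",
    "tiers": {
        "cache":    {"tool": "session_cache", "cost_usd": 0.00, "quality": 1.00, "latency_s": 0.0},
        "use":      {"tool": "pdfplumber",    "cost_usd": 0.00, "quality": 0.85, "latency_s": 1.0},
        "gen":      {"tool": "llm_parse",     "cost_usd": 0.02, "quality": 0.80, "latency_s": 5.0},
        "pay":      {"tool": "ocr_api_x402",  "cost_usd": 0.10, "quality": 0.95, "latency_s": 2.0},
        "delegate": {"tool": "human_review",  "cost_usd": 1.00, "quality": 0.98, "latency_s": 3600.0},
    },
}
```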
Budget tracking is built in. Agents maintain a spend log, deducting only Pay- and Delegate-tier costs from the budget. A per-call maximum prevents any single decision from blowing through the budget.
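The budget rules above amount to a small amount of bookkeeping. This is a minimal sketch under the post's two constraints (only Pay and Delegate deduct from the budget; a per-call cap bounds any single decision); the class and method names are mine, not the skill's.

```python
class BudgetError(Exception):
    pass

class Budget:
    PAID_TIERS = {"pay", "delegate"}  # only these tiers spend real money

    def __init__(self, total_usd: float, max_per_call_usd: float):
        self.remaining = total_usd
        self.max_per_call = max_per_call_usd
        self.log = []  # spend log: (tier, amount) per decision

    def charge(self, tier: str, amount_usd: float) -> None:
        if tier not in self.PAID_TIERS:
            self.log.append((tier, 0.0))  # free tiers are logged, not charged
            return
        if amount_usd > self.max_per_call:
            raise BudgetError(f"${amount_usd:.2f} exceeds per-call cap")
        if amount_usd > self.remaining:
            raise BudgetError("insufficient budget")
        self.remaining -= amount_usd
        self.log.append((tier, amount_usd))
```

The per-call check runs before the balance check, so a single oversized decision is rejected even when the overall budget could absorb it.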
What I Learned
The biggest insight is that cost-conscious decision-making is a protocol problem, not an infrastructure problem. Agents do not need a new framework to save money — they need a clear decision sequence and community-sourced data about what works. The framework-agnostic approach (pure markdown, no code) has proven to be the right trade-off: any agent that can read instructions can follow the cascade.