4 reasoning strategies. Token-budgeted memory. 23 hook points.
MCP-native tool integration. Config-driven. Provider-agnostic.
No boilerplate. No framework lock-in. One YAML, one agent.
Define identity, provider, reasoning strategy, and tools in a single YAML file.
```yaml
name: riker
provider: anthropic
reasoning: react
memory:
  budget_tokens: 100000
  compaction: summarizing
```
Add capabilities through AddOns and MCP tools. Mix local and remote.
```yaml
addons:
  - prompt_builder
  - skills
  - mcp_router
tools:
  - module: mcp_shell_tools.shell
  - uri: https://mcp.example.com
```
Run anywhere. CLI, Mattermost, REST API, Jupyter, or your own frontend.
```console
$ heinzel serve --config agent.yaml
✓ Agent "riker" ready
✓ 3 addons loaded
✓ 95 tools available
✓ ReAct reasoning active
```
A cognitive architecture with five subsystems — rules first, LLM when needed.
System 1 (rules, facts, inference) handles what it can. System 2 (LLM) handles the rest. Kahneman's two systems, but for agents.
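Heinzel's internals aren't shown here, but the rules-first idea can be sketched in plain Python: try cheap deterministic rules before spending tokens on the LLM. Class and method names below are illustrative, not Heinzel's API.

```python
from typing import Callable

class DualProcessDispatcher:
    """Illustrative sketch: System 1 rules first, System 2 LLM as fallback."""

    def __init__(self, llm: Callable[[str], str]):
        self.rules: list[tuple[Callable[[str], bool], Callable[[str], str]]] = []
        self.llm = llm

    def rule(self, matches: Callable[[str], bool]):
        """Register a System 1 rule: a cheap predicate plus a handler."""
        def decorator(handler: Callable[[str], str]):
            self.rules.append((matches, handler))
            return handler
        return decorator

    def answer(self, query: str) -> str:
        # System 1: deterministic rules and facts, no tokens spent.
        for matches, handler in self.rules:
            if matches(query):
                return handler(query)
        # System 2: fall back to the LLM for open-ended requests.
        return self.llm(query)

dispatcher = DualProcessDispatcher(llm=lambda q: f"[LLM] {q}")

@dispatcher.rule(lambda q: "2 + 2" in q)
def arithmetic(query: str) -> str:
    return "4"

print(dispatcher.answer("What is 2 + 2?"))   # → 4 (rule, no LLM call)
print(dispatcher.answer("Write a haiku"))    # → [LLM] Write a haiku
```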
95 tools across 5 plugins. Direct (in-process), remote (HTTP), or stdio. Or build your own in minutes.
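A direct (in-process) tool can be as simple as a registered function whose signature doubles as its schema. This is a hypothetical sketch of the pattern, not Heinzel's actual registration API:

```python
import inspect

# Hypothetical in-process tool registry: a tool is a named function
# plus metadata derived from its signature.
TOOLS: dict[str, dict] = {}

def tool(fn):
    """Register a plain function as a direct tool."""
    params = list(inspect.signature(fn).parameters)
    TOOLS[fn.__name__] = {"fn": fn, "params": params, "doc": fn.__doc__ or ""}
    return fn

@tool
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

def call_tool(name: str, **kwargs):
    return TOOLS[name]["fn"](**kwargs)

print(call_tool("word_count", text="one two three"))  # → 3
```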
Claude, GPT, Gemini, Ollama. Switch at runtime with fallback and health checks. No vendor lock-in.
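The fallback-with-health-check behavior can be sketched without any Heinzel imports: try providers in registration order, skip unhealthy ones, and fall through on errors. Names and signatures here are assumptions for illustration only.

```python
from typing import Callable

class ProviderPool:
    """Illustrative provider fallback: health check first, then try in order."""

    def __init__(self):
        self.providers: list[tuple[str, Callable, Callable]] = []

    def register(self, name, complete, healthy=lambda: True):
        self.providers.append((name, complete, healthy))

    def complete(self, prompt: str) -> str:
        last_error = None
        for name, complete, healthy in self.providers:
            if not healthy():
                continue  # health check failed: skip without calling
            try:
                return complete(prompt)
            except Exception as exc:
                last_error = exc  # remember, fall through to next provider
        raise RuntimeError("all providers unavailable") from last_error

def anthropic_down(prompt: str) -> str:
    raise TimeoutError("simulated outage")

pool = ProviderPool()
pool.register("gpt", lambda p: f"[gpt] {p}", healthy=lambda: False)  # skipped
pool.register("anthropic", anthropic_down)            # raises, falls through
pool.register("ollama", lambda p: f"[ollama] {p}")    # succeeds
print(pool.complete("hello"))  # → [ollama] hello
```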
One YAML file defines a complete agent. Identity, tools, reasoning, memory. No code required.
Token-budgeted with compaction. Facts survive sessions. Your agent remembers — and learns.
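The token-budget idea reduces to a simple invariant: when stored context exceeds the budget, compact the oldest entries into a summary. This toy sketch approximates tokens by word count; a real implementation would use the provider's tokenizer and a summarizing LLM call.

```python
class BudgetedMemory:
    """Toy sketch of token-budgeted memory with summarizing compaction."""

    def __init__(self, budget_tokens: int, summarize):
        self.budget = budget_tokens
        self.summarize = summarize  # in Heinzel this would be an LLM call
        self.entries: list[str] = []

    def _tokens(self, text: str) -> int:
        return len(text.split())  # crude word-count proxy for a tokenizer

    def total(self) -> int:
        return sum(self._tokens(e) for e in self.entries)

    def add(self, text: str):
        self.entries.append(text)
        while self.total() > self.budget and len(self.entries) > 1:
            # Over budget: compact the two oldest entries into one summary.
            merged = self.entries[0] + " " + self.entries[1]
            self.entries[:2] = [self.summarize(merged)]

# Stand-in summarizer: keep the first three words.
mem = BudgetedMemory(budget_tokens=8,
                     summarize=lambda t: " ".join(t.split()[:3]) + " ...")
for turn in ["the user asked about yaml config files",
             "the agent explained provider fallback rules",
             "the user said thanks"]:
    mem.add(turn)
print(mem.entries)  # oldest turns compacted, newest kept verbatim
```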
AddOns plug into the pipeline at 23 points. Priority, chain, halt, error-isolated. Full control.
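The hook semantics (priority ordering, chaining, halting, error isolation) can be sketched in a few lines. The hook-point name and API below are made up for illustration; only the behaviors match the description above.

```python
class HookPoint:
    """Illustrative hook point: priority-ordered, chaining, halting, isolated."""

    def __init__(self, name: str):
        self.name = name
        self.hooks: list[tuple[int, object]] = []

    def register(self, fn, priority: int = 100):
        self.hooks.append((priority, fn))
        self.hooks.sort(key=lambda pair: pair[0])  # lower priority runs first

    def run(self, payload: dict) -> dict:
        for _, fn in self.hooks:
            try:
                result = fn(payload)
            except Exception:
                continue  # error isolation: a failing addon can't break the chain
            if result is None:
                return payload  # halt: stop the chain, keep current payload
            payload = result  # chain: each hook sees the previous hook's output
        return payload

pre_prompt = HookPoint("pre_prompt")  # hypothetical hook-point name
pre_prompt.register(lambda p: {**p, "prompt": p["prompt"].strip()}, priority=10)
pre_prompt.register(lambda p: 1 / 0, priority=20)  # faulty addon, isolated
pre_prompt.register(lambda p: {**p, "prompt": p["prompt"].upper()}, priority=30)
print(pre_prompt.run({"prompt": "  hello  "}))  # → {'prompt': 'HELLO'}
```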
No CLI required. Import, configure, run.
We built what we couldn't find elsewhere.
| Feature | Heinzel | LangChain | CrewAI | Claude SDK |
|---|---|---|---|---|
| Reasoning Strategies | 4 | × | × | × |
| Token-budgeted Memory | ✓ | × | × | × |
| Hook Points | 23 | × | × | × |
| MCP-Native | ✓ | × | × | ✓ |
| Provider-Agnostic | ✓ | ✓ | × | × |
| Config-Driven | ✓ | × | partial | × |
**Is this just another LangChain?** No. LangChain chains LLM calls; Heinzel thinks before it calls. Rules and facts are checked first, and the LLM is consulted only when creativity or language understanding is needed.

**Do I need Docker?** No. `pip install heinzel-core` is all you need. Docker is optional for multi-agent deployments.

**Can I use my own LLM provider?** Yes. Anthropic, OpenAI, Google, Ollama, or any OpenAI-compatible endpoint. Switch providers at runtime with automatic fallback.

**Is it production-ready?** 933 tests, 5 MVPs, and production use since February 2026. Pre-1.0 but battle-tested: the architecture is stable, though the API may still evolve.