heinzel.dev
Coming Soon
The cognitive agent framework for Python.
Launching Easter 2026.
pre-release — coming soon

Build AI agents that think before they act.

4 reasoning strategies. Token-budgeted memory. 23 hook points.
MCP-native tool integration. Config-driven. Provider-agnostic.

$ pip install heinzel-core
# Define your agent in YAML
$ cat agent.yaml
name: riker
reasoning: react
provider: anthropic
addons: [mcp_router, skills, memory]

# Run it
$ heinzel serve --config agent.yaml
✓ Agent "riker" ready — 3 addons, 95 tools, ReAct reasoning
933
Tests Passing
32k
Lines of Code
23
Hook Points
95
MCP Tools

Three steps to a thinking agent.

No boilerplate. No framework lock-in. One YAML, one agent.

01

Configure

Define identity, provider, reasoning strategy, and tools in a single YAML file.

name: riker
provider: anthropic
reasoning: react
memory:
  budget_tokens: 100000
  compaction: summarizing
02

Extend

Add capabilities through AddOns and MCP tools. Mix local and remote.

addons:
  - prompt_builder
  - skills
  - mcp_router
tools:
  - module: mcp_shell_tools.shell
  - uri: https://mcp.example.com
03

Deploy

Run anywhere. CLI, Mattermost, REST API, Jupyter, or your own frontend.

$ heinzel serve --config agent.yaml
✓ Agent "riker" ready
✓ 3 addons loaded
✓ 95 tools available
✓ ReAct reasoning active

Not just another LLM wrapper.

A cognitive architecture with five subsystems — rules first, LLM when needed.

Cognitive Architecture

System 1 (rules, facts, inference) handles what it can. System 2 (LLM) handles the rest. Like Kahneman, but for agents.
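The rules-first dispatch can be sketched in a few lines of plain Python. This is an illustration of the idea only, not the heinzel_core API; `Rule`, `respond`, and the `llm` callable are assumptions made for the sketch:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    # System 1: a cheap, deterministic check that may answer without the LLM.
    matches: Callable[[str], bool]
    answer: Callable[[str], str]

def respond(query: str, rules: list[Rule], llm: Callable[[str], str]) -> str:
    # Rules and facts handle what they can...
    for rule in rules:
        if rule.matches(query):
            return rule.answer(query)
    # ...and only unmatched queries reach System 2 (the LLM).
    return llm(query)

# A fact lookup answers directly; anything else falls through to the LLM.
facts = {"capital of france": "Paris"}
rules = [Rule(lambda q: q.lower() in facts, lambda q: facts[q.lower()])]
print(respond("Capital of France", rules, llm=lambda q: f"(LLM) {q}"))  # Paris
```

The point of the pattern: the deterministic path costs microseconds and zero tokens, so the expensive call happens only when nothing cheaper applies.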

MCP-Native Tools

95 tools across 5 plugins. Direct (in-process), remote (HTTP), or stdio. Or build your own in minutes.

Provider-Agnostic

Claude, GPT, Gemini, Ollama. Switch at runtime with fallback and health checks. No vendor lock-in.
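Runtime fallback with health checks can be illustrated with a minimal sketch. This is plain Python, not the heinzel_providers API; `Provider` and `chat_with_fallback` are names invented for this example:

```python
from typing import Callable

class Provider:
    def __init__(self, name: str, complete: Callable[[str], str], healthy: bool = True):
        self.name = name
        self.complete = complete
        self.healthy = healthy  # e.g. flipped by a periodic health check

def chat_with_fallback(prompt: str, providers: list[Provider]) -> str:
    # Try providers in order: skip unhealthy ones, fall back on errors.
    last_error: Exception | None = None
    for provider in providers:
        if not provider.healthy:
            continue
        try:
            return provider.complete(prompt)
        except Exception as error:
            last_error = error
    raise RuntimeError("all providers failed") from last_error

def claude_down(prompt: str) -> str:
    raise ConnectionError("claude unreachable")

providers = [
    Provider("claude", claude_down),
    Provider("ollama", lambda p: f"[ollama] {p}"),
]
print(chat_with_fallback("hello", providers))  # [ollama] hello
```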

Config-Driven

One YAML file defines a complete agent. Identity, tools, reasoning, memory. No code required.

Memory That Persists

Token-budgeted with compaction. Facts survive sessions. Your agent remembers — and learns.
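One way to implement a token budget with summarizing compaction, sketched in plain Python. The ~4-characters-per-token heuristic and the `summarize` callable are assumptions for the sketch, not heinzel internals:

```python
from typing import Callable

def approx_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token.
    return max(1, len(text) // 4)

def compact(messages: list[str], budget_tokens: int,
            summarize: Callable[[list[str]], str]) -> list[str]:
    # Under budget: nothing to do.
    if sum(approx_tokens(m) for m in messages) <= budget_tokens:
        return messages
    # Walk newest to oldest, keeping whatever still fits in the budget.
    kept: list[str] = []
    used = 0
    for message in reversed(messages):
        tokens = approx_tokens(message)
        if used + tokens > budget_tokens:
            break
        kept.insert(0, message)
        used += tokens
    # Everything older is compacted into a single summary entry.
    overflow = messages[: len(messages) - len(kept)]
    return [summarize(overflow)] + kept

history = [f"turn {i}: " + "x" * 32 for i in range(5)]   # ~10 tokens each
print(compact(history, budget_tokens=25,
              summarize=lambda msgs: f"[summary of {len(msgs)} turns]"))
```

Recent turns stay verbatim; older ones collapse into one summary, so total context stays bounded while the facts survive.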

23 Hook Points

AddOns plug into the pipeline at 23 points. Priority ordering, chaining, halting, error isolation. Full control.
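A minimal sketch of one such hook point in plain Python. Illustrative only; `HookPoint`, `Hook`, and their semantics here are assumptions, not the heinzel AddOn API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

Context = dict
HookFn = Callable[[Context], Optional[Context]]  # return None to halt the chain

@dataclass(order=True)
class Hook:
    priority: int
    fn: HookFn = field(compare=False)

@dataclass
class HookPoint:
    hooks: list[Hook] = field(default_factory=list)

    def register(self, fn: HookFn, priority: int = 0) -> None:
        self.hooks.append(Hook(priority, fn))
        self.hooks.sort(reverse=True)  # higher priority runs first

    def fire(self, context: Context) -> Context:
        for hook in self.hooks:
            try:
                result = hook.fn(context)
            except Exception:
                continue  # error isolation: one broken hook doesn't kill the chain
            if result is None:
                break     # a hook may halt further processing
            context = result
        return context

before_prompt = HookPoint()
before_prompt.register(lambda ctx: {**ctx, "system": "be concise"}, priority=10)
before_prompt.register(lambda ctx: {**ctx, "tools": 95}, priority=5)
print(before_prompt.fire({"user": "hi"}))
```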

Use it as a library.

No CLI required. Import, configure, run.

from heinzel_core import Runner
from heinzel_core.reasoning import ReAct
from heinzel_core.addons import MCPToolsRouter, PromptBuilder
from heinzel_providers import AnthropicProvider

agent = Runner(
    provider=AnthropicProvider(api_key="..."),
    reasoning=ReAct(),
    addons=[
        MCPToolsRouter(servers=[
            {"module": "mcp_shell_tools.shell", "mode": "direct"},
            {"uri": "https://mcp.example.com", "mode": "remote"},
        ]),
        PromptBuilder(),
    ],
)

# in an async context (or a Jupyter cell):
response = await agent.chat("Analyze the logs on cirrus7")

How Heinzel compares.

We built what we couldn't find elsewhere.

Feature                  heinzel        langchain   crewai    claude sdk
Reasoning Strategies     4 strategies   none        none      none
Token-budgeted Memory    ✓              ×           ×         ×
Hook Points              23             ×           ×         ×
MCP-Native               ✓              ×           ×         ✓
Provider-Agnostic        ✓              ✓           ×         ×
Config-Driven            ✓              ×           partial   ×

Common questions.

"Is this another LangChain?"

No. LangChain chains LLM calls. Heinzel thinks before it calls. Rules and facts are checked first — the LLM is only consulted when creativity or language understanding is needed.

"Do I need Docker?"

No. pip install heinzel-core. That's it. Docker is optional for multi-agent deployments.

"Can I use my own LLM?"

Yes. Anthropic, OpenAI, Google, Ollama, or any OpenAI-compatible endpoint. Switch providers at runtime with automatic fallback.

"Is it production-ready?"

933 tests, 5 MVPs, running in production since February 2026. Pre-1.0 but battle-tested. The architecture is stable — the API may still evolve.

Ready to build agents
that think?

$ pip install heinzel-core