How it works
Install the SDK. Wrap your agent. It records real interactions, reflects between sessions, and rewrites its own behavioral rules — all on top of an immutable identity that cannot drift.
The cycle
Capture: memory.capture() · stream · index
Records real interactions into three-tier memory with semantic search.
Reflect: dreams.trigger() · reconsolidate · journal
Dream cycles extract patterns, synthesize journals, audit existing rules.
Evolve: directives.propose() · provisional → active
Agent writes and applies its own behavioral rules — directives.
Critic: critic.evaluate() · risk · veto · pass
Internal critic evaluates every output against the rules the agent wrote.
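The capture step can be sketched in plain Python. This is a toy illustration, not the SDK's implementation: the tier names, the bag-of-words vectors, and the cosine scoring are all simplifying assumptions standing in for real embeddings and the SDK's actual memory schema.

```python
from collections import Counter
from dataclasses import dataclass, field
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

@dataclass
class TieredMemory:
    """Tier names here are illustrative, not the SDK's actual schema."""
    tiers: dict = field(default_factory=lambda: {"working": [], "episodic": [], "semantic": []})

    def capture(self, tier: str, text: str) -> None:
        """Record one interaction into the named tier."""
        self.tiers[tier].append(text)

    def search(self, query: str, top_k: int = 3) -> list[str]:
        """Semantic recall across all tiers, best matches first."""
        q = Counter(query.lower().split())
        scored = [
            (cosine(q, Counter(text.lower().split())), text)
            for entries in self.tiers.values()
            for text in entries
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [text for score, text in scored[:top_k] if score > 0]

mem = TieredMemory()
mem.capture("episodic", "user asked about refund policy for damaged items")
mem.capture("episodic", "user reported a login failure on mobile")
mem.capture("semantic", "refunds require a receipt within 30 days")
results = mem.search("refund for a damaged order")
```

The point of the tiers is that capture is cheap and unstructured; ranking at recall time is what makes the record searchable.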
Why not just RAG or memory frameworks
Wisdom Layer is composable with RAG, not a replacement. Retrieval makes an agent knowledgeable. Wisdom makes an agent different next month than today.
| Capability | Vanilla LLM | RAG / Memory | Wisdom Layer |
|---|---|---|---|
| Remembers across sessions | ✗ | ✓ | ✓ |
| Semantic recall over experience | ✗ | ✓ | ✓ |
| Reflects on patterns autonomously | ✗ | ✗ | ✓ |
| Proposes its own behavioral rules | ✗ | ✗ | ✓ |
| Critiques its own output before send | ✗ | ✗ | ✓ |
| Detects drift in itself | ✗ | ✗ | ✓ |
| Improves without fine-tuning | ✗ | partial | ✓ |
Use RAG for grounding in external corpora. Use Wisdom Layer for the agent’s evolving relationship to its own work.
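That division of labor can be sketched in a few lines. Everything here is hypothetical: the two-document corpus, the word-overlap retriever, and the lambda rules stand in for a real vector store and the directives a Wisdom Layer agent would propose for itself.

```python
import re

# Hypothetical external corpus -- the RAG side grounds answers in this.
DOCS = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

# Hypothetical learned rules -- the wisdom side checks drafts against these.
DIRECTIVES = [
    lambda draft: "guarantee" not in draft.lower(),  # never promise guarantees
    lambda draft: len(draft) < 400,                  # keep replies short
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str) -> str:
    """Naive retrieval: pick the document sharing the most words with the query."""
    return max(DOCS.values(), key=lambda doc: len(tokens(query) & tokens(doc)))

def critic_pass(draft: str) -> bool:
    """Every learned directive must pass, or the draft is vetoed."""
    return all(rule(draft) for rule in DIRECTIVES)

query = "Can I return this item for a refund with my receipt?"
draft = f"Per policy: {retrieve(query)}"
ok = critic_pass(draft)
```

Retrieval decides what the draft says; the directive check decides whether the draft goes out. The two layers compose without either replacing the other.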
Code example
```python
# pip install "wisdom-layer[ollama]"
import asyncio

from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.llm.ollama import OllamaAdapter
from wisdom_layer.storage.sqlite import SQLiteBackend

async def main() -> None:
    msg = "..."       # incoming user message
    response = "..."  # the agent's drafted reply

    llm = OllamaAdapter(model="llama3.2")
    agent = WisdomAgent(
        agent_id="support-agent",
        config=AgentConfig(name="Support Agent"),
        llm=llm,
        backend=SQLiteBackend("./agent.db"),
    )
    await agent.initialize()

    # Agent remembers this conversation
    await agent.memory.capture("conversation", {"user": msg})

    # Agent reflects overnight — reconsolidates, audits, journals
    report = await agent.dreams.trigger()

    # Agent evaluates its own output against learned rules
    review = await agent.critic.evaluate(response)

asyncio.run(main())
```

What changes
The same agent running for two weeks with the SDK active. Behavioral drift you can graph.
- Generic responses. No context. Starts from zero like every other agent.
- Patterns start appearing. First rules proposed. Memory shapes search results.
- Agent behaves differently. Self-authored rules active. Dream cycles surfacing insights.
- Accumulated judgment that can’t be recreated from scratch. You don’t want to lose it.
Memory is what you’ve stored. Wisdom is what you’ve learned to do differently.
Nature & Nurture
Every Wisdom Layer agent has two layers: an immutable identity set at creation, and a lived experience that accumulates from there.
The boundary is enforced at the architecture layer, not by convention. An agent can propose a new rule for itself. It cannot rewrite its identity.
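The split can be illustrated with a frozen dataclass, though this is only a loose analogue: Python attribute protection cannot provide the tamper resistance of a compiled kernel, which is exactly why the SDK enforces the boundary architecturally rather than by convention. All names below are invented for illustration.

```python
from dataclasses import FrozenInstanceError, dataclass, field

@dataclass(frozen=True)
class Identity:
    """Nature: fixed at creation, immutable thereafter."""
    name: str
    values: tuple[str, ...]

@dataclass
class Agent:
    identity: Identity  # cannot be rewritten after creation
    directives: list[str] = field(default_factory=list)  # nurture: grows over time

    def propose_rule(self, rule: str) -> None:
        """Allowed: experience accumulates as new behavioral rules."""
        self.directives.append(rule)

agent = Agent(Identity(name="support-agent", values=("honest", "concise")))
agent.propose_rule("always cite the relevant policy")

try:
    agent.identity.name = "other"  # disallowed: identity is frozen
    tampered = True
except FrozenInstanceError:
    tampered = False
```

The asymmetry is the point: the mutable list of directives grows with every reflection cycle, while the identity object rejects writes entirely.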
Identity enforcement is architectural, not convention. The _internal/ kernel is Cython-compiled into a tamper-resistant binary. Tier-gated features verify locally against an Ed25519-signed license between periodic license-validation pings. Both cooperate: the SDK won’t silently downgrade, and the genome can’t be edited from agent code.
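A minimal sketch of what local Ed25519 verification looks like, using the third-party cryptography package. The key handling and license payload here are invented for illustration and say nothing about the SDK's actual license format.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the license payload once at issue time.
vendor_key = Ed25519PrivateKey.generate()
license_blob = b'{"tier": "pro", "expires": "2026-01-01"}'  # illustrative payload
signature = vendor_key.sign(license_blob)

# Client side: verify locally against the vendor's public key,
# with no network round-trip needed between validation pings.
public_key = vendor_key.public_key()

def license_is_valid(blob: bytes, sig: bytes) -> bool:
    """True only if the signature matches this exact payload."""
    try:
        public_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

valid = license_is_valid(license_blob, signature)
forged = license_is_valid(b'{"tier": "enterprise"}', signature)
```

Because any edit to the payload invalidates the signature, a tier upgrade can't be forged locally; only the holder of the signing key can issue a valid license.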
Wisdom Layer runs in-process. Memory contents, directive bodies, prompts, and responses never leave your infrastructure. Free installs send anonymous counts (agent count, memory count, version) by default to help us improve the SDK — opt out with one command, or use Pro, Team, or Enterprise where telemetry is off by default.
Full detail on the privacy page.
Outcomes from earlier work
7 agents running continuously for 6+ months on a digital-brain architecture inspired by functional neuroscience. Source material for the SDK and the benchmarks published on /benchmarks.
A computational pharmacogenomics platform built on the pre-SDK architecture is producing cross-platform-validated biomarker candidates. Active collaboration discussions with academic cancer centers.
An internal AI-assisted coding tool built on the same architecture runs across 20+ active engineering repositories. Each repo accumulates hundreds of memories that distill into 10–15 targeted directives, measurably reducing error rates in agent-driven code.
Install in 30 seconds. Drop into your existing agent — no rebuild, no migration.