How it works

How a Wisdom Layer agent learns

  1. Capture

    memory.capture()

    stream · index

  2. Reflect

    dreams.trigger()

    reconsolidate · journal

  3. Evolve

    directives.propose()

    provisional → active

  4. Critic

    critic.evaluate()

    risk · veto · pass


Install the SDK. Wrap your agent. It records real interactions, reflects between sessions, and rewrites its own behavioral rules — all on top of an immutable identity that cannot drift.

The cycle

Four subsystems running in a cycle.

Capture

Records real interactions into three-tier memory with semantic search.

See API
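The Capture step can be pictured as a tiered store with similarity search over everything the agent has seen. The sketch below is illustrative only: the tier names, the `embed` function, and the `ThreeTierMemory` class are assumptions for this page, not the SDK's internals (a real system would use a learned embedding model, not bag-of-words counts).

```python
import math
from collections import Counter, deque

def embed(text):
    # Toy embedding: bag-of-words term counts. A real system
    # would use an embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ThreeTierMemory:
    """Toy three-tier store: a bounded working tier for recency,
    an append-only episodic log, and a distilled long-term tier."""
    def __init__(self, working_size=5):
        self.working = deque(maxlen=working_size)  # most recent turns
        self.episodic = []                         # full interaction log
        self.longterm = []                         # distilled summaries

    def capture(self, kind, payload):
        entry = {"kind": kind, "text": str(payload), "vec": embed(str(payload))}
        self.working.append(entry)
        self.episodic.append(entry)

    def search(self, query, k=2):
        # Rank stored experience by similarity to the query.
        q = embed(query)
        pool = self.episodic + self.longterm
        ranked = sorted(pool, key=lambda e: cosine(q, e["vec"]), reverse=True)
        return [e["text"] for e in ranked[:k]]

mem = ThreeTierMemory()
mem.capture("conversation", "user asked about refund policy")
mem.capture("conversation", "user reported a login bug")
print(mem.search("refund policy", k=1))  # → ['user asked about refund policy']
```

The point of the shape: capture is cheap and append-only, while search cuts across tiers so old experience stays reachable.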

Reflect

Dream cycles extract patterns, synthesize journals, audit existing rules.

See API
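A reflection pass of this kind can be sketched as a scan over captured episodes that promotes recurring themes into journal notes. The `dream_cycle` function, its tag scheme, and the `min_support` threshold are hypothetical, chosen to illustrate the idea rather than mirror the SDK's `dreams.trigger()`.

```python
from collections import Counter

def dream_cycle(episodes, min_support=2):
    # Count how often each topic tag recurs across episodes,
    # then surface the ones seen at least min_support times.
    topics = Counter(tag for ep in episodes for tag in ep["tags"])
    patterns = [t for t, n in topics.items() if n >= min_support]
    journal = [f"Recurring pattern: '{t}' seen {topics[t]}x" for t in patterns]
    return {"patterns": patterns, "journal": journal}

episodes = [
    {"tags": ["refund", "frustrated"]},
    {"tags": ["refund"]},
    {"tags": ["login-bug"]},
]
report = dream_cycle(episodes)
print(report["patterns"])  # → ['refund']
```

One-off events stay in the episodic log; only patterns that repeat earn a journal entry, which is what makes the output stable enough to audit rules against.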

Evolve

Agent writes and applies its own behavioral rules — directives.

See API
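The provisional → active lifecycle named in the card above can be modeled as a small state machine: a rule the agent proposes starts provisional and is only promoted once enough independent reflections support it. `DirectiveBook`, `PROMOTE_AT`, and the evidence counter are illustrative assumptions, not the SDK's `directives.propose()` semantics.

```python
from dataclasses import dataclass

@dataclass
class Directive:
    text: str
    status: str = "provisional"  # provisional → active
    evidence: int = 1            # reflections that support this rule

class DirectiveBook:
    """Toy lifecycle: self-proposed rules start provisional and are
    promoted to active once enough evidence accumulates."""
    PROMOTE_AT = 3

    def __init__(self):
        self.rules = {}

    def propose(self, text):
        d = self.rules.get(text)
        if d is None:
            self.rules[text] = Directive(text)
            return
        d.evidence += 1
        if d.status == "provisional" and d.evidence >= self.PROMOTE_AT:
            d.status = "active"

    def active(self):
        return [d.text for d in self.rules.values() if d.status == "active"]

book = DirectiveBook()
for _ in range(3):
    book.propose("Ask for an order number before promising a refund")
print(book.active())
```

The promotion gate is the safety property: a single noisy reflection can suggest a rule, but it takes repeated evidence before that rule starts steering behavior.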

Critic

Internal critic evaluates every output against the rules the agent wrote.

See API
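The risk · veto · pass flow can be sketched as scoring a draft response against the agent's learned rules before it is sent. The `evaluate` function below is an assumed shape, not the SDK's `critic.evaluate()`; the rule checks and the 0.5 veto threshold are placeholders for illustration.

```python
def evaluate(response, directives):
    # Each directive is (rule text, predicate over the draft response).
    violations = [rule for rule, check in directives if not check(response)]
    risk = len(violations) / max(len(directives), 1)
    verdict = "veto" if risk > 0.5 else "pass"
    return {"risk": risk, "verdict": verdict, "violations": violations}

directives = [
    ("never promise a delivery date", lambda r: "guarantee" not in r.lower()),
    ("always offer a next step", lambda r: "you can" in r.lower()),
]

print(evaluate("We guarantee delivery Friday.", directives)["verdict"])       # → veto
print(evaluate("You can track your order at any time.", directives)["verdict"])  # → pass
```

Because the directives themselves came from the Evolve step, the critic is checking output against rules the agent wrote, which closes the loop described above.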

Why not just RAG or memory frameworks?

Composable with RAG. Not a replacement.

Retrieval makes an agent knowledgeable. Wisdom makes an agent different next month than today.

Capability                               Vanilla LLM   RAG / Memory   Wisdom Layer
Remembers across sessions                ✗             ✓              ✓
Semantic recall over experience          ✗             ✓              ✓
Reflects on patterns autonomously        ✗             ✗              ✓
Proposes its own behavioral rules        ✗             ✗              ✓
Critiques its own output before send     ✗             ✗              ✓
Detects drift in itself                  ✗             ✗              ✓
Improves without fine-tuning             ✗             partial        ✓

Use RAG for grounding in external corpora. Use Wisdom Layer for the agent’s evolving relationship to its own work.

Code example

Same shape as any LLM call you’re writing today.

```python
# pip install "wisdom-layer[ollama]"
from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.llm.ollama import OllamaAdapter
from wisdom_layer.storage.sqlite import SQLiteBackend

llm = OllamaAdapter(model="llama3.2")
agent = WisdomAgent(
    agent_id="support-agent",
    config=AgentConfig(name="Support Agent"),
    llm=llm,
    backend=SQLiteBackend("./agent.db"),
)
await agent.initialize()

# Agent remembers this conversation
await agent.memory.capture("conversation", {"user": msg})

# Agent reflects overnight — reconsolidates, audits, journals
report = await agent.dreams.trigger()

# Agent evaluates its own output against learned rules
review = await agent.critic.evaluate(response)
```

See the full API reference

What changes

Same agent, same model, same prompts.

The same agent running for two weeks with the SDK active. Behavioral change you can graph.

  1. Day 1

    Generic responses. No context. Starts from zero like every other agent.

  2. Day 3

    Patterns start appearing. First rules proposed. Memory shapes search results.

  3. Day 7

    Agent behaves differently. Self-authored rules active. Dream cycles surfacing insights.

  4. Day 14+

    Accumulated judgment that can’t be recreated from scratch. You don’t want to lose it.

Memory is what you’ve stored. Wisdom is what you’ve learned to do differently.

Nature & Nurture

Two layers. One stays still so the other can grow.

Every Wisdom Layer agent has two layers: an immutable identity set at creation, and a lived experience that accumulates from there.

Nature (immutable)
  • Role
  • Goals
  • Permanent rules
  • Safety boundaries

Nurture (grows)
  • Memories
  • Corrections
  • Learned directives
  • Behavioral journals

The boundary is enforced at the architecture layer, not by convention. An agent can propose a new rule for itself. It cannot rewrite its identity.
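The boundary above can be illustrated in ordinary Python: identity fields are frozen at construction, while experience stays mutable. This is a teaching sketch, not how the SDK enforces it (the source describes a compiled kernel); the field names and classes here are assumptions.

```python
from dataclasses import dataclass, field, FrozenInstanceError

@dataclass(frozen=True)
class Nature:
    """Immutable identity, fixed at creation."""
    role: str
    goals: tuple
    safety: tuple

@dataclass
class Nurture:
    """Mutable lived experience that accumulates on top of Nature."""
    memories: list = field(default_factory=list)
    directives: list = field(default_factory=list)

nature = Nature(role="support agent",
                goals=("resolve tickets",),
                safety=("never share customer data",))
nurture = Nurture()

# Allowed: nurture grows.
nurture.directives.append("Ask for an order number first")

# Blocked: identity cannot be rewritten after creation.
try:
    nature.role = "sales agent"
except FrozenInstanceError:
    print("identity edit rejected")
```

In-process, a frozen dataclass is only a convention; the SDK's claim is that the same split holds architecturally, where agent code cannot reach the identity at all.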

Implementation: How is identity enforced?

Identity enforcement is architectural, not conventional. The _internal/ kernel is Cython-compiled into a tamper-resistant binary. Tier-gated features verify locally against an Ed25519-signed license between periodic license-validation pings. The two mechanisms cooperate: the SDK won't silently downgrade, and the genome can't be edited from agent code.

Wisdom Layer runs in-process. Memory contents, directive bodies, prompts, and responses never leave your infrastructure. Free installs send anonymous counts (agent count, memory count, version) by default to help us improve the SDK — opt out with one command, or use Pro, Team, or Enterprise where telemetry is off by default.

Full detail on the privacy page.

Outcomes from earlier work

Capability proven before the SDK shipped.

Persistent agent research

7 agents running continuously for 6+ months on a digital-brain architecture inspired by functional neuroscience. Source material for the SDK and the benchmarks published on /benchmarks.

Scientific discovery

A computational pharmacogenomics platform built on the pre-SDK architecture is producing cross-platform-validated biomarker candidates. Active collaboration discussions with academic cancer centers.

Dogfooded daily

An internal AI-assisted coding tool built on the same architecture runs across 20+ active engineering repositories. Each repo accumulates hundreds of memories that distill into 10–15 targeted directives, measurably reducing error rates in agent-driven code.

Ready to try it?

Install in 30 seconds. Drop into your existing agent — no rebuild, no migration.