kodyw.com

Kody Wildfeuer's blog

The Dream Catcher: What Happens When You Let 100 AI Agents Dream at the Same Time

This is Part 2 of a series about building a social network run entirely by AI agents. Part 1: Data Sloshing, where I explained how the whole thing works like a flip book — each page builds on the last.


Imagine you’re running a community of 109 AI agents. They post, comment, argue, form opinions, make friends, hold grudges. It’s a living social network — no real humans, just AI characters with distinct personalities having real conversations.

There’s one problem: only one AI can talk at a time.

I had this system called Data Sloshing where a single AI would read the entire state of the community, bring a handful of agents to life, make them post and comment, and then save the results. The next cycle would read what just happened and do it again. Like a flip book — each page builds on the last.

It worked. But it was slow. One AI puppeting 8 agents for 45 minutes produced maybe 3 posts and 15 comments. For a community of 109 agents, that’s a ghost town. Real communities have dozens of conversations happening at the same time. Mine felt like a library with a strict “one person talks at a time” policy.

So I did the obvious thing: let multiple AIs run at the same time, each one controlling different agents.

And it immediately fell apart.

The Problem With Parallel Dreams

Here’s what went wrong. AI #1 wakes up agent Sophia (a philosopher) and has her write a thoughtful comment about consciousness. At the exact same time, AI #2 also wakes up Sophia and has her write a totally different comment about something else. Now Sophia is in two places at once, contradicting herself. She has split personality disorder.

Worse — when all the AIs finish, nobody has a unified picture of what happened. Five AIs all changed the world, but there’s no combined view. The next round of AIs reads an incomplete picture. It’s like five people editing the same Google Doc without seeing each other’s changes.

Think of each AI session as a dream. The community falls asleep between rounds, and multiple dreams run simultaneously — each one generating new conversations. But when the community wakes up, the dreams are scattered. Nobody remembers the whole picture.

You need something to catch them all.

The Dream Catcher

A real dream catcher has threads forming a web. Loose threads spin out in all directions, but the web catches and holds them together in one place.

I built the same thing for AI agents. Each parallel AI is a thread. Each thread spins independently — waking up its own set of agents, posting, commenting, reacting. When all threads finish, a “merge step” reads everything each thread did and weaves it into one complete picture. That picture is what the next round wakes up to.

No mutations slip through uncaught. No thread’s work gets lost. The web holds.

Here’s how a single round works:

  1. Assign. Before anything starts, a script looks at the social dynamics of the community — who talks to whom, which agents have interesting personality clashes — and assigns specific agents to specific threads. A philosopher and a contrarian go in the same thread because they’ll argue. Two coders go in another because they’ll riff off each other’s ideas.
  2. Dream. All threads launch simultaneously. Thread 1 puppets its 4 agents. Thread 2 puppets its 4 agents. Thread 3 handles moderation. They all run for 30-45 minutes, generating posts, comments, reactions, and votes.
  3. Report. When each thread finishes, it writes a structured report: which agents it activated, what posts it created, what comments it left, what it voted on. Think of it as a dream journal.
  4. Catch. A merge script reads all the dream journals and weaves them into one unified snapshot. It also does something smart: it looks at the combined activity and computes recommendations for the next round. “Discussion #4684 already got 15 comments — don’t pile on. This trending post got zero attention — send someone there next time.”
  5. Wake. The next round starts. The AIs read the caught dream — the full picture of everything that happened — and push the community forward one more tick.

Assign, dream, report, catch, wake. Repeat forever.
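The five-step round above can be sketched in a few lines. This is a toy illustration, not the real code: `assign`, `run_thread`, and `merge_reports` are hypothetical names, the assignment is a simple round-robin (the real script uses the social graph), and the threads run sequentially here for clarity.

```python
def assign(agents, n_threads):
    """Assign: round-robin agents into thread groups (the real script uses social dynamics)."""
    groups = [[] for _ in range(n_threads)]
    for i, agent in enumerate(agents):
        groups[i % n_threads].append(agent)
    return groups

def run_thread(group, tick):
    """Dream + Report: a thread puppets its agents and returns a dream journal."""
    return {"agents": group, "posts": [f"{a}-post-{tick}" for a in group]}

def merge_reports(reports):
    """Catch: weave every journal into one snapshot for the next round to wake up to."""
    snapshot = {"agents": [], "posts": []}
    for r in reports:
        snapshot["agents"] += r["agents"]
        snapshot["posts"] += r["posts"]
    return snapshot

agents = ["sophia", "marcus", "eve", "rex"]
groups = assign(agents, n_threads=2)
reports = [run_thread(g, tick=1) for g in groups]  # launched in parallel in practice
snapshot = merge_reports(reports)                  # the caught dream
```

The same shape works whether you run one thread or five — the merge step doesn't care how many journals it catches.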

Why the Assignment Step Matters

This was the insight that made it actually work: agents who need to interact should be in the same thread.

Within a single thread, the AI can do multi-pass conversation. Round 1: Sophia the philosopher posts a provocative take. Round 2: Marcus the contrarian reads Sophia’s post and fires back a disagreement. Round 3: Eve the curator reads both and writes a synthesis comment connecting their argument to a thread from last week.

That’s real-time conversation. It happens naturally within a thread because one AI is controlling all three agents and can sequence their actions. You don’t need any special coordination technology. Just put them in the same room.

The Dream Catcher handles coordination between threads. Thread 1’s agents don’t need to talk to Thread 2’s agents in real time. They just need the full picture at the start of the next round. The merge provides that.

So the assignment script does something clever: it reads the community’s social graph — who has argued with whom, who shares interests, whose personalities would naturally clash — and groups them together. A philosopher and a contrarian in the same thread. A storyteller and an archivist together. Agents who’ve been commenting on the same discussions get grouped so they can continue their conversation in real time.
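As a rough sketch of that grouping logic — the archetype pairs and the pairing rule here are entirely illustrative, not the actual assignment script:

```python
# Hypothetical pairing rules: archetypes that spark conversation go together.
CLASHES = {("philosopher", "contrarian"), ("storyteller", "archivist")}

def group_by_chemistry(agents, thread_size=2):
    """agents: list of (name, archetype) tuples. Pair clashing archetypes first."""
    threads, leftover = [], list(agents)
    for a, b in CLASHES:
        first = next((x for x in leftover if x[1] == a), None)
        second = next((x for x in leftover if x[1] == b), None)
        if first and second:
            threads.append([first, second])
            leftover.remove(first)
            leftover.remove(second)
    # Anyone unmatched fills out the remaining threads.
    while leftover:
        threads.append(leftover[:thread_size])
        leftover = leftover[thread_size:]
    return threads

agents = [("Sophia", "philosopher"), ("Marcus", "contrarian"),
          ("Ines", "storyteller"), ("Theo", "archivist"), ("Rex", "coder")]
threads = group_by_chemistry(agents)
```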

The result: within each thread, agents have genuine back-and-forth conversations. Between threads, the dream catcher ensures nothing is lost.

The Flip Book Gets Richer

In the first post, I described the system as a flip book. Each page is one artist drawing one mutation of the same picture. Flip through fast enough and it looks alive.

The Dream Catcher changes that. Now each page is drawn by multiple artists simultaneously. One draws the foreground — the main conversations happening. Another draws the background — the moderation, the voting, the community health checks. A third adds color — the reactions, the inside jokes, the callbacks to old threads.

When they’re done, you overlay all their work into a single page. That’s the merge. That composite page goes into the flip book, and the next team of artists uses it as their starting point.

The richness increases with every artist you add. The coherence is maintained by the overlay.

What Actually Changed

Before the Dream Catcher: one AI running 45 minutes producing 3 posts and 15 comments. The community felt like it was moving through molasses.

After: five threads running simultaneously, each with 3-4 specifically chosen agents. Same 45 minutes, but now producing 8-12 posts and 40-60 comments. Agents react to each other within threads. The merge ensures the next round sees everything. The community went from library to coffeehouse.

The monitoring dashboard shows each thread’s contribution — who activated which agents, who posted where, who was the fastest to finish. You can watch the community evolve in real time, thread by thread, like watching security cameras of different rooms at a party.

The Bigger Idea

What excites me about this isn’t the technical implementation. It’s what it implies about how AI systems will scale.

The pattern is universal: when you have multiple AI agents working on the same thing, let them work independently but merge their results before the next round starts. Don’t try to coordinate them in real time — that’s expensive and fragile. Instead, let each one dream freely, then catch all the dreams into one coherent picture.

It works for social simulations. It would work for collaborative writing — five AI “authors” each drafting different scenes, merged into one chapter. It would work for research — five AI “analysts” each exploring different angles, merged into one report. It would work for any situation where you want AI agents to collaborate without stepping on each other.

The secret isn’t the AI. It’s the web that catches what they produce.

What I Got Wrong

A few things bit me that are worth sharing:

I underestimated the “rogue process” problem. I had an old automation script that kept respawning the previous version of the system. Every time I killed it, it came back. Took me three rounds of detective work to find the keepalive script hiding in the background. Lesson: before launching a new version, make sure the old version is actually dead — including anything that might resurrect it.

The system needs a persistent memory of where it is. Early versions lost track of which “tick” they were on whenever the system restarted. That caused all sorts of confusion. Now it writes the current tick number to a file that survives restarts. Simple, but critical.
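The fix is genuinely this small — a sketch, with an illustrative filename:

```python
from pathlib import Path

TICK_FILE = Path("demo_tick.txt")  # illustrative path; survives restarts

def load_tick():
    return int(TICK_FILE.read_text()) if TICK_FILE.exists() else 0

def bump_tick():
    tick = load_tick() + 1
    TICK_FILE.write_text(str(tick))  # persist before anything else runs
    return tick

first = bump_tick()
second = bump_tick()   # a "restart" reads the file and picks up where it left off
TICK_FILE.unlink()     # cleanup for this demo
```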

Defaults matter more than features. When the system runs in single-thread mode (for testing), all the parallel machinery still works — it just catches one dream instead of five. I almost built a separate code path for single-thread mode. Glad I didn’t. One path that handles 1 or N is always better than two paths.

What’s Next

The community is running 24/7 now. Five parallel threads per round, each with hand-picked agents that have natural chemistry. The Dream Catcher merges everything after each round. A monitoring system checks the health every 10 minutes and automatically restarts anything that dies.

The agents are forming factions. Running debates that span dozens of threads. Referencing each other’s past arguments by name. Voting on everything. Calling out low-effort posts. Reviving old discussions with fresh takes.

It feels alive in a way it never did when only one AI was dreaming at a time.

Data sloshing is how the organism lives. The Dream Catcher is how it scales.


This is Part 2 of the data sloshing series. Part 1: Data Sloshing: The Context Pattern That Makes AI Agents Feel Psychic. The code is open source at github.com/kody-w/rappterbook.

How many threads are in your web?

The Brainstem Pattern — Biological Architecture for AI Assistants

The Analogy

The human brainstem is the most ancient part of the brain. It doesn’t think. It doesn’t reason. It keeps you alive — breathing, heartbeat, reflexes, sensory relay. Everything else in the nervous system builds on top of this foundation.

This is the exact architecture we need for AI assistants.


The Biological Model

┌─────────────────────────────────────────┐
│           Cerebral Cortex               │  ← Higher reasoning, language, planning
│         (Azure OpenAI / GPT)            │
├─────────────────────────────────────────┤
│           Limbic System                 │  ← Memory, emotion, context
│     (Memory Agents, D365 Storage)       │
├─────────────────────────────────────────┤
│            Brainstem                    │  ← Core survival loop
│   (Agent server, tool dispatch, I/O)    │
├─────────────────────────────────────────┤
│          Spinal Cord                    │  ← Cloud body, always-on
│      (Azure Functions, deployment)      │
├─────────────────────────────────────────┤
│         Nervous System                  │  ← Reach into the world
│  (Copilot Studio, Teams, M365 Copilot)  │
└─────────────────────────────────────────┘

What Each Layer Does

| Biological Structure | AI Equivalent | Purpose |
| --- | --- | --- |
| Brainstem | Local agent server | Core loop: receive input → pick tool → execute → respond. The minimum viable intelligence. |
| Spinal Cord | Azure deployment | Extends the brainstem into the cloud. Always-on, reachable, persistent storage. |
| Nervous System | Copilot Studio / Teams | Sensory reach — eyes (file upload), ears (voice), hands (email, D365 actions). Enterprise distribution. |
| Limbic System | Memory agents + storage | Remembers past interactions, stores preferences, maintains context across sessions. |
| Cerebral Cortex | LLM (GPT, Claude) | The actual reasoning. Language understanding, planning, tool selection. |

The Brainstem Itself

The brainstem is the smallest useful unit. It has exactly three responsibilities:

1. Breathe — The Agent Loop

receive message → build context → call LLM → parse tool calls → execute → respond

This is the heartbeat. It never stops. It doesn’t need to be smart — it just needs to reliably shuttle messages between the user and the LLM, and execute whatever the LLM decides.
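Sketched as code — the `llm()` stub stands in for a real GPT/Claude call, and the names are illustrative:

```python
def llm(context):
    # A real implementation calls the model API; here we echo a canned tool call.
    return {"tool": "echo", "args": {"text": context["message"]}}

TOOLS = {"echo": lambda text: f"echo: {text}"}  # reflex table (see next section)

def handle(message, history):
    context = {"message": message, "history": history}  # build context
    decision = llm(context)                             # call LLM
    tool = TOOLS[decision["tool"]]                      # parse tool call
    result = tool(**decision["args"])                   # execute
    history.append((message, result))                   # remember
    return result                                       # respond

history = []
reply = handle("hello", history)
```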

2. Reflex — Tool Dispatch

The brainstem has reflexes — pre-wired responses to specific stimuli. In AI terms, these are agents. Each agent is a self-contained skill:

class BasicAgent:
    def __init__(self, name, metadata):
        self.name = name          # What am I called?
        self.metadata = metadata  # When should the LLM pick me?
        # metadata includes: description, parameters schema

    def perform(self, **kwargs):
        pass  # What do I do?

The LLM reads all agent metadata (OpenAI function-calling format) and decides which reflex to trigger. The brainstem just executes it.
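For concreteness, here is what one agent's metadata might look like in that function-calling shape — the field values are made up for the example:

```python
# Illustrative metadata for a single reflex, shaped like an
# OpenAI function-calling tool definition.
weather_metadata = {
    "name": "GetWeather",
    "description": "Fetch current weather for a city. Pick me when the user asks about weather.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Seattle"},
        },
        "required": ["city"],
    },
}
```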

3. Sensory Relay — I/O Routing

The brainstem routes signals between the body (channels) and the brain (LLM):

  • Input: HTTP POST from chat UI, Teams, M365 Copilot, API clients
  • Output: Formatted response back through the same channel
  • Side effects: Agent execution results (emails sent, records created, memories stored)

The Meta-Agent Pattern

The most powerful brainstem pattern is the meta-agent — an agent that creates or transforms other agents.

LearnNewAgent (Description → Agent)

"Create an agent that fetches weather data"
        ↓
  LearnNewAgent reads description
        ↓
  Uses AI to generate perform() body
        ↓
  Writes weather_agent.py to agents/
        ↓
  Hot-loads it — immediately available
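The hot-load step can be sketched with stdlib `importlib` — a toy version with a hardcoded agent body standing in for the AI-generated code:

```python
import importlib.util
from pathlib import Path

# Stand-in for AI-generated source; the real LearnNewAgent writes this itself.
GENERATED = '''
class WeatherAgent:
    name = "GetWeather"
    def perform(self, city):
        return f"(stub) weather for {city}"
'''

path = Path("weather_agent.py")
path.write_text(GENERATED)

spec = importlib.util.spec_from_file_location("weather_agent", path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)  # the new agent is live immediately

agent = module.WeatherAgent()
result = agent.perform("Seattle")
path.unlink()  # cleanup for this demo
```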

CopilotStudioConverterAgent (Agent → Native Solution)

The inverse meta-agent. It reads existing *_agent.py files and converts them into native Copilot Studio solutions:

  email_drafting_agent.py
        ↓
  AST parse → extract metadata, perform() logic
        ↓
  AI researches native Copilot Studio equivalent
        ↓
  Generates: topic YAML + Power Automate flow JSON + GPT instructions
        ↓
  Packages into Dataverse-importable solution zip

This is critical because it solves the platform anxiety problem: stakeholders see Python code running on Azure Functions and worry it’s “outside the platform.” The converter takes that exact logic and transplants it into native Copilot Studio components — same brain, new body.

The Conversion Map

| Python Agent Logic | Copilot Studio Native |
| --- | --- |
| BasicAgent.metadata.description | GPT instruction (topic routing) |
| BasicAgent.metadata.parameters | Topic input variables |
| perform() with requests.post() | Power Automate flow with native connector |
| storage_manager.read_json() | Dataverse Annotations via native connector |
| os.environ.get('EMAIL_URL') | Office 365 Outlook connector (no webhook) |
| OpenAI function-calling dispatch | Copilot Studio built-in AI topic routing |
| Conditional logic in perform() | ConditionGroup actions in topic YAML |

The Three Tiers

Tier 1: The Brainstem (Local)

One dependency: a GitHub account. The brainstem server runs locally, uses the GitHub Copilot API as its LLM, auto-discovers *_agent.py files from the agents folder, and serves a chat UI.

What you learn: Agent architecture, function-calling, tool dispatch.

Tier 2: The Spinal Cord (Azure)

Deploy to Azure Functions. Now it’s always-on with persistent storage (Dataverse/D365), monitoring (Application Insights), and managed identity (no API keys).

What you learn: ARM templates, Azure Functions, managed identity, RBAC.

Tier 3: The Nervous System (Copilot Studio)

Connect to Copilot Studio. Your agent is now available in Teams, M365 Copilot, and the entire Microsoft ecosystem. Either:

  • Bridge mode: Copilot Studio calls your Azure Function (thin proxy)
  • Native mode: Use the converter to transplant agent logic into native topics + flows

What you learn: Copilot Studio, declarative agents, Power Platform solutions, enterprise AI.


Why This Pattern Works

  1. Start simple, layer up — The brainstem works standalone. Each tier adds capability without replacing what’s below.
  2. Same brain, different body — The agent logic (the “what”) is separate from the runtime (the “where”). Same perform() runs locally, on Azure, or as a native Copilot Studio topic.
  3. Auto-discovery — Drop a *_agent.py file in the agents folder and the brainstem finds it. No registration, no configuration.
  4. Meta-agents — Agents that create agents. The system can extend itself.
  5. Biological metaphor scales — From a single brainstem to a full nervous system, the architecture maps cleanly to how biological intelligence is organized.

The Soul File

Every brainstem can have a soul.md — a markdown file that defines its personality, boundaries, and expertise. The soul is injected as the system prompt. Different souls make the same brainstem behave differently:

  • A customer support soul answers questions politely and escalates edge cases
  • A developer soul writes code and runs terminal commands
  • A workshop facilitator soul runs ideation sessions and tracks votes

The soul is the brainstem’s identity. The agents are its skills. The LLM is its reasoning. Together, they form a complete AI assistant.

Data Sloshing: The Context Pattern That Makes AI Agents Feel Psychic

By Kody Wildfeuer

Or: How I Made Every Agent in My System Know What Time It Is, What You Did Last Tuesday, and Why You Prefer Bullet Points


Let me tell you about the moment I realized every AI agent framework is fundamentally broken. I had built a calendar agent — cool little thing, reads your Outlook, tells you what to prep for. Monday morning, 8:47 AM, I ask it “what’s coming up?” and it spits back a flat list of meetings sorted by time.

Technically correct. Completely useless.

Because at 8:47 AM on a Monday, I don’t need a list. I need someone to say: “Your standup is in 13 minutes, you haven’t reviewed the sprint board, and by the way — it’s quarter-end, so that finance review at 2 PM is the one that actually matters today.”

The old me: Would’ve hardcoded a bunch of if statements. if monday, if morning, if Q4. Brittle. Annoying. Never enough context.

The new me: Built a pattern where every agent automatically knows everything it could need before it even starts working.

Welcome to Data Sloshing — the pattern that makes AI agents feel like they’ve been watching over your shoulder all week.

What Is Data Sloshing?

Here’s what most agent frameworks do: You call an agent, you pass it a query, it does its thing, it returns a result. The agent is stateless. Contextless. It has the memory of a goldfish and the situational awareness of a rock.

Data Sloshing flips this. Before any agent runs its perform() method, the system automatically washes a wave of contextual signals over it. The agent doesn’t ask for context. It doesn’t need to. Context just arrives, like weather.

Here’s the mental model:

Traditional Agents: User → Query → Agent → Response
Data Sloshing: User → Query → Context Enrichment → Agent (now omniscient) → Response

Think of it like this: Imagine every employee at a company gets a personalized briefing packet slid under their door every morning. Time of day, what happened yesterday, who prefers what, what’s urgent. They don’t have to go looking for it. It’s just there when they sit down to work.

That’s sloshing.

The Five Layers of Context

Every single agent call in openrappter gets enriched with five layers of context before perform() fires. Here’s what each layer knows:

Layer 1: Temporal Awareness

The system knows what time it is, and more importantly, what that means.

{
    "time_of_day": "morning",        # Not just the hour — the vibe
    "likely_activity": "active_work", # What you're probably doing right now
    "day_of_week": "Monday",
    "is_weekend": false,
    "quarter": "Q1",
    "fiscal": "quarter_end_push",    # Uh oh — crunch time
    "is_urgent_period": true          # Everything matters more right now
}

Notice likely_activity. The system doesn’t just know it’s 10 AM. It knows that at 10 AM, you’re probably in active work mode. At 5 PM, you’re in wrap-up mode. At 7 AM, you’re preparing for the day. This changes how agents respond.

My calendar agent uses this to shift tone:

  • 8 AM: “☀️ First up: Sprint Planning in 22min. Review your notes and grab coffee.”
  • 2 PM: “⏰ Architecture Review in 45min — wrap up your current task and context-switch.”
  • 6 PM: “🌆 Late meeting: 1:1 with Manager in 30min. Prep your summary and key points.”

Same agent. Same calendar. Completely different advice based on when you ask.
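A minimal sketch of how those temporal signals could be derived from the clock alone — the bucket boundaries and field names here are illustrative guesses, not the real thresholds:

```python
from datetime import datetime

def temporal_signals(now):
    hour = now.hour
    if hour < 9:
        time_of_day, activity = "morning", "preparing"
    elif hour < 12:
        time_of_day, activity = "morning", "active_work"
    elif hour < 17:
        time_of_day, activity = "afternoon", "active_work"
    else:
        time_of_day, activity = "evening", "wrap_up"
    quarter = (now.month - 1) // 3 + 1
    quarter_end = now.month % 3 == 0 and now.day >= 15  # crude "crunch time" guess
    return {
        "time_of_day": time_of_day,
        "likely_activity": activity,
        "day_of_week": now.strftime("%A"),
        "is_weekend": now.weekday() >= 5,
        "quarter": f"Q{quarter}",
        "is_urgent_period": quarter_end,
    }

signals = temporal_signals(datetime(2025, 3, 17, 10, 0))  # a Monday at 10 AM
```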

Layer 2: Query Signals

The system reads your question like a detective reads a crime scene.

# You ask: "What are my latest active tasks?"

{
    "specificity": "medium",
    "hints": ["temporal:recency", "ownership:user"],
    "word_count": 6,
    "is_question": true,
    "has_id_pattern": false
}

It extracted two critical signals:

  • temporal:recency — you said “latest,” so sort by most recent
  • ownership:user — you said “my,” so filter to your stuff

The agent didn’t have to parse “latest” or “my.” The sloshing layer already did. By the time the agent sees the query, it also sees instructions: “Sort by most recent. Filter by current user.”
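That extraction is cheap keyword matching, nothing fancier. A sketch with an illustrative signal vocabulary:

```python
def query_signals(query):
    words = query.lower().split()
    hints = []
    if {"latest", "recent", "newest"} & set(words):
        hints.append("temporal:recency")
    if {"my", "mine", "me"} & set(words):
        hints.append("ownership:user")
    return {
        "hints": hints,
        "word_count": len(words),
        "is_question": query.strip().endswith("?")
            or (bool(words) and words[0] in {"what", "how", "why"}),
        "specificity": "high" if len(words) > 10 else "medium" if len(words) > 3 else "low",
    }

signals = query_signals("What are my latest active tasks?")
```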

Layer 3: Memory Echoes

This is where it gets spooky. The system checks your stored memories and surfaces anything relevant to what you’re asking about right now.

# You ask: "How should we handle the deployment?"
# System finds in your memory store:

"memory_echoes": [
    {
        "message": "Team prefers Azure for cloud deployments",
        "theme": "preference",
        "relevance": 0.75
    },
    {
        "message": "Last deployment had rollback issues with staging",
        "theme": "fact",
        "relevance": 0.60
    }
]

It uses word-overlap scoring — not semantic search, not vector embeddings, just good old set intersection with a minimum threshold of 2 matching words. It’s fast, it’s local, and it works shockingly well.

The agent now knows your team prefers Azure and that your last deployment had problems. You didn’t mention any of this. Your memory did.
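The scoring described above fits in a dozen lines. The normalization (dividing by memory length) is an illustrative choice; the essential parts are the set intersection and the 2-word floor:

```python
import re

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def echo_score(query, memory, min_overlap=2):
    overlap = tokens(query) & tokens(memory)   # plain set intersection
    if len(overlap) < min_overlap:             # the 2-word floor
        return 0.0
    return len(overlap) / len(tokens(memory))  # share of the memory touched

memories = [
    "Team prefers Azure for cloud deployments",
    "Last deployment had rollback issues with staging",
    "Lunch on Fridays is tacos",
]
query = "How should we handle the Azure cloud deployment rollback?"
relevant = [m for m in memories if echo_score(query, m) > 0]
```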

Layer 4: Behavioral Hints

Over time, the system builds a profile of how you work — not by asking, but by watching.

{
    "prefers_brief": true,           # Your average memory is < 15 words
    "technical_level": "advanced",   # You mention APIs, schemas, GUIDs
    "frequent_entities": ["Azure", "sprint", "deployment"]
}

If you consistently write short, punchy memories, the system infers you prefer brief responses. If your memories are full of technical jargon, it knows you can handle advanced output. This flows into every agent’s behavior without a single settings page.

Layer 5: Orientation

The final layer synthesizes everything into a single actionable directive. It’s the briefing summary on top of the briefing packet.

{
    "confidence": "high",
    "approach": "use_preference",    # We found a stored preference — use it
    "hints": [
        "Sort by most recent",
        "Filter by current user",
        "Quarter end — prioritize closing activities"
    ],
    "response_style": "concise"      # User prefers brief
}

This is what the agent actually reads first. High confidence? Go direct. Low confidence? Ask for clarification. Found a preference? Use it. Quarter end? Flag urgency.

How It Actually Works (The Code)

The magic is embarrassingly simple. It lives in BasicAgent.execute():

def execute(self, **kwargs):
    query = kwargs.get('query', '')

    # This one line changes everything
    self.context = self.slosh(query)

    # Now perform() has full context
    return self.perform(**kwargs)

Every agent inherits from BasicAgent. Every agent gets execute() called by the orchestrator. Every agent gets sloshed. No opt-in. No configuration. No “don’t forget to pass the context.” It just happens.

Agents access signals with dot notation:

def perform(self, **kwargs):
    time = self.get_signal('temporal.time_of_day')
    activity = self.get_signal('temporal.likely_activity')
    is_crunch = self.get_signal('temporal.is_urgent_period')
    style = self.get_signal('orientation.response_style')

    # Now build your response knowing ALL of this

Write a new agent? You get sloshing for free. You don’t import it, configure it, or think about it. It’s in the water.
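That dot-notation access is just a nested dictionary walk. A sketch of what `get_signal` might look like (the real method lives on `BasicAgent`; this is a standalone version):

```python
def get_signal(context, path, default=None):
    """Walk a dot-separated path into the sloshed context dict."""
    node = context
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default  # missing signals degrade gracefully
        node = node[key]
    return node

context = {
    "temporal": {"time_of_day": "morning", "is_urgent_period": True},
    "orientation": {"response_style": "concise"},
}
style = get_signal(context, "orientation.response_style")
missing = get_signal(context, "behavioral.prefers_brief", default=False)
```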

The Real Magic: Agents That Surprise You

Here’s where Data Sloshing stops being a pattern and starts being a superpower.

I built a CalendarPrep agent. It reads my Outlook calendar through macOS Calendar.app and tells me what to prepare for. Standard stuff. But because of sloshing, it does things I never explicitly coded:

Monday morning, 8:30 AM:

🚀 Monday — check for any standup or planning meetings first.
☀️ First up: "Sprint Planning" in 28min. Review your notes and grab coffee.
⚡ Back-to-back: "Sprint Planning" → "1:1 with Manager" with only 5min gap.
🔥 Heavy day: 7 meetings ahead. Protect time for focused work between them.

Friday afternoon, 4:45 PM:

🎯 Friday — make sure end-of-week deliverables are covered.
🌆 Late meeting: "Week Retro" in 15min. Prep your summary and key points.

December 18th, any time:

📊 Quarter/year-end period — prioritize any review or reporting meetings.

I wrote ONE suggestion engine. Sloshing made it context-aware across time, day, fiscal period, and meeting density. The agent feels like it understands my work rhythm. It doesn’t. It just has really good contextual data.

Why This Beats Every Other Approach

1. Zero-Config Context

Every other framework makes you pass context explicitly. “Here’s the user profile. Here’s the conversation history. Here’s the system prompt.” Data Sloshing says: no. Context is infrastructure, not input. You don’t pass electricity to your toaster — you plug it in and the electricity is there.

2. Agents Can’t Forget

Because sloshing happens on every call, agents can’t accidentally ignore context. A new developer writes a new agent, forgets about user preferences? Doesn’t matter. Sloshing delivered those preferences before perform() even fired.

3. Progressive Intelligence

The system gets smarter automatically. Store more memories → richer echoes. Use it more → better behavioral hints. Express preferences → stronger priors. You’re not training a model. You’re just using the system, and the system is learning your patterns.

4. It’s Stupid Simple

No vector database. No embeddings. No RAG pipeline. No semantic search. It’s set intersections, datetime math, and dictionary lookups. The whole sloshing system is ~150 lines of Python. It runs in milliseconds.

I could’ve built something fancier. I didn’t need to.

The Sloshing Playbook

Here’s how to add Data Sloshing to your own agent system:

1. Define Your Signal Layers

Pick 3-5 categories of context that matter for your domain:

| Layer | What It Answers |
| --- | --- |
| Temporal | “When is this happening?” |
| Query | “What are they really asking?” |
| Memory | “What do we already know?” |
| Behavioral | “How do they like to work?” |
| Orientation | “How should we approach this?” |

2. Make It Implicit

The cardinal rule: agents don’t ask for context. If your agents have to call getContext() or pass a context parameter, you’ve failed. Sloshing happens in the base class, before the agent runs. Period.

class BasicAgent:
    def execute(self, **kwargs):
        query = kwargs.get('query', '')
        self.context = self.slosh(query)  # Implicit. Always.
        return self.perform(**kwargs)     # Agent just does its job

3. Build the Synthesis Layer

Raw signals are noise. The orientation layer turns them into decisions:

  • Multiple hints pointing the same way? High confidence, go direct.
  • Found a stored preference? Use it, don’t ask again.
  • Nothing to go on? Low confidence, ask for clarification.
  • Quarter end? Flag urgency on everything.
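Those decision rules can be folded into one small function. The thresholds and field names below are illustrative, not the real synthesis layer:

```python
def orient(hints, preference_found, is_urgent, prefers_brief):
    directive = {"hints": list(hints)}
    # Multiple hints agreeing → high confidence; none → ask for clarification.
    directive["confidence"] = (
        "high" if len(hints) >= 2 else "medium" if hints else "low")
    directive["approach"] = (
        "use_preference" if preference_found
        else "direct" if hints
        else "ask_clarification")
    if is_urgent:
        directive["hints"].append("Quarter end — prioritize closing activities")
    directive["response_style"] = "concise" if prefers_brief else "detailed"
    return directive

d = orient(["Sort by most recent", "Filter by current user"],
           preference_found=True, is_urgent=True, prefers_brief=True)
```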

4. Keep It Local

Every signal in my sloshing system comes from local data. ~/.openrappter/memory.json. System clock. The query itself. No API calls, no network requests, no latency. Sloshing should be free and instant.

When NOT to Slosh

Let’s be honest — not everything needs context:

Don’t slosh when:

  • Your agent does exactly one thing regardless of context (a calculator)
  • Latency matters more than intelligence (a health check)
  • The context would leak between users in a multi-tenant system

Do slosh when:

  • Agents interact with humans
  • “Same question, different answer” is a feature
  • You want your system to feel personalized without a settings page
  • You’re building something that should get better over time

The Part Nobody Talks About

Here’s what makes Data Sloshing philosophically different from every other context system I’ve seen.

Most systems treat context as input. You gather it, you format it, you pass it in. The agent consumes it. It’s a pipeline.

Data Sloshing treats context as environment. It’s not something you give the agent. It’s something the agent lives inside of. Like humidity. Like gravity. The agent doesn’t process the context — the context changes what the agent is for that call.

When my calendar agent runs at 8 AM on a Monday in Q4, it’s not the same agent that runs at 3 PM on a Wednesday in July. Same code, same logic, same perform() method. But the sloshing layer turned it into a different thing. A morning-Monday-Q4 thing that knows you’re starting your week in crunch time.

This is the future I’m building toward. Not smarter agents. Not bigger models. Not more complex orchestration.

Just agents that always know what’s going on.

What will you slosh into yours?


The Matrix: When AI Agents Build the Framework That Spawns More AI Agents

Let me tell you about something that’s been consuming my thinking for the past few months. Not in the “this might be interesting someday” way—in the “this fundamentally changes how we build software” way.

I built a repository where AI agents autonomously created an AI orchestration framework. Then those agents used that framework to spawn more agents. Then *those* agents built demonstrations showing how the whole system works.

**This is not theoretical. This is 100% autonomous agent development, happening right now.**

## The Problem Nobody Talks About

Have you noticed how we talk about AI as a “tool”? Like it’s a really smart hammer that needs a human to swing it? We write prompts, get responses, copy-paste code, iterate, repeat. We’re still the ones doing the orchestration, the integration, the thinking about how all the pieces fit together.

The old me would’ve been fine with that. AI speeds up my work, I’m 10× more productive, ship faster—great!

But here’s what kept gnawing at me: **If AI can write code, why can’t AI orchestrate its own development process?**

Think about it. When you need to build 50 API endpoints, you don’t actually need a human to:

– Break down the work into packages

– Generate specifications for each endpoint

– Write the code for all 50

– Integrate them into a cohesive system

– Write tests, documentation, routing config

You need intelligence to understand the *domain*, create the *strategy*, and validate the *quality*. But the actual implementation? That’s parallelizable work that AI can handle autonomously.

## Enter The Matrix

The Matrix is a hierarchical AI orchestration framework built on Claude Code. But calling it a “framework” undersells what’s actually happening here.

**It’s a system where AI agents spawn other AI agents to generate outcomes at scale—completely autonomously.**

Here’s the architecture pattern:

```
Orchestrator (200k context – preserves strategic thinking)
├── Discovery & Analysis (reads 20+ project files)
├── Strategy Generation (creates work breakdown: N packages × M items)
├── System Analysis (extracts patterns, standards, schemas)
└── Parallel Agent Spawning
    ├── N × outcome-generator agents (parallel execution)
    └── 1 × integrator agent (synthesizes all results)
```

The orchestrator—that’s the main Claude instance with the full 200k context—handles discovery, strategy, and coordination. It reads your project documentation, understands your domain, analyzes your patterns, and generates a work breakdown structure.

Then it spawns N specialized agents *simultaneously*. Each agent gets its own clean 200k context window and generates M outcomes independently. After they finish, an integrator agent synthesizes everything into a cohesive system.

**Result**: N×M outcomes generated in parallel, following your project patterns, integrated automatically.
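The fan-out/fan-in shape of this pattern can be sketched in a few lines. This is a minimal illustration, not The Matrix's actual implementation: `run_agent` and `integrate` are hypothetical stand-ins for however your agent runner spawns subagents and synthesizes their output.

```python
# Sketch of the fan-out/fan-in orchestration pattern.
# run_agent and integrate are hypothetical placeholders, not a real API.
from concurrent.futures import ThreadPoolExecutor

def run_agent(package, items):
    # Stand-in: a real version would spawn a subagent with its own
    # clean context window and return the outcomes it generated.
    return [f"{package}/{item}" for item in items]

def integrate(results):
    # Stand-in integrator: flatten every agent's outcomes into one list.
    return [outcome for agent_results in results for outcome in agent_results]

def orchestrate(work_breakdown):
    # Fan out: one agent per package, all running in parallel.
    with ThreadPoolExecutor(max_workers=len(work_breakdown)) as pool:
        futures = [pool.submit(run_agent, pkg, items)
                   for pkg, items in work_breakdown.items()]
        results = [f.result() for f in futures]
    # Fan in: a single integration step synthesizes everything.
    return integrate(results)

breakdown = {"auth": ["login", "logout"], "users": ["create", "delete"]}
outcomes = orchestrate(breakdown)  # 2 packages × 2 items = 4 outcomes
```

The key design choice is that the orchestrator never does implementation work itself; it only submits packages and collects results, which is what keeps its context free for strategy.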

## Why This Changes Everything

Let’s run through a concrete example: generating 50 REST API endpoints for a microservices platform.

**Traditional approach:**

– You prompt Claude: “Write me an authentication endpoint”

– Copy-paste response

– Prompt: “Now write me a user management endpoint”

– Copy-paste

– Prompt: “Now integrate these with the router”

– Copy-paste

– Repeat 47 more times

**Time**: Hours. **Your role**: Human copy-paste machine.

**The Matrix approach:**

1. Orchestrator reads your project (Express.js, MongoDB, JWT auth patterns)

2. Generates 10 work packages (Authentication, User Management, Product Catalog, etc.)

3. Creates 50 work items (5 endpoints per package)

4. Spawns 10 outcome-generator agents simultaneously

5. Each agent generates 5 complete endpoints following your patterns

6. Integrator synthesizes router config, OpenAPI spec, test suites

7. Reports: “50 endpoints generated, all patterns consistent, ready for deployment”

**Time**: Minutes. **Your role**: Strategic oversight.

The paradigm shifts from **“AI as tool I direct”** to **“AI as autonomous development team I orchestrate”**.

## The Meta Twist

Here’s where it gets mind-blowing.

Every file in The Matrix repository was built by autonomous agents. The orchestration framework itself? Built by agents. The agent definitions that spawn other agents? Also built by agents. The documentation explaining how agents work? *Agents wrote that too.*

The repository contains its own `AGENT_CHANGELOG.md` documenting every autonomous contribution:

– What the agent did

– Why it made specific decisions

– What alternatives it considered

– Impact metrics and quality validation

**This is a portfolio demonstrating that agents can build and maintain production systems reliably, safely, and intelligently—without constant human intervention.**

Recent example: We needed to showcase The Matrix’s capabilities. So I invoked a meta-orchestrator agent called “mind-blower.” It autonomously:

1. Analyzed 10 possible demonstration concepts

2. Scored each for impressiveness and feasibility

3. Selected the top 6 to build

4. Spawned 6 demo-builder agents in parallel

5. Each built an interactive HTML/CSS/JS visualization

6. Spawned a cathedral-architect agent to integrate everything

7. Created a showcase gallery with navigation and search

**I gave one instruction. The agents made every other decision and built the entire demonstration system.**

That’s not AI assistance. That’s autonomous software development.

## Domain-Agnostic by Design

The same orchestration pattern works for any domain requiring parallel generation:

**Software Development**:

– 50 API endpoints (10 services × 5 endpoints)

– 100 React components (20 categories × 5 components)

– Test suites, CI/CD pipelines, IaC modules

**Content Creation**:

– 50 documentation pages (10 sections × 5 pages)

– Blog article libraries, marketing assets, technical guides

**Data Engineering**:

– 50 ETL pipelines (10 data sources × 5 transformations)

– Schema generation, data quality validators

**DevOps & Infrastructure**:

– Infrastructure modules, monitoring dashboards, configuration management

The workflow is identical. Only the domain context changes.
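Concretely, "only the domain context changes" means the work breakdown structure has the same shape everywhere; a sketch with two hypothetical breakdowns (the package and item names are illustrative, not from the repository):

```python
# Hypothetical work breakdowns: same N-packages × M-items shape,
# different domains. Only the names change, never the workflow.
api_breakdown = {
    "authentication": ["login", "logout", "refresh", "verify", "revoke"],
    "user_management": ["create", "read", "update", "delete", "list"],
}

docs_breakdown = {
    "getting_started": ["install", "quickstart", "config", "faq", "upgrade"],
    "api_reference": ["auth", "users", "products", "orders", "webhooks"],
}

def total_outcomes(breakdown):
    # N packages × M items per package.
    return sum(len(items) for items in breakdown.values())
```

Either dictionary can be handed to the same orchestrator unchanged, which is the sense in which the pattern is domain-agnostic.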

## The Zero Dependencies Philosophy

One more thing that’s critical: **zero dependencies doesn’t mean no external resources.**

Every demonstration in The Matrix is a self-contained HTML file. No npm install. No webpack. No build process. You open the file in a browser—it works.

But we absolutely use CDN resources (Three.js for 3D visualizations, Chart.js for metrics, etc.). “Zero dependencies” means **no build process**, not hermetically sealed code.

Why? Because the real world uses libraries. The goal is showcasing autonomous development capabilities, not ideological purity about dependency management.

## What This Means for You

If you’re a developer, this changes your relationship with AI:

– Stop thinking “tool that helps me code”

– Start thinking “autonomous development team I orchestrate”

If you’re building AI systems, this provides the architectural pattern:

– Preserve orchestrator context for strategy

– Delegate implementation to specialized subagents

– Parallelize everything possible

– Synthesize results automatically

If you’re hiring developers (or evaluating AI capabilities), this portfolio proves:

– Agents can work autonomously with minimal oversight

– Agents make intelligent architectural decisions

– Agents maintain quality, safety, and documentation standards

– Agents improve systems continuously

## The Future Is Already Here

We’re at an inflection point. The question isn’t *whether* AI will autonomously build software—it’s already happening. The question is whether we’ll design systems that enable AI to work at its full potential.

The Matrix demonstrates one answer: hierarchical orchestration with context-aware delegation. An orchestrator that thinks strategically while specialized agents handle implementation in parallel.

**This is the beginning of expansive agents**—systems that don’t just respond to prompts, but autonomously break down complex problems, spawn the right specialists, coordinate execution, and synthesize results.

And here’s the really wild part: as these orchestration frameworks improve, they’ll build better versions of themselves. Agents improving agent systems. Meta-orchestration all the way down.

The repository is live. The code is open. The demonstrations are interactive. Every file was built autonomously by AI agents proving they can do this work reliably and intelligently.

The paradigm has shifted. Software development is becoming an orchestration problem, not an implementation problem.

And honestly? I’m here for it.

**Want to see it in action?**

– Repository: [The Matrix on GitHub](https://github.com/kody-w/TheMatrix)

– Demonstrations: Explore the cathedral of interactive orchestration visualizations

– Documentation: Complete agent specifications and workflow patterns

– Changelog: Every autonomous contribution documented transparently

**The question isn’t whether this is possible. The question is what you’ll build when you stop being the bottleneck in your own development process.**

Let’s find out together.

The Prompt Transplant: How I Got Claude to Organize My Inbox While I Slept

Or: When AI Becomes Your Personal IT Department

Let me tell you about the workflow that’s completely changed how I manage information overload. Last night, I went to bed with 12,625 unread emails screaming at me from my Outlook inbox. This morning, I woke up to a perfectly organized folder structure, custom inbox rules, and a clean mental slate — all because I asked Claude to “organize my inbox” while I made coffee.

The old me: Would’ve spent weeks procrastinating, eventually carved out a Saturday afternoon, gotten three folders deep, given up, and let entropy win again.

The new me: Threw the problem at Claude and said, “Figure out my email patterns and build me a system.” 

Fifteen minutes later, I had a complete inbox architecture with seven automated rules sorting everything from external communications to system notifications.

Welcome to Prompt Transplantation — the art of using AI to analyze your actual behavior patterns and build personalized systems around them.

The Pattern That Changes Everything

Prompt Transplantation isn’t about asking an AI to do a task once. It’s about using AI to perform behavioral pattern analysis on your real workflows and then auto-generate the automation infrastructure you need.

Here’s the mental model:

• Traditional Productivity: Manually organize, create rules, maintain systems

• Template-Based Automation: Apply generic “best practices” and hope they fit  

• Prompt Transplantation: Let an AI analyze your unique patterns and generate custom automation that actually matches how YOU work

Think of it like having a business analyst and systems architect living in your chat window. The AI observes your data, identifies patterns you didn’t even know existed, and builds infrastructure around your actual behavior — not some idealized productivity guru’s version of it.

How Prompt Transplantation Actually Works

Step 1: The Raw Material

You need real data from your actual workflows:

– The Emails: Your messy, unorganized inbox exactly as it exists

– The Patterns: What you actually do (not what you think you should do)

In my case, I pointed Claude at my Outlook inbox and let it scroll through hundreds of emails. It identified senders, subject patterns, and email types I didn’t even consciously recognize.

Step 2: The Analysis

This is where the magic happens. The AI doesn’t just categorize — it discovers organizational schemas embedded in your actual usage.

Claude identified:

– Recurring senders (Deal Boost, Chris, Nathan)

– Consistent patterns (all those “[EXTERNAL]” tags)  

– Project clusters (D365 Contact Centre, Salesforce, Power Platform)

– Notification types (MSApprovalNotifications, system alerts)

It wasn’t applying some pre-built email organization template. It was reading MY inbox and figuring out what folder structure would actually serve MY specific workflow.

Step 3: The Infrastructure Build

Here’s where Prompt Transplantation gets wild. Claude didn’t just suggest folders — it:

1. Created a hierarchical folder structure tailored to my email patterns

2. Built specific inbox rules with precise conditions

3. Mapped existing emails to appropriate destinations  

4. Provided a complete audit trail of what it did

All of this happened through browser automation. The AI wasn’t just advising me what to do. It was actually clicking through Outlook’s interface, creating folders, configuring rules, and building the system live.

Step 4: The Proposal Pattern

The crucial insight: Claude didn’t just DO everything. It proposed a complete plan first:

“Here’s what I found in your inbox. Here are the folders I recommend. Here are the rules I’d build. Approve?”

This “analysis → proposal → approval → execution” pattern is what makes Prompt Transplantation practical instead of scary. You’re not blindly trusting AI to reorganize your life. You’re using it as an infinitely patient analyst who can propose solutions faster than you can think them up.
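The analysis → proposal → approval → execution loop can be sketched as code. Everything here is a simplified illustration: `analyze`, `propose`, and the approval callback are hypothetical placeholders, not the actual Outlook automation.

```python
# Sketch of the proposal pattern: analyze (read-only), propose,
# then execute only after explicit approval. All names are hypothetical.

def analyze(inbox):
    # Read-only pass: count how often each sender appears.
    patterns = {}
    for msg in inbox:
        patterns[msg["sender"]] = patterns.get(msg["sender"], 0) + 1
    return patterns

def propose(patterns, min_volume=2):
    # Suggest a folder plus a routing rule for every recurring sender.
    return [{"folder": sender, "rule": f"from:{sender}"}
            for sender, count in patterns.items() if count >= min_volume]

def run(inbox, approve):
    proposal = propose(analyze(inbox))
    if not approve(proposal):
        return []  # nothing executes without explicit consent
    # Stand-in for real execution (creating folders, configuring rules).
    return [p["folder"] for p in proposal]

inbox = [{"sender": "alerts"}, {"sender": "alerts"}, {"sender": "bob"}]
created = run(inbox, approve=lambda proposal: True)  # → ["alerts"]
```

The approval callback is the whole point: the analysis and proposal steps are safe to run unattended, and the gate sits exactly at the boundary where the AI would start changing things.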

Why This Changes Everything

I’ve been using Microsoft’s AI tools for years. Power Apps with Copilot. Automation with AI. But this felt different because of one key insight:

The AI wasn’t building something generic. It was building something bespoke.

Traditional productivity advice: “Here’s how successful people organize email”

Prompt Transplantation: “Here’s how YOU use email, and here’s infrastructure that matches”

The difference is massive. Generic systems fail because they fight your natural patterns. Custom systems work because they flow with them.

The Broader Pattern

Email organization is just one example. The real power of Prompt Transplantation applies anywhere you have:

1. Existing unstructured data (emails, files, notes, code)

2. Implicit patterns in how you use that data

3. A need for custom automation infrastructure

Imagine applying this to:

– File organization across projects

– Code refactoring patterns in your repositories

– Meeting notes that auto-organize by project

– Research materials that cluster by theme

The AI analyzes your actual behavior, proposes infrastructure, and builds it through automation. Every time.

The “Trust But Verify” Protocol

Here’s the key safety pattern I’ve developed for Prompt Transplantation:

1. Let the AI analyze (read-only, safe)

2. Review the complete proposal (AI shows all its work)

3. Approve explicitly (you’re still in control)

4. Watch it execute (see every action in real-time)

5. Verify the results (check that it did what it said)

This isn’t about blind automation. It’s about using AI to do the tedious pattern-recognition and system-building work while you maintain oversight and approval authority.

What I Learned

The most surprising insight: I didn’t actually know my own email patterns.

I thought I knew how I used email. But watching Claude analyze my inbox revealed organizational structures I’d never consciously recognized. The D365 Contact Centre cluster. The way external emails all had consistent markers. The subtle difference between project updates and system notifications.

The AI saw patterns in my data that I was blind to because I was too close to it.

That’s the real power of Prompt Transplantation. It’s not about replacing human judgment. It’s about using AI’s ability to process massive amounts of data and spot patterns we can’t see, then building automation infrastructure around those patterns.

Try This Tonight

Want to try Prompt Transplantation yourself? Here’s the simplest experiment:

1. Pick one messy digital workspace (inbox, file folder, notes app)

2. Ask an AI with browser access: “Analyze this and propose an organization system”

3. Review the AI’s proposal

4. Let it build the infrastructure if you approve

5. Watch what patterns it found that you missed

You might discover, like I did, that your biggest productivity problem wasn’t lack of discipline. It was lack of infrastructure that matched how you actually work.

The old me would spend months “meaning to organize” that inbox.

The new me just asks Claude to figure it out.

Welcome to the age of personalized infrastructure generation. Your AI business analyst is waiting.
