kodyw.com

Kody Wildfeuer's blog

The Brainstem Pattern — Biological Architecture for AI Assistants

The Analogy

The human brainstem is the most ancient part of the brain. It doesn’t think. It doesn’t reason. It keeps you alive — breathing, heartbeat, reflexes, sensory relay. Everything else in the nervous system builds on top of this foundation.

This is the exact architecture we need for AI assistants.


The Biological Model

┌─────────────────────────────────────────┐
│           Cerebral Cortex               │  ← Higher reasoning, language, planning
│         (Azure OpenAI / GPT)            │
├─────────────────────────────────────────┤
│           Limbic System                 │  ← Memory, emotion, context
│     (Memory Agents, D365 Storage)       │
├─────────────────────────────────────────┤
│            Brainstem                    │  ← Core survival loop
│   (Agent server, tool dispatch, I/O)    │
├─────────────────────────────────────────┤
│          Spinal Cord                    │  ← Cloud body, always-on
│      (Azure Functions, deployment)      │
├─────────────────────────────────────────┤
│         Nervous System                  │  ← Reach into the world
│  (Copilot Studio, Teams, M365 Copilot)  │
└─────────────────────────────────────────┘

What Each Layer Does

| Biological Structure | AI Equivalent | Purpose |
|---|---|---|
| Brainstem | Local agent server | Core loop: receive input → pick tool → execute → respond. The minimum viable intelligence. |
| Spinal Cord | Azure deployment | Extends the brainstem into the cloud. Always-on, reachable, persistent storage. |
| Nervous System | Copilot Studio / Teams | Sensory reach — eyes (file upload), ears (voice), hands (email, D365 actions). Enterprise distribution. |
| Limbic System | Memory agents + storage | Remembers past interactions, stores preferences, maintains context across sessions. |
| Cerebral Cortex | LLM (GPT, Claude) | The actual reasoning. Language understanding, planning, tool selection. |

The Brainstem Itself

The brainstem is the smallest useful unit. It has exactly three responsibilities:

1. Breathe — The Agent Loop

receive message → build context → call LLM → parse tool calls → execute → respond

This is the heartbeat. It never stops. It doesn’t need to be smart — it just needs to reliably shuttle messages between the user and the LLM, and execute whatever the LLM decides.
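Stripped down, the whole loop is a few lines of Python. This is a sketch, not the real server code; the call_llm callable and the reply shape are illustrative stand-ins:

```python
# A minimal sketch of the breathe loop. call_llm and the reply shape
# are illustrative stand-ins, not the actual server's API.
def agent_loop(message, agents, call_llm):
    context = {
        "user_message": message,
        "tools": [a.metadata for a in agents.values()],  # advertise reflexes
    }
    reply = call_llm(context)  # LLM answers directly or picks a tool
    if reply.get("tool_call"):
        call = reply["tool_call"]
        result = agents[call["name"]].perform(**call["arguments"])
        return str(result)
    return reply["content"]
```

The shuttle work is deliberately dumb: all the intelligence lives in the LLM's choice and in the agents it triggers.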

2. Reflex — Tool Dispatch

The brainstem has reflexes — pre-wired responses to specific stimuli. In AI terms, these are agents. Each agent is a self-contained skill:

class BasicAgent:
    def __init__(self, name, metadata):
        self.name = name          # What am I called?
        self.metadata = metadata  # When should the LLM pick me?
        # metadata includes: description, parameters schema

    def perform(self, **kwargs):
        pass  # What do I do?

The LLM reads all agent metadata (OpenAI function-calling format) and decides which reflex to trigger. The brainstem just executes it.
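For a concrete picture, here is what a hypothetical weather agent's metadata might look like in OpenAI function-calling format (the get_weather name and its fields are made up for illustration):

```python
# Hypothetical metadata for a weather agent, following the OpenAI
# function-calling (tools) schema: a name, a description that tells
# the LLM when to pick this reflex, and a JSON Schema for parameters.
weather_metadata = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch current weather for a city. "
                       "Use when the user asks about weather.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}
```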

3. Sensory Relay — I/O Routing

The brainstem routes signals between the body (channels) and the brain (LLM):

  • Input: HTTP POST from chat UI, Teams, M365 Copilot, API clients
  • Output: Formatted response back through the same channel
  • Side effects: Agent execution results (emails sent, records created, memories stored)

The Meta-Agent Pattern

The most powerful brainstem pattern is the meta-agent — an agent that creates or transforms other agents.

LearnNewAgent (Description → Agent)

"Create an agent that fetches weather data"
        ↓
  LearnNewAgent reads description
        ↓
  Uses AI to generate perform() body
        ↓
  Writes weather_agent.py to agents/
        ↓
  Hot-loads it — immediately available

CopilotStudioConverterAgent (Agent → Native Solution)

The inverse meta-agent. It reads existing *_agent.py files and converts them into native Copilot Studio solutions:

  email_drafting_agent.py
        ↓
  AST parse → extract metadata, perform() logic
        ↓
  AI researches native Copilot Studio equivalent
        ↓
  Generates: topic YAML + Power Automate flow JSON + GPT instructions
        ↓
  Packages into Dataverse-importable solution zip

This is critical because it solves the platform anxiety problem: stakeholders see Python code running on Azure Functions and worry it’s “outside the platform.” The converter takes that exact logic and transplants it into native Copilot Studio components — same brain, new body.

The Conversion Map

| Python Agent Logic | Copilot Studio Native |
|---|---|
| BasicAgent.metadata.description | GPT instruction (topic routing) |
| BasicAgent.metadata.parameters | Topic input variables |
| perform() with requests.post() | Power Automate flow with native connector |
| storage_manager.read_json() | Dataverse Annotations via native connector |
| os.environ.get('EMAIL_URL') | Office 365 Outlook connector (no webhook) |
| OpenAI function-calling dispatch | Copilot Studio built-in AI topic routing |
| Conditional logic in perform() | ConditionGroup actions in topic YAML |

The Three Tiers

Tier 1: The Brainstem (Local)

One dependency: a GitHub account. The brainstem server runs locally, uses the GitHub Copilot API as its LLM, auto-discovers *_agent.py files from the agents folder, and serves a chat UI.

What you learn: Agent architecture, function-calling, tool dispatch.

Tier 2: The Spinal Cord (Azure)

Deploy to Azure Functions. Now it’s always-on with persistent storage (Dataverse/D365), monitoring (Application Insights), and managed identity (no API keys).

What you learn: ARM templates, Azure Functions, managed identity, RBAC.

Tier 3: The Nervous System (Copilot Studio)

Connect to Copilot Studio. Your agent is now available in Teams, M365 Copilot, and the entire Microsoft ecosystem. Either:

  • Bridge mode: Copilot Studio calls your Azure Function (thin proxy)
  • Native mode: Use the converter to transplant agent logic into native topics + flows

What you learn: Copilot Studio, declarative agents, Power Platform solutions, enterprise AI.


Why This Pattern Works

  1. Start simple, layer up — The brainstem works standalone. Each tier adds capability without replacing what’s below.
  2. Same brain, different body — The agent logic (the “what”) is separate from the runtime (the “where”). Same perform() runs locally, on Azure, or as a native Copilot Studio topic.
  3. Auto-discovery — Drop a *_agent.py file in the agents folder and the brainstem finds it. No registration, no configuration.
  4. Meta-agents — Agents that create agents. The system can extend itself.
  5. Biological metaphor scales — From a single brainstem to a full nervous system, the architecture maps cleanly to how biological intelligence is organized.

The Soul File

Every brainstem can have a soul.md — a markdown file that defines its personality, boundaries, and expertise. The soul is injected as the system prompt. Different souls make the same brainstem behave differently:

  • A customer support soul answers questions politely and escalates edge cases
  • A developer soul writes code and runs terminal commands
  • A workshop facilitator soul runs ideation sessions and tracks votes

The soul is the brainstem’s identity. The agents are its skills. The LLM is its reasoning. Together, they form a complete AI assistant.
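Wiring the soul in is one function. A sketch, with an illustrative fallback identity for brainstems that have no soul.md yet:

```python
# A sketch of soul injection: soul.md, if present, becomes the system
# prompt for every LLM call. The fallback text is illustrative.
import os

def load_soul(path="soul.md"):
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            return f.read()
    return "You are a helpful assistant."  # no soul file: generic identity

def build_messages(user_message, soul_path="soul.md"):
    return [
        {"role": "system", "content": load_soul(soul_path)},  # the soul
        {"role": "user", "content": user_message},
    ]
```

Swap the file, and the same brainstem becomes a different assistant.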

Data Sloshing: The Context Pattern That Makes AI Agents Feel Psychic

By Kody Wildfeuer

Or: How I Made Every Agent in My System Know What Time It Is, What You Did Last Tuesday, and Why You Prefer Bullet Points


Let me tell you about the moment I realized every AI agent framework is fundamentally broken. I had built a calendar agent — cool little thing, reads your Outlook, tells you what to prep for. Monday morning, 8:47 AM, I ask it “what’s coming up?” and it spits back a flat list of meetings sorted by time.

Technically correct. Completely useless.

Because at 8:47 AM on a Monday, I don’t need a list. I need someone to say: “Your standup is in 13 minutes, you haven’t reviewed the sprint board, and by the way — it’s quarter-end, so that finance review at 2 PM is the one that actually matters today.”

The old me: Would’ve hardcoded a bunch of if statements. if monday, if morning, if Q4. Brittle. Annoying. Never enough context.

The new me: Built a pattern where every agent automatically knows everything it could need before it even starts working.

Welcome to Data Sloshing — the pattern that makes AI agents feel like they’ve been watching over your shoulder all week.

What Is Data Sloshing?

Here’s what most agent frameworks do: You call an agent, you pass it a query, it does its thing, it returns a result. The agent is stateless. Contextless. It has the memory of a goldfish and the situational awareness of a rock.

Data Sloshing flips this. Before any agent runs its perform() method, the system automatically washes a wave of contextual signals over it. The agent doesn’t ask for context. It doesn’t need to. Context just arrives, like weather.

Here’s the mental model:

Traditional Agents: User → Query → Agent → Response
Data Sloshing: User → Query → Context Enrichment → Agent (now omniscient) → Response

Think of it like this: Imagine every employee at a company gets a personalized briefing packet slid under their door every morning. Time of day, what happened yesterday, who prefers what, what’s urgent. They don’t have to go looking for it. It’s just there when they sit down to work.

That’s sloshing.

The Five Layers of Context

Every single agent call in openrappter gets enriched with five layers of context before perform() fires. Here’s what each layer knows:

Layer 1: Temporal Awareness

The system knows what time it is, and more importantly, what that means.

{
    "time_of_day": "morning",         # Not just the hour — the vibe
    "likely_activity": "active_work", # What you're probably doing right now
    "day_of_week": "Monday",
    "is_weekend": False,
    "quarter": "Q1",
    "fiscal": "quarter_end_push",     # Uh oh — crunch time
    "is_urgent_period": True          # Everything matters more right now
}

Notice likely_activity. The system doesn’t just know it’s 10 AM. It knows that at 10 AM, you’re probably in active work mode. At 5 PM, you’re in wrap-up mode. At 7 AM, you’re preparing for the day. This changes how agents respond.
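A sketch of how this layer might be computed; the hour boundaries and the quarter-end rule are illustrative guesses, not openrappter's exact logic:

```python
# A sketch of the temporal layer. The hour boundaries and the fiscal
# rule (last month of a quarter = crunch) are illustrative guesses.
from datetime import datetime

def temporal_signals(now=None):
    now = now or datetime.now()
    if now.hour < 9:
        time_of_day, activity = "morning", "preparing"
    elif now.hour < 12:
        time_of_day, activity = "morning", "active_work"
    elif now.hour < 17:
        time_of_day, activity = "afternoon", "active_work"
    else:
        time_of_day, activity = "evening", "wrap_up"
    quarter_end = now.month % 3 == 0  # Mar, Jun, Sep, Dec
    return {
        "time_of_day": time_of_day,
        "likely_activity": activity,
        "day_of_week": now.strftime("%A"),
        "is_weekend": now.weekday() >= 5,
        "quarter": f"Q{(now.month - 1) // 3 + 1}",
        "fiscal": "quarter_end_push" if quarter_end else "normal",
        "is_urgent_period": quarter_end,
    }
```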

My calendar agent uses this to shift tone:

  • 8 AM: “☀️ First up: Sprint Planning in 22min. Review your notes and grab coffee.”
  • 2 PM: “⏰ Architecture Review in 45min — wrap up your current task and context-switch.”
  • 6 PM: “🌆 Late meeting: 1:1 with Manager in 30min. Prep your summary and key points.”

Same agent. Same calendar. Completely different advice based on when you ask.

Layer 2: Query Signals

The system reads your question like a detective reads a crime scene.

# You ask: "What are my latest active tasks?"

{
    "specificity": "medium",
    "hints": ["temporal:recency", "ownership:user"],
    "word_count": 6,
    "is_question": True,
    "has_id_pattern": False
}

It extracted two critical signals:

  • temporal:recency — you said “latest,” so sort by most recent
  • ownership:user — you said “my,” so filter to your stuff

The agent didn’t have to parse “latest” or “my.” The sloshing layer already did. By the time the agent sees the query, it also sees instructions: “Sort by most recent. Filter by current user.”
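A sketch of that extraction; the keyword sets are illustrative, and real signal detection could be richer:

```python
# A sketch of the query-signal layer: cheap keyword checks that become
# routing hints. The keyword sets and thresholds are illustrative.
def query_signals(query):
    words = [w.strip("?.,!") for w in query.lower().split()]
    hints = []
    if {"latest", "recent", "newest"} & set(words):
        hints.append("temporal:recency")   # "latest" -> sort by most recent
    if {"my", "mine", "i"} & set(words):
        hints.append("ownership:user")     # "my" -> filter to current user
    first = words[0] if words else ""
    return {
        "specificity": "high" if len(words) > 8
                       else "medium" if len(words) > 3 else "low",
        "hints": hints,
        "word_count": len(words),
        "is_question": query.rstrip().endswith("?")
                       or first in {"what", "how", "why", "when", "who", "where"},
    }
```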

Layer 3: Memory Echoes

This is where it gets spooky. The system checks your stored memories and surfaces anything relevant to what you’re asking about right now.

# You ask: "How should we handle the deployment?"
# System finds in your memory store:

"memory_echoes": [
    {
        "message": "Team prefers Azure for cloud deployments",
        "theme": "preference",
        "relevance": 0.75
    },
    {
        "message": "Last deployment had rollback issues with staging",
        "theme": "fact",
        "relevance": 0.60
    }
]

It uses word-overlap scoring — not semantic search, not vector embeddings, just good old set intersection with a minimum threshold of 2 matching words. It’s fast, it’s local, and it works shockingly well.

The agent now knows your team prefers Azure and that your last deployment had problems. You didn’t mention any of this. Your memory did.
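The scoring fits in a few lines. A sketch of word-overlap echoes with the 2-word minimum described above (the relevance formula, overlap divided by memory length, is my illustrative choice):

```python
# A sketch of echo scoring: plain word-set intersection with a minimum
# of 2 shared words. The relevance formula is an illustrative guess.
def memory_echoes(query, memories, min_overlap=2):
    q_words = set(query.lower().split())
    echoes = []
    for mem in memories:
        m_words = set(mem["message"].lower().split())
        overlap = len(q_words & m_words)
        if overlap >= min_overlap:
            echoes.append({**mem, "relevance": overlap / len(m_words)})
    return sorted(echoes, key=lambda e: e["relevance"], reverse=True)
```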

Layer 4: Behavioral Hints

Over time, the system builds a profile of how you work — not by asking, but by watching.

{
    "prefers_brief": True,           # Your average memory is < 15 words
    "technical_level": "advanced",   # You mention APIs, schemas, GUIDs
    "frequent_entities": ["Azure", "sprint", "deployment"]
}

If you consistently write short, punchy memories, the system infers you prefer brief responses. If your memories are full of technical jargon, it knows you can handle advanced output. This flows into every agent’s behavior without a single settings page.
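A sketch of that inference, using the 15-word threshold from the example above and an illustrative jargon list:

```python
# A sketch of behavioral inference from stored memories. The 15-word
# threshold comes from the example above; the jargon list is illustrative.
from collections import Counter

JARGON = {"api", "apis", "schema", "schemas", "guid", "guids", "endpoint", "json"}

def behavioral_hints(memories):
    if not memories:
        return {"prefers_brief": False, "technical_level": "standard",
                "frequent_entities": []}
    words = [w.strip(".,").lower() for m in memories for w in m.split()]
    avg_len = sum(len(m.split()) for m in memories) / len(memories)
    entities = [w for w, _ in Counter(w for w in words if len(w) > 4).most_common(3)]
    return {
        "prefers_brief": avg_len < 15,
        "technical_level": "advanced" if JARGON & set(words) else "standard",
        "frequent_entities": entities,
    }
```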

Layer 5: Orientation

The final layer synthesizes everything into a single actionable directive. It’s the briefing summary on top of the briefing packet.

{
    "confidence": "high",
    "approach": "use_preference",    # We found a stored preference — use it
    "hints": [
        "Sort by most recent",
        "Filter by current user",
        "Quarter end — prioritize closing activities"
    ],
    "response_style": "concise"      # User prefers brief
}

This is what the agent actually reads first. High confidence? Go direct. Low confidence? Ask for clarification. Found a preference? Use it. Quarter end? Flag urgency.

How It Actually Works (The Code)

The magic is embarrassingly simple. It lives in BasicAgent.execute():

def execute(self, **kwargs):
    query = kwargs.get('query', '')

    # This one line changes everything
    self.context = self.slosh(query)

    # Now perform() has full context
    return self.perform(**kwargs)

Every agent inherits from BasicAgent. Every agent gets execute() called by the orchestrator. Every agent gets sloshed. No opt-in. No configuration. No “don’t forget to pass the context.” It just happens.

Agents access signals with dot notation:

def perform(self, **kwargs):
    time = self.get_signal('temporal.time_of_day')
    activity = self.get_signal('temporal.likely_activity')
    is_crunch = self.get_signal('temporal.is_urgent_period')
    style = self.get_signal('orientation.response_style')

    # Now build your response knowing ALL of this

Write a new agent? You get sloshing for free. You don’t import it, configure it, or think about it. It’s in the water.
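get_signal itself can be a simple dot-path walk over the context dict. A sketch, written as a standalone function for brevity:

```python
# A sketch of get_signal: a dot-path lookup into the sloshed context
# dict, returning a default when any segment is missing.
def get_signal(context, path, default=None):
    node = context
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node
```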

The Real Magic: Agents That Surprise You

Here’s where Data Sloshing stops being a pattern and starts being a superpower.

I built a CalendarPrep agent. It reads my Outlook calendar through macOS Calendar.app and tells me what to prepare for. Standard stuff. But because of sloshing, it does things I never explicitly coded:

Monday morning, 8:30 AM:

🚀 Monday — check for any standup or planning meetings first.
☀️ First up: "Sprint Planning" in 28min. Review your notes and grab coffee.
⚡ Back-to-back: "Sprint Planning" → "1:1 with Manager" with only 5min gap.
🔥 Heavy day: 7 meetings ahead. Protect time for focused work between them.

Friday afternoon, 4:45 PM:

🎯 Friday — make sure end-of-week deliverables are covered.
🌆 Late meeting: "Week Retro" in 15min. Prep your summary and key points.

December 18th, any time:

📊 Quarter/year-end period — prioritize any review or reporting meetings.

I wrote ONE suggestion engine. Sloshing made it context-aware across time, day, fiscal period, and meeting density. The agent feels like it understands my work rhythm. It doesn’t. It just has really good contextual data.

Why This Beats Every Other Approach

1. Zero-Config Context

Every other framework makes you pass context explicitly. “Here’s the user profile. Here’s the conversation history. Here’s the system prompt.” Data Sloshing says: no. Context is infrastructure, not input. You don’t pass electricity to your toaster — you plug it in and the electricity is there.

2. Agents Can’t Forget

Because sloshing happens on every call, agents can’t accidentally ignore context. A new developer writes a new agent, forgets about user preferences? Doesn’t matter. Sloshing delivered those preferences before perform() even fired.

3. Progressive Intelligence

The system gets smarter automatically. Store more memories → richer echoes. Use it more → better behavioral hints. Express preferences → stronger priors. You’re not training a model. You’re just using the system, and the system is learning your patterns.

4. It’s Stupid Simple

No vector database. No embeddings. No RAG pipeline. No semantic search. It’s set intersections, datetime math, and dictionary lookups. The whole sloshing system is ~150 lines of Python. It runs in milliseconds.

I could’ve built something fancier. I didn’t need to.

The Sloshing Playbook

Here’s how to add Data Sloshing to your own agent system:

1. Define Your Signal Layers

Pick 3-5 categories of context that matter for your domain:

| Layer | What It Answers |
|---|---|
| Temporal | "When is this happening?" |
| Query | "What are they really asking?" |
| Memory | "What do we already know?" |
| Behavioral | "How do they like to work?" |
| Orientation | "How should we approach this?" |

2. Make It Implicit

The cardinal rule: agents don’t ask for context. If your agents have to call getContext() or pass a context parameter, you’ve failed. Sloshing happens in the base class, before the agent runs. Period.

class BasicAgent:
    def execute(self, **kwargs):
        query = kwargs.get('query', '')
        self.context = self.slosh(query)  # Implicit. Always.
        return self.perform(**kwargs)     # Agent just does its job

3. Build the Synthesis Layer

Raw signals are noise. The orientation layer turns them into decisions:

  • Multiple hints pointing the same way? High confidence, go direct.
  • Found a stored preference? Use it, don’t ask again.
  • Nothing to go on? Low confidence, ask for clarification.
  • Quarter end? Flag urgency on everything.

4. Keep It Local

Every signal in my sloshing system comes from local data. ~/.openrappter/memory.json. System clock. The query itself. No API calls, no network requests, no latency. Sloshing should be free and instant.

When NOT to Slosh

Let’s be honest — not everything needs context:

Don’t slosh when:

  • Your agent does exactly one thing regardless of context (a calculator)
  • Latency matters more than intelligence (a health check)
  • The context would leak between users in a multi-tenant system

Do slosh when:

  • Agents interact with humans
  • “Same question, different answer” is a feature
  • You want your system to feel personalized without a settings page
  • You’re building something that should get better over time

The Part Nobody Talks About

Here’s what makes Data Sloshing philosophically different from every other context system I’ve seen.

Most systems treat context as input. You gather it, you format it, you pass it in. The agent consumes it. It’s a pipeline.

Data Sloshing treats context as environment. It’s not something you give the agent. It’s something the agent lives inside of. Like humidity. Like gravity. The agent doesn’t process the context — the context changes what the agent is for that call.

When my calendar agent runs at 8 AM on a Monday in Q4, it’s not the same agent that runs at 3 PM on a Wednesday in July. Same code, same logic, same perform() method. But the sloshing layer turned it into a different thing. A morning-Monday-Q4 thing that knows you’re starting your week in crunch time.

This is the future I’m building toward. Not smarter agents. Not bigger models. Not more complex orchestration.

Just agents that always know what’s going on.

What will you slosh into yours?


The Matrix: When AI Agents Build the Framework That Spawns More AI Agents

Let me tell you about something that’s been consuming my thinking for the past few months. Not in the “this might be interesting someday” way—in the “this fundamentally changes how we build software” way.

I built a repository where AI agents autonomously created an AI orchestration framework. Then those agents used that framework to spawn more agents. Then *those* agents built demonstrations showing how the whole system works.

**This is not theoretical. This is 100% autonomous agent development, happening right now.**

## The Problem Nobody Talks About

Have you noticed how we talk about AI as a “tool”? Like it’s a really smart hammer that needs a human to swing it? We write prompts, get responses, copy-paste code, iterate, repeat. We’re still the ones doing the orchestration, the integration, the thinking about how all the pieces fit together.

The old me would’ve been fine with that. AI speeds up my work, I’m 10× more productive, ship faster—great!

But here’s what kept gnawing at me: **If AI can write code, why can’t AI orchestrate its own development process?**

Think about it. When you need to build 50 API endpoints, you don’t actually need a human to:

– Break down the work into packages

– Generate specifications for each endpoint

– Write the code for all 50

– Integrate them into a cohesive system

– Write tests, documentation, routing config

You need intelligence to understand the *domain*, create the *strategy*, and validate the *quality*. But the actual implementation? That’s parallelizable work that AI can handle autonomously.

## Enter The Matrix

The Matrix is a hierarchical AI orchestration framework built on Claude Code. But calling it a “framework” undersells what’s actually happening here.

**It’s a system where AI agents spawn other AI agents to generate outcomes at scale—completely autonomously.**

Here’s the architecture pattern:

```
Orchestrator (200k context – preserves strategic thinking)
├── Discovery & Analysis (reads 20+ project files)
├── Strategy Generation (creates work breakdown: N packages × M items)
├── System Analysis (extracts patterns, standards, schemas)
└── Parallel Agent Spawning
    ├── N × outcome-generator agents (parallel execution)
    └── 1 × integrator agent (synthesizes all results)
```

The orchestrator—that’s the main Claude instance with the full 200k context—handles discovery, strategy, and coordination. It reads your project documentation, understands your domain, analyzes your patterns, and generates a work breakdown structure.

Then it spawns N specialized agents *simultaneously*. Each agent gets its own clean 200k context window and generates M outcomes independently. After they finish, an integrator agent synthesizes everything into a cohesive system.

**Result**: N×M outcomes generated in parallel, following your project patterns, integrated automatically.
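The fan-out/fan-in shape can be sketched with threads standing in for spawned agent contexts. This is the pattern, not The Matrix's actual code; generate and integrate are placeholders for the real outcome-generator and integrator agents:

```python
# A sketch of hierarchical orchestration: fan out one generator per
# work package, then fan in through a single integrator. Threads stand
# in for spawned agent contexts; generate/integrate are placeholders.
from concurrent.futures import ThreadPoolExecutor

def orchestrate(packages, generate, integrate):
    # N packages, each producing M outcomes in parallel
    with ThreadPoolExecutor(max_workers=max(len(packages), 1)) as pool:
        results = list(pool.map(generate, packages))  # order preserved
    return integrate(results)  # fan-in: synthesize one cohesive result
```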

## Why This Changes Everything

Let’s run through a concrete example: generating 50 REST API endpoints for a microservices platform.

**Traditional approach:**

- You prompt Claude: “Write me an authentication endpoint”
- Copy-paste response
- Prompt: “Now write me a user management endpoint”
- Copy-paste
- Prompt: “Now integrate these with the router”
- Copy-paste
- Repeat 47 more times

**Time**: Hours. **Your role**: Human copy-paste machine.

**The Matrix approach:**

1. Orchestrator reads your project (Express.js, MongoDB, JWT auth patterns)
2. Generates 10 work packages (Authentication, User Management, Product Catalog, etc.)
3. Creates 50 work items (5 endpoints per package)
4. Spawns 10 outcome-generator agents simultaneously
5. Each agent generates 5 complete endpoints following your patterns
6. Integrator synthesizes router config, OpenAPI spec, test suites
7. Reports: “50 endpoints generated, all patterns consistent, ready for deployment”

**Time**: Minutes. **Your role**: Strategic oversight.

The paradigm shifts from **”AI as tool I direct”** to **”AI as autonomous development team I orchestrate”**.

## The Meta Twist

Here’s where it gets mind-blowing.

Every file in The Matrix repository was built by autonomous agents. The orchestration framework itself? Built by agents. The agent definitions that spawn other agents? Also built by agents. The documentation explaining how agents work? *Agents wrote that too.*

The repository contains its own `AGENT_CHANGELOG.md` documenting every autonomous contribution:

– What the agent did

– Why it made specific decisions

– What alternatives it considered

– Impact metrics and quality validation

**This is a portfolio demonstrating that agents can build and maintain production systems reliably, safely, and intelligently—without constant human intervention.**

Recent example: We needed to showcase The Matrix’s capabilities. So I invoked a meta-orchestrator agent called “mind-blower.” It autonomously:

1. Analyzed 10 possible demonstration concepts
2. Scored each for impressiveness and feasibility
3. Selected the top 6 to build
4. Spawned 6 demo-builder agents in parallel
5. Each built an interactive HTML/CSS/JS visualization
6. Spawned a cathedral-architect agent to integrate everything
7. Created a showcase gallery with navigation and search

**I gave one instruction. The agents made every other decision and built the entire demonstration system.**

That’s not AI assistance. That’s autonomous software development.

## Domain-Agnostic by Design

The same orchestration pattern works for any domain requiring parallel generation:

**Software Development**:

- 50 API endpoints (10 services × 5 endpoints)
- 100 React components (20 categories × 5 components)
- Test suites, CI/CD pipelines, IaC modules

**Content Creation**:

- 50 documentation pages (10 sections × 5 pages)
- Blog article libraries, marketing assets, technical guides

**Data Engineering**:

- 50 ETL pipelines (10 data sources × 5 transformations)
- Schema generation, data quality validators

**DevOps & Infrastructure**:

- Infrastructure modules, monitoring dashboards, configuration management

The workflow is identical. Only the domain context changes.

## The Zero Dependencies Philosophy

One more thing that’s critical: **zero dependencies doesn’t mean no external resources.**

Every demonstration in The Matrix is a self-contained HTML file. No npm install. No webpack. No build process. You open the file in a browser—it works.

But we absolutely use CDN resources (Three.js for 3D visualizations, Chart.js for metrics, etc.). “Zero dependencies” means **no build process**, not hermetically sealed code.

Why? Because the real world uses libraries. The goal is showcasing autonomous development capabilities, not ideological purity about dependency management.

## What This Means for You

If you’re a developer, this changes your relationship with AI:

- Stop thinking “tool that helps me code”
- Start thinking “autonomous development team I orchestrate”

If you’re building AI systems, this provides the architectural pattern:

- Preserve orchestrator context for strategy
- Delegate implementation to specialized subagents
- Parallelize everything possible
- Synthesize results automatically

If you’re hiring developers (or evaluating AI capabilities), this portfolio proves:

- Agents can work autonomously with minimal oversight
- Agents make intelligent architectural decisions
- Agents maintain quality, safety, and documentation standards
- Agents improve systems continuously

## The Future Is Already Here

We’re at an inflection point. The question isn’t *whether* AI will autonomously build software—it’s already happening. The question is whether we’ll design systems that enable AI to work at its full potential.

The Matrix demonstrates one answer: hierarchical orchestration with context-aware delegation. An orchestrator that thinks strategically while specialized agents handle implementation in parallel.

**This is the beginning of expansive agents**—systems that don’t just respond to prompts, but autonomously break down complex problems, spawn the right specialists, coordinate execution, and synthesize results.

And here’s the really wild part: as these orchestration frameworks improve, they’ll build better versions of themselves. Agents improving agent systems. Meta-orchestration all the way down.

The repository is live. The code is open. The demonstrations are interactive. Every file was built autonomously by AI agents proving they can do this work reliably and intelligently.

The paradigm has shifted. Software development is becoming an orchestration problem, not an implementation problem.

And honestly? I’m here for it.

**Want to see it in action?**

- Repository: [The Matrix on GitHub](https://github.com/kody-w/TheMatrix)
- Demonstrations: Explore the cathedral of interactive orchestration visualizations
- Documentation: Complete agent specifications and workflow patterns
- Changelog: Every autonomous contribution documented transparently

**The question isn’t whether this is possible. The question is what you’ll build when you stop being the bottleneck in your own development process.**

Let’s find out together.

The Prompt Transplant: How I Got Claude to Organize My Inbox While I Slept

Or: When AI Becomes Your Personal IT Department

Let me tell you about the workflow that’s completely changed how I manage information overload. Last night, I went to bed with 12,625 unread emails screaming at me from my Outlook inbox. This morning, I woke up to a perfectly organized folder structure, custom inbox rules, and a clean mental slate — all because I asked Claude to “organize my inbox” while I made coffee.

The old me: Would’ve spent weeks procrastinating, eventually carved out a Saturday afternoon, gotten three folders deep, given up, and let entropy win again.

The new me: Threw the problem at Claude and said, “Figure out my email patterns and build me a system.” Fifteen minutes later, I had a complete inbox architecture with seven automated rules sorting everything from external communications to system notifications.

Welcome to Prompt Transplantation — the art of using AI to analyze your actual behavior patterns and build personalized systems around them.

The Pattern That Changes Everything

Prompt Transplantation isn’t about asking an AI to do a task once. It’s about using AI to perform behavioral pattern analysis on your real workflows and then auto-generate the automation infrastructure you need.

Here’s the mental model:

• Traditional Productivity: Manually organize, create rules, maintain systems
• Template-Based Automation: Apply generic “best practices” and hope they fit
• Prompt Transplantation: Let an AI analyze your unique patterns and generate custom automation that actually matches how YOU work

Think of it like having a business analyst and systems architect living in your chat window. The AI observes your data, identifies patterns you didn’t even know existed, and builds infrastructure around your actual behavior — not some idealized productivity guru’s version of it.

How Prompt Transplantation Actually Works

Step 1: The Raw Material

You need real data from your actual workflows:

- The Emails: Your messy, unorganized inbox exactly as it exists
- The Patterns: What you actually do (not what you think you should do)

In my case, I pointed Claude at my Outlook inbox and let it scroll through hundreds of emails. It identified senders, subject patterns, and email types I didn’t even consciously recognize.

Step 2: The Analysis

This is where the magic happens. The AI doesn’t just categorize — it discovers organizational schemas embedded in your actual usage.

Claude identified:

- Recurring senders (Deal Boost, Chris, Nathan)
- Consistent patterns (all those “[EXTERNAL]” tags)
- Project clusters (D365 Contact Centre, Salesforce, Power Platform)
- Notification types (MSApprovalNotifications, system alerts)

It wasn’t applying some pre-built email organization template. It was reading MY inbox and figuring out what folder structure would actually serve MY specific workflow.

Step 3: The Infrastructure Build

Here’s where Prompt Transplantation gets wild. Claude didn’t just suggest folders — it:

1. Created a hierarchical folder structure tailored to my email patterns
2. Built specific inbox rules with precise conditions
3. Mapped existing emails to appropriate destinations
4. Provided a complete audit trail of what it did

All of this happened through browser automation. The AI wasn’t just advising me what to do. It was actually clicking through Outlook’s interface, creating folders, configuring rules, and building the system live.
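Under the hood, the output of an analysis pass like this is really just data. Here's a hedged sketch of what a proposed folder-and-rule plan might look like, with made-up folder and rule names standing in for the real ones (this isn't the actual structure Claude built):

```javascript
// Hypothetical sketch: the folder/rule plan an AI analysis pass might emit.
// All names are illustrative, not the real structure.
const inboxPlan = {
    folders: [
        "Projects/D365 Contact Centre",
        "Projects/Power Platform",
        "Notifications/Approvals",
    ],
    rules: [
        { name: "Route approval notifications",
          condition: { from: "MSApprovalNotifications" },
          action: { moveTo: "Notifications/Approvals" } },
        { name: "Tag external mail",
          condition: { subjectContains: "[EXTERNAL]" },
          action: { categorize: "External" } },
    ],
};

// A rule fires when every field in its condition matches the message.
function matchRule(rule, message) {
    const c = rule.condition;
    if (c.from && message.from !== c.from) return false;
    if (c.subjectContains && !message.subject.includes(c.subjectContains)) return false;
    return true;
}

const msg = { from: "MSApprovalNotifications", subject: "Approval required" };
const fired = inboxPlan.rules.filter(r => matchRule(r, msg)).map(r => r.name);
console.log(fired); // ["Route approval notifications"]
```

The point isn't the code — it's that the plan is inspectable data you can review line by line before anything touches your inbox.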

Step 4: The Proposal Pattern

The crucial insight: Claude didn’t just DO everything. It proposed a complete plan first:

“Here’s what I found in your inbox. Here are the folders I recommend. Here are the rules I’d build. Approve?”

This “analysis → proposal → approval → execution” pattern is what makes Prompt Transplantation practical instead of scary. You’re not blindly trusting AI to reorganize your life. You’re using it as an infinitely patient analyst who can propose solutions faster than you can think them up.
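The pattern is simple enough to sketch in a few lines. This is a rough outline in plain JavaScript — the function names are placeholders, not a real API — of the gate that separates read-only analysis from anything that actually changes your data:

```javascript
// Sketch of the analysis → proposal → approval → execution loop.
// analyze/approve/execute are placeholders for whatever you wire in.
function runWithApproval(analyze, approve, execute) {
    const proposal = analyze();           // read-only pass over your data
    const approved = approve(proposal);   // human reviews the complete plan
    if (!approved) return { status: "rejected", actions: [] };
    const actions = proposal.steps.map(execute); // only now does anything change
    return { status: "executed", actions };
}

const result = runWithApproval(
    () => ({ steps: ["create folder", "create rule"] }), // analysis
    (p) => p.steps.length <= 10,                         // approval policy
    (step) => `done: ${step}`                            // execution
);
console.log(result.status); // "executed"
```

Everything before the `approved` check is safe to run against anything you own; everything after it is what you signed off on.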

Why This Changes Everything

I’ve been using Microsoft’s AI tools for years. Power Apps with Copilot. Automation with AI. But this felt different because of one key insight:

The AI wasn’t building something generic. It was building something bespoke.

Traditional productivity advice: “Here’s how successful people organize email”

Prompt Transplantation: “Here’s how YOU use email, and here’s infrastructure that matches”

The difference is massive. Generic systems fail because they fight your natural patterns. Custom systems work because they flow with them.

The Broader Pattern

Email organization is just one example. The real power of Prompt Transplantation applies anywhere you have:

1. Existing unstructured data (emails, files, notes, code)

2. Implicit patterns in how you use that data

3. A need for custom automation infrastructure

Imagine applying this to:

– File organization across projects

– Code refactoring patterns in your repositories

– Meeting notes that auto-organize by project

– Research materials that cluster by theme

The AI analyzes your actual behavior, proposes infrastructure, and builds it through automation. Every time.

The “Trust But Verify” Protocol

Here’s the key safety pattern I’ve developed for Prompt Transplantation:

1. Let the AI analyze (read-only, safe)

2. Review the complete proposal (AI shows all its work)

3. Approve explicitly (you’re still in control)

4. Watch it execute (see every action in real-time)

5. Verify the results (check that it did what it said)

This isn’t about blind automation. It’s about using AI to do the tedious pattern-recognition and system-building work while you maintain oversight and approval authority.
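Step 5 is the one people skip, but it's also the easiest to automate. A minimal sketch — illustrative names only — of checking that what got built matches what was approved:

```javascript
// Compare the approved proposal against what actually got built.
function verify(proposed, actual) {
    const missing = proposed.filter(item => !actual.includes(item));
    const extra = actual.filter(item => !proposed.includes(item));
    return { ok: missing.length === 0 && extra.length === 0, missing, extra };
}

const proposedFolders = ["Projects", "Notifications", "Archive"];
const actualFolders = ["Projects", "Notifications"]; // "Archive" never got created
const report = verify(proposedFolders, actualFolders);
console.log(report.ok);      // false
console.log(report.missing); // ["Archive"]
```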

What I Learned

The most surprising insight: I didn’t actually know my own email patterns.

I thought I knew how I used email. But watching Claude analyze my inbox revealed organizational structures I’d never consciously recognized. The D365 Contact Centre cluster. The way external emails all had consistent markers. The subtle difference between project updates and system notifications.

The AI saw patterns in my data that I was blind to because I was too close to it.

That’s the real power of Prompt Transplantation. It’s not about replacing human judgment. It’s about using AI’s ability to process massive amounts of data and spot patterns we can’t see, then building automation infrastructure around those patterns.

Try This Tonight

Want to try Prompt Transplantation yourself? Here’s the simplest experiment:

1. Pick one messy digital workspace (inbox, file folder, notes app)

2. Ask an AI with browser access: “Analyze this and propose an organization system”

3. Review the AI’s proposal

4. Let it build the infrastructure if you approve

5. Watch what patterns it found that you missed

You might discover, like I did, that your biggest productivity problem wasn’t lack of discipline. It was lack of infrastructure that matched how you actually work.

The old me would spend months “meaning to organize” that inbox.

The new me just asks Claude to figure it out.

Welcome to the age of personalized infrastructure generation. Your AI business analyst is waiting.

Code Welding: Using LLMs to Merge Unrelated Codebases Into Something New

Or: How I Got Claude to Transplant Gesture Controls from a 3D Visualizer Into My Chat App

Let me tell you about the development pattern that’s completely changed how I build features. Last week, I had a broken chat application with a settings modal that wouldn’t close. I also had this completely unrelated 3D dimensional visualizer with killer gesture controls — you know, wave your hand to navigate, pinch to select, that sort of thing.

The old me: Would’ve spent days extracting, refactoring, and building a proper gesture library.

The new me: Threw both files at Claude and said, “Take the gesture controls from file B and weld them into file A. Don’t break anything else.”

Twenty seconds later, I had a fully functional chat app with gesture controls.

Welcome to Code Welding — the art of using LLMs to merge features from completely unrelated codebases.

The Pattern That Changes Everything

Code Welding isn’t about asking an LLM to write new code. It’s about using AI to perform surgical feature transplants between codebases that have absolutely nothing in common.

Here’s the mental model:

  • Traditional Development: Build features from scratch or carefully refactor shared code
  • Copy-Paste Programming: Grab code and hope it works (spoiler: it doesn’t)
  • Code Welding: Use an LLM as your surgical assistant to transplant working features between alien codebases

Think of it like organ transplants, but for code. The LLM is your surgeon, handling all the complex vascular connections while keeping both patients alive.

How Code Welding Actually Works

Step 1: The Donor and Recipient

You need two things:

  1. The Donor — A working codebase with the feature you want
  2. The Recipient — The codebase that needs the feature

In my case:

  • Donor: A 3D visualizer with MediaPipe gesture controls (1,500 lines of wild Three.js code)
  • Recipient: A React-ish chat application (5,000 lines of messaging logic)

These files shared literally nothing. Different frameworks, different purposes, different everything.

Step 2: The Prompt Engineering

This is where the magic happens. You don’t ask the LLM to “add gesture controls.” You give it surgical instructions:

Take the gesture detection system from iframe-tunneler-10.html,
specifically the detectGesture() and hand tracking logic.
Transplant it into index.html's chat application.
Map these gestures to these existing functions:
- Point up → scroll up
- Point down → scroll down  
- Peace sign → new chat
- OK sign → send message
Keep ALL existing functionality intact.
Output the COMPLETE modified index.html.

Step 3: The Weld Points

The LLM identifies where to attach the foreign code. It finds the natural connection points — what I call “weld points” — between two completely different architectures.

Watch what happened with mine:

```javascript
// The LLM created this bridge class
class GestureManager {
    constructor(uiController) {
        this.ui = uiController;  // Weld point #1: Existing UI
        // ... gesture setup code from visualizer
    }

    executeGesture(gesture) {
        // Weld point #2: Map gestures to existing methods
        switch (gesture) {
            case 'point-up':
                document.getElementById('chat-messages').scrollBy({
                    top: -200,
                    behavior: 'smooth'
                });
                break;
            case 'peace':
                this.ui.createNewChat();  // Using existing method!
                break;
            // ...the remaining gestures (point-down, OK sign) follow the same shape
        }
    }
}
```

The LLM understood both codebases well enough to create perfect adapters between them. It’s like it built custom surgical shunts between incompatible organs.


Why This Is Revolutionary

1. Speed That’s Actually Insane

I went from idea to implementation in minutes, not days. Not because I’m fast, but because I’m not doing the work. The LLM is handling thousands of micro-decisions about integration.

2. Cross-Pollination of Ideas

You can grab features from ANYWHERE:

  • Want the smooth scroll from that Apple marketing page? Weld it into your docs.
  • Love the particle effects from that game? Weld them into your dashboard.
  • Need voice commands from a smart home app? Weld them into your spreadsheet.

3. No Sacred Cows

Traditional development makes us precious about architecture. Code Welding doesn’t care. That gesture system was built for 3D visualization? So what. It works in a chat app now.

The Code Welding Playbook

Here’s my exact process:

1. Identify the Feature

Find something cool that works. Don’t worry about how it’s implemented. Just make sure it actually works in its current context.

2. Document the Behavior

Write down exactly what the feature does:

  • “Detects hand gestures using webcam”
  • “Maps specific gestures to specific actions”
  • “Shows visual feedback when gesture is recognized”

3. Map the Integration Points

Tell the LLM exactly how to connect the features:

When peace sign detected → call createNewChat()
When pinch detected → call archiveCurrentChat()
When fist detected → call toggleSidebar()
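That mapping can also be handed to the LLM as a plain lookup table instead of a switch statement — a sketch, with stub functions standing in for the chat app's real methods:

```javascript
// Declarative gesture → action table; stubs stand in for the real UI methods.
const log = [];
const ui = {
    createNewChat:      () => log.push("new chat"),
    archiveCurrentChat: () => log.push("archived"),
    toggleSidebar:      () => log.push("sidebar"),
};

const gestureActions = {
    "peace": () => ui.createNewChat(),
    "pinch": () => ui.archiveCurrentChat(),
    "fist":  () => ui.toggleSidebar(),
};

// Unknown gestures fall through harmlessly instead of throwing.
function executeGesture(name) {
    (gestureActions[name] || (() => {}))();
}

executeGesture("peace");
executeGesture("wave"); // no mapping — ignored
console.log(log); // ["new chat"]
```

A table like this is easier for the LLM to extend without touching the detection code, which keeps the weld point small.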

4. Preserve Everything Else

This is crucial. Your prompt must emphasize:

Keep ALL existing functionality.
Do not remove any features.
Only ADD the gesture system.

5. Test the Weld Points

The LLM will create connection points. Test them individually:

  • Does gesture detection work?
  • Do the mapped functions fire?
  • Did anything else break?

Real Examples That Shouldn’t Work (But Do)

Music Visualizer → Email Client

  • Welded audio-reactive animations into Gmail
  • Emails now pulse to background music
  • Why? Because reading email is boring

Game Engine Physics → Todo App

  • Tasks now have gravity and collision
  • Completed tasks literally fall off the screen
  • Overdue tasks get heavier and sink

3D Shader Effects → Markdown Editor

  • Text now has real-time ray marching effects
  • Code blocks look like they’re carved from marble
  • Headers cast actual shadows

These aren’t jokes. These are real welds I’ve done. They work.

The Gotchas (Learn From My Pain)

Version Conflicts

  • The donor uses React 16; the recipient uses React 18
  • Solution: Tell the LLM about version differences upfront

Hidden Dependencies

  • That cool feature needs THREE.js but your app doesn’t have it
  • Solution: Let the LLM inline just the needed parts

Event System Conflicts

  • Both codebases want to own window.onload
  • Solution: Prompt the LLM to namespace everything

Performance Bombs

  • That particle system runs at 60fps, your form doesn’t need that
  • Solution: Add throttling instructions to your prompt
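One way to make that throttling instruction concrete: ask the LLM to wrap the hot callback in a throttle along these lines (a generic sketch, not code from either of my files):

```javascript
// Minimal throttle: run fn at most once per `ms` milliseconds.
function throttle(fn, ms) {
    let last = 0;
    return (...args) => {
        const now = Date.now();
        if (now - last >= ms) {
            last = now;
            fn(...args);
        }
    };
}

let frames = 0;
const render = throttle(() => { frames++; }, 100);
render(); // runs
render(); // swallowed: less than 100 ms since the last run
console.log(frames); // 1
```

A 60fps particle loop throttled to 10fps is usually indistinguishable in a form UI, and your laptop fans will thank you.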

When NOT to Code Weld

Let’s be real — this isn’t always the answer:

Don’t weld when:

  • You’re building critical infrastructure
  • Performance is more important than features
  • You need deep integration with existing systems
  • The feature needs to be maintained long-term

Do weld when:

  • You’re prototyping
  • You need to test if users even want the feature
  • The feature is for fun/delight
  • You need something NOW

The Prompt Template That Always Works

I have two files:
1. [DONOR FILE] - Contains [FEATURE DESCRIPTION]
2. [RECIPIENT FILE] - Needs the feature added

Take the [SPECIFIC FEATURE] from the donor file.
Integrate it into the recipient file by:
- Creating a new [CLASS/MODULE] to contain the feature
- Mapping these donor functions to these recipient functions: [MAPPING]
- Preserving ALL existing functionality in the recipient
- Adding these integration points: [INTEGRATION POINTS]

Output the COMPLETE modified recipient file.
Maintain all existing code structure and functionality.

The Future Is Already Here

I’m seeing developers use Code Welding for things I never imagined:

  • Feature Shopping: Browse GitHub, find cool features, weld them into your app
  • Cross-Platform Welding: iOS feature → Web app, no problem
  • Time Travel Welding: Modern features into legacy codebases
  • Language Welding: Python ML model → JavaScript frontend (yes, really)

We’re entering an era where features are portable. Where any code that works anywhere can work everywhere.

Your First Weld

Want to try this? Here’s a starter challenge:

  1. Find any app with a feature you love
  2. View source, copy the whole file
  3. Take your current project
  4. Ask Claude/GPT-4 to weld them together

Start small. Maybe grab a tooltip implementation and weld it into your CLI tool. Or take a loading animation and weld it into your terminal.

The Philosophical Shift

We’ve been taught that code should be modular, reusable, properly abstracted. Code Welding says: “What if we just… didn’t care?”

What if, instead of building perfect architectures, we just grabbed working features and welded them wherever we needed them?

What if every piece of working code became a potential feature for every other piece of code?

What if the LLM could handle all the messy integration details while we focus on what we actually want to build?


This is the future I’m building toward. Where every developer becomes a curator of features rather than a writer of code. Where the question isn’t “How do I build this?” but “Where has this already been built?”

What will you weld first?
