The Analogy
The human brainstem is the most ancient part of the brain. It doesn’t think. It doesn’t reason. It keeps you alive — breathing, heartbeat, reflexes, sensory relay. Everything else in the nervous system builds on top of this foundation.
This is the exact architecture we need for AI assistants.
The Biological Model
```
┌─────────────────────────────────────────┐
│  Cerebral Cortex                        │ ← Higher reasoning, language, planning
│  (Azure OpenAI / GPT)                   │
├─────────────────────────────────────────┤
│  Limbic System                          │ ← Memory, emotion, context
│  (Memory Agents, D365 Storage)          │
├─────────────────────────────────────────┤
│  Brainstem                              │ ← Core survival loop
│  (Agent server, tool dispatch, I/O)     │
├─────────────────────────────────────────┤
│  Spinal Cord                            │ ← Cloud body, always-on
│  (Azure Functions, deployment)          │
├─────────────────────────────────────────┤
│  Nervous System                         │ ← Reach into the world
│  (Copilot Studio, Teams, M365 Copilot)  │
└─────────────────────────────────────────┘
```
What Each Layer Does
| Biological Structure | AI Equivalent | Purpose |
|---|---|---|
| Brainstem | Local agent server | Core loop: receive input → pick tool → execute → respond. The minimum viable intelligence. |
| Spinal Cord | Azure deployment | Extends the brainstem into the cloud. Always-on, reachable, persistent storage. |
| Nervous System | Copilot Studio / Teams | Sensory reach — eyes (file upload), ears (voice), hands (email, D365 actions). Enterprise distribution. |
| Limbic System | Memory agents + storage | Remembers past interactions, stores preferences, maintains context across sessions. |
| Cerebral Cortex | LLM (GPT, Claude) | The actual reasoning. Language understanding, planning, tool selection. |
The Brainstem Itself
The brainstem is the smallest useful unit. It has exactly three responsibilities:
1. Breathe — The Agent Loop
```
receive message → build context → call LLM → parse tool calls → execute → respond
```
This is the heartbeat. It never stops. It doesn’t need to be smart — it just needs to reliably shuttle messages between the user and the LLM, and execute whatever the LLM decides.
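The loop above can be sketched in a few lines. Everything here is illustrative rather than the project's actual API: `agent_loop()`, `call_llm`, and the message/tool-call shapes are assumptions, with the LLM re-consulted after each round of tool results until it has nothing left to execute.

```python
def agent_loop(message, agents, call_llm, history):
    """One breath of the brainstem: shuttle a message through the LLM,
    run any tools it asks for, and return the final reply."""
    history.append({"role": "user", "content": message})   # receive + build context
    while True:
        reply = call_llm(history)                          # call LLM
        calls = reply.get("tool_calls") or []
        if not calls:                                      # nothing left to execute
            history.append({"role": "assistant", "content": reply["content"]})
            return reply["content"]                        # respond
        for call in calls:                                 # parse tool calls
            result = agents[call["name"]].perform(**call["arguments"])  # execute
            history.append({"role": "tool", "content": str(result)})
```

The loop itself stays dumb on purpose: all decisions live in `call_llm` and in the agents' `perform()` methods.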
2. Reflex — Tool Dispatch
The brainstem has reflexes — pre-wired responses to specific stimuli. In AI terms, these are agents. Each agent is a self-contained skill:
```python
class BasicAgent:
    def __init__(self, name, metadata):
        self.name = name          # What am I called?
        self.metadata = metadata  # When should the LLM pick me?
        # metadata includes: description, parameters schema

    def perform(self, **kwargs):
        pass  # What do I do?
```
The LLM reads all agent metadata (OpenAI function-calling format) and decides which reflex to trigger. The brainstem just executes it.
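Collecting that metadata can be sketched as a straight mapping into the OpenAI tools schema. `to_tools()` is a hypothetical helper name, and the exact metadata keys (`description`, `parameters`) are assumed from the `BasicAgent` comment above:

```python
def to_tools(agents):
    """Map each agent's metadata into the OpenAI function-calling format,
    so the LLM can pick a reflex by name."""
    return [
        {
            "type": "function",
            "function": {
                "name": agent.name,
                "description": agent.metadata["description"],
                "parameters": agent.metadata["parameters"],
            },
        }
        for agent in agents
    ]
```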
3. Sensory Relay — I/O Routing
The brainstem routes signals between the body (channels) and the brain (LLM):
- Input: HTTP POST from chat UI, Teams, M365 Copilot, API clients
- Output: Formatted response back through the same channel
- Side effects: Agent execution results (emails sent, records created, memories stored)
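The relay can be sketched with the standard library's `http.server`. Here `handle_message()` is a hypothetical stand-in for the full agent loop, and the JSON shapes (`{"message": ...}` in, `{"reply": ...}` out) are assumptions rather than the project's actual contract:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_message(text):
    # Stand-in for the full agent loop.
    return f"echo: {text}"

class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Input: HTTP POST with a JSON body from any channel.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        reply = handle_message(payload.get("message", ""))
        # Output: formatted response back through the same channel.
        body = json.dumps({"reply": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet
```

Serving it is one line: `HTTPServer(("127.0.0.1", 8080), RelayHandler).serve_forever()`. Side effects happen inside the agents themselves; the relay only shuttles bytes.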
The Meta-Agent Pattern
The most powerful brainstem pattern is the meta-agent — an agent that creates or transforms other agents.
LearnNewAgent (Description → Agent)
"Create an agent that fetches weather data"
↓
LearnNewAgent reads description
↓
Uses AI to generate perform() body
↓
Writes weather_agent.py to agents/
↓
Hot-loads it — immediately available
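The hot-load step can be sketched with `importlib`. The `hot_load()` name and the generated stub source are illustrative, not the project's actual code:

```python
import importlib.util
from pathlib import Path

def hot_load(path):
    """Import a freshly written *_agent.py file without restarting the server."""
    path = Path(path)
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Source a meta-agent might generate (stub body for illustration).
generated_source = (
    "class WeatherAgent:\n"
    "    name = 'weather'\n"
    "    def perform(self, **kwargs):\n"
    "        return 'sunny'\n"
)
```

The same mechanism powers auto-discovery at startup: glob the agents folder for `*_agent.py` and `hot_load` each file.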
CopilotStudioConverterAgent (Agent → Native Solution)
The inverse meta-agent. It reads existing *_agent.py files and converts them into native Copilot Studio solutions:
```
email_drafting_agent.py
        ↓
AST parse → extract metadata, perform() logic
        ↓
AI researches native Copilot Studio equivalent
        ↓
Generates: topic YAML + Power Automate flow JSON + GPT instructions
        ↓
Packages into Dataverse-importable solution zip
```
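The converter's first step, statically reading an agent file without importing it, can be sketched with Python's `ast` module. The assumption that `metadata` is assigned as a literal dict is mine; real agent files may build it dynamically:

```python
import ast

def extract_metadata(source):
    """Find a `metadata = {...}` (or `self.metadata = {...}`) assignment
    in the source and evaluate the literal dict, without importing the module."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Dict):
            names = [getattr(t, "id", None) or getattr(t, "attr", None)
                     for t in node.targets]
            if "metadata" in names:
                return ast.literal_eval(node.value)
    return None
```

Static parsing matters here: the converter can inspect an agent's contract without executing arbitrary `perform()` code.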
This is critical because it solves the platform anxiety problem: stakeholders see Python code running on Azure Functions and worry it’s “outside the platform.” The converter takes that exact logic and transplants it into native Copilot Studio components — same brain, new body.
The Conversion Map
| Python Agent Logic | Copilot Studio Native |
|---|---|
| `BasicAgent.metadata.description` | GPT instruction (topic routing) |
| `BasicAgent.metadata.parameters` | Topic input variables |
| `perform()` with `requests.post()` | Power Automate flow with native connector |
| `storage_manager.read_json()` | Dataverse Annotations via native connector |
| `os.environ.get('EMAIL_URL')` | Office 365 Outlook connector (no webhook) |
| OpenAI function-calling dispatch | Copilot Studio built-in AI topic routing |
| Conditional logic in `perform()` | `ConditionGroup` actions in topic YAML |
The Three Tiers
Tier 1: The Brainstem (Local)
One dependency: a GitHub account. The brainstem server runs locally, uses the GitHub Copilot API as its LLM, auto-discovers *_agent.py files from the agents folder, and serves a chat UI.
What you learn: Agent architecture, function-calling, tool dispatch.
Tier 2: The Spinal Cord (Azure)
Deploy to Azure Functions. Now it’s always-on with persistent storage (Dataverse/D365), monitoring (Application Insights), and managed identity (no API keys).
What you learn: ARM templates, Azure Functions, managed identity, RBAC.
Tier 3: The Nervous System (Copilot Studio)
Connect to Copilot Studio. Your agent is now available in Teams, M365 Copilot, and the entire Microsoft ecosystem. Either:
- Bridge mode: Copilot Studio calls your Azure Function (thin proxy)
- Native mode: Use the converter to transplant agent logic into native topics + flows
What you learn: Copilot Studio, declarative agents, Power Platform solutions, enterprise AI.
Why This Pattern Works
- Start simple, layer up — The brainstem works standalone. Each tier adds capability without replacing what’s below.
- Same brain, different body — The agent logic (the “what”) is separate from the runtime (the “where”). The same `perform()` runs locally, on Azure, or as a native Copilot Studio topic.
- Auto-discovery — Drop a `*_agent.py` file in the agents folder and the brainstem finds it. No registration, no configuration.
- Meta-agents — Agents that create agents. The system can extend itself.
- Biological metaphor scales — From a single brainstem to a full nervous system, the architecture maps cleanly to how biological intelligence is organized.
The Soul File
Every brainstem can have a soul.md — a markdown file that defines its personality, boundaries, and expertise. The soul is injected as the system prompt. Different souls make the same brainstem behave differently:
- A customer support soul answers questions politely and escalates edge cases
- A developer soul writes code and runs terminal commands
- A workshop facilitator soul runs ideation sessions and tracks votes
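Injecting a soul is simple: read the file, make it the first message. `build_messages()` is an illustrative helper name, not from the original code:

```python
from pathlib import Path

def build_messages(user_message, soul_path="soul.md"):
    """Prepend the soul file as the system prompt, if one exists."""
    soul = Path(soul_path)
    messages = []
    if soul.exists():
        messages.append({"role": "system", "content": soul.read_text()})
    messages.append({"role": "user", "content": user_message})
    return messages
```

Swapping `soul.md` swaps the personality; the loop, agents, and LLM stay untouched.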
The soul is the brainstem’s identity. The agents are its skills. The LLM is its reasoning. Together, they form a complete AI assistant.