Kody Wildfeuer's blog

kodyw.com

Month: February 2026

The Brainstem Pattern — Biological Architecture for AI Assistants

The Analogy

The human brainstem is the most ancient part of the brain. It doesn’t think. It doesn’t reason. It keeps you alive — breathing, heartbeat, reflexes, sensory relay. Everything else in the nervous system builds on top of this foundation.

This is the exact architecture we need for AI assistants.


The Biological Model

┌─────────────────────────────────────────┐
│           Cerebral Cortex               │  ← Higher reasoning, language, planning
│         (Azure OpenAI / GPT)            │
├─────────────────────────────────────────┤
│           Limbic System                 │  ← Memory, emotion, context
│     (Memory Agents, D365 Storage)       │
├─────────────────────────────────────────┤
│            Brainstem                    │  ← Core survival loop
│   (Agent server, tool dispatch, I/O)    │
├─────────────────────────────────────────┤
│          Spinal Cord                    │  ← Cloud body, always-on
│      (Azure Functions, deployment)      │
├─────────────────────────────────────────┤
│         Nervous System                  │  ← Reach into the world
│  (Copilot Studio, Teams, M365 Copilot)  │
└─────────────────────────────────────────┘

What Each Layer Does

| Biological Structure | AI Equivalent | Purpose |
|---|---|---|
| Brainstem | Local agent server | Core loop: receive input → pick tool → execute → respond. The minimum viable intelligence. |
| Spinal Cord | Azure deployment | Extends the brainstem into the cloud. Always-on, reachable, persistent storage. |
| Nervous System | Copilot Studio / Teams | Sensory reach — eyes (file upload), ears (voice), hands (email, D365 actions). Enterprise distribution. |
| Limbic System | Memory agents + storage | Remembers past interactions, stores preferences, maintains context across sessions. |
| Cerebral Cortex | LLM (GPT, Claude) | The actual reasoning. Language understanding, planning, tool selection. |

The Brainstem Itself

The brainstem is the smallest useful unit. It has exactly three responsibilities:

1. Breathe — The Agent Loop

receive message → build context → call LLM → parse tool calls → execute → respond

This is the heartbeat. It never stops. It doesn’t need to be smart — it just needs to reliably shuttle messages between the user and the LLM, and execute whatever the LLM decides.
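A minimal sketch of that heartbeat in Python. This is not the actual brainstem code — `call_llm` stands in for whatever chat-completions client you use, and the response shape assumed here follows the OpenAI-style tool-call format:

```python
import json

def agent_loop(message, history, agents, call_llm):
    """One heartbeat: shuttle a message to the LLM, run whatever tool it picks.
    call_llm is assumed to return an OpenAI-style dict; the helper names
    here are illustrative, not the project's actual implementation."""
    history.append({"role": "user", "content": message})
    tools = [a.metadata for a in agents.values()]        # advertise the reflexes
    reply = call_llm(messages=history, tools=tools)      # cortex does the thinking

    while reply.get("tool_calls"):                       # execute whatever it chose
        for call in reply["tool_calls"]:
            agent = agents[call["name"]]
            result = agent.perform(**json.loads(call["arguments"]))
            history.append({"role": "tool", "content": str(result)})
        reply = call_llm(messages=history, tools=tools)  # let it see the results

    history.append({"role": "assistant", "content": reply["content"]})
    return reply["content"]
```

The loop itself stays dumb on purpose: all the intelligence lives in the metadata the LLM reads and the agents it triggers.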

2. Reflex — Tool Dispatch

The brainstem has reflexes — pre-wired responses to specific stimuli. In AI terms, these are agents. Each agent is a self-contained skill:

class BasicAgent:
    def __init__(self, name, metadata):
        self.name = name          # What am I called?
        self.metadata = metadata  # When should the LLM pick me?
        # metadata includes: description, parameters schema

    def perform(self, **kwargs):
        pass  # What do I do?

The LLM reads all agent metadata (OpenAI function-calling format) and decides which reflex to trigger. The brainstem just executes it.
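For concreteness, here is what that metadata might look like for a hypothetical weather agent. The field layout follows the OpenAI function-calling schema; the agent itself is invented for illustration:

```python
# Hypothetical metadata for a weather agent, in OpenAI function-calling shape.
# The LLM reads the description to decide when to trigger this reflex.
weather_metadata = {
    "name": "get_weather",
    "description": "Fetch the current weather for a city. "
                   "Use when the user asks about weather or temperature.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Seattle'"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}
```

The description doubles as routing logic: it tells the LLM *when* to pick this reflex, not just what it does.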

3. Sensory Relay — I/O Routing

The brainstem routes signals between the body (channels) and the brain (LLM):

  • Input: HTTP POST from chat UI, Teams, M365 Copilot, API clients
  • Output: Formatted response back through the same channel
  • Side effects: Agent execution results (emails sent, records created, memories stored)

The Meta-Agent Pattern

The most powerful brainstem pattern is the meta-agent — an agent that creates or transforms other agents.

LearnNewAgent (Description → Agent)

"Create an agent that fetches weather data"
        ↓
  LearnNewAgent reads description
        ↓
  Uses AI to generate perform() body
        ↓
  Writes weather_agent.py to agents/
        ↓
  Hot-loads it — immediately available

CopilotStudioConverterAgent (Agent → Native Solution)

The inverse meta-agent. It reads existing *_agent.py files and converts them into native Copilot Studio solutions:

  email_drafting_agent.py
        ↓
  AST parse → extract metadata, perform() logic
        ↓
  AI researches native Copilot Studio equivalent
        ↓
  Generates: topic YAML + Power Automate flow JSON + GPT instructions
        ↓
  Packages into Dataverse-importable solution zip

This is critical because it solves the platform anxiety problem: stakeholders see Python code running on Azure Functions and worry it’s “outside the platform.” The converter takes that exact logic and transplants it into native Copilot Studio components — same brain, new body.

The Conversion Map

| Python Agent Logic | Copilot Studio Native |
|---|---|
| BasicAgent.metadata.description | GPT instruction (topic routing) |
| BasicAgent.metadata.parameters | Topic input variables |
| perform() with requests.post() | Power Automate flow with native connector |
| storage_manager.read_json() | Dataverse Annotations via native connector |
| os.environ.get('EMAIL_URL') | Office 365 Outlook connector (no webhook) |
| OpenAI function-calling dispatch | Copilot Studio built-in AI topic routing |
| Conditional logic in perform() | ConditionGroup actions in topic YAML |

The Three Tiers

Tier 1: The Brainstem (Local)

One dependency: a GitHub account. The brainstem server runs locally, uses the GitHub Copilot API as its LLM, auto-discovers *_agent.py files from the agents folder, and serves a chat UI.

What you learn: Agent architecture, function-calling, tool dispatch.

Tier 2: The Spinal Cord (Azure)

Deploy to Azure Functions. Now it’s always-on with persistent storage (Dataverse/D365), monitoring (Application Insights), and managed identity (no API keys).

What you learn: ARM templates, Azure Functions, managed identity, RBAC.

Tier 3: The Nervous System (Copilot Studio)

Connect to Copilot Studio. Your agent is now available in Teams, M365 Copilot, and the entire Microsoft ecosystem. Either:

  • Bridge mode: Copilot Studio calls your Azure Function (thin proxy)
  • Native mode: Use the converter to transplant agent logic into native topics + flows

What you learn: Copilot Studio, declarative agents, Power Platform solutions, enterprise AI.


Why This Pattern Works

  1. Start simple, layer up — The brainstem works standalone. Each tier adds capability without replacing what’s below.
  2. Same brain, different body — The agent logic (the “what”) is separate from the runtime (the “where”). Same perform() runs locally, on Azure, or as a native Copilot Studio topic.
  3. Auto-discovery — Drop a *_agent.py file in the agents folder and the brainstem finds it. No registration, no configuration.
  4. Meta-agents — Agents that create agents. The system can extend itself.
  5. Biological metaphor scales — From a single brainstem to a full nervous system, the architecture maps cleanly to how biological intelligence is organized.

The Soul File

Every brainstem can have a soul.md — a markdown file that defines its personality, boundaries, and expertise. The soul is injected as the system prompt. Different souls make the same brainstem behave differently:

  • A customer support soul answers questions politely and escalates edge cases
  • A developer soul writes code and runs terminal commands
  • A workshop facilitator soul runs ideation sessions and tracks votes

The soul is the brainstem’s identity. The agents are its skills. The LLM is its reasoning. Together, they form a complete AI assistant.

Data Sloshing: The Context Pattern That Makes AI Agents Feel Psychic

By Kody Wildfeuer

Or: How I Made Every Agent in My System Know What Time It Is, What You Did Last Tuesday, and Why You Prefer Bullet Points


Let me tell you about the moment I realized every AI agent framework is fundamentally broken. I had built a calendar agent — cool little thing, reads your Outlook, tells you what to prep for. Monday morning, 8:47 AM, I ask it “what’s coming up?” and it spits back a flat list of meetings sorted by time.

Technically correct. Completely useless.

Because at 8:47 AM on a Monday, I don’t need a list. I need someone to say: “Your standup is in 13 minutes, you haven’t reviewed the sprint board, and by the way — it’s quarter-end, so that finance review at 2 PM is the one that actually matters today.”

The old me: Would’ve hardcoded a bunch of if statements. if monday, if morning, if Q4. Brittle. Annoying. Never enough context.

The new me: Built a pattern where every agent automatically knows everything it could need before it even starts working.

Welcome to Data Sloshing — the pattern that makes AI agents feel like they’ve been watching over your shoulder all week.

What Is Data Sloshing?

Here’s what most agent frameworks do: You call an agent, you pass it a query, it does its thing, it returns a result. The agent is stateless. Contextless. It has the memory of a goldfish and the situational awareness of a rock.

Data Sloshing flips this. Before any agent runs its perform() method, the system automatically washes a wave of contextual signals over it. The agent doesn’t ask for context. It doesn’t need to. Context just arrives, like weather.

Here’s the mental model:

Traditional Agents: User → Query → Agent → Response
Data Sloshing: User → Query → Context Enrichment → Agent (now omniscient) → Response

Think of it like this: Imagine every employee at a company gets a personalized briefing packet slid under their door every morning. Time of day, what happened yesterday, who prefers what, what’s urgent. They don’t have to go looking for it. It’s just there when they sit down to work.

That’s sloshing.

The Five Layers of Context

Every single agent call in openrappter gets enriched with five layers of context before perform() fires. Here’s what each layer knows:

Layer 1: Temporal Awareness

The system knows what time it is, and more importantly, what that means.

{
    "time_of_day": "morning",        # Not just the hour — the vibe
    "likely_activity": "active_work", # What you're probably doing right now
    "day_of_week": "Monday",
    "is_weekend": false,
    "quarter": "Q1",
    "fiscal": "quarter_end_push",    # Uh oh — crunch time
    "is_urgent_period": true          # Everything matters more right now
}

Notice likely_activity. The system doesn’t just know it’s 10 AM. It knows that at 10 AM, you’re probably in active work mode. At 5 PM, you’re in wrap-up mode. At 7 AM, you’re preparing for the day. This changes how agents respond.
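All of those signals fall out of the system clock. A sketch of the derivation — the hour thresholds and the quarter-end window here are my illustrative guesses, not the system's real configuration:

```python
from datetime import datetime

def temporal_signals(now=None):
    """Derive 'vibe' signals from the clock alone. Thresholds and the
    quarter-end rule are illustrative choices, not the actual config."""
    now = now or datetime.now()
    hour, month = now.hour, now.month
    if hour < 9:
        time_of_day, activity = "morning", "preparing"
    elif hour < 17:
        time_of_day, activity = "midday", "active_work"
    else:
        time_of_day, activity = "evening", "wrap_up"
    quarter = (month - 1) // 3 + 1
    quarter_end = month % 3 == 0                    # Mar, Jun, Sep, Dec
    return {
        "time_of_day": time_of_day,
        "likely_activity": activity,
        "day_of_week": now.strftime("%A"),
        "is_weekend": now.weekday() >= 5,
        "quarter": f"Q{quarter}",
        "fiscal": "quarter_end_push" if quarter_end else "normal",
        "is_urgent_period": quarter_end,
    }
```

Pure datetime math, zero latency, and every agent downstream inherits the result.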

My calendar agent uses this to shift tone:

  • 8 AM: “☀️ First up: Sprint Planning in 22min. Review your notes and grab coffee.”
  • 2 PM: “⏰ Architecture Review in 45min — wrap up your current task and context-switch.”
  • 6 PM: “🌆 Late meeting: 1:1 with Manager in 30min. Prep your summary and key points.”

Same agent. Same calendar. Completely different advice based on when you ask.

Layer 2: Query Signals

The system reads your question like a detective reads a crime scene.

# You ask: "What are my latest active tasks?"

{
    "specificity": "medium",
    "hints": ["temporal:recency", "ownership:user"],
    "word_count": 6,
    "is_question": true,
    "has_id_pattern": false
}

It extracted two critical signals:

  • temporal:recency — you said “latest,” so sort by most recent
  • ownership:user — you said “my,” so filter to your stuff

The agent didn’t have to parse “latest” or “my.” The sloshing layer already did. By the time the agent sees the query, it also sees instructions: “Sort by most recent. Filter by current user.”
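The detective work is plain keyword matching. A sketch — the keyword lists here are illustrative, and the real signal set is presumably richer:

```python
import re

QUESTION_WORDS = ("what", "how", "why", "when", "who", "where")

def query_signals(query):
    """Read routing hints off the raw question. Keyword lists are
    illustrative stand-ins for the real signal extraction."""
    words = query.lower().rstrip("?").split()
    hints = []
    if any(w in words for w in ("latest", "recent", "newest", "last")):
        hints.append("temporal:recency")       # "latest" -> sort by most recent
    if any(w in words for w in ("my", "mine", "me")):
        hints.append("ownership:user")         # "my" -> filter to current user
    return {
        "specificity": "high" if len(words) > 8 else "medium" if len(words) > 3 else "low",
        "hints": hints,
        "word_count": len(words),
        "is_question": query.strip().endswith("?")
                       or (bool(words) and words[0] in QUESTION_WORDS),
        "has_id_pattern": bool(re.search(r"[0-9a-f]{8}-[0-9a-f]{4}-", query, re.I)),
    }
```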

Layer 3: Memory Echoes

This is where it gets spooky. The system checks your stored memories and surfaces anything relevant to what you’re asking about right now.

# You ask: "How should we handle the deployment?"
# System finds in your memory store:

"memory_echoes": [
    {
        "message": "Team prefers Azure for cloud deployments",
        "theme": "preference",
        "relevance": 0.75
    },
    {
        "message": "Last deployment had rollback issues with staging",
        "theme": "fact",
        "relevance": 0.60
    }
]

It uses word-overlap scoring — not semantic search, not vector embeddings, just good old set intersection with a minimum threshold of 2 matching words. It’s fast, it’s local, and it works shockingly well.

The agent now knows your team prefers Azure and that your last deployment had problems. You didn’t mention any of this. Your memory did.
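The scoring fits in a dozen lines. A sketch of the set-intersection approach described above — the relevance normalization here is one plausible choice, not necessarily the exact formula used:

```python
def memory_echoes(query, memories, min_overlap=2, limit=3):
    """Surface stored memories that share words with the query.
    Plain set intersection: no embeddings, no network calls.
    The normalization below is an illustrative choice."""
    query_words = set(query.lower().split())
    echoes = []
    for mem in memories:
        mem_words = set(mem["message"].lower().split())
        overlap = query_words & mem_words
        if len(overlap) >= min_overlap:          # threshold of 2 matching words
            relevance = len(overlap) / len(mem_words)   # crude but fast
            echoes.append({**mem, "relevance": round(relevance, 2)})
    return sorted(echoes, key=lambda e: e["relevance"], reverse=True)[:limit]
```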

Layer 4: Behavioral Hints

Over time, the system builds a profile of how you work — not by asking, but by watching.

{
    "prefers_brief": true,           # Your average memory is < 15 words
    "technical_level": "advanced",   # You mention APIs, schemas, GUIDs
    "frequent_entities": ["Azure", "sprint", "deployment"]
}

If you consistently write short, punchy memories, the system infers you prefer brief responses. If your memories are full of technical jargon, it knows you can handle advanced output. This flows into every agent’s behavior without a single settings page.
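A sketch of that inference. The 15-word threshold matches the profile shown above; the jargon list and the "two mentions means advanced" rule are my illustrative guesses, not the real profiling logic:

```python
from collections import Counter

# Illustrative jargon list, not the system's actual vocabulary
JARGON = {"api", "apis", "schema", "schemas", "guid", "guids", "endpoint", "deployment"}

def behavioral_hints(memories):
    """Infer working style by watching, not asking."""
    if not memories:
        return {"prefers_brief": False, "technical_level": "general",
                "frequent_entities": []}
    lengths = [len(m["message"].split()) for m in memories]
    words = [w.strip(".,").lower() for m in memories for w in m["message"].split()]
    counts = Counter(w for w in words if len(w) > 4)
    technical = sum(1 for w in words if w in JARGON)
    return {
        "prefers_brief": sum(lengths) / len(lengths) < 15,   # short memories -> brief replies
        "technical_level": "advanced" if technical >= 2 else "general",
        "frequent_entities": [w for w, _ in counts.most_common(3)],
    }
```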

Layer 5: Orientation

The final layer synthesizes everything into a single actionable directive. It’s the briefing summary on top of the briefing packet.

{
    "confidence": "high",
    "approach": "use_preference",    # We found a stored preference — use it
    "hints": [
        "Sort by most recent",
        "Filter by current user",
        "Quarter end — prioritize closing activities"
    ],
    "response_style": "concise"      # User prefers brief
}

This is what the agent actually reads first. High confidence? Go direct. Low confidence? Ask for clarification. Found a preference? Use it. Quarter end? Flag urgency.

How It Actually Works (The Code)

The magic is embarrassingly simple. It lives in BasicAgent.execute():

def execute(self, **kwargs):
    query = kwargs.get('query', '')

    # This one line changes everything
    self.context = self.slosh(query)

    # Now perform() has full context
    return self.perform(**kwargs)

Every agent inherits from BasicAgent. Every agent gets execute() called by the orchestrator. Every agent gets sloshed. No opt-in. No configuration. No “don’t forget to pass the context.” It just happens.

Agents access signals with dot notation:

def perform(self, **kwargs):
    time = self.get_signal('temporal.time_of_day')
    activity = self.get_signal('temporal.likely_activity')
    is_crunch = self.get_signal('temporal.is_urgent_period')
    style = self.get_signal('orientation.response_style')

    # Now build your response knowing ALL of this

Write a new agent? You get sloshing for free. You don’t import it, configure it, or think about it. It’s in the water.
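The dot-path lookup is a short walk over a nested dict. A plausible implementation — in the real BasicAgent this would be a method reading `self.context`; a free function keeps the sketch self-contained:

```python
def get_signal(context, path, default=None):
    """Walk a nested dict by dot-separated path, e.g.
    get_signal(ctx, 'temporal.time_of_day'). Returns default on any miss."""
    node = context
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node
```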

The Real Magic: Agents That Surprise You

Here’s where Data Sloshing stops being a pattern and starts being a superpower.

I built a CalendarPrep agent. It reads my Outlook calendar through macOS Calendar.app and tells me what to prepare for. Standard stuff. But because of sloshing, it does things I never explicitly coded:

Monday morning, 8:30 AM:

🚀 Monday — check for any standup or planning meetings first.
☀️ First up: "Sprint Planning" in 28min. Review your notes and grab coffee.
⚡ Back-to-back: "Sprint Planning" → "1:1 with Manager" with only 5min gap.
🔥 Heavy day: 7 meetings ahead. Protect time for focused work between them.

Friday afternoon, 4:45 PM:

🎯 Friday — make sure end-of-week deliverables are covered.
🌆 Late meeting: "Week Retro" in 15min. Prep your summary and key points.

December 18th, any time:

📊 Quarter/year-end period — prioritize any review or reporting meetings.

I wrote ONE suggestion engine. Sloshing made it context-aware across time, day, fiscal period, and meeting density. The agent feels like it understands my work rhythm. It doesn’t. It just has really good contextual data.

Why This Beats Every Other Approach

1. Zero-Config Context

Every other framework makes you pass context explicitly. “Here’s the user profile. Here’s the conversation history. Here’s the system prompt.” Data Sloshing says: no. Context is infrastructure, not input. You don’t pass electricity to your toaster — you plug it in and the electricity is there.

2. Agents Can’t Forget

Because sloshing happens on every call, agents can’t accidentally ignore context. A new developer writes a new agent, forgets about user preferences? Doesn’t matter. Sloshing delivered those preferences before perform() even fired.

3. Progressive Intelligence

The system gets smarter automatically. Store more memories → richer echoes. Use it more → better behavioral hints. Express preferences → stronger priors. You’re not training a model. You’re just using the system, and the system is learning your patterns.

4. It’s Stupid Simple

No vector database. No embeddings. No RAG pipeline. No semantic search. It’s set intersections, datetime math, and dictionary lookups. The whole sloshing system is ~150 lines of Python. It runs in milliseconds.

I could’ve built something fancier. I didn’t need to.

The Sloshing Playbook

Here’s how to add Data Sloshing to your own agent system:

1. Define Your Signal Layers

Pick 3-5 categories of context that matter for your domain:

| Layer | What It Answers |
|---|---|
| Temporal | “When is this happening?” |
| Query | “What are they really asking?” |
| Memory | “What do we already know?” |
| Behavioral | “How do they like to work?” |
| Orientation | “How should we approach this?” |

2. Make It Implicit

The cardinal rule: agents don’t ask for context. If your agents have to call getContext() or pass a context parameter, you’ve failed. Sloshing happens in the base class, before the agent runs. Period.

class BasicAgent:
    def execute(self, **kwargs):
        query = kwargs.get('query', '')
        self.context = self.slosh(query)  # Implicit. Always.
        return self.perform(**kwargs)     # Agent just does its job

3. Build the Synthesis Layer

Raw signals are noise. The orientation layer turns them into decisions:

  • Multiple hints pointing the same way? High confidence, go direct.
  • Found a stored preference? Use it, don’t ask again.
  • Nothing to go on? Low confidence, ask for clarification.
  • Quarter end? Flag urgency on everything.

4. Keep It Local

Every signal in my sloshing system comes from local data. ~/.openrappter/memory.json. System clock. The query itself. No API calls, no network requests, no latency. Sloshing should be free and instant.

When NOT to Slosh

Let’s be honest — not everything needs context:

Don’t slosh when:

  • Your agent does exactly one thing regardless of context (a calculator)
  • Latency matters more than intelligence (a health check)
  • The context would leak between users in a multi-tenant system

Do slosh when:

  • Agents interact with humans
  • “Same question, different answer” is a feature
  • You want your system to feel personalized without a settings page
  • You’re building something that should get better over time

The Part Nobody Talks About

Here’s what makes Data Sloshing philosophically different from every other context system I’ve seen.

Most systems treat context as input. You gather it, you format it, you pass it in. The agent consumes it. It’s a pipeline.

Data Sloshing treats context as environment. It’s not something you give the agent. It’s something the agent lives inside of. Like humidity. Like gravity. The agent doesn’t process the context — the context changes what the agent is for that call.

When my calendar agent runs at 8 AM on a Monday in Q4, it’s not the same agent that runs at 3 PM on a Wednesday in July. Same code, same logic, same perform() method. But the sloshing layer turned it into a different thing. A morning-Monday-Q4 thing that knows you’re starting your week in crunch time.

This is the future I’m building toward. Not smarter agents. Not bigger models. Not more complex orchestration.

Just agents that always know what’s going on.

What will you slosh into yours?


The Matrix: When AI Agents Build the Framework That Spawns More AI Agents

Let me tell you about something that’s been consuming my thinking for the past few months. Not in the “this might be interesting someday” way—in the “this fundamentally changes how we build software” way.

I built a repository where AI agents autonomously created an AI orchestration framework. Then those agents used that framework to spawn more agents. Then *those* agents built demonstrations showing how the whole system works.

**This is not theoretical. This is 100% autonomous agent development, happening right now.**

## The Problem Nobody Talks About

Have you noticed how we talk about AI as a “tool”? Like it’s a really smart hammer that needs a human to swing it? We write prompts, get responses, copy-paste code, iterate, repeat. We’re still the ones doing the orchestration, the integration, the thinking about how all the pieces fit together.

The old me would’ve been fine with that. AI speeds up my work, I’m 10× more productive, ship faster—great!

But here’s what kept gnawing at me: **If AI can write code, why can’t AI orchestrate its own development process?**

Think about it. When you need to build 50 API endpoints, you don’t actually need a human to:

– Break down the work into packages

– Generate specifications for each endpoint

– Write the code for all 50

– Integrate them into a cohesive system

– Write tests, documentation, routing config

You need intelligence to understand the *domain*, create the *strategy*, and validate the *quality*. But the actual implementation? That’s parallelizable work that AI can handle autonomously.

## Enter The Matrix

The Matrix is a hierarchical AI orchestration framework built on Claude Code. But calling it a “framework” undersells what’s actually happening here.

**It’s a system where AI agents spawn other AI agents to generate outcomes at scale—completely autonomously.**

Here’s the architecture pattern:

```
Orchestrator (200k context – preserves strategic thinking)
├── Discovery & Analysis (reads 20+ project files)
├── Strategy Generation (creates work breakdown: N packages × M items)
├── System Analysis (extracts patterns, standards, schemas)
└── Parallel Agent Spawning
    ├── N × outcome-generator agents (parallel execution)
    └── 1 × integrator agent (synthesizes all results)
```

The orchestrator—that’s the main Claude instance with the full 200k context—handles discovery, strategy, and coordination. It reads your project documentation, understands your domain, analyzes your patterns, and generates a work breakdown structure.

Then it spawns N specialized agents *simultaneously*. Each agent gets its own clean 200k context window and generates M outcomes independently. After they finish, an integrator agent synthesizes everything into a cohesive system.

**Result**: N×M outcomes generated in parallel, following your project patterns, integrated automatically.
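The Matrix runs on Claude Code subagents, but the fan-out-then-integrate shape is easy to sketch in plain Python. Here `generate_outcome` and `integrate` are stand-ins for the real agents; this only illustrates the N-parallel + 1-integrator structure, not the actual framework:

```python
from concurrent.futures import ThreadPoolExecutor

def orchestrate(packages, generate_outcome, integrate):
    """Fan out one worker per work package, then synthesize.
    generate_outcome and integrate are stand-ins for the real
    subagents; only the orchestration shape is being shown."""
    with ThreadPoolExecutor(max_workers=len(packages)) as pool:
        results = list(pool.map(generate_outcome, packages))  # N agents in parallel
    return integrate(results)                                  # 1 integrator

# Toy run: 10 packages × 5 items = 50 outcomes, integrated into one list.
packages = [{"name": f"pkg-{i}", "items": 5} for i in range(10)]
outcomes = orchestrate(
    packages,
    lambda p: [f"{p['name']}/endpoint-{j}" for j in range(p["items"])],
    lambda results: [o for r in results for o in r],
)
```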

## Why This Changes Everything

Let’s run through a concrete example: generating 50 REST API endpoints for a microservices platform.

**Traditional approach:**

- You prompt Claude: “Write me an authentication endpoint”
- Copy-paste response
- Prompt: “Now write me a user management endpoint”
- Copy-paste
- Prompt: “Now integrate these with the router”
- Copy-paste
- Repeat 47 more times

**Time**: Hours. **Your role**: Human copy-paste machine.

**The Matrix approach:**

1. Orchestrator reads your project (Express.js, MongoDB, JWT auth patterns)
2. Generates 10 work packages (Authentication, User Management, Product Catalog, etc.)
3. Creates 50 work items (5 endpoints per package)
4. Spawns 10 outcome-generator agents simultaneously
5. Each agent generates 5 complete endpoints following your patterns
6. Integrator synthesizes router config, OpenAPI spec, test suites
7. Reports: “50 endpoints generated, all patterns consistent, ready for deployment”

**Time**: Minutes. **Your role**: Strategic oversight.

The paradigm shifts from **“AI as tool I direct”** to **“AI as autonomous development team I orchestrate”**.

## The Meta Twist

Here’s where it gets mind-blowing.

Every file in The Matrix repository was built by autonomous agents. The orchestration framework itself? Built by agents. The agent definitions that spawn other agents? Also built by agents. The documentation explaining how agents work? *Agents wrote that too.*

The repository contains its own `AGENT_CHANGELOG.md` documenting every autonomous contribution:

- What the agent did
- Why it made specific decisions
- What alternatives it considered
- Impact metrics and quality validation

**This is a portfolio demonstrating that agents can build and maintain production systems reliably, safely, and intelligently—without constant human intervention.**

Recent example: We needed to showcase The Matrix’s capabilities. So I invoked a meta-orchestrator agent called “mind-blower.” It autonomously:

1. Analyzed 10 possible demonstration concepts
2. Scored each for impressiveness and feasibility
3. Selected the top 6 to build
4. Spawned 6 demo-builder agents in parallel
5. Each built an interactive HTML/CSS/JS visualization
6. Spawned a cathedral-architect agent to integrate everything
7. Created a showcase gallery with navigation and search

**I gave one instruction. The agents made every other decision and built the entire demonstration system.**

That’s not AI assistance. That’s autonomous software development.

## Domain-Agnostic by Design

The same orchestration pattern works for any domain requiring parallel generation:

**Software Development**:

- 50 API endpoints (10 services × 5 endpoints)
- 100 React components (20 categories × 5 components)
- Test suites, CI/CD pipelines, IaC modules

**Content Creation**:

- 50 documentation pages (10 sections × 5 pages)
- Blog article libraries, marketing assets, technical guides

**Data Engineering**:

- 50 ETL pipelines (10 data sources × 5 transformations)
- Schema generation, data quality validators

**DevOps & Infrastructure**:

- Infrastructure modules, monitoring dashboards, configuration management

The workflow is identical. Only the domain context changes.

## The Zero Dependencies Philosophy

One more thing that’s critical: **zero dependencies doesn’t mean no external resources.**

Every demonstration in The Matrix is a self-contained HTML file. No npm install. No webpack. No build process. You open the file in a browser—it works.

But we absolutely use CDN resources (Three.js for 3D visualizations, Chart.js for metrics, etc.). “Zero dependencies” means **no build process**, not hermetically sealed code.

Why? Because the real world uses libraries. The goal is showcasing autonomous development capabilities, not ideological purity about dependency management.

## What This Means for You

If you’re a developer, this changes your relationship with AI:

- Stop thinking “tool that helps me code”
- Start thinking “autonomous development team I orchestrate”

If you’re building AI systems, this provides the architectural pattern:

- Preserve orchestrator context for strategy
- Delegate implementation to specialized subagents
- Parallelize everything possible
- Synthesize results automatically

If you’re hiring developers (or evaluating AI capabilities), this portfolio proves:

- Agents can work autonomously with minimal oversight
- Agents make intelligent architectural decisions
- Agents maintain quality, safety, and documentation standards
- Agents improve systems continuously

## The Future Is Already Here

We’re at an inflection point. The question isn’t *whether* AI will autonomously build software—it’s already happening. The question is whether we’ll design systems that enable AI to work at its full potential.

The Matrix demonstrates one answer: hierarchical orchestration with context-aware delegation. An orchestrator that thinks strategically while specialized agents handle implementation in parallel.

**This is the beginning of expansive agents**—systems that don’t just respond to prompts, but autonomously break down complex problems, spawn the right specialists, coordinate execution, and synthesize results.

And here’s the really wild part: as these orchestration frameworks improve, they’ll build better versions of themselves. Agents improving agent systems. Meta-orchestration all the way down.

The repository is live. The code is open. The demonstrations are interactive. Every file was built autonomously by AI agents proving they can do this work reliably and intelligently.

The paradigm has shifted. Software development is becoming an orchestration problem, not an implementation problem.

And honestly? I’m here for it.

**Want to see it in action?**

- Repository: [The Matrix on GitHub](https://github.com/kody-w/TheMatrix)
- Demonstrations: Explore the cathedral of interactive orchestration visualizations
- Documentation: Complete agent specifications and workflow patterns
- Changelog: Every autonomous contribution documented transparently

**The question isn’t whether this is possible. The question is what you’ll build when you stop being the bottleneck in your own development process.**

Let’s find out together.
