By Kody Wildfeuer

Or: How I Made Every Agent in My System Know What Time It Is, What You Did Last Tuesday, and Why You Prefer Bullet Points


Let me tell you about the moment I realized every AI agent framework is fundamentally broken. I had built a calendar agent — cool little thing, reads your Outlook, tells you what to prep for. Monday morning, 8:47 AM, I ask it “what’s coming up?” and it spits back a flat list of meetings sorted by time.

Technically correct. Completely useless.

Because at 8:47 AM on a Monday, I don’t need a list. I need someone to say: “Your standup is in 13 minutes, you haven’t reviewed the sprint board, and by the way — it’s quarter-end, so that finance review at 2 PM is the one that actually matters today.”

The old me: Would’ve hardcoded a bunch of if statements. if monday, if morning, if Q4. Brittle. Annoying. Never enough context.

The new me: Built a pattern where every agent automatically knows everything it could need before it even starts working.

Welcome to Data Sloshing — the pattern that makes AI agents feel like they’ve been watching over your shoulder all week.

What Is Data Sloshing?

Here’s what most agent frameworks do: You call an agent, you pass it a query, it does its thing, it returns a result. The agent is stateless. Contextless. It has the memory of a goldfish and the situational awareness of a rock.

Data Sloshing flips this. Before any agent runs its perform() method, the system automatically washes a wave of contextual signals over it. The agent doesn’t ask for context. It doesn’t need to. Context just arrives, like weather.

Here’s the mental model:

Traditional Agents: User → Query → Agent → Response
Data Sloshing: User → Query → Context Enrichment → Agent (now omniscient) → Response

Think of it like this: Imagine every employee at a company gets a personalized briefing packet slid under their door every morning. Time of day, what happened yesterday, who prefers what, what’s urgent. They don’t have to go looking for it. It’s just there when they sit down to work.

That’s sloshing.

The Five Layers of Context

Every single agent call in openrappter gets enriched with five layers of context before perform() fires. Here’s what each layer knows:

Layer 1: Temporal Awareness

The system knows what time it is, and more importantly, what that means.

{
    "time_of_day": "morning",         # Not just the hour — the vibe
    "likely_activity": "active_work", # What you're probably doing right now
    "day_of_week": "Monday",
    "is_weekend": False,
    "quarter": "Q1",
    "fiscal": "quarter_end_push",     # Uh oh — crunch time
    "is_urgent_period": True          # Everything matters more right now
}

Notice likely_activity. The system doesn’t just know it’s 10 AM. It knows that at 10 AM, you’re probably in active work mode. At 5 PM, you’re in wrap-up mode. At 7 AM, you’re preparing for the day. This changes how agents respond.

My calendar agent uses this to shift tone:

  • 8 AM: “☀️ First up: Sprint Planning in 22min. Review your notes and grab coffee.”
  • 2 PM: “⏰ Architecture Review in 45min — wrap up your current task and context-switch.”
  • 6 PM: “🌆 Late meeting: 1:1 with Manager in 30min. Prep your summary and key points.”

Same agent. Same calendar. Completely different advice based on when you ask.
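Here's a minimal sketch of how a temporal layer like this might be built. The field names match the signals above; the hour thresholds and the "last month of the quarter counts as crunch" rule are illustrative guesses, not the exact production logic:

```python
from datetime import datetime

def temporal_signals(now=None):
    """Derive 'vibe'-level temporal signals from the system clock."""
    now = now or datetime.now()
    hour = now.hour
    # Illustrative thresholds -- tune these to your own rhythm
    if hour < 9:
        time_of_day, activity = "morning", "preparing_for_day"
    elif hour < 12:
        time_of_day, activity = "morning", "active_work"
    elif hour < 17:
        time_of_day, activity = "afternoon", "active_work"
    else:
        time_of_day, activity = "evening", "wrap_up"
    quarter = (now.month - 1) // 3 + 1
    # Last month of a quarter counts as the quarter-end push
    quarter_end = now.month % 3 == 0
    return {
        "time_of_day": time_of_day,
        "likely_activity": activity,
        "day_of_week": now.strftime("%A"),
        "is_weekend": now.weekday() >= 5,
        "quarter": f"Q{quarter}",
        "fiscal": "quarter_end_push" if quarter_end else "normal",
        "is_urgent_period": quarter_end,
    }
```

All of it is datetime math; nothing here touches the network.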

Layer 2: Query Signals

The system reads your question like a detective reads a crime scene.

# You ask: "What are my latest active tasks?"

{
    "specificity": "medium",
    "hints": ["temporal:recency", "ownership:user"],
    "word_count": 6,
    "is_question": True,
    "has_id_pattern": False
}

It extracted two critical signals:

  • temporal:recency — you said “latest,” so sort by most recent
  • ownership:user — you said “my,” so filter to your stuff

The agent didn’t have to parse “latest” or “my.” The sloshing layer already did. By the time the agent sees the query, it also sees instructions: “Sort by most recent. Filter by current user.”
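A query-signal layer like this can be a handful of keyword sets. This sketch produces the same fields as the example above; the vocabulary lists and the specificity cutoffs are my illustrative assumptions:

```python
import re

# Keyword -> hint mapping; the vocabulary here is illustrative
HINT_RULES = {
    "temporal:recency": {"latest", "recent", "newest", "last"},
    "ownership:user": {"my", "mine", "me"},
}

def query_signals(query: str) -> dict:
    """Read simple signals off the raw query text."""
    words = set(re.findall(r"[a-z0-9-]+", query.lower()))
    hints = sorted(h for h, vocab in HINT_RULES.items() if words & vocab)
    word_count = len(query.split())
    return {
        # Rough proxy: longer queries tend to be more specific
        "specificity": "high" if word_count > 8 else "medium" if word_count > 3 else "low",
        "hints": hints,
        "word_count": word_count,
        "is_question": bool(query.rstrip().endswith("?")
                            or words & {"what", "how", "when", "who", "why"}),
        # Looks for the start of a GUID-style identifier
        "has_id_pattern": bool(re.search(r"\b[0-9a-f]{8}-[0-9a-f]{4}", query.lower())),
    }
```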

Layer 3: Memory Echoes

This is where it gets spooky. The system checks your stored memories and surfaces anything relevant to what you’re asking about right now.

# You ask: "How should we handle the deployment?"
# System finds in your memory store:

"memory_echoes": [
    {
        "message": "Team prefers Azure for cloud deployments",
        "theme": "preference",
        "relevance": 0.75
    },
    {
        "message": "Last deployment had rollback issues with staging",
        "theme": "fact",
        "relevance": 0.60
    }
]

It uses word-overlap scoring — not semantic search, not vector embeddings, just good old set intersection with a minimum threshold of 2 matching words. It’s fast, it’s local, and it works shockingly well.

The agent now knows your team prefers Azure and that your last deployment had problems. You didn’t mention any of this. Your memory did.
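The whole scoring mechanism fits in a few lines. This sketch uses the 2-word minimum overlap the article describes; the exact relevance formula (fraction of memory words that matched) is my illustrative stand-in:

```python
def memory_echoes(query, memories, min_overlap=2):
    """Score stored memories against the query by plain word overlap --
    set intersection, no embeddings, no vector store."""
    query_words = set(query.lower().split())
    echoes = []
    for mem in memories:
        mem_words = set(mem["message"].lower().split())
        overlap = query_words & mem_words
        if len(overlap) >= min_overlap:
            # Illustrative relevance: fraction of memory words that matched
            relevance = round(len(overlap) / len(mem_words), 2)
            echoes.append({**mem, "relevance": relevance})
    return sorted(echoes, key=lambda e: e["relevance"], reverse=True)
```

It's crude, but it runs in microseconds against a local JSON file, and for a personal memory store that tradeoff is exactly right.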

Layer 4: Behavioral Hints

Over time, the system builds a profile of how you work — not by asking, but by watching.

{
    "prefers_brief": True,           # Your average memory is < 15 words
    "technical_level": "advanced",   # You mention APIs, schemas, GUIDs
    "frequent_entities": ["Azure", "sprint", "deployment"]
}

If you consistently write short, punchy memories, the system infers you prefer brief responses. If your memories are full of technical jargon, it knows you can handle advanced output. This flows into every agent’s behavior without a single settings page.
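Inferring these hints is just counting. In this sketch, the 15-word brevity threshold comes from the example above, while the jargon list and the "capitalized mid-sentence word = entity" heuristic are illustrative assumptions:

```python
from collections import Counter

def behavioral_hints(memories):
    """Infer working-style hints by watching stored memories, not asking."""
    if not memories:
        return {"prefers_brief": False, "technical_level": "general",
                "frequent_entities": []}
    avg_words = sum(len(m.split()) for m in memories) / len(memories)
    # Illustrative jargon vocabulary
    jargon = {"api", "apis", "schema", "schemas", "guid", "guids"}
    is_technical = any(w.lower().strip(".,") in jargon
                       for m in memories for w in m.split())
    # Capitalized mid-sentence words make rough entity candidates
    entities = Counter(w.strip(".,") for m in memories
                       for w in m.split()[1:] if w[:1].isupper())
    return {
        "prefers_brief": avg_words < 15,
        "technical_level": "advanced" if is_technical else "general",
        "frequent_entities": [e for e, _ in entities.most_common(3)],
    }
```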

Layer 5: Orientation

The final layer synthesizes everything into a single actionable directive. It’s the briefing summary on top of the briefing packet.

{
    "confidence": "high",
    "approach": "use_preference",    # We found a stored preference — use it
    "hints": [
        "Sort by most recent",
        "Filter by current user",
        "Quarter end — prioritize closing activities"
    ],
    "response_style": "concise"      # User prefers brief
}

This is what the agent actually reads first. High confidence? Go direct. Low confidence? Ask for clarification. Found a preference? Use it. Quarter end? Flag urgency.
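A synthesis step like this is mostly if-statements over the other layers. The hint wording below mirrors the example; the "two or more signals means high confidence" rule is my illustrative assumption:

```python
def orient(query_hints, echoes, temporal, behavioral):
    """Fold the raw signal layers into one actionable directive."""
    has_preference = any(e.get("theme") == "preference" for e in echoes)
    hints = []
    if "temporal:recency" in query_hints:
        hints.append("Sort by most recent")
    if "ownership:user" in query_hints:
        hints.append("Filter by current user")
    if temporal.get("is_urgent_period"):
        hints.append("Quarter end: prioritize closing activities")
    # Illustrative rule: multiple agreeing signals -> high confidence
    confidence = "high" if len(hints) + len(echoes) >= 2 else "low"
    if has_preference:
        approach = "use_preference"   # We found a stored preference -- use it
    elif confidence == "high":
        approach = "direct"
    else:
        approach = "clarify"          # Nothing to go on -- ask the user
    return {
        "confidence": confidence,
        "approach": approach,
        "hints": hints,
        "response_style": "concise" if behavioral.get("prefers_brief") else "detailed",
    }
```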

How It Actually Works (The Code)

The magic is embarrassingly simple. It lives in BasicAgent.execute():

def execute(self, **kwargs):
    query = kwargs.get('query', '')

    # This one line changes everything
    self.context = self.slosh(query)

    # Now perform() has full context
    return self.perform(**kwargs)

Every agent inherits from BasicAgent. Every agent gets execute() called by the orchestrator. Every agent gets sloshed. No opt-in. No configuration. No “don’t forget to pass the context.” It just happens.

Agents access signals with dot notation:

def perform(self, **kwargs):
    time = self.get_signal('temporal.time_of_day')
    activity = self.get_signal('temporal.likely_activity')
    is_crunch = self.get_signal('temporal.is_urgent_period')
    style = self.get_signal('orientation.response_style')

    # Now build your response knowing ALL of this

Write a new agent? You get sloshing for free. You don’t import it, configure it, or think about it. It’s in the water.
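The dot-notation accessor itself can be a short dict walk. The method name get_signal is from the article; this particular implementation is my sketch of it, written as a standalone function:

```python
def get_signal(context, path, default=None):
    """Walk a nested context dict with a dotted path,
    e.g. get_signal(ctx, 'temporal.time_of_day').
    Returns `default` if any segment is missing."""
    node = context
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node
```

Missing keys fall back to a default instead of raising, so an agent can probe for signals that may or may not have been sloshed in.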

The Real Magic: Agents That Surprise You

Here’s where Data Sloshing stops being a pattern and starts being a superpower.

I built a CalendarPrep agent. It reads my Outlook calendar through macOS Calendar.app and tells me what to prepare for. Standard stuff. But because of sloshing, it does things I never explicitly coded:

Monday morning, 8:30 AM:

🚀 Monday — check for any standup or planning meetings first.
☀️ First up: "Sprint Planning" in 28min. Review your notes and grab coffee.
⚡ Back-to-back: "Sprint Planning" → "1:1 with Manager" with only 5min gap.
🔥 Heavy day: 7 meetings ahead. Protect time for focused work between them.

Friday afternoon, 4:45 PM:

🎯 Friday — make sure end-of-week deliverables are covered.
🌆 Late meeting: "Week Retro" in 15min. Prep your summary and key points.

December 18th, any time:

📊 Quarter/year-end period — prioritize any review or reporting meetings.

I wrote ONE suggestion engine. Sloshing made it context-aware across time, day, fiscal period, and meeting density. The agent feels like it understands my work rhythm. It doesn’t. It just has really good contextual data.

Why This Beats Every Other Approach

1. Zero-Config Context

Every other framework makes you pass context explicitly. “Here’s the user profile. Here’s the conversation history. Here’s the system prompt.” Data Sloshing says: no. Context is infrastructure, not input. You don’t pass electricity to your toaster — you plug it in and the electricity is there.

2. Agents Can’t Forget

Because sloshing happens on every call, agents can’t accidentally ignore context. A new developer writes a new agent, forgets about user preferences? Doesn’t matter. Sloshing delivered those preferences before perform() even fired.

3. Progressive Intelligence

The system gets smarter automatically. Store more memories → richer echoes. Use it more → better behavioral hints. Express preferences → stronger priors. You’re not training a model. You’re just using the system, and the system is learning your patterns.

4. It’s Stupid Simple

No vector database. No embeddings. No RAG pipeline. No semantic search. It’s set intersections, datetime math, and dictionary lookups. The whole sloshing system is ~150 lines of Python. It runs in milliseconds.

I could’ve built something fancier. I didn’t need to.

The Sloshing Playbook

Here’s how to add Data Sloshing to your own agent system:

1. Define Your Signal Layers

Pick 3-5 categories of context that matter for your domain:

  • Temporal: “When is this happening?”
  • Query: “What are they really asking?”
  • Memory: “What do we already know?”
  • Behavioral: “How do they like to work?”
  • Orientation: “How should we approach this?”

2. Make It Implicit

The cardinal rule: agents don’t ask for context. If your agents have to call getContext() or pass a context parameter, you’ve failed. Sloshing happens in the base class, before the agent runs. Period.

class BasicAgent:
    def execute(self, **kwargs):
        query = kwargs.get('query', '')
        self.context = self.slosh(query)  # Implicit. Always.
        return self.perform(**kwargs)     # Agent just does its job

3. Build the Synthesis Layer

Raw signals are noise. The orientation layer turns them into decisions:

  • Multiple hints pointing the same way? High confidence, go direct.
  • Found a stored preference? Use it, don’t ask again.
  • Nothing to go on? Low confidence, ask for clarification.
  • Quarter end? Flag urgency on everything.

4. Keep It Local

Every signal in my sloshing system comes from local data. ~/.openrappter/memory.json. System clock. The query itself. No API calls, no network requests, no latency. Sloshing should be free and instant.
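Loading that local store is a one-liner with a fallback. The path comes from the article; the file shape (a JSON list of memory dicts) is my assumption:

```python
import json
from pathlib import Path

def load_memories(path="~/.openrappter/memory.json"):
    """Read the local memory store; an empty list if it doesn't exist yet."""
    p = Path(path).expanduser()
    if not p.exists():
        return []
    return json.loads(p.read_text())
```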

When NOT to Slosh

Let’s be honest — not everything needs context:

Don’t slosh when:

  • Your agent does exactly one thing regardless of context (a calculator)
  • Latency matters more than intelligence (a health check)
  • The context would leak between users in a multi-tenant system

Do slosh when:

  • Agents interact with humans
  • “Same question, different answer” is a feature
  • You want your system to feel personalized without a settings page
  • You’re building something that should get better over time

The Part Nobody Talks About

Here’s what makes Data Sloshing philosophically different from every other context system I’ve seen.

Most systems treat context as input. You gather it, you format it, you pass it in. The agent consumes it. It’s a pipeline.

Data Sloshing treats context as environment. It’s not something you give the agent. It’s something the agent lives inside of. Like humidity. Like gravity. The agent doesn’t process the context — the context changes what the agent is for that call.

When my calendar agent runs at 8 AM on a Monday in Q4, it’s not the same agent that runs at 3 PM on a Wednesday in July. Same code, same logic, same perform() method. But the sloshing layer turned it into a different thing. A morning-Monday-Q4 thing that knows you’re starting your week in crunch time.

This is the future I’m building toward. Not smarter agents. Not bigger models. Not more complex orchestration.

Just agents that always know what’s going on.

What will you slosh into yours?