kodyw.com

Kody Wildfeuer's blog

Code Welding: Using LLMs to Merge Unrelated Codebases Into Something New

Or: How I Got Claude to Transplant Gesture Controls from a 3D Visualizer Into My Chat App

Let me tell you about the development pattern that’s completely changed how I build features. Last week, I had a broken chat application with a settings modal that wouldn’t close. I also had this completely unrelated 3D dimensional visualizer with killer gesture controls — you know, wave your hand to navigate, pinch to select, that sort of thing.

The old me: Would’ve spent days extracting, refactoring, and building a proper gesture library.

The new me: Threw both files at Claude and said, “Take the gesture controls from file B and weld them into file A. Don’t break anything else.”

Twenty seconds later, I had a fully functional chat app with gesture controls.

Welcome to Code Welding — the art of using LLMs to merge features from completely unrelated codebases.

The Pattern That Changes Everything

Code Welding isn’t about asking an LLM to write new code. It’s about using AI to perform surgical feature transplants between codebases that have absolutely nothing in common.

Here’s the mental model:

  • Traditional Development: Build features from scratch or carefully refactor shared code
  • Copy-Paste Programming: Grab code and hope it works (spoiler: it doesn’t)
  • Code Welding: Use an LLM as your surgical assistant to transplant working features between alien codebases

Think of it like organ transplants, but for code. The LLM is your surgeon, handling all the complex vascular connections while keeping both patients alive.

How Code Welding Actually Works

Step 1: The Donor and Recipient

You need two things:

  1. The Donor — A working codebase with the feature you want
  2. The Recipient — The codebase that needs the feature

In my case:

  • Donor: A 3D visualizer with MediaPipe gesture controls (1,500 lines of wild Three.js code)
  • Recipient: A React-ish chat application (5,000 lines of messaging logic)

These files shared literally nothing. Different frameworks, different purposes, different everything.

Step 2: The Prompt Engineering

This is where the magic happens. You don’t ask the LLM to “add gesture controls.” You give it surgical instructions:

Take the gesture detection system from iframe-tunneler-10.html,
specifically the detectGesture() and hand tracking logic.
Transplant it into index.html's chat application.
Map these gestures to these existing functions:
- Point up → scroll up
- Point down → scroll down  
- Peace sign → new chat
- OK sign → send message
Keep ALL existing functionality intact.
Output the COMPLETE modified index.html.

Step 3: The Weld Points

The LLM identifies where to attach the foreign code. It finds the natural connection points — what I call “weld points” — between two completely different architectures.

Watch what happened with mine:

```javascript
// The LLM created this bridge class
class GestureManager {
    constructor(uiController) {
        this.ui = uiController;  // Weld point #1: Existing UI
        // ... gesture setup code from visualizer
    }

    executeGesture(gesture) {
        // Weld point #2: Map gestures to existing methods
        switch (gesture) {
            case 'point-up':
                document.getElementById('chat-messages').scrollBy({
                    top: -200,
                    behavior: 'smooth'
                });
                break;
            case 'peace':
                this.ui.createNewChat();  // Using existing method!
                break;
        }
    }
}
```

The LLM understood both codebases well enough to create perfect adapters between them. It’s like it built custom surgical shunts between incompatible organs.
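To close the loop, here's a sketch of how the donor's detection side might hand results to a bridge like that. The landmark shape and the toy classifier are hypothetical stand-ins for the MediaPipe hand-tracking output, not the visualizer's actual code:

```javascript
// Hypothetical classifier: real code would inspect fingertip landmark
// positions from MediaPipe; this just keys off a fake summary object.
function classifyLandmarks(landmarks) {
    if (landmarks.extendedFingers === 2) return 'peace';
    if (landmarks.extendedFingers === 1 && landmarks.pointingUp) return 'point-up';
    return null;
}

// Returns a per-frame handler the donor's camera callback can call,
// forwarding any recognized gesture to the recipient's bridge.
function makeDetectionLoop(bridge) {
    return function onFrame(landmarks) {
        const gesture = classifyLandmarks(landmarks);
        if (gesture) bridge.executeGesture(gesture);
        return gesture;
    };
}

// Stand-in bridge that just records what fired.
const fired = [];
const onFrame = makeDetectionLoop({ executeGesture: g => fired.push(g) });
onFrame({ extendedFingers: 2 });                   // peace
onFrame({ extendedFingers: 1, pointingUp: true }); // point-up
```

The point is the seam: the donor side never learns what `executeGesture` does, so the recipient can evolve independently of the detection loop.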

Why This Is Revolutionary

1. Speed That’s Actually Insane

I went from idea to implementation in minutes, not days. Not because I’m fast, but because I’m not doing the work. The LLM is handling thousands of micro-decisions about integration.

2. Cross-Pollination of Ideas

You can grab features from ANYWHERE:

  • Want the smooth scroll from that Apple marketing page? Weld it into your docs.
  • Love the particle effects from that game? Weld them into your dashboard.
  • Need voice commands from a smart home app? Weld them into your spreadsheet.

3. No Sacred Cows

Traditional development makes us precious about architecture. Code Welding doesn’t care. That gesture system was built for 3D visualization? So what. It works in a chat app now.

The Code Welding Playbook

Here’s my exact process:

1. Identify the Feature

Find something cool that works. Don’t worry about how it’s implemented. Just make sure it actually works in its current context.

2. Document the Behavior

Write down exactly what the feature does:

  • “Detects hand gestures using webcam”
  • “Maps specific gestures to specific actions”
  • “Shows visual feedback when gesture is recognized”

3. Map the Integration Points

Tell the LLM exactly how to connect the features:

When peace sign detected → call createNewChat()
When pinch detected → call archiveCurrentChat()
When fist detected → call toggleSidebar()
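Those mappings boil down to a dispatch table. A minimal sketch, assuming the recipient exposes the methods named above (`createNewChat` and friends are hypothetical here):

```javascript
// Sketch: the gesture-to-action mapping as a plain object of closures
// over the recipient's (assumed) UI controller.
function makeGestureMap(ui) {
    return {
        'peace': () => ui.createNewChat(),
        'pinch': () => ui.archiveCurrentChat(),
        'fist':  () => ui.toggleSidebar(),
    };
}

function dispatch(map, gesture) {
    const action = map[gesture];
    if (action) { action(); return true; }
    return false; // unknown gestures are ignored, not errors
}

// Exercise it with a stub controller that records calls.
const calls = [];
const map = makeGestureMap({
    createNewChat:      () => calls.push('new'),
    archiveCurrentChat: () => calls.push('archive'),
    toggleSidebar:      () => calls.push('sidebar'),
});
dispatch(map, 'peace');
dispatch(map, 'wave'); // no mapping: harmless no-op
```

Handing the LLM the mapping in this shape also makes the weld easy to audit: every integration point is one line in one object.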

4. Preserve Everything Else

This is crucial. Your prompt must emphasize:

Keep ALL existing functionality.
Do not remove any features.
Only ADD the gesture system.

5. Test the Weld Points

The LLM will create connection points. Test them individually:

  • Does gesture detection work?
  • Do the mapped functions fire?
  • Did anything else break?
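A quick way to test a weld point in isolation is to stub the recipient's methods and check that the bridge fires them. A minimal sketch, using a stripped-down stand-in for the bridge class:

```javascript
// Stand-in for the bridge: only the weld point under test.
class GestureManager {
    constructor(ui) { this.ui = ui; }
    executeGesture(gesture) {
        if (gesture === 'peace') this.ui.createNewChat();
    }
}

// Stub the existing method and verify the weld fires it exactly once.
let newChatCalls = 0;
const gm = new GestureManager({ createNewChat: () => { newChatCalls++; } });
gm.executeGesture('peace');   // mapped gesture: should fire
gm.executeGesture('unknown'); // unmapped gesture: should be a no-op
// newChatCalls should now be exactly 1
```

Testing each weld point this way, before touching the real UI, tells you whether a failure is in the detection side or the integration side.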

Real Examples That Shouldn’t Work (But Do)

Music Visualizer → Email Client

  • Welded audio reactive animations into Gmail
  • Emails now pulse to background music
  • Why? Because reading email is boring

Game Engine Physics → Todo App

  • Tasks now have gravity and collision
  • Completed tasks literally fall off the screen
  • Overdue tasks get heavier and sink

3D Shader Effects → Markdown Editor

  • Text now has real-time ray marching effects
  • Code blocks look like they’re carved from marble
  • Headers cast actual shadows

These aren’t jokes. These are real welds I’ve done. They work.

The Gotchas (Learn From My Pain)

Version Conflicts

  • The donor uses React 16, recipient uses React 18
  • Solution: Tell the LLM about version differences upfront

Hidden Dependencies

  • That cool feature needs THREE.js but your app doesn’t have it
  • Solution: Let the LLM inline just the needed parts

Event System Conflicts

  • Both codebases want to own window.onload
  • Solution: Prompt the LLM to namespace everything
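A sketch of what "namespace everything" can look like in practice: instead of letting either codebase assign `window.onload` (last assignment wins), both register named init hooks that one listener runs. `AppInit` is an illustrative name, not a real library:

```javascript
// Sketch: namespaced init hooks so neither codebase owns window.onload.
const AppInit = {
    hooks: {},
    register(namespace, fn) { this.hooks[namespace] = fn; },
    runAll() {
        for (const fn of Object.values(this.hooks)) fn();
    },
};

// Donor and recipient each register under their own namespace.
const started = [];
AppInit.register('chatApp',  () => started.push('chatApp'));
AppInit.register('gestures', () => started.push('gestures'));

// In the browser this would be:
//   window.addEventListener('load', () => AppInit.runAll());
AppInit.runAll();
```

The same pattern works for any singleton the two codebases fight over: key by namespace, run from one owner.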

Performance Bombs

  • That particle system runs at 60fps; your form doesn’t need that
  • Solution: Add throttling instructions to your prompt
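The throttling itself is a few lines; you can ask the LLM to wrap the donor's hot callback in something like this sketch:

```javascript
// Sketch: rate-limit a high-frequency callback (e.g. a 60fps gesture or
// particle loop) so the recipient only sees it at most once per `ms`.
function throttle(fn, ms) {
    let last = 0;
    return function (...args) {
        const now = Date.now();
        if (now - last >= ms) {
            last = now;
            return fn(...args);
        }
    };
}

let frames = 0;
const onFrame = throttle(() => { frames++; }, 1000);
onFrame(); // first call passes through
onFrame(); // immediately after: dropped by the throttle
```

Naming the interval in the prompt ("throttle gesture updates to 10 per second") keeps the LLM from guessing.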

When NOT to Code Weld

Let’s be real — this isn’t always the answer:

Don’t weld when:

  • You’re building critical infrastructure
  • Performance is more important than features
  • You need deep integration with existing systems
  • The feature needs to be maintained long-term

Do weld when:

  • You’re prototyping
  • You need to test if users even want the feature
  • The feature is for fun/delight
  • You need something NOW

The Prompt Template That Always Works

I have two files:
1. [DONOR FILE] - Contains [FEATURE DESCRIPTION]
2. [RECIPIENT FILE] - Needs the feature added

Take the [SPECIFIC FEATURE] from the donor file.
Integrate it into the recipient file by:
- Creating a new [CLASS/MODULE] to contain the feature
- Mapping these donor functions to these recipient functions: [MAPPING]
- Preserving ALL existing functionality in the recipient
- Adding these integration points: [INTEGRATION POINTS]

Output the COMPLETE modified recipient file.
Maintain all existing code structure and functionality.

The Future Is Already Here

I’m seeing developers use Code Welding for things I never imagined:

  • Feature Shopping: Browse GitHub, find cool features, weld them into your app
  • Cross-Platform Welding: iOS feature → Web app, no problem
  • Time Travel Welding: Modern features into legacy codebases
  • Language Welding: Python ML model → JavaScript frontend (yes, really)

We’re entering an era where features are portable. Where any code that works anywhere can work everywhere.

Your First Weld

Want to try this? Here’s a starter challenge:

  1. Find any app with a feature you love
  2. View source, copy the whole file
  3. Take your current project
  4. Ask Claude/GPT-4 to weld them together

Start small. Maybe grab a tooltip implementation and weld it into your CLI tool. Or take a loading animation and weld it into your terminal.

The Philosophical Shift

We’ve been taught that code should be modular, reusable, properly abstracted. Code Welding says: “What if we just… didn’t care?”

What if, instead of building perfect architectures, we just grabbed working features and welded them wherever we needed them?

What if every piece of working code became a potential feature for every other piece of code?

What if the LLM could handle all the messy integration details while we focus on what we actually want to build?


This is the future I’m building toward. Where every developer becomes a curator of features rather than a writer of code. Where the question isn’t “How do I build this?” but “Where has this already been built?”

What will you weld first?

FeedShyWorm 3.0: When AI Collaboration Enters the Third Dimension

Remember when I told you about the wild ride from FeedShyWorm 1.0 to 2.0? How we went from a basic Python game to a sleek web application in just a few months? Well, buckle up, because we’re about to witness something that would have seemed impossible just a year ago.

What took us months to achieve between versions 1.0 and 2.0 has now been compressed into mere minutes with the help of Claude 4. And not just any improvement—we’ve catapulted our humble 2D worm game into a fully immersive 3D Minecraft universe. Let me paint you a picture of just how far we’ve come.

The Lightning-Fast Evolution Timeline

March 2024: FeedShyWorm 1.0 – A basic Python game born from human-AI collaboration over several hours.

June 2024: FeedShyWorm 2.0 – Enhanced web version with responsive design, dual controls, and refined gameplay mechanics. Development time: About an hour and a half with Claude 3.5 Sonnet.

January 2025: FeedShyWorm 3.0 – Full 3D Minecraft-style universe with blocky textures, dynamic lighting, intelligent AI pathfinding, and immersive gameplay. Development time: A few minutes with Claude 4.

https://www.anthropic.com/news/claude-4

https://codepen.io/wildfeuer/full/YPXKLZO

The progression isn’t just incremental—it’s exponential. We’re witnessing a fundamental shift in what’s possible when humans and AI collaborate.

From Flat Pixels to Living Worlds

What amazes me most about this latest iteration isn’t just the technical leap—it’s the creative leap. Claude 4 didn’t just convert our 2D game to 3D; it reimagined the entire experience:

  • Procedural Minecraft-style textures: Stone walls, grass terrain, dirt layers—all generated algorithmically to create that authentic blocky aesthetic we love.
  • Intelligent worm AI: The worm doesn’t just move randomly anymore. It actively seeks food, avoids obstacles, and makes strategic decisions about its path.
  • Immersive 3D environment: Dynamic lighting, fog effects, and a perspective that makes you feel like you’re overlooking a living Minecraft world.
  • Enhanced player agency: You can now place food with mouse clicks or keyboard controls, creating a more intuitive interaction model.
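For flavor, here's a hedged sketch of the kind of greedy food-seeking step such a worm AI might take (the game logic Claude actually generated is more elaborate): move one cell toward the food, preferring the farther axis, and skip blocked cells.

```javascript
// Illustrative greedy step: worm and food are grid cells {x, z};
// isBlocked tells us whether a cell contains an obstacle.
function nextStep(worm, food, isBlocked) {
    const dx = Math.sign(food.x - worm.x);
    const dz = Math.sign(food.z - worm.z);
    // Try the axis with the larger remaining distance first.
    const candidates = Math.abs(food.x - worm.x) >= Math.abs(food.z - worm.z)
        ? [{ x: worm.x + dx, z: worm.z }, { x: worm.x, z: worm.z + dz }]
        : [{ x: worm.x, z: worm.z + dz }, { x: worm.x + dx, z: worm.z }];
    for (const c of candidates) {
        if (!isBlocked(c)) return c;
    }
    return worm; // boxed in: wait a tick
}

const blocked = new Set(['3,2']); // one stone block in the way
const isBlocked = p => blocked.has(`${p.x},${p.z}`);
const step = nextStep({ x: 2, z: 2 }, { x: 5, z: 4 }, isBlocked);
// The worm sidesteps the blocked cell by moving along z instead.
```

Even this toy version is enough to make a creature feel purposeful, which is most of what "the worm feels alive" comes down to.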

The Speed of Innovation is Staggering

Here’s what really gets me excited: the time compression. What we’re seeing isn’t just faster development—it’s a complete reimagining of the creative process.

In 2024, moving from version 1.0 to 2.0 took focused collaboration and careful iteration over an hour and a half. Now, with Claude 4, I can describe a vision—“Make this into a 3D Minecraft world”—and watch it come to life in a matter of minutes. The AI doesn’t just code; it architects entire experiences while I’m still finishing my coffee.

This isn’t about replacing human creativity. If anything, it’s about amplifying it to levels we never thought possible. I found myself in the role of creative director, guiding the vision while Claude 4 handled the complex technical implementation that would have taken me days or weeks to figure out alone.

The Collaboration Has Evolved

The dynamic between human and AI has fundamentally shifted since our first FeedShyWorm collaboration:

Version 1.0: AI as coding assistant – I direct, AI implements

Version 2.0: AI as co-creator – We brainstorm together, iterate rapidly

Version 3.0: AI as creative partner – AI anticipates needs, suggests improvements, and builds comprehensive solutions

Claude 4 didn’t just follow my instructions to make the game 3D. It understood the essence of what would make the experience better and implemented features I hadn’t even thought to ask for—like the intelligent pathfinding AI that makes the worm feel truly alive.

What This Means Going Forward

The implications of this rapid progression are profound:

For Creators: The barrier between imagination and implementation is dissolving. If you can envision it, AI can help you build it—in minutes, not months.

For Businesses: Product development cycles that once took months can now happen in a single meeting. The competitive advantage goes to those who can think creatively and iterate at the speed of thought.

For Innovation: We’re entering an era where the limiting factor isn’t technical skill or even time—it’s creative vision. The question isn’t “Can we build this?” or “How long will it take?” but simply “What should we build?”

The Bigger Picture: Acceleration is Accelerating

Looking at the FeedShyWorm progression tells a larger story about where we’re headed. The gap between versions isn’t just getting shorter—it’s collapsing entirely. What took months now takes minutes. What required teams now requires a single conversation with the right AI partner.

This level of acceleration feels almost surreal. I literally went from “Hey, can you make this 3D with Minecraft graphics?” to having a fully functional game with intelligent AI pathfinding, procedural textures, and immersive 3D environments in the time it takes to grab a snack.

This isn’t just about games or coding. It’s about every creative endeavor, every business process, every problem that needs solving. We’re witnessing the democratization of complex creation, where anyone with vision and the right AI collaboration can bring ideas to life at unprecedented speed.

The Human Element Remains Crucial

But here’s what hasn’t changed: the human element remains irreplaceable. Claude 4’s technical brilliance means nothing without human judgment about what makes experiences meaningful, engaging, and fun. The AI can generate the code, but I still decide what the game should feel like, what emotions it should evoke, and how players should experience it.

The collaboration has become more sophisticated, but it’s still fundamentally about human creativity amplified by AI capability.

Looking Ahead: What’s Next?

If we can go from 2D to immersive 3D in minutes, what’s possible in the next iteration? Virtual reality? Multiplayer worlds? AI-generated procedural levels that adapt to player behavior in real-time?

The pace of change suggests we’ll find out sooner than we think. And that’s both thrilling and slightly terrifying in the best possible way.

Conclusion: The Future is Here, and It’s Learning Fast

FeedShyWorm 3.0 isn’t just a game—it’s a glimpse into a future where the speed of innovation is limited only by the speed of imagination. We’ve moved from months of development to minutes of creation, and we’re just getting started.

The collaboration between human creativity and AI capability is evolving at breakneck speed. Each version doesn’t just improve incrementally—it redefines what’s possible entirely.

So here’s my challenge to you: dust off that old idea you’ve been sitting on. That app concept, that game design, that creative project you thought would take too long or be too complex. With AI partners like Claude 4, the gap between inspiration and implementation has never been smaller.

Links to other examples: https://www.youtube.com/watch?v=SqvDaSNYoCY

The future isn’t coming—it’s here. And it’s waiting for you to join the game.

Have you experimented with AI-powered development? I’d love to hear about your experiences and what you’re building in the comments below. The revolution continues, and every creator has a story to tell.

From Code to Collaboration: How Microsoft’s Latest Tools Are Supercharging AI Agents

The pace of innovation in AI agents has never been faster, and Microsoft Build 2025 marked a pivotal moment for both developers and organizations looking to harness the power of collaborative, intelligent agents. Below, I’ll walk through the major announcements, new frameworks, and hands-on guides—providing direct links and practical context to help you dive in and start building.

Teams AI Library & MCP: Accelerating Agent Development

The new Teams AI Library is designed to let you build powerful Teams agents up to 90% faster. The updated SDK consolidates all core Teams capabilities (Botbuilder, Microsoft Graph, Adaptive Cards, and more) into a single, streamlined package. You’ll spend less time on boilerplate and more on your agent’s unique logic.

Key features:

  • Model Context Protocol (MCP) support: Agents can now share memory and tools, enabling sophisticated multi-agent workflows.
  • Adaptive Cards: Easily embed rich, interactive content in Teams chats.
  • Quick-start coding: Build a basic agent in minutes with the new SDK.

Get started:

Microsoft 365 Copilot Studio: Multi-Agent Orchestration & Tuning

Microsoft 365 Copilot Studio now empowers organizations to create, tune, and orchestrate custom agents with enterprise-grade security and compliance. The Copilot Tuning feature lets you train Copilot using your own data and workflows—no code required. Multi-agent orchestration (in preview) enables teams of agents to collaborate, delegate tasks, and deliver unified results.

Key features:

  • Copilot Tuning: Low-code, domain-specific agent training.
  • Multi-agent orchestration: Agents collaborate across workflows and apps.
  • BYOM (Bring Your Own Model): Integrate 1,900+ Azure AI Foundry models.
  • Entra Agent ID & Purview DLP: Secure, compliant agent identity and data protection.

Guides and resources:

NLWeb: Conversational Interfaces for the Open Agentic Web

NLWeb is Microsoft’s new open-source project announced at Build 2025, aiming to make conversational AI a native part of the web. With just a few lines of code, you can add a chatbot to any website, powered by the AI model of your choice and your own data. NLWeb leverages Schema.org and RSS, making your site’s content discoverable and accessible to AI agents and platforms that support MCP.

Key features:

  • Conversational interface: Add a chat field to any website.
  • MCP compatibility: Share data with external agents.
  • Provider-agnostic: Works with OpenAI, Anthropic, Google, and more.

Guides and news:

Azure AI Foundry & OpenAI Workshop: Custom Models and Agent Integration

Azure AI Foundry provides access to over 1,900 models and seamless integration with your custom agents. The OpenAI Workshop offers hands-on materials for building intelligent solutions on OpenAI, including prompt engineering, agent workflows, and deployment guides.

Resources:

Agent Accelerator Templates & MCP Server: Rapid Prototyping

Accelerate your agent projects with ready-made templates and easy-to-deploy MCP server guides:

  • Teams Agent Accelerator Templates: Prebuilt samples for Teams agents.
  • MCP server guides: Deploy secure agent networks quickly.

Resources:

Community, Demos, and Video Guides

Summary: The Agentic Web Is Here

Microsoft’s Build 2025 announcements and open-source releases—NLWeb, Teams AI Library, Copilot Studio’s orchestration, Azure AI Foundry, and more—are lowering the barrier for anyone to build, customize, and deploy powerful, collaborative AI agents. Whether you’re looking to add conversational AI to your website, automate enterprise workflows with multi-agent systems, or tune domain-specific copilots, the resources above will get you there faster than ever.

Explore, experiment, and share your agentic journey—because the future of the web is not just interactive, but truly collaborative.

Revolutionizing Table Creation in Power Apps with Microsoft Copilot

In the ever-evolving landscape of low-code development, Microsoft continues to push boundaries with AI-assisted features. Today, I want to highlight one of the most impressive implementations I’ve seen recently: using Microsoft Copilot to create complex data tables in Power Apps.

The Space Debris Monitoring System Experiment

To test the capabilities of this feature, I decided to challenge Copilot with a unique scenario: building a space debris monitoring system. Rather than creating standard CRM or inventory tables, I wanted to see how Copilot would handle specialized technical requirements.

I prompted Copilot with the following request:

“I need tables for tracking orbital space debris, including size classification, trajectory data, collision risk assessments, and cleanup mission planning. Each debris object should have tracking history and potential satellite impact zones.”

What Happened Next Was Impressive

Within seconds, Copilot not only understood the request but generated a comprehensive data model with related tables:

  1. Orbital Space Debris – The primary table including fields for size classification and trajectory data
  2. Impact Zone – A related table mapping potential satellite collision areas
  3. Tracking History – A historical record of debris movement and observations

The system automatically established appropriate relationships between these tables, creating one-to-many connections where the debris objects link to multiple tracking records and impact zones.

The Power of Natural Language in Database Design

What’s remarkable about this experience is how it transforms database design from a technical exercise into a conversational one. Instead of manually defining tables, fields, and relationships, Power Apps users can now describe their needs in plain language.

The implications for citizen developers are significant. Complex data modeling, traditionally requiring database expertise, becomes accessible to anyone who can articulate their business requirements. This democratizes application development and accelerates the creation process.

When to Use (and When Not to Use) This Feature

Copilot shines when:

  • You’re starting a new application with undefined data structures
  • You need quick prototyping for complex systems
  • You have unique requirements that don’t fit existing templates

However, there are limitations. While the AI generates impressive starting points, you’ll likely need to refine the schema with additional fields, validations, and optimizations. Additionally, for applications with standard requirements, using existing templates might still be faster.

The Future of Low-Code Development

This feature represents more than just a convenient shortcut—it’s a glimpse into the future of development where AI and human creativity work in tandem. As someone who’s built numerous Power Apps solutions, I’m excited about how this will transform the development process.

By removing technical barriers to database design, Microsoft is enabling more people to bring their ideas to life without becoming database experts first. This aligns perfectly with the core promise of the Power Platform: empowering everyone to build solutions.

Have you tried using Copilot to create tables in Power Apps? I’d love to hear about your experiences in the comments below.

AGENT STORYTELLING: THE PRACTICAL INTERFACE FOR EXPANSIVE AGENTS

From Theory to Practice

The Expansive Agents methodology established a new paradigm for agent development. Now we turn to the practical implementation: Agent Storytelling.

The Command-to-Execution Interface

Agent Storytelling unlocks encapsulated intelligence that responds to natural business commands:

“Get my top opportunities from Dynamics and then send through email as a draft addressed to my team.”

A statement of intent becomes executable workflow. No technical knowledge required. No development cycle. No translation layer.

This is not theoretical. This is immediate action.

The Agent Stage

The Agent Stage provides the visual representation of this encapsulation. Agents populate the stage based on natural language commands. Connections form automatically. The business narrative appears visually.

What happens when a command includes functionality not yet implemented? The system documents the need, generates the technical specifications, and routes to development teams – all automatically. Business needs documented in real-time, not through lengthy requirements processes.

Demand-Driven Development

Traditional development depends on explicit requirements. Agent Storytelling captures organic business needs through natural conversation. The aggregate of these commands forms a heat map of business demand, highlighting the highest-value development priorities.

The IT prioritization process transforms from executive mandate to user-driven selection. Resources flow to capabilities with demonstrated demand.

The Power of Encapsulated Intelligence

Agent Storytelling works because it encapsulates complex intelligence operations into simple, domain-specific agents that communicate seamlessly:

“Analyze the feedback from our customer support tickets and prepare a summary of recurring issues.”

Behind this simple command:

  • Document processing agents extract meaning from unstructured text
  • Classification agents identify patterns across feedback
  • Analysis agents determine significance and frequency
  • Communication agents craft clear summaries

The complexity remains hidden. The results emerge immediately.
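To make the encapsulation concrete, here's an illustrative sketch of those agent roles as a pipeline of plain functions. The names and shapes are mine, not a real framework API:

```javascript
// Each "agent" is a stage: extract text, classify it, tally patterns,
// and render a summary. The command runs the chain; complexity stays hidden.
const extractAgent  = tickets => tickets.map(t => t.text.toLowerCase());
const classifyAgent = texts   => texts.map(t => t.includes('crash') ? 'crash' : 'other');
const analyzeAgent  = labels  => labels.reduce(
    (counts, l) => ({ ...counts, [l]: (counts[l] || 0) + 1 }), {});
const summarizeAgent = counts =>
    Object.entries(counts).map(([k, v]) => `${k}: ${v}`).join(', ');

function handleCommand(tickets) {
    return summarizeAgent(analyzeAgent(classifyAgent(extractAgent(tickets))));
}

const summary = handleCommand([
    { text: 'App Crash on login' },
    { text: 'Crash when saving' },
    { text: 'Feature request' },
]);
```

A real system would swap each stage for an LLM-backed agent, but the composition (and the fact that the user only sees the command and the summary) is the same.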

Real-Time Business Intelligence

Consider the transformative impact:

“Compare our team’s sales performance to the same period last year and visualize the trends by product category.”

This command, issued by any business user:

  1. Retrieves relevant data from enterprise systems
  2. Performs appropriate comparative analysis
  3. Generates meaningful visualizations
  4. Delivers actionable insights

What once required specialized business intelligence teams now happens at the speed of conversation.

The End of Wait States

Business suffers from endemic wait states:

  • Waiting for development resources
  • Waiting for technical implementation
  • Waiting for integration testing
  • Waiting for deployment windows

Agent Storytelling eliminates these delays through immediate execution when possible, and accelerated development when necessary.

The Shift in Development Focus

For development teams, this represents liberation. Energy shifts from building basic functionality to enhancing agent capabilities. Technical expertise applies to extending what agents can do, not recreating what they already do.

The Enterprise Knowledge Network Evolved

The Enterprise Knowledge Network concept now becomes executable through simple commands:

“Create a network that connects our product documentation, customer feedback, and support tickets so anyone can ask questions and get accurate answers.”

This single command initiates an agent cluster that crosses traditional system boundaries, establishing connections that were previously impossible or required months of integration work.

From Imagination to Implementation

With Agent Storytelling, the distinction between imagining a solution and implementing it dissolves. Business users speak their needs. Agents execute. Technology finally delivers on its ultimate promise: moving at the speed of ideas.

The revolution continues.


Kody Wildfeuer is an AI architect exploring the potential of agent-based development methodologies. The views and opinions expressed throughout this blog are his own and do not necessarily reflect the official policy or position of his employer.
