Kody Wildfeuer's blog

kodyw.com

Author: Kody Wildfeuer

FeedShyWorm 2.0: A Testament to AI-Powered Game Evolution

Remember a few months back when I shared the story of “FeedShyWorm,” our little game that showcased the power of human-AI collaboration?

Link here: “FeedShyWorm”: A Human-AI Collaboration Case Study – kodyw.com

Well, buckle up, because we’re about to take a wild ride through the rapid evolution of not just a game, but the very landscape of AI-assisted development.

Worm game v1

Now we improved it to this version:

Worm game 2.0

https://codepen.io/wildfeuer/full/oNRrQXE

Based on this Twitter post, I wanted to test how good the new Claude 3.5 Sonnet model is and see how much it could improve the very basic game I created with AI last time.

The Quantum Leap: From Python to Web

It’s been just a few months since our initial creation, but FeedShyWorm has undergone a transformation that would have seemed like science fiction not long ago. The most significant change? We’ve ported the entire game from Python to a web application using HTML, CSS, and JavaScript. This isn’t just a technical upgrade – it’s a leap into accessibility, allowing anyone with a web browser to join in on the fun.

Key improvements include:

  1. Responsive Design: Play on your desktop or your phone – the game adapts to you.
  2. Enhanced Visuals: A sleek, modern interface that’s easy on the eyes.
  3. Dual Control System: Use arrow keys or mouse movements – your game, your choice.
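The shipped game is JavaScript, but the dual-control idea is easy to sketch in Python. This is an illustrative sketch, not the game's actual code; all names here are mine. Arrow keys win when pressed; otherwise the worm steers toward the mouse along its dominant axis:

```python
def resolve_direction(key, head, mouse, current):
    """Pick the next movement direction from either input source.

    Arrow keys take priority; otherwise steer along whichever axis
    the mouse is farthest from the worm's head.
    """
    keys = {"ArrowUp": (0, -1), "ArrowDown": (0, 1),
            "ArrowLeft": (-1, 0), "ArrowRight": (1, 0)}
    if key in keys:
        return keys[key]
    dx, dy = mouse[0] - head[0], mouse[1] - head[1]
    if dx == dy == 0:
        return current  # mouse is on the head: keep going
    if abs(dx) >= abs(dy):
        return (1, 0) if dx > 0 else (-1, 0)
    return (0, 1) if dy > 0 else (0, -1)
```

The same resolver serves both control schemes, which is roughly why "your game, your choice" is cheap to support.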

But here’s where it gets really interesting. These changes weren’t just dreamed up by yours truly. They were the result of a dynamic collaboration with Claude 3.5 Sonnet, our AI partner in crime. From pasting in the original code to improving it and writing this full blog post, the whole process took me about an hour and a half.

AI: From Assistant to Co-Creator

In our initial collaboration, AI served as a coding assistant and idea generator. Now, with Claude 3.5 Sonnet, it’s become more of a co-creator. It didn’t just help with the coding; it suggested game mechanics that I hadn’t even considered.

For instance:

  1. Center Reset for Food: A simple change that adds a new layer of strategy.
  2. Refined Collision Detection: Making the game more challenging as the worm grows.
  3. New Game Over Conditions: Three consecutive self-collisions when the worm is longer than 5? Game over, buddy.

These aren’t just tweaks; they’re fundamental changes to the gameplay that make FeedShyWorm 2.0 a wholly new experience.
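As a hedged sketch of those mechanics (the real game is JavaScript; the class and constants below are mine, not the game's source), the new game-over rule and center food reset might look like:

```python
GRID = 20  # assumed board size for illustration

class CollisionTracker:
    """Track the 'three consecutive self-collisions' game-over rule.

    A self-collision only counts once the worm is longer than 5
    segments, and the streak resets whenever the worm moves cleanly.
    """
    def __init__(self, limit=3, min_length=5):
        self.limit = limit
        self.min_length = min_length
        self.streak = 0

    def register_move(self, head, body):
        hit = len(body) > self.min_length and head in body
        self.streak = self.streak + 1 if hit else 0
        return self.streak >= self.limit  # True means game over

def reset_food_to_center():
    """'Center reset': eaten food respawns at the middle of the grid."""
    return (GRID // 2, GRID // 2)
```

Counting only consecutive collisions is what makes the rule forgiving early and punishing late, which is the strategic layer the AI suggested.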

The Human Touch in a Sea of Algorithms

Now, you might be wondering: with AI this advanced, where does the human fit in? Let me tell you, we’re more important than ever. While Claude 3.5 Sonnet can generate complex algorithms and suggest innovative features, it’s still up to us humans to decide what makes the game fun, engaging, and meaningful.

I found myself in a new role – less of a coder and more of a curator. My job was to sift through the AI’s suggestions, picking out the gems that would enhance the player’s experience without overwhelming them. It’s a delicate balance, and one that I believe only a human can truly judge.

Lessons from the Digital Time Capsule

This project taught me several valuable lessons:

  1. Old code isn’t just a relic; it’s a learning opportunity. Revisiting FeedShyWorm with fresh eyes (and AI assistance) was incredibly educational.
  2. AI isn’t here to replace creativity; it’s here to amplify it. Claude 3.5 Sonnet didn’t do the work for me – it empowered me to do better work.
  3. The pace of technological advancement is staggering. Features that would have been cutting-edge when we first created FeedShyWorm are now considered basic expectations.

The Bigger Picture

As I sit here, looking at the before-and-after versions of FeedShyWorm, I can’t help but ponder the implications for the broader world of software development. How many brilliant ideas are lying dormant in repositories and hard drives around the world, just waiting for a bit of AI-powered polish to shine?

This experience has inspired me to start a new project: “Code Revival.” The idea is to create a platform where developers can submit their old, abandoned projects for AI-assisted renovation. Imagine the innovations we could unearth, the lessons we could learn, and the progress we could make by giving new life to old code.

Conclusion: The Future is Built on the Past

In the fast-paced world of tech, we’re often focused on the next big thing, always looking forward. But this journey has reminded me of the value of looking back. Our old code, our past projects – they’re not just relics. They’re the foundation upon which we build the future.

FeedShyWorm 2.0 is more than just an updated game – it’s a testament to the rapid progress we can make when leveraging AI in creative projects. It’s a small but significant step in understanding how we can harness AI to augment human creativity and technical skills in game development and beyond.

So, I encourage you all: dust off those old projects. Feed them to an AI. See what emerges. You might just find that your past self had some pretty great ideas – ideas that, with a little help from our AI friends, could change the future.

Until next time, keep coding, keep playing, and keep pushing the boundaries of what’s possible. The future is here, it’s learning fast, and it’s waiting for you to join the game.

Solving Business Problems, Not Software Sudoku: Why I’m Pumped About Dynamics 365’s New Table Visual Designer

Hey there, tech enthusiasts and business problem solvers! I’m about to take you on a journey that’s got me excited for the future of Dynamics 365 and the Power Platform.

The Old Ways: A Trip Down Memory Lane

Let’s rewind a bit. Remember the “good old days” of building business applications? If you’ve been in the game as long as I have, you’ll recall the pain points:

  1. SQL Server Management Studio (SSMS) Gymnastics: Hours spent crafting CREATE TABLE statements, where one typo meant starting over.
  2. Dynamics 365 Configuration Marathon: Navigating endless menus to create tables one at a time, setting up relationships like a game of database Jenga.
  3. The Integration Nightmare: Wrestling with SDKs and custom plugins to integrate external data.
  4. Documentation Headaches: Creating separate ERD diagrams in Visio after all that work.

The New Frontier: ERD View and Copilot

Now, Microsoft has dropped a game-changer that’s about to make a lot of people very comfortable with something that used to be pretty daunting. And that’s a good thing!

  1. Visual Data Modeling: It’s like going from assembly code to a high-level programming language. You can see your entire data model at a glance, relationships and all.
  2. AI-Powered Schema Generation: Copilot is like having a senior database architect at your beck and call. Describe your model in plain English, and it generates the schema for you.
  3. Intelligent Data Import: Copilot analyzes your Excel or SharePoint data and suggests appropriate structures and relationships.
  4. Dynamic Relationship Management: Creating relationships between tables is now as simple as dragging a line. No more SQL JOIN statements!

Why This is a Big Deal

  1. Visual Learning FTW: Not everyone thinks in code. This ERD view lets you see your data model like a map. It’s like going from written directions to Google Maps.
  2. Copilot: Your AI Sidekick: Need sample data? Bam! Want to create a new column? Boom! It’s like coding with autocomplete on steroids.
  3. From Excel to ERD in Seconds: Turn that monster Excel sheet into a proper database with drag-and-drop simplicity.
  4. Rapid Prototyping: What used to take weeks can now be done in hours. Iterate quickly without the overhead of traditional database design.
  5. Lowered Technical Barrier: A sales manager who understands the business process can now contribute directly to the data model design without needing to learn T-SQL.

Democratizing Software Development

Here’s why I’m really excited: it’s lowering the barrier to entry for creating powerful business apps. You don’t need to be a coding wizard to create a multi-table data model anymore. Got an idea? Describe it in plain English, and let Copilot do the heavy lifting.

This is huge for small businesses, startups, and anyone with a great idea but limited tech skills. It’s like we’re democratizing software development, and I am here for it!

The Future of Problem Solving

Imagine a world where anyone in your organization can turn their industry expertise into a working app. Your sales team could create a custom CRM tailored exactly to your business. Your HR folks could whip up an employee management system that fits like a glove.

Real-World Impact

Let me paint you a picture. In the past, if a client came to me wanting a custom solution in Dynamics 365, we’d be looking at weeks of requirements gathering, database design, and implementation. Now? We can sit down together, describe the system, and have a working data model in a single session. It’s not just faster – it’s a completely different way of working.

Embracing the Change

This new ERD view isn’t just an incremental improvement – it’s a quantum leap in how we approach data modeling in the Microsoft ecosystem. It’s democratizing a skill that used to require years of experience and technical knowledge.

Conclusion

This ERD view in Power Apps is more than just a cool feature. It’s a glimpse into a future where technology adapts to how we think, not the other way around. It’s about making powerful tools accessible to everyone, regardless of their coding skills.

So, my challenge to you? Go try it out. Play with it. See what you can create. Push its limits. See how it can transform your development process. Who knows? The next big app that revolutionizes your industry might be just a few drag-and-drops away.

Go forth and visually model that data! Your future self (and your clients) will thank you.

Announcement link: Work with complex data models in an ERD view assisted by Copilot – Microsoft Power Platform Blog

Are Intelligent Agents the Missing Link to AGI?

Artificial general intelligence (AGI) – machines that can match or exceed human level intelligence across a wide range of cognitive tasks – has long been the holy grail of AI research. While narrow AI systems have made remarkable progress in specific domains like game-playing, image recognition, and language modeling, we still seem far from realizing AGI. Many believe the missing ingredient is the right cognitive architecture.

One promising avenue is intelligent software agents. An agent is an autonomous system that can perceive its environment, reason about it, make decisions, and take actions to achieve goals. If we could develop agents with the right internal models, knowledge representations, reasoning capabilities and learning algorithms, could they reach or even surpass human-level intelligence?

The basic architecture of an intelligent agent typically includes:

  • Sensors to perceive the environment
  • A knowledge base or world model to represent information
  • Reasoning and planning components to make decisions
  • Actuators to take actions and affect the world
  • Learning algorithms to improve performance over time

In Python pseudo-code, a simple agent architecture might look like:

class Agent:
    def __init__(self):
        self.knowledge_base = KnowledgeBase()
        self.reasoner = Reasoner()
        self.planner = Planner()

    def perceive(self, observation):
        # Fold new sensory input into the world model
        self.knowledge_base.update(observation)

    def think(self):
        situation = self.knowledge_base.current_situation()
        goal = self.reasoner.select_goal(situation)
        plan = self.planner.make_plan(situation, goal)
        return plan

    def act(self, plan):
        for action in plan:
            self.perform(action)  # hand each action to the actuators

    def learn(self, feedback):
        # Feedback improves every component, not just the knowledge base
        self.knowledge_base.update(feedback)
        self.reasoner.adjust_model(feedback)
        self.planner.refine_strategies(feedback)

Some fascinating research projects are exploring intelligent agent architectures. For example, the open-source AutoGPT project aims to create autonomous AI agents that can engage in open-ended dialogue, answer follow-up questions, and even complete complex multi-step tasks. A key component is giving the agents access to tools and knowledge sources they can utilize when solving problems.

AutoGPT agents have a complex architecture including:

  • A large language model for dialogue and reasoning
  • Internet access for gathering information
  • Access to external tools for performing actions
  • Prompts for self-reflection and iterative refinement
  • Memory to store and retrieve relevant information

Simplified Python pseudo-code for an AutoGPT-like agent:

class AutoGPTAgent(Agent):
    def __init__(self):
        super().__init__()
        self.llm = LargeLanguageModel()
        self.memory = ConversationMemory()
        self.tools = ExternalTools()

    def perceive(self, human_input):
        self.memory.add(human_input)

    def think(self):
        prompt = self.memory.summarize() + "\nAssistant:"
        self.llm_output = self.llm.generate(prompt)
        self.memory.add(self.llm_output)

        # If the model asked for a tool, run it, fold the result into
        # memory, and generate again with the updated context.
        if self.should_use_tool(self.llm_output):
            tool, query = self.extract_tool_and_query(self.llm_output)
            result = self.tools.use(tool, query)
            self.memory.add(result)
            self.llm_output = self.llm.generate(
                self.memory.summarize() + "\nAssistant:")

        return self.llm_output

    def act(self, output):
        print(output)

    def learn(self, feedback):
        self.memory.add(feedback)

Another example is Anthropic’s constitutional AI, which aims to create AI agents that behave in alignment with human values. By carefully selecting the training data and providing detailed instructions, they aim to develop AI assistants that are helpful, honest and harmless.

In constitutional AI, the model is given an explicit set of written principles – a “constitution” – and is trained to critique and revise its own outputs against those principles, followed by a reinforcement learning stage driven by AI feedback rather than purely human labels. Behavior is thus shaped by inspectable principles instead of opaque reward signals alone.
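For intuition, here is a toy sketch of a principle-guided critique-and-revise loop in that spirit. The function names and the toy "principle" are mine, not Anthropic's actual implementation; in the real method, both the critique and the revision are produced by the language model itself:

```python
def constitutional_revision(draft, principles, critique_fn, revise_fn):
    """Run one pass of self-critique and revision over a draft.

    critique_fn(draft, principle) returns a critique string, or None
    if the draft already satisfies the principle; revise_fn rewrites
    the draft in light of that critique.
    """
    for principle in principles:
        critique = critique_fn(draft, principle)
        if critique is not None:
            draft = revise_fn(draft, principle, critique)
    return draft

# Toy stand-ins: a single "principle" that forbids shouting.
def toy_critique(draft, principle):
    if principle == "no shouting" and draft.isupper():
        return "The draft is written entirely in capital letters."
    return None

def toy_revise(draft, principle, critique):
    return draft.lower()
```

A draft that passes every principle comes back unchanged, which is the property that lets such a loop run cheaply at scale.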

But perhaps the ultimate test would be whole brain emulation – simulating the human brain in silico, neuron by neuron. If we could do that with sufficient fidelity, would the resulting “mind” be conscious and intelligent like a human? Would it be an AGI?

A whole brain emulation would require simulating the brain at an extremely detailed level, for example:

  • Neuron models with realistic 3D morphologies and connectivity
  • Detailed models of synapses with multiple neurotransmitter/receptor types
  • Glial cell models for metabolic support and regulation
  • Models of neuromodulators like dopamine and serotonin
  • Maps of all the brain’s regions and their connectivity

This level of biological realism is not currently feasible, and may not even be necessary for AGI. A simplified pseudo-code sketch just to illustrate the concept:

class NeuronModel:
    def __init__(self, morphology, synapse_types, region):
        self.morphology = morphology
        self.synapses = SynapseModels(synapse_types) 
        self.voltage = RestingPotential()
        self.region = region

    def update(self, neurotransmitter_inputs):
        self.voltage.update(neurotransmitter_inputs, self.synapses)
        if self.voltage > FiringThreshold:
            self.spike()

class BrainModel: 
    def __init__(self, connectome):
        self.neurons = [NeuronModel(...) for _ in connectome]
        self.connectome = connectome
        self.glial_cells = [GlialModel() for _ in connectome.regions]

    def run(self, sensory_input):
        for neuron, inputs in sensory_input.items():
            neuron.update(inputs)

        for synapse in self.connectome.synapses:
            synapse.transmit()

        for glial_cell, region in zip(self.glial_cells, self.connectome.regions):
            glial_cell.regulate(region)
        ...

My view is that intelligent agents, built using modern ML and large language models, are a very promising path to AGI. By giving agents rich world models, multi-modal knowledge bases, reasoning capabilities, and the right learning algorithms, I believe we can create AI systems that demonstrate increasingly general intelligence. Bit by bit, these agents may be able to match and exceed human cognitive abilities.

However, I suspect whole brain emulation is a red herring. Even if we could simulate every neuron, that level of biological realism is likely not required for AGI. The human brain is constrained by evolution, not designed for optimal general intelligence. I believe we can achieve AGI with different, possibly more elegant architectures.

In conclusion, intelligent agents do appear to be the most promising path to AGI available today. Step by step, these agents are developing more impressive reasoning, learning and language skills. I don’t think whole brain emulation is necessary – we can likely achieve AGI through different means. The future is agents – autonomous AI systems that can perceive, think and act with increasing flexibility and generality. And that future may arrive sooner than many expect.

Supporting links: HippoRAG: Endowing Large Language Models with Human Memory Dynamics | by Salvatore Raieli | Jun, 2024 | Level Up Coding (medium.com)

Unleashing Creativity with AI Art: Accessible Tools for Endless Inspiration

I’ve been really intrigued lately by the explosion of AI art tools and their potential to make creative expression more accessible than ever before. Want to brainstorm a surreal landscape or dream up an otherworldly creature? AI art generators put mind-blowing visuals at your fingertips, no artistic skills required.

The implications are huge. Suddenly, anyone with an idea can bring it to life visually. No more being held back by lack of technical ability. With AI, if you can imagine it, you can create it (and then tweak it endlessly). This opens up a whole new realm of creative possibilities for both professional and amateur creators.

But beyond just making art creation easier, I think tools like DALL-E and Midjourney can be incredible brainstorming aids. Struggling to come up with a concept? Plug a few keywords into the AI and watch it generate dozens of interpretations to spark ideas. The AI becomes a brainstorming partner, serving up endless variations to jolt you out of creative ruts.

Now, some might argue this is “cheating” or that it devalues traditional art skills. I get that perspective. But I see AI art more as a complement to human creativity rather than a replacement. It’s another tool in the toolbox, one that lowers barriers and helps more people tap into their imagination. For professional artists, it can streamline workflows and open up new stylistic avenues.

Personally, I’ve been having a blast playing with these tools and seeing what strange, beautiful creations I can concoct (check out the images in this post for a sample). The instantaneous nature is addicting – every prompt yields something unexpected. It gamifies the creative process.

So if you haven’t yet, I highly recommend giving one of the popular AI art tools a whirl, whether you’re a seasoned artist or can barely draw a stick figure. Incredible technology is at our fingertips to augment creativity and make art/design accessible to all. Let’s embrace it.

I’m excited to see what you all create! Drop your favorite AI art tools and creations in the comments. Now if you’ll excuse me, I have some cyborg dinosaurs to generate…

Stay creative!

“FeedShyWorm”: A Human-AI Collaboration Case Study

You can follow the actual conversation through this link: ChatGPT – Collaboration with AI on new game (openai.com)

gist of referenced code: https://gist.github.com/kody-w/019b788107b359dc7cf10fe477bb17a4

replit: feedShyWorm.py – Replit

Game GIF explanation: the player moves the grey dot (the food), trying to make contact with the worm’s “head” block. Each hit scores one point, but the heat gets turned up as the worm grows with every piece it eats. The highest score I’ve managed before the worm knots itself up is 8 (so far!).

When we dive into the realm of artificial intelligence (AI), we often find ourselves at a crossroads of potential and partnership. It’s a dance between the algorithmic agility of AI and the nuanced intuition of human creativity. Recently, I embarked on a project that exemplified this synergy, breathing new life into the classic ‘Snake’ game by reimagining it as “FeedShyWorm.”

The Genesis of “FeedShyWorm”

Like a nostalgic tune remixed for a new generation, “FeedShyWorm” reinvigorates the simple joy of the ‘Snake’ game with a twist—here, the player tempts a worm with food, indirectly steering its growth and ensuring its survival. The challenge? To grow the worm without entangling it into a self-made knot.
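The core of the inverted mechanic – the worm shying away from the food the player is chasing it with – can be guessed at in a few lines of Python. This is my own sketch of the idea, not the actual game source (which is linked above):

```python
def shy_worm_step(head, food):
    """Move the worm's head one cell directly away from the food,
    fleeing along whichever axis the food is closing in on."""
    dx, dy = food[0] - head[0], food[1] - head[1]
    if abs(dx) >= abs(dy):
        step = (-1, 0) if dx >= 0 else (1, 0)   # flee horizontally
    else:
        step = (0, -1) if dy > 0 else (0, 1)    # flee vertically
    return (head[0] + step[0], head[1] + step[1])

def scored(head, food):
    """The player scores when the food lands on the worm's head."""
    return head == food
```

Because the worm always retreats, the player's challenge is cornering it, and each success makes the growing body harder to avoid.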

The Symbiotic Workflow

AI provided a solid foundation for the game’s development, offering a library of coding patterns and potential solutions. From rendering the game window to defining the worm’s wriggling motion, AI-generated pseudo-code laid out a clear path forward. Yet, it was human oversight that steered the project, filtering through the AI’s suggestions to find the perfect blend of innovation and tradition.

AI: The Technical Muse

In this collaboration, AI shone as a technical muse, suggesting complex algorithms for the worm’s growth and navigation. It handled pathfinding and error resolutions, effortlessly juggling logical structures to suggest efficient and robust solutions.

Human: The Creative Conductor

The human element brought irreplaceable intuition and judgment to the table. From the game’s initial concept to its final nuances, it was the human touch that molded AI’s raw output into a game that’s engaging and enjoyable. The decision to make the worm grow by two blocks with every piece of food and to introduce a game-over condition based on consecutive self-collisions came from a place of understanding the player’s experience—something AI is yet to grasp fully.

The Perfect Pairing

The crazy thing is that this collaboration took just a few hours. If I had done the task by myself, I would have had to invest a lot more time to get the same result – and I’d argue the coding itself is trivial work compared to delivering the actual value of the game.

The true beauty of “FeedShyWorm” lies in its balance. AI’s strength in handling the complexities of code is paired with the human ability to infuse emotion and appeal into the game. The AI proposes, the human disposes, and the result is a game that respects the player’s intelligence and capacity for strategy.

Conclusion: The Harmonious Blend

“FeedShyWorm” is a testament to the potential of human-AI collaboration. AI’s contributions are invaluable, but without human ingenuity, they are merely pieces of a puzzle waiting to be put together. This case study exemplifies the most optimal use of AI—to amplify human creativity, not replace it. Together, they unlock new dimensions of innovation, leading to outcomes that are greater than the sum of their parts.

As we move forward, “FeedShyWorm” stands as a prime example of this collaboration process, showcasing that the best way to harness AI is in tandem with the unique aspects of human creativity. Here’s to many more human-AI partnerships, where we explore uncharted territories with the wisdom of experience and the insights of intelligence—artificial and otherwise.

Until next time, remember—it’s not just about the code; it’s about crafting experiences. Experiences that teach us, entertain us, and most importantly, bring us together.
