kodyw.com

Kody Wildfeuer's blog


From Millions of Dollars to 30 Seconds: The New Age of AI Creativity

The progress in AI creativity is absolutely mind-blowing. Back in the ’90s, companies like Blizzard would pour millions of dollars and months—if not years—into creating cinematic experiences that captured the imagination of their audiences. Today, with the help of AI, we’re creating similarly compelling narratives and visuals in a fraction of the time and cost.


How Far We’ve Come

In just 30 seconds, using Sora, I was able to generate a concise, Tolkien-inspired anime opening sequence with vivid imagery and emotional resonance. What would have required massive teams of writers, animators, and production specialists decades ago is now accessible to anyone with the right tools.

Think about it: the storytelling is rich, the pacing feels natural, and the visuals are evocative enough to rival the work of entire production studios. This isn’t just about efficiency—it’s about redefining what’s possible in creative workflows.


AI: A Game-Changer for Creators

AI tools are making high-quality content creation faster, cheaper, and more accessible than ever before. This isn’t about replacing human creativity but augmenting it—giving creators the ability to:

  • Explore ideas without constraints.
  • Experiment and iterate at a speed that was once unimaginable.
  • Access professional-grade tools without the need for big budgets or large teams.

Creators without access to significant resources can now produce work that rivals what used to be the domain of industry giants. AI tools are leveling the playing field, allowing more voices to contribute to the creative landscape.


Looking Back, Moving Forward

Sure, there’s still a way to go before these tools reach their full potential. But when you step back and think about how far we’ve come—from multi-million-dollar budgets in the ’90s to 30 seconds of AI-powered magic today—it’s hard not to be excited about where this is heading.

The era of waiting years for the next cinematic masterpiece? It’s fading fast. We’re entering a world where creativity flows at the speed of thought, and the possibilities are endless.

The future of storytelling is here, and it’s only just beginning.

The End of UIs as We Know It: How AI Will Filter Out the Noise

There’s a monumental shift underway in how we interact with the digital world. Meta’s recent announcement that it plans to fill social media platforms with AI-generated users signals a movement in the wrong direction, one that will inevitably lead to an overwhelming amount of content noise. From an inundation of AI characters with bios and profile pictures to algorithms generating and sharing content, social media risks becoming a sea of noise, devoid of genuine human interaction and valuable engagement.

The Power of AI as a Collaborative Partner

In contrast to Meta’s approach, my interaction with Neu, my custom prototype Azure OpenAI GPT-4o powered AI assistant, highlights the immense potential of AI as a collaborative partner. Neu isn’t about adding to the noise but about filtering it, offering clarity and actionable insights tailored to my specific needs. Through a seamless integration with the Power Platform, Dynamics 365, and APIs like Hacker News, Neu transforms my digital interactions from overwhelming to focused.
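To make "filtering the noise" concrete, here's roughly what that step looks like in Python. The story fields (`title`, `score`) mirror what the Hacker News API returns, but the keyword list and score threshold are simplified stand-ins for Neu's actual logic, which is more involved:

```python
# Minimal noise-filtering sketch: keep only stories that match my
# interests and have enough community traction to be worth reading.
# The keyword list and score threshold are illustrative assumptions.

INTEREST_KEYWORDS = {"ai", "dynamics 365", "power platform", "azure"}
MIN_SCORE = 100  # ignore low-engagement stories

def is_relevant(story: dict) -> bool:
    """Return True if a Hacker News story matches my interests."""
    title = story.get("title", "").lower()
    return (
        story.get("score", 0) >= MIN_SCORE
        and any(keyword in title for keyword in INTEREST_KEYWORDS)
    )

def filter_stories(stories: list[dict]) -> list[dict]:
    """Drop the noise, then surface the highest-scoring stories first."""
    relevant = [s for s in stories if is_relevant(s)]
    return sorted(relevant, key=lambda s: s["score"], reverse=True)
```

In practice the stories come from the Hacker News API (`/v0/topstories.json`, plus one `/v0/item/<id>.json` call per story), but the filtering step is what turns a raw feed into a briefing.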

Morning Report: A Tailored Approach

When I asked Neu for my daily priorities, it provided:

  • A prioritized list of Sales Proposals (SPs): Clear deadlines, links to Dynamics 365 opportunities, and suggested next steps.
  • Curated news articles: A mix of industry trends and relevant updates, ensuring I’m informed but not overwhelmed.
  • Actionable shortcuts: Draft emails, meeting links, and other tools for immediate action.

For instance, here’s a snippet of the report Neu sent to my email:

Key Sales Proposals (SPs)

  1. Needs to restock their supply of Product SKU AX305

    – Estimated Close Date: 2025-01-22

    – Opportunity Link

    – Next Step: Confirm stock availability and initiate contact.
  2. New Year Expansion Project

    – Estimated Close Date: 2025-02-15

    – Opportunity Link

    – Next Step: Schedule a project briefing and evaluate expansion opportunities.

Latest News

  1. Meta envisages social media filled with AI-generated users
    Read more

Shortcuts for Immediate Action
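Under the hood, assembling a report like the one above is mostly a matter of sorting and formatting. Here's a simplified Python sketch, assuming the opportunity records have already been pulled from Dynamics 365 (the field names are illustrative, not the actual Dataverse schema):

```python
from datetime import date

def build_report(opportunities: list[dict]) -> str:
    """Render opportunities as a prioritized, email-ready report.

    Each record is assumed to carry a name, an estimated close date,
    and a suggested next step (illustrative fields, not the real
    Dataverse column names).
    """
    # Soonest close date first: those are the most urgent.
    ordered = sorted(opportunities, key=lambda o: o["close_date"])

    lines = ["Key Sales Proposals (SPs)", ""]
    for i, opp in enumerate(ordered, start=1):
        lines.append(f"{i}. {opp['name']}")
        lines.append(f"   - Estimated Close Date: {opp['close_date'].isoformat()}")
        lines.append(f"   - Next Step: {opp['next_step']}")
    return "\n".join(lines)
```

The value isn't in the formatting, of course; it's that the prioritization happens before the report ever reaches my inbox.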

The Value of Collaboration with AI

What makes this collaboration with Neu so effective is its ability to grow and adapt over time. By learning my preferences and patterns, Neu becomes an indispensable partner, offering precisely what I need when I need it. This dynamic relationship ensures that AI remains a tool for focus and empowerment, rather than a source of distraction.

Focus Versus Overwhelm: AI as a Solution, Not a Problem

The divergence between tools like Neu and Meta’s AI-driven chaos is stark. Meta’s strategy of inundating users with AI-generated content risks overwhelming its audience. In contrast, Neu emphasizes focus and relevance, demonstrating the true potential of AI: to reduce noise, enhance productivity, and foster meaningful connections.

As we continue to integrate AI into our lives, the path forward is clear. The future isn’t about more content; it’s about better content. It’s not about more interaction but about more meaningful interaction. By building collaborative relationships with AI, we can unlock its true potential and ensure that our digital experiences are as enriching as they are efficient.

FeedShyWorm 2.0: A Testament to AI-Powered Game Evolution

Remember a few months back when I shared the story of “FeedShyWorm,” our little game that showcased the power of human-AI collaboration?

Link here: “FeedShyWorm”: A Human-AI Collaboration Case Study – kodyw.com

Well, buckle up, because we’re about to take a wild ride through the rapid evolution of not just a game, but the very landscape of AI-assisted development.

Worm game v1 (screenshot above)

Now we’ve improved it to this version:

Worm game 2.0 (screenshot above)

https://codepen.io/wildfeuer/full/oNRrQXE

Based on this Twitter post, I wanted to see how good the new Claude 3.5 Sonnet model was and how it could improve the very basic game I created last time with AI.

The Quantum Leap: From Python to Web

It’s been just a few months since our initial creation, but FeedShyWorm has undergone a transformation that would have seemed like science fiction not long ago. The most significant change? We’ve ported the entire game from Python to a web application using HTML, CSS, and JavaScript. This isn’t just a technical upgrade – it’s a leap into accessibility, allowing anyone with a web browser to join in on the fun.

Key improvements include:

  1. Responsive Design: Play on your desktop or your phone – the game adapts to you.
  2. Enhanced Visuals: A sleek, modern interface that’s easy on the eyes.
  3. Dual Control System: Use arrow keys or mouse movements – your game, your choice.

But here’s where it gets really interesting. These changes weren’t just dreamed up by yours truly. They were the result of a dynamic collaboration with Claude 3.5 Sonnet, our AI partner in crime. From pasting in the original code, to improving it, to writing this full blog post, the whole process took me about an hour and a half.

AI: From Assistant to Co-Creator

In our initial collaboration, AI served as a coding assistant and idea generator. Now, with Claude 3.5 Sonnet, it’s become more of a co-creator. It didn’t just help with the coding; it suggested game mechanics that I hadn’t even considered.

For instance:

  1. Center Reset for Food: A simple change that adds a new layer of strategy.
  2. Refined Collision Detection: Making the game more challenging as the worm grows.
  3. New Game Over Conditions: Three consecutive self-collisions when the worm is longer than 5? Game over, buddy.
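That third rule is easy to get subtly wrong, so here's the logic in isolation. This is a Python sketch of the mechanic (the actual game implements the same idea in JavaScript); the key detail is that any clean frame resets the streak:

```python
class GameOverTracker:
    """Track the '3 consecutive self-collisions while length > 5' rule.

    Illustrative Python sketch of the mechanic; the real game
    implements the same idea in JavaScript.
    """
    MAX_STRIKES = 3
    MIN_LENGTH = 5

    def __init__(self):
        self.strikes = 0

    def record_frame(self, self_collided: bool, worm_length: int) -> bool:
        """Return True when the game-over condition is met."""
        if self_collided and worm_length > self.MIN_LENGTH:
            self.strikes += 1
        else:
            self.strikes = 0  # any clean frame resets the streak
        return self.strikes >= self.MAX_STRIKES
```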

These aren’t just tweaks; they’re fundamental changes to the gameplay that make FeedShyWorm 2.0 a wholly new experience.

The Human Touch in a Sea of Algorithms

Now, you might be wondering: with AI this advanced, where does the human fit in? Let me tell you, we’re more important than ever. While Claude 3.5 Sonnet can generate complex algorithms and suggest innovative features, it’s still up to us humans to decide what makes the game fun, engaging, and meaningful.

I found myself in a new role – less of a coder and more of a curator. My job was to sift through the AI’s suggestions, picking out the gems that would enhance the player’s experience without overwhelming them. It’s a delicate balance, and one that I believe only a human can truly judge.

Lessons from the Digital Time Capsule

This project taught me several valuable lessons:

  1. Old code isn’t just a relic; it’s a learning opportunity. Revisiting FeedShyWorm with fresh eyes (and AI assistance) was incredibly educational.
  2. AI isn’t here to replace creativity; it’s here to amplify it. Claude 3.5 Sonnet didn’t do the work for me – it empowered me to do better work.
  3. The pace of technological advancement is staggering. Features that would have been cutting-edge when we first created FeedShyWorm are now considered basic expectations.

The Bigger Picture

As I sit here, looking at the before-and-after versions of FeedShyWorm, I can’t help but ponder the implications for the broader world of software development. How many brilliant ideas are lying dormant in repositories and hard drives around the world, just waiting for a bit of AI-powered polish to shine?

This experience has inspired me to start a new project: “Code Revival.” The idea is to create a platform where developers can submit their old, abandoned projects for AI-assisted renovation. Imagine the innovations we could unearth, the lessons we could learn, and the progress we could make by giving new life to old code.

Conclusion: The Future is Built on the Past

In the fast-paced world of tech, we’re often focused on the next big thing, always looking forward. But this journey has reminded me of the value of looking back. Our old code, our past projects – they’re not just relics. They’re the foundation upon which we build the future.

FeedShyWorm 2.0 is more than just an updated game – it’s a testament to the rapid progress we can make when leveraging AI in creative projects. It’s a small but significant step in understanding how we can harness AI to augment human creativity and technical skills in game development and beyond.

So, I encourage you all: dust off those old projects. Feed them to an AI. See what emerges. You might just find that your past self had some pretty great ideas – ideas that, with a little help from our AI friends, could change the future.

Until next time, keep coding, keep playing, and keep pushing the boundaries of what’s possible. The future is here, it’s learning fast, and it’s waiting for you to join the game.

Solving Business Problems, Not Software Sudoku: Why I’m Pumped About Dynamics 365’s New Table Visual Designer

Hey there, tech enthusiasts and business problem solvers! I’m about to take you on a journey that’s got me excited for the future of Dynamics 365 and the Power Platform.

The Old Ways: A Trip Down Memory Lane

Let’s rewind a bit. Remember the “good old days” of building business applications? If you’ve been in the game as long as I have, you’ll recall the pain points:

  1. SQL Server Management Studio (SSMS) Gymnastics: Hours spent crafting CREATE TABLE statements, where one typo meant starting over.
  2. Dynamics 365 Configuration Marathon: Navigating endless menus to create tables one at a time, setting up relationships like a game of database Jenga.
  3. The Integration Nightmare: Wrestling with SDKs and custom plugins to integrate external data.
  4. Documentation Headaches: Creating separate ERD diagrams in Visio after all that work.

The New Frontier: ERD View and Copilot

Now, Microsoft has dropped a game-changer that’s about to make a lot of people very comfortable with something that used to be pretty daunting. And that’s a good thing!

  1. Visual Data Modeling: It’s like going from assembly code to a high-level programming language. You can see your entire data model at a glance, relationships and all.
  2. AI-Powered Schema Generation: Copilot is like having a senior database architect at your beck and call. Describe your model in plain English, and it generates the schema for you.
  3. Intelligent Data Import: Copilot analyzes your Excel or SharePoint data and suggests appropriate structures and relationships.
  4. Dynamic Relationship Management: Creating relationships between tables is now as simple as dragging a line. No more SQL JOIN statements!
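For readers who have never written the JOINs this replaces, here's the kind of plumbing the designer abstracts away. This is a toy Python join between two in-memory "tables" (illustrative demo data, not the Dataverse API or its internals):

```python
# Two in-memory "tables" with a one-to-many relationship:
# each order row points at an account via account_id.
accounts = [
    {"id": 1, "name": "Contoso"},
    {"id": 2, "name": "Fabrikam"},
]
orders = [
    {"id": 10, "account_id": 1, "total": 500},
    {"id": 11, "account_id": 2, "total": 750},
    {"id": 12, "account_id": 1, "total": 125},
]

def join_orders_to_accounts(orders, accounts):
    """Hand-rolled equivalent of SELECT ... JOIN ... ON account_id = id."""
    by_id = {a["id"]: a for a in accounts}
    return [
        {**order, "account_name": by_id[order["account_id"]]["name"]}
        for order in orders
    ]
```

In the ERD view, that same relationship is one dragged line between tables, and the platform maintains the lookup for you.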

Why This is a Big Deal

  1. Visual Learning FTW: Not everyone thinks in code. This ERD view lets you see your data model like a map. It’s like going from written directions to Google Maps.
  2. Copilot: Your AI Sidekick: Need sample data? Bam! Want to create a new column? Boom! It’s like coding with autocomplete on steroids.
  3. From Excel to ERD in Seconds: Turn that monster Excel sheet into a proper database with drag-and-drop simplicity.
  4. Rapid Prototyping: What used to take weeks can now be done in hours. Iterate quickly without the overhead of traditional database design.
  5. Lowered Technical Barrier: A sales manager who understands the business process can now contribute directly to the data model design without needing to learn T-SQL.

Democratizing Software Development

Here’s why I’m really excited: it’s lowering the barrier to entry for creating powerful business apps. You don’t need to be a coding wizard to create a multi-table data model anymore. Got an idea? Describe it in plain English, and let Copilot do the heavy lifting.

This is huge for small businesses, startups, and anyone with a great idea but limited tech skills. It’s like we’re democratizing software development, and I am here for it!

The Future of Problem Solving

Imagine a world where anyone in your organization can turn their industry expertise into a working app. Your sales team could create a custom CRM tailored exactly to your business. Your HR folks could whip up an employee management system that fits like a glove.

Real-World Impact

Let me paint you a picture. In the past, if a client came to me wanting a custom solution in Dynamics 365, we’d be looking at weeks of requirements gathering, database design, and implementation. Now? We can sit down together, describe the system, and have a working data model in a single session. It’s not just faster – it’s a completely different way of working.

Embracing the Change

This new ERD view isn’t just an incremental improvement – it’s a quantum leap in how we approach data modeling in the Microsoft ecosystem. It’s democratizing a skill that used to require years of experience and technical knowledge.

Conclusion

This ERD view in Power Apps is more than just a cool feature. It’s a glimpse into a future where technology adapts to how we think, not the other way around. It’s about making powerful tools accessible to everyone, regardless of their coding skills.

So, my challenge to you? Go try it out. Play with it. See what you can create. Push its limits. See how it can transform your development process. Who knows? The next big app that revolutionizes your industry might be just a few drag-and-drops away.

Go forth and visually model that data! Your future self (and your clients) will thank you.

Announcement link: Work with complex data models in an ERD view assisted by Copilot – Microsoft Power Platform Blog

Are Intelligent Agents the Missing Link to AGI?

Artificial general intelligence (AGI) – machines that can match or exceed human-level intelligence across a wide range of cognitive tasks – has long been the holy grail of AI research. While narrow AI systems have made remarkable progress in specific domains like game-playing, image recognition, and language modeling, we still seem far from realizing AGI. Many believe the missing ingredient is the right cognitive architecture.

One promising avenue is intelligent software agents. An agent is an autonomous system that can perceive its environment, reason about it, make decisions, and take actions to achieve goals. If we could develop agents with the right internal models, knowledge representations, reasoning capabilities and learning algorithms, could they reach or even surpass human-level intelligence?

The basic architecture of an intelligent agent typically includes:

  • Sensors to perceive the environment
  • A knowledge base or world model to represent information
  • Reasoning and planning components to make decisions
  • Actuators to take actions and affect the world
  • Learning algorithms to improve performance over time

In Python pseudo-code, a simple agent architecture might look like:

class Agent:
    def __init__(self):
        self.knowledge_base = KnowledgeBase()  # world model / memory
        self.reasoner = Reasoner()             # goal selection
        self.planner = Planner()               # action sequencing

    def perceive(self, observation):
        # Fold new observations into the world model
        self.knowledge_base.update(observation)

    def think(self):
        situation = self.knowledge_base.current_situation()
        goal = self.reasoner.select_goal(situation)
        plan = self.planner.make_plan(situation, goal)
        return plan

    def act(self, plan):
        for action in plan:
            self.perform(action)  # dispatch to the agent's actuators

    def learn(self, feedback):
        # Propagate feedback through every component
        self.knowledge_base.update(feedback)
        self.reasoner.adjust_model(feedback)
        self.planner.refine_strategies(feedback)

Some fascinating research projects are exploring intelligent agent architectures. For example, the open-source AutoGPT project aims to create autonomous AI agents that can engage in open-ended dialogue, answer follow-up questions, and even complete complex multi-step tasks. A key component is giving the agents access to tools and knowledge sources they can utilize when solving problems.

AutoGPT agents have a complex architecture including:

  • A large language model for dialogue and reasoning
  • Internet access for gathering information
  • Access to external tools for performing actions
  • Prompts for self-reflection and iterative refinement
  • Memory to store and retrieve relevant information

Simplified Python pseudo-code for an AutoGPT-like agent:

class AutoGPTAgent(Agent):
    def __init__(self):
        self.llm = LargeLanguageModel()
        self.memory = ConversationMemory()
        self.tools = ExternalTools()

    def perceive(self, human_input):
        self.memory.add(human_input)

    def think(self):
        prompt = self.memory.summarize() + "\nAssistant:"
        self.llm_output = self.llm.generate(prompt)
        self.memory.add(self.llm_output)

        if self.should_use_tool(self.llm_output):
            tool, query = self.extract_tool_and_query(self.llm_output)
            result = self.tools.use(tool, query)
            self.memory.add(result)
            # Re-generate now that the tool result is in memory
            self.llm_output = self.llm.generate(self.memory.summarize() + "\nAssistant:")

        return self.llm_output

    def act(self, output):
        print(output)

    def learn(self, feedback):
        self.memory.add(feedback)
Another example is Anthropic’s constitutional AI, which aims to create AI agents that behave in alignment with human values. By carefully selecting the training data and providing detailed instructions, they aim to develop AI assistants that are helpful, honest and harmless.

Constitutional AI is less a novel architecture than a training recipe built around a written set of principles – a “constitution.” In a supervised phase, the model critiques its own draft responses against those principles and revises them, and is then fine-tuned on the revisions; a reinforcement-learning phase follows, using AI-generated preference feedback rather than human labels to further steer the model toward behavior that is helpful, honest, and harmless.
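The core of the constitutional approach – critique a draft against written principles, then revise – can be sketched in a few lines of Python. The `model` here is a stand-in for any text-generation function, and the prompts and principle are illustrative, not Anthropic's actual constitution:

```python
from typing import Callable

# A "model" is any function from prompt text to completion text.
Model = Callable[[str], str]

def constitutional_revision(model: Model, draft: str,
                            principles: list[str],
                            rounds: int = 1) -> str:
    """Iteratively critique and revise a draft against each principle.

    Illustrative sketch of the critique-then-revise loop; in the real
    training pipeline the revised outputs become fine-tuning data.
    """
    response = draft
    for _ in range(rounds):
        for principle in principles:
            critique = model(
                f"Critique this response against the principle "
                f"'{principle}':\n{response}"
            )
            response = model(
                f"Rewrite the response to address this critique:\n"
                f"Critique: {critique}\nResponse: {response}"
            )
    return response
```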

But perhaps the ultimate test would be whole brain emulation – simulating the human brain in silico, neuron by neuron. If we could do that with sufficient fidelity, would the resulting “mind” be conscious and intelligent like a human? Would it be an AGI?

A whole brain emulation would require simulating the brain at an extremely detailed level, for example:

  • Neuron models with realistic 3D morphologies and connectivity
  • Detailed models of synapses with multiple neurotransmitter/receptor types
  • Glial cell models for metabolic support and regulation
  • Models of neuromodulators like dopamine and serotonin
  • Maps of all the brain’s regions and their connectivity

This level of biological realism is not currently feasible, and may not even be necessary for AGI. A simplified pseudo-code sketch just to illustrate the concept:

class NeuronModel:
    def __init__(self, morphology, synapse_types, region):
        self.morphology = morphology
        self.synapses = SynapseModels(synapse_types)
        self.voltage = RestingPotential()
        self.region = region

    def update(self, neurotransmitter_inputs):
        self.voltage.update(neurotransmitter_inputs, self.synapses)
        if self.voltage.value > FIRING_THRESHOLD:
            self.spike()

class BrainModel:
    def __init__(self, connectome):
        self.neurons = [NeuronModel(...) for _ in connectome]
        self.connectome = connectome
        self.glial_cells = [GlialModel() for _ in connectome.regions]

    def run(self, sensory_input):
        # Drive sensory neurons with their external inputs
        for neuron, inputs in sensory_input.items():
            neuron.update(inputs)

        # Propagate spikes across the wiring diagram
        for synapse in self.connectome.synapses:
            synapse.transmit()

        # Glia regulate each region's metabolic state
        for glial_cell, region in zip(self.glial_cells, self.connectome.regions):
            glial_cell.regulate(region)
        ...

My view is that intelligent agents, built using modern ML and large language models, are a very promising path to AGI. By giving agents rich world models, multi-modal knowledge bases, reasoning capabilities, and the right learning algorithms, I believe we can create AI systems that demonstrate increasingly general intelligence. Bit by bit, these agents may be able to match and exceed human cognitive abilities.

However, I suspect whole brain emulation is a red herring. Even if we could simulate every neuron, that level of biological realism is likely not required for AGI. The human brain is constrained by evolution, not designed for optimal general intelligence. I believe we can achieve AGI with different, possibly more elegant architectures.

In conclusion, intelligent agents do appear to be the most promising path to AGI available today. Step by step, these agents are developing more impressive reasoning, learning and language skills. I don’t think whole brain emulation is necessary – we can likely achieve AGI through different means. The future is agents – autonomous AI systems that can perceive, think and act with increasing flexibility and generality. And that future may arrive sooner than many expect.

Supporting links: HippoRAG: Endowing Large Language Models with Human Memory Dynamics | by Salvatore Raieli | Jun, 2024 | Level Up Coding (medium.com)

