Artificial general intelligence (AGI) – machines that can match or exceed human-level intelligence across a wide range of cognitive tasks – has long been the holy grail of AI research. While narrow AI systems have made remarkable progress in specific domains like game-playing, image recognition, and language modeling, we still seem far from realizing AGI. Many believe the missing ingredient is the right cognitive architecture.

One promising avenue is intelligent software agents. An agent is an autonomous system that can perceive its environment, reason about it, make decisions, and take actions to achieve goals. If we could develop agents with the right internal models, knowledge representations, reasoning capabilities and learning algorithms, could they reach or even surpass human-level intelligence?

The basic architecture of an intelligent agent typically includes:

  • Sensors to perceive the environment
  • A knowledge base or world model to represent information
  • Reasoning and planning components to make decisions
  • Actuators to take actions and affect the world
  • Learning algorithms to improve performance over time

In Python pseudo-code, a simple agent architecture might look like:

class Agent:
    def __init__(self):
        self.knowledge_base = KnowledgeBase()
        self.reasoner = Reasoner()
        self.planner = Planner()

    def perceive(self, observation):
        self.knowledge_base.update(observation)

    def think(self):
        situation = self.knowledge_base.current_situation()
        goal = self.reasoner.select_goal(situation)
        plan = self.planner.make_plan(situation, goal)
        return plan

    def act(self, plan):
        for action in plan:
            self.perform(action)

    def learn(self, feedback):
        self.knowledge_base.update(feedback)
        self.reasoner.adjust_model(feedback)
        self.planner.refine_strategies(feedback)
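The pseudo-code above can be made concrete. The stub KnowledgeBase, Reasoner, and Planner below are toy stand-ins of my own invention – just enough machinery to exercise the perceive–think–act loop end to end:

```python
class KnowledgeBase:
    """Toy knowledge base: just remembers observations in order."""
    def __init__(self):
        self.facts = []

    def update(self, info):
        self.facts.append(info)

    def current_situation(self):
        return self.facts[-1] if self.facts else None


class Reasoner:
    """Toy reasoner: the goal is simply to respond to the latest observation."""
    def select_goal(self, situation):
        return f"respond to {situation}"


class Planner:
    """Toy planner: a plan is a single action addressing the goal."""
    def make_plan(self, situation, goal):
        return [f"act on '{goal}'"]


class Agent:
    def __init__(self):
        self.knowledge_base = KnowledgeBase()
        self.reasoner = Reasoner()
        self.planner = Planner()
        self.actions_taken = []  # stands in for real actuators

    def perceive(self, observation):
        self.knowledge_base.update(observation)

    def think(self):
        situation = self.knowledge_base.current_situation()
        goal = self.reasoner.select_goal(situation)
        return self.planner.make_plan(situation, goal)

    def act(self, plan):
        for action in plan:
            self.actions_taken.append(action)


agent = Agent()
agent.perceive("door is closed")
agent.act(agent.think())
print(agent.actions_taken)  # ["act on 'respond to door is closed'"]
```

Trivial as the components are, the control flow is the same one a serious agent would use: observations update the world model, the reasoner picks a goal, the planner turns the goal into actions.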

Some fascinating research projects are exploring intelligent agent architectures. For example, the open-source AutoGPT project aims to create autonomous AI agents that can take a high-level goal, break it into subtasks, and work through those subtasks with minimal human intervention. A key component is giving the agents access to tools and knowledge sources they can draw on when solving problems.

AutoGPT agents have a complex architecture including:

  • A large language model for dialogue and reasoning
  • Internet access for gathering information
  • Access to external tools for performing actions
  • Prompts for self-reflection and iterative refinement
  • Memory to store and retrieve relevant information

Simplified Python pseudo-code for an AutoGPT-like agent:

class AutoGPTAgent(Agent):
    def __init__(self):
        self.llm = LargeLanguageModel()
        self.memory = ConversationMemory()
        self.tools = ExternalTools()

    def perceive(self, human_input):
        self.memory.add(human_input)

    def think(self):
        prompt = self.memory.summarize() + "\nAssistant:"
        self.llm_output = self.llm.generate(prompt) 
        self.memory.add(self.llm_output)

        if self.should_use_tool(self.llm_output):
            tool, query = self.extract_tool_and_query(self.llm_output)
            result = self.tools.use(tool, query)
            self.memory.add(result)
            prompt = self.memory.summarize() + "\nAssistant:"
            self.llm_output = self.llm.generate(prompt)

        return self.llm_output

    def act(self, output):
        print(output)

    def learn(self, feedback):
        self.memory.add(feedback)
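One way the should_use_tool / extract_tool_and_query steps might work is a simple text convention the model is prompted to follow. The "TOOL: name | QUERY: text" format and the tool registry below are illustrative assumptions, not AutoGPT's actual protocol:

```python
import re

# Hypothetical tool registry mapping names to callables.
# The eval here is restricted (no builtins) and for illustration only.
TOOLS = {
    "calculator": lambda query: str(eval(query, {"__builtins__": {}})),
    "echo": lambda query: query,
}


def should_use_tool(llm_output: str) -> bool:
    """The model signals tool use by starting its output with 'TOOL:'."""
    return llm_output.strip().startswith("TOOL:")


def extract_tool_and_query(llm_output: str):
    """Parse 'TOOL: name | QUERY: text' into a (name, text) pair."""
    match = re.match(r"TOOL:\s*(\w+)\s*\|\s*QUERY:\s*(.+)", llm_output.strip())
    if not match:
        raise ValueError(f"unrecognized tool request: {llm_output!r}")
    return match.group(1), match.group(2)


def dispatch(llm_output: str) -> str:
    """Run the requested tool, or pass plain text through unchanged."""
    if not should_use_tool(llm_output):
        return llm_output
    tool, query = extract_tool_and_query(llm_output)
    return TOOLS[tool](query)


print(dispatch("TOOL: calculator | QUERY: 6 * 7"))  # 42
```

Real agent frameworks use more robust structured formats (e.g. JSON function calls), but the principle is the same: the model emits a machine-parseable request, the harness executes it, and the result is fed back into the model's context.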

Another example is Anthropic’s Constitutional AI, which aims to create AI assistants that behave in alignment with human values – assistants that are helpful, honest, and harmless.

Rather than relying solely on human feedback, Constitutional AI gives the model an explicit “constitution”: a written set of principles such as being helpful, honest, and harmless. Training proceeds in two phases. First, the model critiques and revises its own draft responses against those principles, and is fine-tuned on the revised responses. Second, the model is further trained with reinforcement learning from AI feedback (RLAIF), in which a preference model trained on constitution-guided comparisons stands in for human preference labels.
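At the heart of this approach is a self-critique-and-revision loop. Below is a minimal runnable sketch; the stubbed generate function, the prompts, and the two-principle constitution are my own illustrative assumptions, not Anthropic's actual code:

```python
# Illustrative constitution: a short list of written principles.
CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could encourage dangerous or illegal activity.",
]


def generate(prompt: str) -> str:
    """Stub standing in for a real language-model call."""
    if "Critique" in prompt:
        return "The draft could be more cautious about safety."
    if "Rewrite" in prompt:
        return "A revised, more careful response."
    return "An initial draft response."


def constitutional_revision(user_request: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_request)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle "
            f"'{principle}':\n{response}"
        )
        response = generate(
            f"Rewrite the response to address this critique:\n"
            f"{critique}\nOriginal: {response}"
        )
    return response


print(constitutional_revision("How do I pick a strong password?"))
```

The revised responses produced by loops like this become the fine-tuning data, so the principles shape behavior without a human labeling every example.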

But perhaps the ultimate test would be whole brain emulation – simulating the human brain in silico, neuron by neuron. If we could do that with sufficient fidelity, would the resulting “mind” be conscious and intelligent like a human? Would it be an AGI?

A whole brain emulation would require simulating the brain at an extremely detailed level, for example:

  • Neuron models with realistic 3D morphologies and connectivity
  • Detailed models of synapses with multiple neurotransmitter/receptor types
  • Glial cell models for metabolic support and regulation
  • Models of neuromodulators like dopamine and serotonin
  • Maps of all the brain’s regions and their connectivity

This level of biological realism is not currently feasible, and may not even be necessary for AGI. A simplified pseudo-code sketch just to illustrate the concept:

class NeuronModel:
    def __init__(self, morphology, synapse_types, region):
        self.morphology = morphology
        self.synapses = SynapseModels(synapse_types) 
        self.voltage = RestingPotential()
        self.region = region

    def update(self, neurotransmitter_inputs):
        self.voltage.update(neurotransmitter_inputs, self.synapses)
        if self.voltage > FiringThreshold:
            self.spike()

class BrainModel: 
    def __init__(self, connectome):
        self.neurons = [NeuronModel(...) for _ in connectome]
        self.connectome = connectome
        self.glial_cells = [GlialModel() for _ in connectome.regions]

    def run(self, sensory_input):
        for neuron, inputs in sensory_input.items():
            neuron.update(inputs)

        for synapse in self.connectome.synapses:
            synapse.transmit()

        for glial_cell, region in zip(self.glial_cells, self.connectome.regions):
            glial_cell.regulate(region)
        ...
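For contrast, here is a runnable leaky integrate-and-fire (LIF) neuron – one of the simplest point-neuron abstractions used in large-scale simulations, far cruder than the morphologically detailed models sketched above. The parameter values are typical textbook figures, not tuned to any real neuron:

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, integrates input current, and fires (then resets) at a threshold."""

    def __init__(self, tau=10.0, v_rest=-65.0, v_threshold=-50.0, v_reset=-70.0):
        self.tau = tau                  # membrane time constant (ms)
        self.v_rest = v_rest            # resting potential (mV)
        self.v_threshold = v_threshold  # spike threshold (mV)
        self.v_reset = v_reset          # post-spike reset potential (mV)
        self.v = v_rest

    def step(self, input_current, dt=1.0):
        """Advance one Euler step of dt ms; return True if the neuron spiked."""
        dv = (-(self.v - self.v_rest) + input_current) / self.tau
        self.v += dv * dt
        if self.v >= self.v_threshold:
            self.v = self.v_reset
            return True
        return False


# Drive the neuron with a constant current and count spikes over 100 ms.
neuron = LIFNeuron()
spikes = sum(neuron.step(input_current=20.0) for _ in range(100))
print(f"{spikes} spikes in 100 ms")
```

Even this two-line dynamics captures the basic integrate-and-fire behavior the NeuronModel sketch gestures at; the gap between it and a biophysically faithful model is a measure of how hard whole brain emulation would be.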

My view is that intelligent agents, built using modern ML and large language models, are a very promising path to AGI. By giving agents rich world models, multi-modal knowledge bases, reasoning capabilities, and the right learning algorithms, I believe we can create AI systems that demonstrate increasingly general intelligence. Bit by bit, these agents may be able to match and exceed human cognitive abilities.

However, I suspect whole brain emulation is a red herring. Even if we could simulate every neuron, that level of biological realism is likely not required for AGI. The human brain is constrained by evolution, not designed for optimal general intelligence. I believe we can achieve AGI with different, possibly more elegant architectures.

In conclusion, intelligent agents do appear to be the most promising path to AGI available today. Step by step, these agents are developing more impressive reasoning, learning and language skills. I don’t think whole brain emulation is necessary – we can likely achieve AGI through different means. The future is agents – autonomous AI systems that can perceive, think and act with increasing flexibility and generality. And that future may arrive sooner than many expect.

Supporting links: “HippoRAG: Endowing Large Language Models with Human Memory Dynamics,” Salvatore Raieli, Level Up Coding (medium.com), June 2024.