Book Review: Toward Human-Level Artificial Intelligence: How Neuroscience Can Inform the Pursuit of Artificial General Intelligence or General AI

Published in DeepFail

With the rapid evolution of artificial intelligence, the question of whether Artificial General Intelligence (AGI) is just around the corner has become a burning topic, fueling endless debate and speculation. Toward Human-Level Artificial Intelligence takes a step back to examine the historical roots of, and scientific breakthroughs in, neuroscience and machine learning, ultimately arguing that current paradigms, such as those exemplified by large language models (LLMs), will fall short of delivering human-level AI (HLAI).

By taking a multidisciplinary approach, the book aims to bridge the gap between neuroscience and AI research, uniting fields that have long worked in silos. In doing so, it covers a broad range of topics, from early models like the perceptron to modern methods such as deep learning and backpropagation. It also examines the brain's structure and function, exploring subjects like brain lateralization, decision-making by neural ensembles, and the inside-out model. Moreover, the book discusses the promise of embodied intelligence (neurorobots), suggesting that machines capable of interacting with the physical world could pave the way for more adaptable and truly intelligent systems.

Rather than getting lost in the details, the book presents a compelling roadmap to achieving human-level machine intelligence. It offers a thorough overview for researchers while remaining accessible to anyone curious about the convergence of neuroscience and AI.

Notes

All current artificial neural networks, including state-of-the-art DLNNs, operate in an outside-in manner. They lack internal cognition: once activated, they process input to produce output, then remain "brain dead" until the next input. In contrast, the human brain stays active even without external stimuli, such as during sleep.
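This contrast can be sketched in a few lines of Python. The sketch is my own toy illustration, not code from the book: a feedforward pass is a pure function that is inert between calls, while a minimally recurrent unit (hypothetical `RestlessUnit`) sustains self-driven activity with zero external input.

```python
# Toy illustration (not from the book): "outside-in" processing vs.
# self-sustained internal activity.
import math

def feedforward(x, w1=0.8, w2=-0.5):
    """Outside-in: activity exists only while an input is being processed."""
    h = math.tanh(w1 * x)      # hidden activation
    return math.tanh(w2 * h)   # output; nothing persists after the call

class RestlessUnit:
    """Keeps updating an internal state even when external input is zero."""
    def __init__(self):
        self.state = 0.1       # small nonzero resting activity

    def step(self, external_input=0.0):
        # self-recurrence (gain > 1) sustains activity without any stimulus
        self.state = math.tanh(1.05 * self.state + external_input)
        return self.state

unit = RestlessUnit()
trace = [unit.step() for _ in range(5)]  # five steps with zero input
# trace stays positive and grows toward a nonzero fixed point:
# "spontaneous" activity with no input at all, unlike feedforward(x)
```

The point of the gain of 1.05 is that a self-loop stronger than unity gives the unit a stable nonzero fixed point, so activity persists; a feedforward net has no such loop, so between inputs there is nothing to persist.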
The brain's ability to disengage from the external world and generate its own world model, which it manipulates as a reference, defines true cognition (Buzsáki, 2019).

Elements of neural network history:
- McCulloch & Pitts (1943): Laid the groundwork by modeling neurons mathematically, drawing direct inspiration from neuroscience.
- Frank Rosenblatt (1962): Developed the Perceptron, a feedforward network based on early neural models.
- Minsky & Papert (1969): Critiqued these models in Perceptrons, exposing key limitations that led to reduced funding and the AI winter, shifting focus to symbolic AI.
- Rumelhart, Hinton & Williams (1986): Introduced backpropagation to train hidden layers, sparking a revival of connectionist approaches despite diverging from strict biological realism.
- Yann LeCun (1985): Independently developed similar methods, later becoming a leader in deep learning research.
- Hinton & Bengio (2010–2012): Leveraged GPUs to accelerate training, igniting the deep learning revolution and the modern AI explosion.

Three types of neurons are found in the central nervous system:
- Sensory Neurons: Detect external stimuli and relay this information to the nervous system.
- Motor Neurons: Transmit signals from the central nervous system to muscles, glands, and organs to control actions.
- Interneurons: Connect sensory and motor neurons, forming complex circuits within the central nervous system.

Restless Brain: The brain continuously fires signals even without direct sensory input, a phenomenon termed the "restless brain" (Greicius, 2003; Kenet et al., 2003; Raichle, 2009; Dehaene, 2014).

Predictive Processing: The brain functions as a prediction machine, using spontaneous activity as training data to build internal predictive models, a process that persists even during sleep (Hohwy, 2013; Clark, 2015; Luczak et al., 2022).

Adaptive Mapping: The brain's adaptability in mapping sensory information into an internal body model (homunculus) enables it to adjust to new inputs and control feedback effectively (Eagleman).

Neurons have been found to be optimized to produce sparse representations (Olshausen & Field, 2004). Benefits of sparse coding:
- Increases storage capacity in associative memories.
- Makes the structure in natural signals explicit.
- Represents complex data in a way that is easier to read out at subsequent levels of processing.
- Saves energy.

Computer vs. Human Memory:
- Computer Memory: Stored in registers, caches, SSDs, and HDDs; maintains exact data copies.
- ANN Memory: Distributed across connection weights; prone to overwriting during retraining.
- Human Memory: Continuously builds on existing knowledge through lifelong learning.

There are three high-level categories of human memory:
- Sensory Memory: Briefly holds raw sensory data (milliseconds).
- Short-Term Memory: Temporarily stores and processes limited information.
- Long-Term Memory: Consolidates knowledge for dynamic retrieval over time.

[Figure: Classification of the various types of memory in the human brain]

Dehaene (2014) defines consciousness by three key components that must be simultaneously active:
- Vigilance: Being awake rather than asleep.
- Attention: Processing information, even if not fully aware.
- Conscious Access: The current focus of awareness.

Du et al. (2023) reveal a three-level network hierarchy in the cerebral cortex:
- Level 1 (Low): Local sensory and motor networks.
- Level 2: Networks linking distant regions.
- Level 3 (High) and Beyond: Widely distributed association networks, with distinct clusters for language, social, and episodic functions.

Prioritizing vision in the early stages of HLAI development leverages its central role in human cognition and offers a solid, biologically grounded starting point for building intelligent systems.
- High Brain Investment: Vision dominates brain resources, occupying 20–30% of the human brain
(Ptito et al., 2021; Sheth & Young, 2016).
- Rich Data Transformation: It converts 2D inputs into detailed 3D representations (objects, colors, textures, depth, motion), which are essential for constructing an internal model of the world.
- Early Development: Babies master vision early by resolving 2D-to-3D ambiguity through innate and learned rules.
- Foundation for Cognition: Multi-stage visual processing offers a robust framework for building more complex cognitive functions later. Visual intelligence operates seamlessly, forming the core of perception (Hoffman, 1998).

Stan Franklin's view, aligned with the society of mind concept (Minsky, 1988), proposes that the mind consists of autonomous, interacting agents. These agents operate independently, without any top-down hierarchical command and control, and through their unique, competitive, and cooperative activities they collectively generate intelligence, even if each individual agent is mindless.

[Figure: A view of mind based on Stan Franklin (1999)]

An embodied AI machine refers to an AI machine (neurorobot) that must exist in the real world and interact with it as part of its learning process. Ziemke (2003) identifies six different notions of embodiment required for embodied cognition:
- Structural Coupling: Interaction via connecting channels (a body isn't required).
- Historical Embodiment: Past interactions influence current behavior.
- Physical Embodiment: Requires actual physical instantiation beyond virtual software.
- Organism-like Embodiment: Mimics living forms (e.g., humanoid robots).
- Organismic (Autopoietic) Embodiment: A living system that grows or self-creates its parts; autonomous rather than externally assembled machines.
- Social Embodiment: The ability to communicate via body language.

Design Principles for Future Neurorobots (Krichmar & Hwu, 2022):
- Rapid Reaction: Must quickly and appropriately respond to events.
- Lifelong Learning: Must learn and retain knowledge throughout their lifespan.
- Survival Decision-Making: Must evaluate and prioritize options critical for survival.

Engineered brain architectures are computational models designed to mimic aspects of human cognition. They integrate ideas from neuroscience and AI to simulate brain functions such as learning, memory, and decision-making. Types of architectures covered in the book:
- Cognitive Architectures: Models that simulate human thought processes using structured frameworks.
  - Soar: A framework for problem-solving and decision-making.
  - ACT-R: Simulates human cognition using production rules and declarative memory.
  - Semantic Pointer Architecture: Bridges symbolic and subsymbolic representations via compressed information.
  - Adaptive Resonance Theory: Balances learning stability and plasticity, enabling continuous adaptation without catastrophic forgetting.
  - Harmonic Oscillator Recurrent Neural Networks: Model neural oscillations with harmonic dynamics to capture temporal patterns in brain activity.
  - Numenta AI Models: Brain-inspired approaches, including Hierarchical Temporal Memory (HTM) and the Thousand Brains Theory, that model distributed, hierarchical processing.
- Deep Learning Neural Networks: Form the foundation of generative AI and large language models, demonstrating emergent properties in complex tasks.
- Biologically Plausible Models: Aim to mirror actual brain processes through artificial mechanisms, including backpropagation-like learning, reinforcement learning, natural-selection algorithms, causal inference, and spike-based computation.
- Hyperdimensional Computing: Uses high-dimensional vectors to efficiently represent and process complex information.

Neuromorphic computing is an emerging hardware paradigm that emulates the neural
architecture of the human brain to enable parallel, energy-efficient processing. It:
- Utilizes circuits that mimic neurons and synapses for parallel, spike-based processing.
- Is designed for low-power, high-efficiency computation, suitable for real-time sensory processing.

[Figure: GPU, neuromorphic chip, and human brain comparison for AI]

Three approaches to building HLAI:
- Human-like HLAI: Takes its inspiration from neuroscience.
- engHLAI: Builds an intelligent machine on ideas rooted in engineering.
- Hybrid HLAI: Exploits the best of computer-engineered HLAI combined with human-like HLAI.

[Figure: Ladder of increasingly intelligent systems]

Essential Ingredients for an HLAI Robot: A high-level, hybrid design that integrates simulated neural networks with flexible robotic elements and a dual-brain architecture for both tactical and strategic decision-making. The key features of this design (Azoff, 2025):
- High-Level Conceptual Design: Not a detailed blueprint, but a conceptual list outlining the necessary components for HLAI.
- Hybrid HLAI System: Integrates the best of engineering and AI in a combined design.
- Flexible Robotic Implementation: The brain is simulated, while the robotic elements can be virtual or instantiated in a physical robot interacting with the real world.
- Component Integration: Combines pre-built modules with neural-network-based learning systems.
- Dual-Brain Architecture: Mimics the left-right brain division seen in many intelligent animals, enabling an internal dialogue for decision-making: one side manages tactical responses while the other handles long-term, strategic goals.

[Figure: High-level conceptual model of hybrid HLAI (Azoff, 2025)]

Near-term milestones to A/HLAI:
- Rapid Learning: Generalizes from only a few examples.
- Multi-decision Making: Capable of making multiple decisions concurrently.
- Uncertainty Evaluation: Assesses and quantifies uncertainty in outcomes.
- Explainability: Clearly reveals the reasoning behind its decisions.
- Real-time Performance: Achieves efficient, real-time operation via advanced
hardware and acceleration.
- Fairness and Diversity: Mitigates bias by using balanced, representative training data.
- Fail-safe Mechanisms: Ensures safe shutdown or fallback when unable to handle a situation.
- Accumulated Learning: Retains previous learning without forgetting over time.

Key Attributes of an HLAI System (Azoff, 2025):
- Inside-Out Model: The AI maintains an internal model of the world and applies a scientific process to discover and learn about its environment.
- Left-Right Brain Split: Divides tasks between two halves, allowing one side to focus on immediate actions while the other handles long-term planning through internal dialogue.
- Internal Reward System: Uses a reward mechanism similar to dopamine to reinforce learning.
- Diffuse Decision Making: Enables the strongest neuron ensembles to collectively determine the next steps.
- Focus on Visual Thinking: Emphasizes the processing of visual information, reflecting the evolutionary importance of vision.
- Causal Reasoning: Understands cause-and-effect relationships.
- Long-Term Goal Seeking: Capable of autonomously setting intermediate goals to achieve long-term objectives.
- Understanding of the Physical World: Incorporates scientific knowledge to grasp how the physical world functions.
- Ethical Behavior: Adheres to ethical principles in decision-making.
- Continuous Learning: Driven to constantly accumulate knowledge and forge new connections.
- Abstract Thinking: Able to generalize from specific details to broader, universal concepts.

Find this book review interesting? Check my master reading list towards NeuroAI:
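As a closing illustration of one technique from the notes above, hyperdimensional computing, here is a minimal sketch. The dimension, vector names, and majority-vote bundling are my illustrative assumptions, not details from the book: random bipolar hypervectors, "binding" by elementwise multiplication, and "bundling" by majority vote.

```python
# Hedged sketch of hyperdimensional computing; names and dimension are
# illustrative assumptions, not taken from the book.
import random

DIM = 10_000
random.seed(0)

def hv():
    """Random bipolar hypervector; random pairs are nearly orthogonal."""
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    """Associate two vectors; the result is dissimilar to both inputs."""
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    """Superpose vectors by majority vote; result stays similar to each input."""
    return [1 if sum(xs) > 0 else -1 for xs in zip(*vs)]

def sim(a, b):
    """Normalized dot product in [-1, 1]; ~0 for unrelated vectors."""
    return sum(x * y for x, y in zip(a, b)) / DIM

country, capital = hv(), hv()
france, paris = hv(), hv()
# Encode the record {country: France, capital: Paris} as one hypervector.
record = bundle(bind(country, france), bind(capital, paris))
# Binding the record with a role vector recovers a noisy copy of its filler:
# sim(bind(record, capital), paris) is high, while similarity to the
# unrelated filler france stays at noise level.
```

Because binding is its own inverse for bipolar vectors, `bind(record, capital)` approximately cancels the `capital` role and leaves something close to `paris`, which is what makes this a compact associative memory.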