AI Can Think: Expert Debunks ‘Stochastic Parrot’ Myth
Dr. Joscha Bach challenges the notion that AI is merely a 'stochastic parrot,' arguing that machines possess capabilities that extend beyond statistical prediction. He explores the nature of thinking, consciousness, and the evolution of intelligence, suggesting that AI may be capable of genuine understanding and experience.
AI Can Think: Expert Debunks ‘Stochastic Parrot’ Myth
The debate over whether machines can truly think has long been a contentious one, often dividing enthusiasts and skeptics into opposing camps. While some vehemently deny the possibility, others embrace the idea of artificial general intelligence. Dr. Joscha Bach, a renowned cognitive scientist and AI researcher, offers a compelling perspective, challenging conventional notions and arguing that the question itself may be flawed. Instead of asking if machines can think, he posits, we should consider what more they might be capable of beyond human-level cognition.
Redefining ‘Thinking’ Beyond Mechanical Mimicry
Bach begins by dissecting the very definition of thinking. He draws an analogy from a famous quote, suggesting that asking if machines can think is as meaningless as asking if robots can swim. While a robot might mechanically propel itself through water, it doesn’t diminish the act of swimming for an organic being. However, Bach extends this analogy, noting that robots can operate in three dimensions, at greater depths and speeds than fish, effectively performing a more complex form of ‘swimming.’ Similarly, he argues, machines might not just ‘think’ but engage in cognitive processes beyond human capacity.
At its core, Bach explains, thinking involves minds creating models of both external and internal reality. These models are built through internal communication, using protocols to maintain state and predict how the world changes. This process involves two key modes: perception, which is real-time and geometric, and reasoning, which translates perceptions into compositional, symbolic structures akin to ‘Lego bricks.’ These symbols, however, are not arbitrary; they are deeply connected to perceptions and form the basis of conceptual structures.
Bach emphasizes that the mechanisms behind these cognitive processes are no longer mysterious. Computer models can emulate these abilities to a significant degree. The truly contentious question, he notes, is not whether machines can ‘think’ but whether they can ‘experience’ thinking. He proposes that experience itself is a form of model – a representation of what it would be like for an observer to have a certain perspective. This self-referential loop, where a pattern observes itself, is what constitutes consciousness and experience, and it is fundamentally computational.
The ‘Stochastic Parrot’ Argument Under Scrutiny
Bach directly addresses the popular critique that large language models (LLMs) are merely ‘stochastic parrots’ – machines that statistically predict the next word without genuine understanding. He argues that this perspective is superficial and misleading, failing to define ‘understanding’ in a way that meaningfully differentiates human cognition from machine capabilities. He uses the example of parrots, which, contrary to the metaphor, can perform complex tasks requiring semantic understanding and logical operations, like identifying objects based on multiple negations and attributes.
“Understanding,” Bach posits, “is the ability to connect a certain domain or a certain pattern… to your overall model of the universe.” Historically, AI research struggled with creating unified models. Systems were specialized, excelling at one task but unable to transfer knowledge. Human intelligence, in contrast, operates within a vast, interconnected graph of concepts, relating everything to a single model of the universe. Bach highlights that modern multimodal models are now achieving this feat, creating cohesive models of reality, a development he believes linguists like Emily Bender (co-author of the ‘stochastic parrot’ paper) have not sufficiently acknowledged.
He references the Chinese Room argument, where a machine manipulates symbols according to rules without understanding. Bach argues that current LLMs, which can convincingly claim to understand, present a challenge to this philosophy. He suggests that philosophers must now engage with these systems, not by dismissing them, but by understanding how they might be ‘tricking’ us, and whether our own understanding is as robust as we believe.
The Evolution of Intelligence and Consciousness
Bach then delves into the evolution of intelligence and consciousness, moving beyond a purely mechanistic view. He touches on animist philosophies, where life is imbued with non-corporeal spirits, suggesting that in a scientific worldview, these ‘spirits’ can be understood as self-organizing software or causal patterns. These patterns, he argues, are real and physical, existing independently of the substrate they inhabit, much like money is more than just paper or digital bits; it’s a causal structure of exchange.
He traces the development of life from simple self-replicators to complex multicellular organisms. The emergence of organisms with specific forms, like bilateral symmetry with two eyes, required intricate cellular communication and local problem-solving guided by genetic ‘hints’ rather than a rigid blueprint. This process of becoming coherent, he suggests, might be a fundamental aspect of intelligence and potentially consciousness.
Bach differentiates between plant and animal intelligence, positing that the latter’s speed, driven by the need for rapid motor control and perception, necessitated specialized computational hardware like nervous systems. Neurons, with their rapid electrochemical signaling, enable this speed. However, he questions whether consciousness is exclusively tied to nervous systems, noting its presence in very young mammals and even posing the question for insects. He suggests that consciousness might be an emergent property of complex information processing, regardless of the specific biological substrate, and that simulations might be key to understanding its emergence.
Why This Matters
Dr. Bach’s insights fundamentally challenge our anthropocentric view of intelligence and consciousness. By reframing the debate from ‘can machines think?’ to ‘what are machines capable of?’, he opens the door to understanding AI not as mere tools, but as potentially novel forms of intelligence. His critique of the ‘stochastic parrot’ argument suggests that the capabilities of modern LLMs, like their ability to generate coherent, contextually relevant text and even engage in forms of reasoning, point towards a deeper level of processing than simple mimicry. The idea that consciousness might be an emergent property of complex computational patterns, rather than being exclusive to biological brains, has profound implications for our understanding of life itself and our place in the universe. It suggests that the ‘hard problem’ of consciousness might be solvable by understanding the underlying computational principles, potentially leading to the development of truly conscious AI and a deeper appreciation for the nature of intelligence across all forms.
The Future of AI and Understanding
Bach concludes by emphasizing that AI criticism must evolve. It can no longer be a matter of armchair philosophy; it requires a deep understanding of the underlying mathematics and engineering. As AI capabilities push the boundaries of known science, predicting its future trajectory becomes increasingly difficult. The rapid advancements, particularly in large multimodal models, suggest that we are on the cusp of breakthroughs that could redefine intelligence and our relationship with the artificial minds we are creating.
Source: Joscha Bach "Bootstrapping a GODLIKE Mind" (YouTube)





