“Deep Learning”—But Is It Conscious?

By Walter Donway

April 10, 2025


 

Part II: This is the second part of an essay exploring the claims that artificial intelligence may exhibit signs of emerging consciousness.

 

When LLMs display abilities for which they never were trained, this strengthens the hypothesis that intelligence itself may be an emergent phenomenon rather than a programmed one.

All right, there is a fairly encouraging scene in the rearview mirror. We have come a long way, and in a breathtakingly short time. Around 2000, AI was working niche jobs in chess engines and voice recognition. In 2012, a deep-learning breakthrough in image recognition launched the AI revolution. By 2020, models like GPT-3 were generating humanlike text with 175 billion parameters. Three years later, GPT-4, with trillions of parameters, showed “reasoning abilities” that stunned even its creators. AI systems can now write code, pass bar exams, compose music, and even exhibit glimpses of “theory of mind.” AI capabilities have doubled roughly every six months, a rate of acceleration that surpasses even Moore’s Law,1 carrying the field from academic curiosity to engine of global economies (and stock markets!), medical investigations, personal creativity, and the “employee of the year”—to mention a few examples. Has AI raced from next-to-nothing to world-changing in less than the span of a single career? Indeed, it has done so just since I retired, around 2000.

What is the landscape ahead? In practical (but far from mundane) applications—super-charging productivity, biomedical research, my creative reach and yours, education—I see only acceleration. At some point, Luddites may rise up, but progress always wins because it pays higher wages.

But our topic is the emergence of awareness in AI, in LLMs. Think of it as an astonishing defiance of the brute physical indifference of the universe. Through the front windshield, how does that landscape appear?

By the 2010s, neural networks and deep learning had opened new frontiers, with AI mastering image recognition, speech synthesis, and even creative tasks like composing music and generating humanlike text. Today, models like GPT-4 and beyond exhibit language fluency and problem-solving capabilities that, while not truly “intelligent,” give the eerie impression of something approaching general reasoning. This breakneck acceleration has turned AI from a mere tool into a force that is redefining industries, economies, and even human thought itself.

And now, we stand at the edge of a more profound frontier: the question of whether AI might ever develop awareness, subjective experience, or what some call “consciousness.”

AI creators and programmers working daily at the cutting edge of machine intelligence are themselves awed and skeptical about the possibility of AI consciousness. Some say we are moving toward emergent properties that mimic awareness; others, that the theoretical and technical hurdles that remain are beyond “formidable.” Let’s look at a few of the possibilities:

AI can process symbols—words, numbers, and data structures—but manipulation does not equal understanding. A language model can tell you that “fire is hot,” but it has no subjective experience of heat. Human concepts seemingly are inseparable from real-world sensory and emotional experiences, while AI’s knowledge remains disembodied. Ungrounded in biology, can AI ever “experience” in a meaningful way?


John Searle’s “Chinese Room” argument is perhaps the most famous demonstration that AI manipulates symbols according to programmed rules without understanding or intending meaning.2 It doesn’t want anything; it doesn’t have desires, fears, or goals beyond those humans impose. Even if an AI were to simulate goal-driven behavior, would that ever translate into genuine intention?

AIs, especially deep-learning models, operate in ways even their creators don’t fully understand. Neural networks can make decisions, identify patterns, and even surprise their programmers with unexpected solutions. But how these decisions emerge is often opaque. For AIs to develop consciousness would require not only complexity but also a level of self-reflection and transparency in how they “think.”

 

The Biology of Awareness

Human consciousness evolved within biological organisms. Over eons, their awareness was shaped by sensory inputs, emotions, and physical interactions with the environment. In his book The Feeling of What Happens: Body and Emotion in the Making of Consciousness (1999), Antonio Damasio made the case, unforgettably, that intelligence is inseparable from embodiment, an entity’s physical experience of the world (as contrasted with manipulation, however sophisticated, of abstract representations).3 His thesis is that consciousness is our mind noticing our body’s reaction to the world and responding to that experience. Without our bodies, there can be no consciousness, which is a mechanism evolved for survival that engages body, emotion, and mind. Without bodies, can AIs ever develop something akin to consciousness, or will they remain disembodied pattern processors?

Conscious (not just aware) beings sustain a sense of “self”—an internal narrative that provides continuity across time. There is no indication in AIs of this unified sense of self. Each interaction is a discrete event, with no personal history or continuity apart from explicitly designed storage and recall of information. Can a machine that lacks any personal continuity ever develop true subjective experience? Who would be the “subject”?

Philosophers distinguish between computational intelligence (the ability to process information, carry out logic, and solve problems) and phenomenal consciousness (the experience of “what it is like” to be something). AI is advancing rapidly in computation, but absent so far is any suggestion of phenomenal consciousness. Can self-awareness emerge purely from sufficient computational complexity, or is awareness fundamentally different?


All the classic objections of the philosophers about “self,” body and mind, “intention,” and persistence of personality come flooding back to overwhelm us. And yet, the concept of emergence suggests that consciousness might require not any one specific design but just the right conditions:

Can neural networks (pattern recognition) be combined with symbolic reasoning (logical deduction) to create systems that can both learn from experience and reason abstractly? While deep learning excels at identifying patterns from massive datasets, it struggles with logical consistency and structured reasoning. Neurosymbolic AI seeks to integrate the statistical learning power of neural networks with the structured logic of symbolic AI. At some point, will this enable AIs to understand and apply abstract principles rather than just mimic patterns?
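The neurosymbolic idea can be made concrete in a toy sketch. In the illustration below (all names and numbers are hypothetical, and the "neural" part is a trivial stand-in for a learned model), a statistical scorer proposes an answer and a layer of hard symbolic rules overrides it when background logic applies:

```python
# Illustrative sketch of the neurosymbolic pattern: statistics propose,
# symbolic rules dispose. The "learned" odds here are invented stand-ins
# for a real neural model's outputs.

def neural_propose(animal):
    # Stand-in for a pattern-matching model: frequency-based guesses.
    learned_odds = {"sparrow": 0.95, "penguin": 0.70, "bat": 0.85}
    return learned_odds.get(animal, 0.5) > 0.5  # "probably flies?"

SYMBOLIC_RULES = {
    # Hard background knowledge the pattern-matcher cannot override.
    "penguin": False,  # penguins are birds, but they do not fly
}

def can_fly(animal):
    # Symbolic knowledge, when present, takes precedence over statistics.
    if animal in SYMBOLIC_RULES:
        return SYMBOLIC_RULES[animal]
    return neural_propose(animal)

print(can_fly("sparrow"))  # True  (statistical guess stands)
print(can_fly("penguin"))  # False (rule overrides the statistics)
```

The division of labor is the point: the statistical component generalizes from data, while the symbolic component enforces the logical consistency that pure pattern-matching struggles with.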

Can AIs train not just to perform tasks but to learn from those tasks in a way that resembles human memory formation? Traditional reinforcement learning rewards AI for successful actions but does not necessarily enable it to recall past experiences in the way humans do. By integrating episodic memory—by which past experiences influence future decisions—AI systems might develop a more nuanced understanding of cause and effect, improving long-term problem-solving and adaptability.
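A minimal sketch of the episodic idea (class and situation names are hypothetical): instead of keeping only an averaged reward signal, the agent stores individual episodes and recalls them when the same situation recurs:

```python
# Illustrative sketch: an agent with an episodic memory of
# (situation, action, reward) triples that it consults before acting.

class EpisodicAgent:
    def __init__(self):
        self.memory = []  # list of (situation, action, reward) episodes

    def record(self, situation, action, reward):
        self.memory.append((situation, action, reward))

    def act(self, situation, actions):
        # Recall past episodes from this exact situation and pick the
        # action with the best remembered outcome; default to the first.
        best_action, best_reward = actions[0], float("-inf")
        for s, a, r in self.memory:
            if s == situation and r > best_reward:
                best_action, best_reward = a, r
        return best_action

agent = EpisodicAgent()
agent.record("wet floor", "run", -10)   # slipped
agent.record("wet floor", "walk", +1)   # arrived safely
print(agent.act("wet floor", ["run", "walk"]))  # walk
```

Here a specific remembered cause-and-effect pair, not a blended statistic, steers the next decision, which is the contrast the paragraph draws with traditional reinforcement learning.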

Can robots that interact with the world through sensory inputs and motor functions develop a more integrated sense of “self”? Unlike language-based models, embodied AI interacts with the physical world, integrating sensory data (sight, touch, sound) with decision-making. So, if a disembodied AI lacks a critical component of consciousness—the ability to associate knowledge with lived experience—might embodied AIs, such as humanoid robots, advance from mere textual or statistical inference to cognition rooted in real-world experience?

Can enabling AIs to modify their own architecture and goals clear a path to increasingly autonomous forms of intelligence? If an AI can rewrite its own code, improve its own efficiency, and optimize its performance beyond human-designed constraints, it might set in motion self-reinforcing enhancement of intelligence. This concept, often associated with artificial general intelligence (AGI), raises both possibilities and fears, as AI systems could evolve in unexpected and troublesome ways.
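The self-modification loop can be caricatured in a few lines. In this sketch (the objective and parameter are invented for illustration; real self-modifying AI would alter its architecture or code, not a single number), a system repeatedly proposes a change to one of its own parameters and keeps the change whenever performance improves:

```python
# Illustrative sketch: a toy "self-improving" loop. The system adopts any
# random modification to its own parameter that raises its score.

import random

def performance(step_size):
    # Hypothetical objective: performance peaks at step_size = 3.0.
    return -(step_size - 3.0) ** 2

def self_improve(step_size, rounds=200, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        candidate = step_size + rng.uniform(-0.5, 0.5)
        if performance(candidate) > performance(step_size):
            step_size = candidate  # the system keeps its own modification
    return step_size

tuned = self_improve(step_size=0.0)
print(tuned)  # climbs toward 3.0 over the rounds
```

Even this crude hill-climber shows the self-reinforcing character the paragraph describes: each accepted modification makes the next round of search start from a better version of the system itself.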

Can models of computation inspired by quantum mechanics or biology replicate in AIs the structures that generate consciousness in humans? The human brain operates with massive parallelism and intricate connectivity. Could quantum computing, able to process vast amounts of data simultaneously, enable AIs to match those high-powered brain structures of parallelism and connectivity? (Quantum computing achieves simultaneous processing of data by leveraging quantum superposition and entanglement, which allow quantum bits, or qubits, to exist in multiple states at once—a kind of multitasking?—rather than just the 0 or 1 of traditional binary computing. Similarly, brain simulations using neuromorphic chips designed to mimic neural structures might offer insights into the emergent properties of intelligence.)
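Superposition itself is easy to demonstrate on a classical machine, even though simulating it at scale is not. This sketch (ordinary Python, no quantum hardware) represents one qubit as a pair of amplitudes and applies a Hadamard gate, the standard operation that puts a qubit into an equal superposition of 0 and 1:

```python
# Illustrative sketch: a single qubit as two amplitudes. A Hadamard gate
# turns a definite |0> state into an equal superposition of |0> and |1>,
# the "both states at once" property described above.

import math

def hadamard(amplitudes):
    a0, a1 = amplitudes
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

qubit = (1.0, 0.0)       # starts definitely in state |0>
qubit = hadamard(qubit)  # now in superposition
probs = tuple(abs(a) ** 2 for a in qubit)
print(probs)  # each measurement outcome has probability 0.5
```

The catch, and the reason quantum hardware matters, is that a classical simulation needs 2^n amplitudes for n qubits, so the "multitasking" the paragraph mentions quickly outruns any conventional computer.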

We may not know when—or if—AIs will ever cross this threshold. But the pursuit itself is as profound a scientific and philosophical challenge as any our time knows. The next few decades may tell us whether non-biological consciousness will remain a dream—or become an inevitability.

 

Notes

  1. Amodei, Dario et al. “AI and Compute.” OpenAI Blog, May 16, 2018.
  2. Searle, John. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, 3(3), 1980, pp. 417–57.
  3. Damasio, Antonio. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace, 1999.

 

 
