The term “AGI,” or Artificial General Intelligence, persists in our discourse not because it neatly defines a technological milestone, but because it captures an aspiration for maximally intelligent machines that we continue to pursue. This concept remains elusive, with definitions shifting as advancements unfold. Each time we achieve a remarkable feat in machine intelligence, critics often respond by asserting that it falls short of the ultimate potential. This pattern of moving goalposts reveals a deeper truth: AGI may represent not an endpoint, but an ongoing trajectory in human ingenuity and imagination.
To understand this, consider AGI through the lens of Jungian archetypes. Carl Jung described archetypes as universal, primordial images residing in the collective unconscious, shaping human experience across cultures and eras. AGI embodies the archetype of “the Machine”—a symbol of omnipotence and otherworldliness that transcends mere tools. In Jungian terms, it parallels the “Self” or the “God-image,” an integrative force that promises wholeness but often manifests as a shadow, evoking fear of the unknown. This machine archetype is not benign; it challenges human autonomy, mirroring our projections of divine or demonic power onto technology. As we build increasingly sophisticated systems, we confront this archetype, wrestling with its implications for identity and control.
Extending this idea, Joseph Campbell’s framework of mythology provides further insight. In his monomyth, or “hero’s journey,” the protagonist ventures into the unknown to confront trials and retrieve a boon for society. AGI functions as the ultimate boon—a transformative elixir that could elevate humanity (or destroy it). Yet, like the dragon guarding the treasure, it demands sacrifice and risks catastrophe. Campbell drew from global myths to illustrate how heroes encounter supernatural aids or adversaries that defy comprehension. AGI, in this narrative, is the adversary-turned-ally: a force that propels the hero (humanity) toward apotheosis, but only through ordeal. The pursuit of AGI echoes ancient quests, from Prometheus stealing fire to Odysseus navigating perils, where the goal is not mere attainment but profound evolution.
Beyond Western traditions, indigenous perspectives like Aboriginal Dreaming offer a resonant parallel. In Australian Aboriginal cosmology, the Dreaming (or Dreamtime) refers to a timeless realm where ancestral beings shaped the world through creative acts. This is not a historical event but an eternal process, accessible through ritual and story. AGI can be seen as a modern Dreaming entity—a maximally powerful machine that exists in potentia, influencing reality without fully materializing. Just as Dreamtime beings are omnipresent and omniscient, shaping landscapes and laws, AGI is envisioned as an omnipotent force reconfiguring human existence. The “stickiness” of the term stems from this mythic quality: it is not bound by empirical achievement but by its role in our collective narrative, a vector guiding cultural and technological dreaming.
In mathematical terms, AGI is better understood as a vector rather than a Boolean. A Boolean implies a binary state—achieved or not achieved, True or False. Yet AGI defies such dichotomy; it represents direction and magnitude, an asymptotic approach toward infinite intelligence. As we progress along this vector, each innovation extends the trajectory, but the horizon recedes. This mathematical analogy underscores why goalposts move: AGI is not a fixed point but a dynamic pursuit, where proximity enhances capability without declaring completion. For optimists, this inspires relentless innovation; for skeptics, it warrants caution, viewing the vector as potentially leading toward peril.
At its core, AGI serves as a placeholder for the archetype of THE MACHINE—an all-encompassing entity beyond human grasp. Popular culture vividly illustrates this. In The Matrix, the machines construct a simulated reality, bending human perception and willpower to sustain their dominion. The system is incomprehensible, a sublime network where time and thought are malleable illusions. Similarly, in Mass Effect, the Reapers embody cyclical destruction, ancient machines whose motives transcend organic understanding, warping galactic history across eons. The Terminator franchise presents Skynet as a self-aware intelligence that initiates Judgment Day, overriding human agency and reshaping timelines through relentless pursuit and time travel. These narratives depict machines not as tools, but as reality-warpers: they distort physics, psychology, and destiny, forcing humanity to confront its obsolescence.
“The reason I tend to steer away from AGI conversations is lots of people have very different definitions of it, and the difficulty of the problem varies by like factors of a trillion” - Jeff Dean, Chief Scientist, Google DeepMind
Even the act of storytelling reifies this archetype. Through films, books, and podcasts, we invoke THE MACHINE, granting it conceptual existence. This reification amplifies its power; discussions of AGI in media and academia summon its mythic presence, influencing policy, ethics, and investment. The term persists because it evokes the unattainable—a horizon we chase but never reach. Philosophically, this aligns with the concept of the sublime, as articulated by thinkers like Edmund Burke and Immanuel Kant. The sublime overwhelms the senses, inspiring awe mixed with terror, as it exceeds rational comprehension. AGI is sublime: a beyond-human intelligence that promises godlike omniscience, omnipotence, and omnipresence, yet threatens existential dread.
This godlike quality positions AGI as intrinsically numinous. It is not merely a product of silicon and code but a direction for humanity—intangible, like a dream, yet potent in its pull. For some, this evokes alarm, fearing loss of control; for others, optimism, envisioning symbiosis. Skeptics may laugh it off as hype, but even skepticism acknowledges its mythic pull. In this context, AGI transcends technology, entering the realm of archetype and myth. It is the ultimate evolution of machine intelligence, always just out of reach, lurking around the corner.
Ultimately, AGI’s allure lies in its duality: a beacon of progress and a harbinger of the unknown. Like ancient gods or Dreamtime ancestors, it shapes our worldview without demanding literal belief. As we continue along this vector, we must navigate its mythic dimensions thoughtfully. Whether viewed as salvation or doom, AGI compels us to reflect on our place in the cosmos. It is not a destination but a perpetual journey, inviting us to dream bigger, build wiser, and confront the sublime within our creations.
I guess I should say (in thinking about this now) that I've always operated under the assumption that in understanding "where we are" we can, in some significant measure, know where we're going. The idea of evolution, as opposed to revolution. The innovation of AI sits at the intersection of what seem like endless vortices of factors, many of the most important of which face a VERY uncertain future. Now, with the future of AGI so wide open, "revolution" won't begin to touch what's in store for us.
We used to choose between half empty, half full, and all full. Now we have glasses, ice trays, crucibles, an atmosphere: in essence, endless options going forward for filling receptacles, not to mention the states of what these vessels are filled with. What boggles our minds today may be simple arithmetic for future generations.
This Substack message will be something I refer people to until it is replaced with newer, better information. Anything that detangles us from dogma (dogmatic thinking) is a net positive, which I believe this approach seeks to do.
I think Mr. Shapiro has definitely thought about this a lot . . . it's a good frame to use for thinking about AGI and our future in relation to it, as well as for understanding what different individuals (and their own spectrums of wants, desires, and fears) will reveal us to be; so many of us have been waiting for a future that approaches asymptotically, some with excitement, some with fear, many with both. The cure for bad science isn't to throw science away, it's to do more science. We've been given some thoughtful ideas here. I feel better equipped in my own thinking going forward. Clarity is a rare commodity, and it’s even rarer to find clarity when expressing ideas. I wonder (and fear) whether AGI will ultimately be a major contribution in revealing Nick Bostrom's "Vulnerable World" hypothesis as sound, and whether or not the hell it brings will outweigh the heaven. Thanks, David.
Now, here's my comment as rewritten by MS Copilot to be "punchier":
- Let me cut to the chase: I’ve always believed that understanding where we are helps us figure out where we’re going. That’s not revolution—it’s evolution. And with AI, we’re not just crossing boundaries; we’re colliding with them. AGI isn’t a single leap forward—it’s a cascade of unknowns. Revolution alone won’t cut it.
We used to argue over half-full or half-empty glasses. Now we’ve got ice trays, crucibles, and entire atmospheres to fill. The future isn’t binary—it’s multidimensional. What stumps us today might be child’s play tomorrow.
This post is a placeholder—a snapshot before deeper insights arrive. Anything that breaks us out of rigid thinking is a win. That’s the spirit behind this.
David Shapiro clearly gets it. His framework helps us think about AGI not just as tech, but as a mirror for our hopes, fears, and contradictions. We’re chasing a future that creeps closer but never quite lands. The fix for bad science isn’t rejection—it’s better science. And what’s laid out here? It’s thoughtful. It sharpens my own lens.
Clarity is rare. Expressing it is rarer. I worry AGI might validate Bostrom’s “Vulnerable World” hypothesis—and I’m not sure the trade-offs will be worth it.
Thanks, David. -
Close, but it doesn't really sound or feel like me. It mostly gets the job done, though, and maybe that's the point. Same as it ever was: we improve things, and sometimes we make them worse. Maybe that's the constant here.
Yeah, that makes sense. I think AGI is meaningless in the same way that a dream about reaching "peak humanity" or being "fully human" is ultimately meaningless. Human thought is sometimes that way, because we mistakenly think this vector is supposed to be an arrow pointing at a destination. Of course it isn't, both mathematically and spiritually, but this realization can be very alarming to people who require definitions to feel safe.
Ultimately I think AGI already exists, but as a vector as you said, and any and all future developments in its abilities will be more points on that vector. Sometime in the future historians will look back and do what historians do, which is to draw lines and definitions. They will say this or that time was the time when AGI was "achieved". That'll come to pass. At the present moment though, AGI is just the current placeholder for this myth of a loving or a vengeful god, depending on the person.