On becoming a "useless eater" and your core dread about AI and robotics
This is what you truly fear about AI and robots: becoming worthless to society, unable to justify your existence to a world that no longer needs you.
In a world increasingly shaped by artificial intelligence and automation, a primal fear lurks beneath the surface of our collective consciousness. It’s not just about losing jobs or being outpaced by machines; it’s about something far more fundamental: the dread of becoming irrelevant, of no longer mattering to society. This fear, I believe, is encapsulated in the chilling term “useless eater.”
The phrase “useless eater” is a dark remnant of history, originating from Nazi ideology during World War II. In their twisted worldview, it referred to individuals with disabilities or serious medical conditions—people seen as burdens on society, consuming resources without contributing anything in return. The term was a justification for horrific policies, a way to dehumanize and devalue those deemed unproductive. Today, it has been repurposed in conspiracy theories and online discourse, often wielded by those who fear a future where technology renders vast swathes of humanity obsolete.
But the term’s resonance goes beyond its historical origins. It taps into a deep, primal anxiety that has always been part of our social fabric. As social apes, our survival has historically depended on our value to the tribe. If we couldn’t hunt, gather, or protect, we risked being cast out, left to fend for ourselves in a world that offered little mercy. This instinctual fear of irrelevance is hardwired into us, a remnant of our evolutionary past that still shapes our psyche today.
From the moment we are born, we are conditioned to seek validation from our community. We learn early on that our worth is tied to what we can offer—whether it’s food, shelter, knowledge, or emotional support. To be seen as useless, to be perceived as having no value to the tribe, is to face a form of social death. It’s not just about survival in a physical sense; it’s about our place in the social order, our identity, and our sense of self.

In modern society, this anxiety manifests in various ways. For some, it’s the fear of losing a job to automation. For others, it’s the dread of being left behind as technology advances at a breakneck pace. But at its core, it’s the same ancient fear: “If I don’t matter to society, what is the point of my existence?”
It’s important to distinguish this fear from the more straightforward concern of losing instrumental value—such as the ability to make money. While financial stability is undoubtedly crucial, many people find their self-esteem through other means. Artists derive satisfaction from creating music, paintings, or literature. Social butterflies thrive on popularity and connection. Parents find purpose in raising their children. These avenues of self-expression and validation are not tied to economic productivity but are deeply meaningful nonetheless.
Yet, even for those who find fulfillment outside of work, the underlying anxiety remains. The fear is not just about losing a job or a specific role; it’s about the fraying of the baseline contract between the individual and society. This contract, often unspoken, is the understanding that we all have something to offer, that our presence is needed and valued. When that contract breaks down, when society no longer wants or needs us, we are left adrift, definitionally worthless—a “useless eater.”

This is where the true emotional weight of the fear lies. It’s not just about personal failure or inadequacy; it’s about a systemic shift that threatens to undermine the very foundation of our social existence. As AI and robots grow more capable, they threaten to supplant human labor, creating a new class of workers that require no wages, no safety protections, and no concern for their wellbeing. They are obedient, intelligent, and productive: the ideal worker for any capitalist. If most of us are displaced by this new class of automated slaves, then we have nothing left to offer society. There is no fair exchange we can possibly make to validate our existence to the rest of the human tribe.
The dread of becoming a “useless eater” is not answered by promises of liberation from work; it’s the question, “How do I validate my existence to humanity if I have nothing of value to offer? If I don’t matter to humanity, what’s the point of my existence?” This fear, I believe, is the core psychological anxiety around AI, robotics, and automation. It’s a fear that strikes at the heart of our identity, our place in the world, and our very reason for being. And as we move further into this brave new world, it’s a fear we must confront, not just as individuals, but as a society. For if we cannot find a way to ensure that everyone matters, then we risk losing something far more precious than jobs or money—we risk losing our humanity itself.
Really resonated with this, Dave. The psychological core you surface—“If I don’t matter to society, what’s the point of my existence?”—feels timely and raw. You’ve done a brilliant job articulating the primal, tribal fear of irrelevance that automation and AI are bringing to the surface.
Reading this through a developmental lens, I couldn’t help but reflect on how much of this fear is rooted in what some frameworks call the “Orange” stage of consciousness—where our worth is often tied to productivity, achievement, and measurable output. It’s a mindset that has served us well in industrial and capitalist systems, but which may now be showing its limits.
There’s an evolving body of thought—drawing on the work of Ken Wilber (Integral Theory) and Frederic Laloux (Reinventing Organizations)—that proposes a shift to a more holistic stage sometimes called “Teal.” In that worldview, human worth isn’t contingent on usefulness or output. Everyone matters intrinsically, as part of a living, evolving system. The idea of a “useless eater” is unthinkable in such a frame—not because it’s morally wrong (though it is), but because it reflects a fundamental misreading of what human value actually is.
Relatedly, the philosopher Andy Clark recently published a piece in Nature Communications called “Extending Minds with Generative AI.” He argues that humans have always been hybrid cognitive systems—“natural-born cyborgs”—and that AI doesn’t replace human thought so much as extend it. It’s a shift from being productivity machines to becoming collaborative, creative ecosystems.
If the Orange fear is, “What happens if I’m no longer useful?”, the Teal possibility might be, “What new forms of contribution, connection, and consciousness can emerge when we let go of that question altogether?”
The post-work future must be in the zeitgeist. I just recently finished a speculative fiction treatment of this topic, projecting forward for a couple of hundred years. You might enjoy it: https://sisyphusofmyth.substack.com/p/in-the-garden-of-eden-baby?r=5m1xrv