This is what you truly fear about AI and robots—becoming worthless to society, and unable to justify your existence to a world that no longer needs you.
Wow, that phrase "useless eater", somehow so simple, cuts to the quick. It says you are worthless and irrelevant at once. I might be misquoting you, but I think you mentioned in previous videos that robotics and AI, if leveraged correctly, will make us more human. I wonder whether this revolution will force us to look inward and to value our humanness, our lived experiences, and unique souls. Thank you so much, David, for writing such powerful pieces that get to the crux of how humans function. It really hits home.
I just re-read Marshall Brain's Manna: two visions. It surprises me how much better the dystopia part looks now than what 'murrica is actually doing for our most vulnerable.
Really resonated with this, Dave. The psychological core you surface—“If I don’t matter to society, what’s the point of my existence?”—feels timely and raw. You’ve done a brilliant job articulating the primal, tribal fear of irrelevance that automation and AI are bringing to the surface.
Reading this through a developmental lens, I couldn’t help but reflect on how much of this fear is rooted in what some frameworks call the “Orange” stage of consciousness—where our worth is often tied to productivity, achievement, and measurable output. It’s a mindset that has served us well in industrial and capitalist systems, but which may now be showing its limits.
There’s an evolving body of thought—drawing on the work of Ken Wilber (Integral Theory) and Frederic Laloux (Reinventing Organizations)—that proposes a shift to a more holistic stage sometimes called “Teal.” In that worldview, human worth isn’t contingent on usefulness or output. Everyone matters intrinsically, as part of a living, evolving system. The idea of a “useless eater” is unthinkable in such a frame—not because it’s morally wrong (though it is), but because it reflects a fundamental misreading of what human value actually is.
Relatedly, the philosopher Andy Clark recently published a piece in Nature Communications called “Extending Minds with Generative AI.” He argues that humans have always been hybrid cognitive systems—“natural-born cyborgs”—and that AI doesn’t replace human thought so much as extend it. It’s a shift from being productivity machines to becoming collaborative, creative ecosystems.
If the Orange fear is, “What happens if I’m no longer useful?”, the Teal possibility might be, “What new forms of contribution, connection, and consciousness can emerge when we let go of that question altogether?”
This. It’s this simple. If we provide no value, we cannot earn and pay the mortgage, and I have near-zero trust that the social contract will be rewritten to benefit those who do not own the AI.
We’re in full technofeudalism (happy to see Yanis’s ideas percolating around), and in the US there is no social safety net as it is.
I’ll start feeling optimistic when I see post-labor economics memes mainstream on IG and ideas from David’s May 11 update “16 Property-Based Interventions That Could Fund Your Future” in local ballot boxes and company handbooks.
You have been reading my mind again …
The post-work future must be in the zeitgeist. I just recently finished a speculative fiction treatment of this topic, projecting forward for a couple of hundred years. You might enjoy it: https://sisyphusofmyth.substack.com/p/in-the-garden-of-eden-baby?r=5m1xrv
If I don’t matter to - Corporations - then what use am I - (because we Americans are under corporate rule - at present) -
That’s how I’d say it - however AI may say it differently - AI may say it -
If you don’t matter to AI then what use are you?
We are the horses of 1925 -