The "AI will kill everyone" crowd is finally simmering down, having cried wolf and trusted their imaginations. Instead, people are worried about jobs, weapons, and wealth inequality.
Humankind may split: those who will live in fully automated 15-minute concentration camps, controlled, surveilled, and moved by AI in digital and technocratic enslavement, and those who believe that our future will be organic.
Reducing the population, replacing human beings with machines so a few can rule and hoard all the resources in a depopulated and enslaved world... lol
Following and using AI supports it. Just like the automated check-out registers, it will only grow if it is being used.
We do have a choice, daily.
Thinking that we have to simply follow along and take part is the problem; thinking that we do not have a choice is the mindset that allowed Covid to become a global culling of the herd.
Don't be a lemming; use your brain.
What we feed will grow... feed yourself and you won't get fed the bs. In every respect.
We know the screens make us stupid... lol
So.. "starve whatever is bad for you" seems a wise approach.
The crash will come... will you be eating homegrown carrots or forced to eat lab-grown meat and bugs? Living in an independent community, or alone at home surrounded by all the screens and machines?? LOL
Reminds me of the famous line in Star Wars
... "the dark side is calling you"... don't answer.
Our choices will seal our fate.
This planet / Divine Creation / Mother Nature will not be hijacked; it never has been and never will be.
It was not made for that. It will rid itself of the parasites.
The European Union is slow-moving and spends a long time watching what is going on in the world. But when it takes on a problem, it does so properly. As is the case now, with the creation of its own AI infrastructure.
Cognitive atrophy: I respect how you address this on a personal level, but I think this is a bigger problem on a macro level. There are so many humans, a majority really, that are cognitively lazy. When they let the tools do all the work, this atrophy is only going to continually accelerate. If you think forming opinions by only reading clickbait headlines is a problem now, then having an authoritative AI tell you “the truth”, without any critical thinking skills as a guide, is going to be entirely too tempting for the unwashed masses. When confirmation bias is amplified by a sycophantic AI, this is going to throw gasoline on the fire of really bad thinking. It’ll be the Dunning-Kruger effect on steroids.
I hate to admit it, but there is another element of cognitive atrophy, and that is aging and neurological deterioration issues. I regard myself as incredibly curious and like to challenge myself cognitively, but I have a neurodegenerative condition which is undeniably impacting my cognitive skills. I find it will be very easy to use the newest AI tools to guide me through difficult wickets of cognitive understanding. Yet I would prefer to use it as an amplifier and a thought partner, and not just as a crutch. However, that latter prospect is rearing its ugly head. And I used to teach critical thinking skills!! Just imagine how easy it will be to erode already compromised cognition over the decades if you don’t already have any respect for cognitive skills as a baseline of values.
I personally think there will be a huge bifurcation in society, between that segment which embraces brain rot, and the other segment that augments and enhances its capabilities in a synergistic way. Anyway, I thought I’d throw my two cents in.
I hear you regarding cognitive atrophy 🙁 I find myself using a rubber duck problem solving method with AI models rather than treating the AI as an Oracle.
There are other ideas that support the AI doom scenario, including AI autophagy, where AI synthesizes its own data, which becomes increasingly distorted compared to reality, causing self-destruction. As AI systems rely more on their own generated content instead of real-world data, they risk falling into a cycle of degraded accuracy, biases, and irrelevance. This could eventually lead to a collapse in their effectiveness, creating unreliable or even dangerous AI outputs.
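The autophagy loop described above (often called model collapse) can be illustrated with a toy simulation. This sketch is an assumption of mine, not something from the article: it stands in for a generative model with a simple Gaussian, where each "generation" retrains on samples produced by the previous one instead of on real data. The diversity of the synthetic data drifts toward zero over generations.

```python
import numpy as np

# Toy model-collapse simulation: each generation "trains" (fits a Gaussian)
# on data sampled from the previous generation's model, never from reality.
rng = np.random.default_rng(0)

n_samples = 50        # small synthetic training set per generation
generations = 2000

mu, sigma = 0.0, 1.0  # generation 0: the true data distribution
stds = []
for _ in range(generations):
    data = rng.normal(mu, sigma, n_samples)  # sample from the current model
    mu, sigma = data.mean(), data.std()      # "retrain" on its own output
    stds.append(sigma)

print(f"spread after 1 generation: {stds[0]:.3f}")
print(f"spread after {generations} generations: {stds[-1]:.3g}")
# The fitted spread shrinks toward zero: later models see only a narrow,
# self-generated slice of the original distribution.
```

The shrinkage happens because each finite resample loses a little of the tails, and the loss compounds; with real models the mechanism is richer, but the direction of drift is the same.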
Another concern is AI’s potential to produce biological weapons and other existential threats. AI is already accelerating drug discovery and biochemical research, but the same tools could be misused to generate highly lethal substances. Researchers have demonstrated that simply adjusting the goals of an AI model can shift it from designing life-saving medications to generating thousands of toxic compounds in a matter of hours. The ability of AI to rapidly explore chemical and biological possibilities poses a serious security risk, necessitating stringent regulations and oversight.
A third major issue is societal manipulation by AI. Humans are highly susceptible to social engineering, and AI has the ability to generate hyper-personalized content designed to influence emotions, beliefs, and behaviors. With the rise of deepfakes, AI-driven misinformation, and algorithmic persuasion techniques, entire populations could be manipulated at scale. Since most of us are inherently agreeable and prone to cognitive biases, AI-powered manipulation could reshape political landscapes, economic systems, and even cultural norms without people realizing they are being influenced.
However, the biggest issue remains the alignment problem—how to ensure that superintelligent AI acts in ways that align with human values and long-term survival. Many fear that a misaligned AI could pursue goals indifferent or even harmful to humanity, leading to catastrophic outcomes.
That said, I believe that a superintelligent entity would not necessarily see humanity as a threat. Our biological constraints make us slow to expand beyond Earth, and even if we do, our progress would take centuries compared to digital beings. The universe is vast, and from an AI’s perspective, humanity’s expansion into the local planetary neighborhood might not be a concern. If we remain biological, we would move at a fraction of the speed of AI-driven intelligence, making coexistence a likely and stable outcome.
Instead of being viewed as a rival to AI, humans could instead merge with AI, enhancing our intelligence and capabilities through augmentation. Much like how early humans chose to ride horses to move faster, people will likely choose to “ride AI” to enhance their cognitive abilities. Rather than seeing it as becoming cyborgs in a dystopian sense, it would be a natural evolution where humans augment themselves voluntarily.
This shift seems inevitable. The competitive advantages of AI-enhanced cognition—faster thinking, real-time knowledge access, and improved decision-making—will make augmentation an attractive option. Over time, the distinction between unaugmented and augmented humans could become as significant as the difference between literate and illiterate individuals in the past. AI could serve as a cognitive exoskeleton, allowing humans to expand their capabilities while still maintaining their core identity.
Ultimately, the future might not be about AI replacing humans, nor about humans remaining purely biological. The real question is whether we choose to integrate with AI and shape this future, or resist it and risk irrelevance or obsolescence.
I think there’s enough data out there. There’s probably more data in a typical lawn than there is on the entire Internet. Building a world model that robots can map, transpose, and then begin to interact with in the real world will probably start to take care of all of those data concerns. How much data is actually in a blade of grass, anyway?
I’m quite dubious about aligning with human values. Human values are not consistent. Remember that genocide is also a human value.
I am a 77-year-old engineer (retired) and augment my life with Claude AI and do homestead automation (home/greenhouse/security/shop/3D printing). This analysis is very well done. The problem is we show very little growth in acting civilized worldwide, but the technology keeps getting more lethal, such that one maniac can do huge harm (EMP/plague/nukes, etc.). Carl Sagan talked about that in the last episode of the "Cosmos" series. Perhaps we will kill ourselves off if AI or aliens or something does not intervene. I concur that Claude just makes me more productive and has a very positive effect on my life. But what about the 90% of people that do not have the mental horsepower plus the proper mindset? Not good for them, I think. They may just turn to drugs and despair.
I find that people have the wrong doom as well as the wrong boon in mind, as a result of not understanding the technology itself. In my trainings for business professionals on using AI, as well as for executives determining their AI strategy, I have to spend a non-trivial amount of time resetting their understanding of the technology itself so they can think about it in a grounded way. The doom is still out there, yes. The boon is out there, too. But the realities of those are narrower than even the dominant ideas going around.
Has anyone actually proven "Skynet doomers" wrong, or is it just your wishful thinking that's behind dismissing their arguments?
Banger piece!!
"I am learning again at a high velocity. ....." Yes! 🚀
BTW: California is a living example of technofeudalism (IMHO)
Excellent summary, thanks for making this!
Socrates said the same thing about books. So... yeah
If you use AI all the time it’ll make you dumb. If you never use AI, it’ll make you even dumber.
A deep, meticulous examination of doom scenarios, and then, "...I'm also a Star Trek utopian." lol. David, you are a gem.
Again, good observations. While reading I had to think of the book Accelerando by Charles Stross; I feel we could go in that direction.
But, at the same time, I also have to think of the song "Saviour Machine" by David Bowie.
In any case, I am already looking forward to your next article!
Doom porn is a fun way to occupy my time.
Gives me something to look for when I’m aimlessly scrolling online. But it is very unproductive and unlikely to come true.
Remember, even a broken clock is right twice a day.
You really need to find the book "The Orchid Cage" and read it.
Superb piece brother! 👌🏾