Discussion about this post

Desmond Wood:

Superb piece brother! 👌🏾

Khalid:

There are other ideas that support the AI doom scenario, including AI autophagy, in which models train on their own synthetic output, which drifts further from reality with each generation until the system undermines itself. As AI systems rely more on their own generated content instead of real-world data, they risk a feedback loop of degrading accuracy, amplified bias, and irrelevance. This could eventually collapse their effectiveness, producing unreliable or even dangerous outputs.
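To make the autophagy idea concrete, here is a minimal, self-contained simulation. It is a sketch under illustrative assumptions: the "model" is just a Gaussian fit, and the sample size and generation count were picked to make the effect visible, not taken from any real training pipeline. Each generation retrains only on samples drawn from the previous generation's output, so sampling error compounds:

```python
import random
import statistics

# Toy analogue of AI autophagy / model collapse: a "model" (a Gaussian fit)
# is retrained each generation solely on samples from its previous self,
# with no fresh real-world data. Parameters are illustrative assumptions.

random.seed(0)

mu, sigma = 0.0, 1.0   # "reality": the distribution the first model learned
n = 20                 # small per-generation dataset exaggerates the effect

for gen in range(1, 101):
    # The only training data is the previous model's own output.
    synthetic = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.mean(synthetic)
    sigma = statistics.stdev(synthetic)
    if gen % 10 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f} std={sigma:.3f}")
```

In a typical run the standard deviation decays well below 1.0 and the mean wanders away from 0.0: the tails of the original distribution are lost first, and with no fresh data there is nothing to pull the model back toward reality.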

Another concern is AI’s potential to enable biological and chemical weapons and other existential threats. AI is already accelerating drug discovery and biochemical research, but the same tools could be misused to design highly lethal substances. Researchers have demonstrated that simply inverting the goals of an AI model can shift it from designing life-saving medications to proposing thousands of toxic compounds in a matter of hours. The ability of AI to rapidly explore chemical and biological design spaces poses a serious security risk and demands stringent regulation and oversight.
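That finding is easy to caricature in code. The following is a deliberately abstract sketch under made-up assumptions (random feature vectors standing in for molecules, invented "efficacy" and "toxicity" scores), not any real chemistry model; it only illustrates how inverting one term of a scoring objective flips what a generate-and-score loop optimizes for:

```python
import random

# Abstract toy of a generate-and-score design loop. Every name here
# (propose_molecule, efficacy, toxicity) is a hypothetical stand-in,
# not the model or data from the study mentioned above.

random.seed(0)

def propose_molecule():
    # Stand-in for a generative model: a "molecule" is two latent traits.
    return {"efficacy": random.random(), "toxicity": random.random()}

def score(mol, toxicity_weight):
    # toxicity_weight = -1.0 penalizes toxicity (drug design);
    # toxicity_weight = +1.0 rewards it (the misuse case).
    return mol["efficacy"] + toxicity_weight * mol["toxicity"]

def search(toxicity_weight, n=10_000):
    # Same generator, same loop; only the objective differs.
    return max((propose_molecule() for _ in range(n)),
               key=lambda m: score(m, toxicity_weight))

print("drug design:", search(-1.0))  # favors effective, non-toxic candidates
print("misuse mode:", search(+1.0))  # favors effective, highly toxic ones
```

The unsettling part is how little separates the two modes: the generator and the search loop are identical, and a single sign flip on one objective term is the entire difference.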

A third major issue is societal manipulation by AI. Humans are highly susceptible to social engineering, and AI has the ability to generate hyper-personalized content designed to influence emotions, beliefs, and behaviors. With the rise of deepfakes, AI-driven misinformation, and algorithmic persuasion techniques, entire populations could be manipulated at scale. Since most of us are inherently agreeable and prone to cognitive biases, AI-powered manipulation could reshape political landscapes, economic systems, and even cultural norms without people realizing they are being influenced.

However, the biggest issue remains the alignment problem—how to ensure that superintelligent AI acts in ways that align with human values and long-term survival. Many fear that a misaligned AI could pursue goals indifferent or even harmful to humanity, leading to catastrophic outcomes.

That said, I believe a superintelligent entity would not necessarily see humanity as a threat. Our biological constraints make us slow to expand beyond Earth, and even if we do, our progress would unfold over centuries, glacially slow from the perspective of digital beings. The universe is vast, and from an AI’s standpoint, humanity’s expansion into the local planetary neighborhood might not register as a concern. If we remain biological, we would move at a fraction of the speed of AI-driven intelligence, making coexistence a likely and stable outcome.

Instead of being viewed as a rival to AI, humans could merge with it, enhancing our intelligence and capabilities through augmentation. Much like early humans chose to ride horses to move faster, people will likely choose to “ride AI” to extend their cognitive abilities. Rather than becoming cyborgs in some dystopian sense, this would be a natural evolution in which humans augment themselves voluntarily.

This shift seems inevitable. The competitive advantages of AI-enhanced cognition—faster thinking, real-time knowledge access, and improved decision-making—will make augmentation an attractive option. Over time, the distinction between unaugmented and augmented humans could become as significant as the difference between literate and illiterate individuals in the past. AI could serve as a cognitive exoskeleton, allowing humans to expand their capabilities while still maintaining their core identity.

Ultimately, the future might not be about AI replacing humans, nor about humans remaining purely biological. The real question is whether we choose to integrate with AI and shape that future, or resist it and risk irrelevance or obsolescence.
