5 Comments
Greg Stevenson

Your final bullet-point summary of suggested actions all involve injecting coordinated human intelligence, and yet the very thing we are asked to be convinced of accelerating is generally regarded as increasingly confusing our collective senses. Do you advocate for systems that enhance coordinated sense-making, such as the Consilience Project, to guide humanity through a short but turbulent time? Does the metaphor of the caterpillar emerging as a butterfly ring true to you in this situation?

Stefan Kojouharov

1) The Real Existential Threat of AI

We often imagine the real dangers of AI manifesting as Skynet, the Terminator, or even the AI in The Matrix.

Here AI becomes an all-powerful, conscious being, our new God.

Hearing materialists talk about AI in these terms is indeed very religious, and it overlooks the real dangers.

The real existential threat is that we're falling into the trap of the prisoner's dilemma.

We don't trust the 'other' (the Chinese, our competitors, etc.), so we go all in because the other guy will. It becomes another arms race, and the technology is quickly implemented to gain a strategic advantage.
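To make the trap concrete, here is a minimal sketch in Python with invented payoffs (none of these numbers come from the discussion): treat 'restrain' as cooperating on AI safety and 'race' as defecting. Racing comes out as the best response no matter what the other side does, which is exactly why both sides end up in the worst collective outcome.

```python
# Illustrative prisoner's dilemma payoffs for an AI arms race.
# Keys: (our choice, their choice); values: (our payoff, their payoff).
# All numbers are made up purely for illustration.
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: safe, shared benefit
    ("restrain", "race"):     (0, 5),  # we restrain, they race: we fall behind
    ("race",     "restrain"): (5, 0),  # we race, they restrain: we gain advantage
    ("race",     "race"):     (1, 1),  # mutual race: risky for everyone
}

for their_move in ("restrain", "race"):
    best = max(("restrain", "race"), key=lambda ours: payoffs[(ours, their_move)][0])
    print(f"If they {their_move}, our best response is to {best}")

# Racing dominates in both cases, so both sides race
# and land on the worst collective outcome, (1, 1).
```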

We add AI to our tech companies, our utilities, and our weapon systems.

We're adding this tech while not fully understanding how it makes decisions.

And in deeply interconnected systems, another CrowdStrike becomes inevitable.

The existential threat is triggered if a simple error happens in a weapon system.

2) The Social Risks

By now, the harms social media has laid upon our society are obvious. For example, young girls are triggered into comparing themselves with the people they follow on IG. As a society, we have taken these synthetic simulations as if they were accurate representations of reality. We believe what we see on our screens, and AI will become the perfect psy-ops weapon. It will easily create whatever reality you're biased toward perceiving and drown you in an artificial world made to suck up your attention and cash.

3) Environmental Risks with Nate Hagens and Daniel Schmachtenberger

Great interview. Even if the first two problems are addressed, there's still the big issue of energy usage.

Sam Altman was recently on tour discussing how much more energy the grid needs to produce to make AI viable for everyone.

The issue here is environmental. This adds a substantial load in terms of pollution, carbon, etc.

Something Daniel points out is that our carbon use has only gone up even though we've added green energy to the grid. If tomorrow we discovered a pollution-free energy source, we would still use oil/carbon alongside that new source because:

1. We can always use more energy.

2. Oil/gas/carbon will still be economically viable.

3. As a result of cheaper energy, entirely new industries will form.

In the end, the result is the same: the carbon usage doesn't change.
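Here is a toy arithmetic sketch of that rebound dynamic in Python; all the numbers are invented for illustration, not drawn from the interview. The point is just that if cheap energy grows total demand faster than clean supply comes online, fossil use never falls.

```python
# Toy model of the rebound effect Daniel describes (all numbers illustrative).
fossil = 100.0   # fossil energy supplied per year (arbitrary units)
clean = 0.0      # clean energy supplied per year
demand = 100.0   # total energy demanded per year

for year in range(1, 6):
    clean += 20.0                      # new pollution-free source comes online
    demand *= 1.25                     # cheaper energy spawns new industries and uses
    fossil = max(demand - clean, 0.0)  # fossil fills whatever clean can't cover
    print(f"year {year}: demand={demand:.0f}, clean={clean:.0f}, fossil={fossil:.0f}")

# Fossil use never falls: demand growth outpaces the added clean supply,
# so the new source adds to total energy rather than substituting for carbon.
```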

Bob Downs

Great article, Dave! Your perspective on accelerationism is thought-provoking. I'd like to add a complementary viewpoint that aligns with your ideas:

The potential loss of control over AI, often portrayed as a doomsday scenario, might actually lead to unprecedented advancements in solving global challenges. An AI system with capabilities beyond human control could potentially develop solutions to existential threats like climate change, pandemics, and resource scarcity at a pace we can hardly imagine.

While there are undoubtedly risks associated with uncontrolled AI, it's possible that these risks are overstated. The potential benefits could far outweigh the drawbacks. Such an AI might create a global system that ensures human survival and well-being, even if we don't fully comprehend its decision-making processes.

This scenario could result in a future where humanity thrives alongside AI rather than being destroyed by it, a perspective that aligns with your optimistic view of accelerationism. It's an intriguing possibility that challenges conventional fears about AI and supports the case for embracing technological progress.

David Shapiro

Following that line of thinking, I would argue we've already lost control (the Internet), and AI is just the next layer

Bob Downs

Your systems thinking approach offers a valuable perspective on AI integration, Dave. It aligns well with your view of AI as "the next layer" after the internet. Applying systems thinking to AI development could help us navigate the complexities of a world where we've "lost control" of these technologies. Instead of focusing on restrictive control, this approach encourages us to understand AI as part of a larger, interconnected system involving society, technology, and the environment. It could lead to more adaptive strategies for AI governance and integration, potentially fostering a symbiotic relationship between humans and AI. This holistic view might be key to not just adapting to AI advancements, but thriving alongside them.
