
How Will ASI Actually Be Deployed?

Many AI safety advocates have never been in a datacenter, deployed a server, or trained an AI model. That gives me pause about taking them seriously.
  • THE STATUS GAME by Will Storr - this book explains most of what’s going on with the top voices in AI safety, and a huge amount of human behavior more broadly, particularly around epistemic tribes.

Overview

In this episode, I dive into the complex world of artificial intelligence, examining the physical realities of AI deployment, the limitations of current AI safety arguments, and the psychological aspects of the AI safety movement.

Show Notes

I begin by emphasizing the importance of understanding how AI will be physically deployed. As a technologist with experience in data centers and AI model training, I argue that many AI safety advocates, particularly philosophers, lack crucial technical understanding. This gap in knowledge leads to unrealistic scenarios about AI “escaping” or becoming uncontrollable.

I explain that superintelligent AI will likely be confined to data centers, because that is where the necessary computing power resides. I address concerns about AI exfiltrating itself into robots or edge devices, highlighting the significant reduction in capabilities such a move would entail.
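To make that capability gap concrete, here is a rough back-of-envelope sketch in Python. The throughput figures are illustrative, order-of-magnitude assumptions (a modern data-center GPU cluster versus a high-end edge device), not numbers from the episode.

```python
# Rough, order-of-magnitude comparison of compute available to a model
# running in a data center versus one "exfiltrated" to a single edge device.
# All figures below are illustrative assumptions, not measurements.

DATACENTER_GPUS = 10_000      # assumed size of a large training/inference cluster
FLOPS_PER_GPU = 1e15          # assumed ~1 PFLOP/s (FP16) per modern accelerator
EDGE_DEVICE_FLOPS = 1e13      # assumed ~10 TFLOP/s for a high-end phone/edge NPU

datacenter_flops = DATACENTER_GPUS * FLOPS_PER_GPU
ratio = datacenter_flops / EDGE_DEVICE_FLOPS

print(f"Data center: {datacenter_flops:.1e} FLOP/s")
print(f"Edge device: {EDGE_DEVICE_FLOPS:.1e} FLOP/s")
print(f"A single edge device offers roughly 1/{ratio:,.0f} of the cluster's compute.")
```

Under these assumptions, moving from the cluster to an edge device means losing roughly six orders of magnitude of compute, which is the core of the argument that "escape" entails a massive capability downgrade.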

I discuss market forces and practical safeguards as natural limiters on potentially dangerous AI applications. Companies will build in safeguards to avoid liability, and even military applications prioritize control and “off switches” for their systems.

I then shift to a critique of current AI safety arguments. I contend that many of these arguments rely heavily on philosophical conjecture rather than empirical evidence or solid theoretical models. I highlight the lack of data and scientific consensus supporting extreme AI risk scenarios.

The psychological aspects of the AI safety movement are explored through the concept of “narrowing status games” in epistemic tribes. I discuss how limbic hijacking and status-seeking can promote alarmist views, drawing parallels to other movements like flat-earthers and anti-vaxxers.

I reflect on my own journey from being more aligned with AI safety concerns to becoming more skeptical. I explain how providing pushback and requesting evidence led to hostile reactions from some in the AI safety community, which I see as a sign of a narrowing status game.

The episode concludes with thoughts on the importance of first principles in making governmental decisions, especially regarding AI regulation. I express concern about potential overreach in regulating AI hardware and emphasize the need to balance safety concerns with fundamental rights and freedoms.

Throughout the episode, I stress the importance of grounding AI safety discussions in technical realities and empirical evidence, calling for a more balanced and informed approach to addressing potential AI risks.
