David Shapiro’s Substack
David Shapiro
SB 1047 gets Vetoed, State of AI Safety Research, and the "Ineffability of AI"?

Gavin Newsom criticizes SB 1047's lack of empirical evidence, a shortcoming the AI "safety" community shares, and magical thinking still reigns in the conversation.

🧭 New Era Pathfinders

Check out my growth community for navigating the Fourth Industrial Revolution! The New Era Pathfinders is a group of people looking to find meaning and build a career in the next wave. I am now teaching FOUR frameworks for adaptation!

https://www.skool.com/newerapathfinders/about

  • TLC - Therapeutic Lifestyle Changes for a balanced, happy, and healthy lifestyle. This is an 8-pillar, evidence-based framework developed by Dr. Roger Walsh.

  • PBL - Project-Based Learning to master any skill or technology. It is extremely popular in schools, and it is how I learned everything as an adult.

  • Systems Thinking - To approach problems the way I do, and the way other geniuses like Mark Zuckerberg and Elon Musk do. Systems thinking is the most critical cognitive skill.

  • RUPA - My proprietary framework specifically for pivoting into the 4IR and Meaning Economy. It stands for “Reduce worry, Understand impact, Prepare for changes, Adapt and align.”

📜 California AI Regulation Bill Veto

Governor Gavin Newsom vetoed Senate Bill 1047, California’s AI regulation bill, and gave five main reasons for the veto:

  1. False sense of security: Newsom stated that the legislation “could give the public a false sense of security about controlling this fast-moving technology”.

  2. Lack of nuance: The governor argued that the bill “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data”.

  3. Overly broad application: Newsom said, “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it”.

  4. Inappropriate approach: In Newsom’s own words, “I do not believe this is the best approach to protecting the public from real threats posed by the technology”.

  5. Need for empirical evidence: The governor emphasized that regulation should be based on “empirical evidence and science”.

🔬 AI Safety Research Critique

The episode discusses ongoing efforts in AI safety research, highlighting concerns about the qualifications of some self-proclaimed AI safety researchers. It criticizes the AI safety community for placing too little emphasis on ensuring that its risk models correspond to empirical reality. The speaker argues that many in the field skip over relevant aspects of how the world works, relying instead on toy-model-style arguments and vague leaps of reasoning. The critique extends to prominent figures like Nick Bostrom and Eliezer Yudkowsky, questioning whether their postulates are valid scientific foundations for AI safety.

🏛️ OpenAI and Scientific Standards

The discussion touches on a debate involving OpenAI, particularly comments from Noam Brown in response to criticism from Yann LeCun. Brown argued that the widespread use of OpenAI's models serves as validation of its research, countering LeCun’s assertion that blog posts do not meet scientific standards of reproducibility and methodology. I side with LeCun and call for OpenAI to publish actual papers, conference presentations, and models for scrutiny. For reference, OpenAI hasn’t published papers at conferences since 2020.

🧠 Cognitive Plateaus and AI Capabilities

The episode explores the concept of cognitive plateaus and its implications for artificial general intelligence (AGI). Beyond a certain IQ level (around 120-130), increases in intelligence may not translate into proportional gains in real-world outcomes. This phenomenon, known as the threshold effect, suggests that an AGI, even one that thinks faster than humans, will not necessarily possess cognitive abilities beyond human comprehension. The speaker challenges the notion that AGI will think thoughts humans cannot understand, arguing that real-world constraints and diminishing returns on increased intelligence will limit AGI's cognitive horizons.
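
To make the diminishing-returns intuition concrete, here is a minimal toy sketch (mine, not from the episode) that maps an intelligence score onto a hypothetical real-world outcome through a saturating curve; the logistic shape, the ~125 threshold, and the scale are illustrative assumptions, not empirical claims.

```python
import math

def outcome(iq: float, threshold: float = 125.0, scale: float = 15.0) -> float:
    """Toy saturating (logistic) mapping from an intelligence score to a
    hypothetical real-world outcome in arbitrary units. All parameters are
    illustrative assumptions, not fitted to any data."""
    return 100.0 / (1.0 + math.exp(-(iq - threshold) / scale))

# Marginal gain per extra IQ point shrinks once scores pass the assumed
# ~120-130 threshold, which is the plateau the episode describes.
for iq in (100, 115, 130, 145, 160, 300):
    gain = outcome(iq + 1) - outcome(iq)
    print(f"IQ {iq:>3}: outcome ≈ {outcome(iq):5.1f}, gain per point ≈ {gain:.3f}")
```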

🤖 Ineffability of AI

The “ineffability of AI” refers to the potential for artificial intelligence to develop capabilities beyond human comprehension, while acknowledging that humans and machines share many of the same limitations. The concept asks whether AI might reach levels of understanding or problem-solving that humans cannot grasp or validate, weighing both cognitive horizons (the limits of what an intelligence can understand or compute) and cognitive plateaus (points where increased intelligence yields diminishing returns in real-world outcomes).

It accounts for various forms of diminishing returns in computation, cognitive ability, and IQ, as well as fundamental constraints like Bremermann’s limit (the maximum computational speed of a self-contained system in the material universe) and Gödel’s incompleteness theorems (which show that any sufficiently complex logical system contains statements that can be neither proved nor disproved within that system).

The notion is grounded in empirical, objective outcomes within our shared physical universe, recognizing humanity’s existing grasp of fundamental principles, and it incorporates the game-theoretic concepts of incomplete and imperfect information to acknowledge the inherent uncertainties of real-world scenarios. Despite the lack of historical observations of truly incomprehensible phenomena, the ineffability of AI asks whether artificial intelligence could develop insights or methodologies that transcend human understanding while still operating within universal constraints.
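
For a sense of scale on one of those constraints, here is a quick back-of-the-envelope calculation of Bremermann’s limit (roughly c²/h ≈ 1.36 × 10⁵⁰ bits per second per kilogram of mass); the example masses are my own illustrative assumptions.

```python
# Bremermann's limit: the maximum computation rate of a self-contained
# system is approximately c^2 / h bits per second per kilogram of mass.
C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck's constant, J*s

bits_per_second_per_kg = C**2 / H
print(f"Bremermann's limit ≈ {bits_per_second_per_kg:.2e} bits/s per kg")

# Illustrative masses (assumptions, not from the episode):
for label, kg in [("1 kg of matter", 1.0), ("human-brain mass (~1.4 kg)", 1.4)]:
    print(f"{label}: ≈ {bits_per_second_per_kg * kg:.2e} bits/s")
```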
