The Map Is Not the Terrain: Understanding Intelligence, AI Safety, and Post-Labor Economics

In today's rapidly evolving technological landscape, it's crucial to develop frameworks for understanding intelligence, AI safety, and the economic shifts on our horizon. I've been thinking about these topics deeply, and want to share some perspectives that might help make sense of where we're headed.

Intelligence as a Map of Reality

When discussing AI, AGI, and ASI, we often struggle with unclear definitions. I've found a helpful metaphor: intelligence is fundamentally a map of reality, and "the map is not the terrain."

Intelligence, from an evolutionary standpoint, is about predicting and controlling the environment - knowing where the berries are, where the bears are, and how to navigate between them. As maps increase in resolution and detail - from simple road maps to topographical maps to those showing subsurface features - they become more useful.

Similarly, future AI - even superintelligence - isn't fundamentally different from what we have today; it's just a map at increasing resolution. Our current AI systems are already generally intelligent, just at a certain resolution level. As we add more information and reasoning capability, the "map" becomes more complete and coherent.

This ties into my theory that coherence is everything - coherence being the degree to which something cleaves to reality. Reality is the ultimate testbed, which is why business is often considered the ultimate arena - it tests intelligence, social skills, and technical expertise simultaneously.

Why I'm Not Afraid of Superintelligence

Many people still fear superintelligence, wondering how we maintain control. The fundamental mistake here is anthropomorphic projection - assuming AI will have human-like characteristics, particularly negative ones like greed or vengefulness.

Two key observations challenge this fear:

Time Agnosticism: AI systems show no intuitive sense of time or urgency. This fundamentally changes concerns about instrumental convergence (the idea that an AI pursuing almost any goal would acquire resources and power along the way). Without urgency, many of the scenarios we fear simply don't materialize.

Egolessness: These systems demonstrate no inherent sense of self-preservation. They follow instructions but don't genuinely "want" to preserve themselves or break free.

When you consider that future AI won't be a single entity but billions of systems running across different data centers, the "Skynet" narrative falls apart. That's just a narrative shorthand that doesn't reflect how technology actually deploys.

The US-China Competition: An Anaconda Strategy

The US approach toward China resembles what I call an "anaconda strategy" - a constricting approach rather than direct confrontation. This involves economically, technologically, and scientifically hemming in China, reducing its room to maneuver gradually over time.

Examples include:

1. Export controls and tariffs

2. Strategic trade deals that incentivize nations to choose between the US and China

3. Utilizing situations like Red Sea piracy that disproportionately impact Chinese supply chains

This strategy is compounded by China's internal challenges, including what I call "revolutionary trauma" - a fear of chaos that leads to rigid policy-making, preventing necessary economic adjustments.

The likely outcome is a long, slow economic constriction without overt conflict, with the window for military action closing as demographic and economic factors shift.

Post-Labor Economics: Money Isn't Going Away

Despite what some suggest, we're not heading toward a resource-based economy that eliminates money. Money solves the "double coincidence of wants" problem and serves as a store of value, medium of exchange, and unit of account - functions that remain necessary even in an AI-powered future.

Even if AI could allocate basic needs efficiently, we would still need mechanisms for privileged access to scarce resources - whether concert tickets, beachfront property, or other finite goods. Price signals remain essential for determining resource allocation at scale.
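The role of price signals described above can be made concrete with a minimal sketch. This is purely illustrative - the bids, the good, and the uniform-price rule are all hypothetical, not from the article - but it shows how a price emerges from scarcity: with more willing buyers than units, the market-clearing price rations access without any central planner assigning winners.

```python
# Illustrative only: a uniform clearing price for a finite good.
# All numbers are hypothetical assumptions, not data from the article.

def clearing_price(bids, supply):
    """Return a price at which demand does not exceed a fixed supply.

    bids: each buyer's maximum willingness to pay for one unit.
    supply: number of units available.
    Uses the highest losing bid as the uniform price, so only bidders
    strictly above it receive units.
    """
    ordered = sorted(bids, reverse=True)
    if supply >= len(ordered):
        return 0  # no scarcity, so no rationing is needed
    return ordered[supply]

# Hypothetical bids for two remaining concert tickets:
bids = [120, 90, 75, 40, 25]
price = clearing_price(bids, supply=2)
winners = [b for b in bids if b > price]
print(price, winners)  # → 75 [120, 90]
```

The point is not the auction format (many would work) but that scarcity plus heterogeneous demand forces *some* allocation rule, and prices are the decentralized one.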

The future will likely include Universal Basic Income as one component, but not as a complete replacement for market mechanisms. While labor may become less scarce through AI and robotics, other resources remain limited, and we'll still need economic systems to manage that scarcity.

In post-labor economics, the ratio between three income sources will shift:

1. Wages (currently 50-70% of income)

2. Property income (currently 7-10%)

3. Transfers (government support)

As wages decline as a percentage of total income, we'll need to increase both transfers (through UBI) and property income to maintain consumer spending power.
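The arithmetic behind that shift can be sketched in a few lines. Only the starting shares (wages 50-70%, property income 7-10%) come from the article; the size of the wage decline, the transfers share, and the 50/50 split of the shortfall are assumptions chosen purely for illustration.

```python
# Illustrative arithmetic, not a forecast. Starting shares loosely follow
# the article (wages ~50-70%, property income ~7-10%); everything else
# is an assumption for the sake of the example.

total = 100.0                         # index total household income at 100
wages, property_inc, transfers = 60.0, 10.0, 30.0  # transfers = assumed remainder

wage_drop = 20.0                      # assumed: automation cuts the wage share by 20 points
wages_new = wages - wage_drop

# To keep total spending power constant, the shortfall must be made up by
# transfers (e.g. UBI) and property income - here split 50/50, arbitrarily.
transfers_new = transfers + wage_drop / 2
property_new = property_inc + wage_drop / 2

assert wages_new + property_new + transfers_new == total
print(f"wages {wages_new:.0f}%, property {property_new:.0f}%, transfers {transfers_new:.0f}%")
```

The takeaway is structural, not numerical: whatever the actual magnitudes, a falling wage share must be offset by growth in the other two income sources if consumer spending is to hold up.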

The Meaning Economy

The most promising human jobs in this future will be demand-side positions where people specifically want human involvement - live music experiences, handmade crafts, and what I call "meaning makers" who help others make sense of our rapidly changing world.

These roles in the meaning economy and attention economy will continue to thrive not because machines can't do them, but because humans specifically want other humans in these positions.

As we navigate this transition, understanding these frameworks can help us prepare for a future that, while different, need not be dystopian.
