12 Comments
Intel for the Quantum Info Age

Brilliant, the right phrase at the right time! I was just writing up findings for a journal publication on using LLMs to synthesize novel research findings, and was looking for a better phrase to describe convergence as our human collective intelligence increases, i.e. as the n of what we know that is available to train a large language model increases, the ability to interpolate approximates some ineffable universal truth with higher confidence... or the informational universe hypothesis, as it were.

DC Reade

This post is all very thought-provoking, but still almost entirely in the realm of speculative futurism and abstract theory. Empirically speaking, I've encountered some AI applications, and while I comprehend the utility of some of them--they're certainly useful for clinical scientific experiments in controlled conditions--I'm not finding anything all that inherently supergenius about any of them. Clever, yes; ingenious, sometimes. But as an authentic breakthrough, not impressed so far. Granted, I'm about as ordinary an Internet user as it gets. But your overview of the very near future--with pronouncements like "AI is getting smarter by the week"--has yet to register with my experience of what's out there in ground-level, real-world conditions. The robot acrobat simulation that everyone has seen was nice, for example. But why isn't it cordless, leashless, without a tether?

I'm able to follow your speculations far enough to be in general agreement on these points:

"Regulatory capture is bad. This sets us on the cyberpunk dystopian pathway.

Open source is good. This is a form of transparency which also increases safety research and democracy. It increases free market competition.

Transparency is good. Accountability for corporations and politicians. Information asymmetry is bad."

The problem is where the Internet is at now, in reality. Regulatory capture--the siren server/platform/advertising attention economy model--has become the new status quo of privatized Enclosure, and I remember that this was not the case 23 years ago, the era of Google 1.0, so to speak.

Open source is still around to some extent, thankfully. But at the level of general information sharing of text by humans, it's been impaired, on account of increasing resort to paywalls, censorship schemes, and disinterest in diligently archiving knowledge--primary source communications, historical findings, general-access knowledge of all sorts. A lot of pages are gone from the Internet, and only some of the missing material was deleted because it was duplicated elsewhere. Unless NSA data servers have been fulfilling functions similar to a cyber-Library of Congress, or something along that line: a super-archive of recorded information. Which brings us to the last point:

Transparency is a laudable ideal, for corporations and politicians. It doesn't exist. Data mining is a one-way mirror. The common folk are encouraged to share every conceivable aspect of their lives and social networks and expose it to--at some level--supervisory review, but there's no reciprocality present.

It's my impression that the Internet has regressed from a formerly promising open frontier with a widening horizon, as compared with the early 2000s. Will AI reset that situation, or continue the enclosure? Can ordinary users at least obtain some agency in how a platform's algorithm curates and suggests information input, for example?

In my interactions with social media platforms, I'm increasingly wanting an algorithm that permits personal agency to allow adding some quotient of random input to whatever the machine has concluded about my tropisms. It's difficult to find novelty and unpredictability from an information feed that's programmed to analyze my interests, desires, and antipathies and deliver only what I "want." Paradoxically enough, that isn't what I want from an information feed. I find the notion of an information/communication conduit that tells me only what I'd prefer to hear to be patronizing. And suspiciously malign, frankly.

I don't get how any of that has become the default standard, with no way to escape it within the context of the platform. I'm not interested in trying to inform an algorithm in order to "steer" it. I can do my own keyword searches! Too bad keyword search functions are so markedly inferior, compared to the Google 1.0 era.

Herbert Heyduck

If I may use an analogy: you won't find anything ingenious in your everyday dealings with people either. You have to look for it. That's exactly how I see dealing with AI at the moment. There are already some ingenious ideas that Claude has come up with, but I had to look for them; I first had to experiment with the questions and spent a lot of time doing so. When I asked Claude to critique one of my images, I was very surprised when Claude offered me metaphorical comparisons that I hadn't even thought of myself.

Claude couldn't have found these comparisons on the internet; she had probably figured them out on her own.

But of course, it would take a more detailed investigation to find out whether this new "thought" amounted to a completely new idea.

Alvin W. Graylin

Dave, we all want to get to the utopian outcome, but your belief that we're already on the path towards it feels more like hopeful thinking. I've been talking extensively to folks in the AI industry and policy makers around the world. The forces towards regulatory capture still exist. The escalation of geopolitical conflict and the militarization of AI are increasing. The rhetoric and spending on an AI arms race/war are trending higher. Preparation of defenses/mitigations against bad actors' use of AI is almost non-existent. Planning for UBI/UBS to deal with job displacement in major countries hasn't started... What evidence exactly gives you confidence that we're in the attractor well for utopia? I want to believe; I just don't see that we're at a point where we can just sit back and let it play itself out. (I wish we were.)

Red Young

We are going to end up in the maximally desirable state. Because reasons. OPTIMISM!!!

Captain Mavis 23

I'm curious what you think about the current use of energy and water concerning your plea to use AI as much as possible.

I do use it quite a bit, but with cognitive dissonance in regard to resource consumption.

I do not quite yet understand your argument that saturating information and spreading the use of AI will lead to a more desirable outcome.

Can you elaborate a bit on this point?

On the rest, agreed... we do not know what the outcome of this technological revolution will be.

Still putting belief and imagination towards the more desirable solarpunk utopia.

Intel for the Quantum Info Age

In statistics, the law of large numbers guarantees that as the sample size (of a larger unknown population that you're trying to describe) increases, you can have confidence that the sample will come closer and closer to revealing the truth about that population. As it relates to AI, the more information it has that represents the entirety of existence--and in particular our human communication, like writing--the more likely its outputs and ours will align (globally, mind you). Which is a very strong case for never restricting the universe of information it has, even though in our lizard brains we think we need to impose some filter for ethics or decency, etc.
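A minimal simulation sketch of that convergence (purely illustrative; the distribution, the "true" mean, and the use of NumPy are my own assumptions, not anything stated above):

```python
# Law of large numbers as a toy simulation: the mean of an ever-larger
# sample drifts toward the underlying population mean.
import numpy as np

rng = np.random.default_rng(0)
population_mean = 3.7                      # the "truth" we are trying to recover
samples = rng.normal(loc=population_mean, scale=2.0, size=100_000)

for n in (10, 100, 1_000, 10_000, 100_000):
    estimate = samples[:n].mean()
    print(f"n={n:>7,}  sample mean={estimate:.4f}  "
          f"error={abs(estimate - population_mean):.4f}")
```

The error column shrinks (on average) as n grows, which is the convergence being gestured at here.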

Intel for the Quantum Info Age

This is also a corollary of what's referred to as the wisdom of crowds: if you have a jar of pennies (since no one seems to want them anymore) and you independently ask people to guess the number, without letting them know the guesses of others, then the more guesses you have, the closer the average of those guesses will be to the exact number.
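A toy sketch of that penny-jar setup (illustrative only; the jar size, the noise model, and the use of NumPy are assumptions I'm adding, and it presumes the individual guessing errors are unbiased):

```python
# Wisdom of crowds, penny-jar style: independent, noisy-but-unbiased guesses
# average out to something close to the true count as the crowd grows.
import numpy as np

rng = np.random.default_rng(1)
true_count = 1_274                         # pennies actually in the jar

def crowd_estimate(n_guessers: int) -> float:
    # Each person guesses independently, without seeing anyone else's guess.
    guesses = rng.normal(loc=true_count, scale=300, size=n_guessers)
    return guesses.mean()

for n in (5, 50, 500, 5_000):
    print(f"{n:>5,} guessers -> average guess {crowd_estimate(n):,.0f} "
          f"(true count: {true_count:,})")
```

If the individual errors were systematically biased (say, everyone lowballs), the average would converge to the wrong number, which is the usual caveat on this effect.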

P.S.: This is completely tangential and nerdy, but I'm wondering what the outcome would be if you asked everyone in the US, or everyone on Earth, how many stars there are in the universe, or some other unfathomably large number (perhaps infinite? The universe is still expanding).

David Shapiro

Um what. AI doesn't use that much water. You've been lied to.

Red Young

I think AI will figure out energy and climate issues with ease. But it will take time and cooperation.

Captain Mavis 23

Possible.

Comment deleted (Feb 26)
Red Young

Same. It feels comforting to have a guide for the future. Or at least a good guess at it.
