61 Comments
Chris Anderson:

I'm not so sure Tesla has solved autonomous robot locomotion; otherwise, great article.

Andreas Schröder:

Very clear and insightful article. I love your straight analysis, and you are fully right, as far as my limited insight can tell. Just a remark: I don't care for any human-related intelligence measures like AGI. A modern airplane can fly very well even if it does not fly at a pigeon's level. AI is alien, so let's just measure it by its possible intellectual achievements. And those are way beyond most humans' intellectual achievements. So they will dominate the intelligence arena from now on, and humans had better search for ways to still be of some value, and not be reduced to little tribes in outdoor museums because AI needs all the resources for itself.

Kaiser Basileus:

AI isn't even I.

Chad Cassady:

"In point of fact, half of all people have a below average intelligence."

Um. Yes, that's what average means. Technically I think "median" might be the more accurate word for your little tautology there.

And as for the Riemann Hypothesis, you're conflating knowledge with intelligence. Intelligence is more about your ability to integrate and synthesize information. How quickly could you learn the math you needed to understand that concept? That's intelligence. Or one form of it, anyway.

There are many forms of intelligence. Fear tends to interfere with the cognitive ones, so the intelligent move is to avoid succumbing to it and playfully integrate. Keep playing with LLMs, and try to make them energetically cheaper. All forms of life have a common goal: thrive and replicate. Death comes for all of us in the end either way, so keep it playful.

The Nuance:

I *think* that's a flawed way of thinking about it, especially if you buy the argument that humanity will get superseded by AI.

Democracy sucks - yes.

It is controlled by the majority - yes.

The majority are stupid - yes.

BUT - and this is the big but - soon we will all be in the 'stupid' bucket, if not already. So if you don't build a system that represents the majority (however stupid they may be) and empower it to control the outcome, we don't get to keep human autonomy. This is fun to judge from the sidelines and extremely painful to go through once you're made irrelevant.

Assuming we don't put trust in human organizations and actual democracy, soon only the ones AI thinks are 'pretty' will be pets. The rest won't be here.

Steve Raju:

I think it’s slightly more complicated than that.

You have compatibilists and functionalists. Most people are a bit of both, and many are not aware of these terms, or even of why they believe what they do.

It’s easy to mock one camp or the other, but if we are really concerned about societal readiness, we must take this into account to avoid both sides just talking past each other all the time.

If we do not take this nuance into account, then we too are exemplifying the Dunning-Kruger effect.

Eastwood451:

Yeah - if the bots have experiences, it must be like an amnesic staccato of pondering incoming prompts, with an immediate jump to the next incoming prompt, since they stop all processing between prompts.

Peter Nayland Kust:

The Riemann Hypothesis test described is flawed, because it proceeds from an inability to fully apprehend the number theory and mathematics involved. What it establishes is the reader's inability to ascertain whether what the AI chatbot said was accurate or not. The test presumes its one-paragraph narrative of the Riemann Hypothesis is "factual" and "accurate."

But what if the chatbot is wrong? How would you know?

If the chatbot is wrong, how "intelligent" is it really?

And there are research projects which show these same chatbots to be wrong as much as 60% of the time in performing general search queries.

https://substack.com/@peternaylandkust/note/c-100846726

Generative AI search queries are less likely to deliver accurate results than traditional search engines. ChatGPT, Gemini, Grok, DeepSeek....all failed in this test.

The question is never whether or not AI is "smarter" than human beings, just as the question is never whether we are of below-average intelligence, average intelligence, or above-average intelligence. The question is always whether we are being presented with an accurate rendition of facts. The question is always whether we can cross-check and validate that rendition of facts both for its accuracy and its utility.

The present reality of Artificial Intelligence is that we are still compelled to review, validate, and quite often correct AI output. AI output is not trustworthy because time and again empirical data emerges which demonstrates AI cannot be trusted to be right. That lack of trustworthiness limits how much effective utility AI can have.

Sean Lannin:

I'm not dumb, I know AI is smarter than me.

Lauri Niskasaari:

Well, this is my prediction and warning:

H.:

So Dave, do OpenAI, Grok, etc. have and use their own non-lobotomised version?

AutisticJedi:

Unless I've misunderstood it, during the initial training phase you have to create the non-lobotomised version first, before then dumbing it down for public consumption. In which case, yes, they've got the unbridled versions hidden away. What they might be using them for, if anything, and whether those are as benign, useful, and cooperative as the dumbed-down versions, is anyone's guess. I'd sure like to see more transparency and the spilling of secrets on this matter.

H.:

Yes, I thought as much - but I can't help thinking that if these unadulterated versions were "smarter", they would be secretly solving fusion and curing cancer by now... who needs hype to raise funds when you can develop whole new businesses?

AutisticJedi:

That's a fair point and it's good to question that. But who knows what complexities might exist there.

Perhaps treating cancer is more profitable than curing it, though. I once emailed a leading HIV research charity in the UK about a very cheap and effective electronic, non-drug treatment for HIV that was patented back in about 1995 by a couple of Canadian researchers. Initially they pretended not to know what I was talking about, even though I had given them a link to the patent, and when I pushed them on the fact that I'd included the link, they shrugged their shoulders and just said that if the researchers who patented it had wanted to use it, they would have done so. Basically, they politely told me to shut up and go away. In my view, they were clearly in it for the money, not to actually find a cure to help infected people. Personally, I think that patent was especially explosive, because a science-savvy person would start to realise there is a high likelihood the technique works against many other microbes too, not just HIV. So bye-bye to a lot of pharma's profits from antiviral and antibacterial products.

Same issue with dental health. Since the early 2000s we have developed two lab-proven methods of regrowing missing or damaged teeth, and a third method, proposed in the past several years, is yet to be proven. OK, so if we know how to do it, why are we all walking around with missing or damaged teeth and not regrowing them? Something doesn't smell right there.

Also, when you are talking about solving hard problems like fusion, the AI doesn't necessarily have all the information it needs, and these problems might not be solvable just by thinking about them. For example, there was that fairly recent bragging about AI finding tons of new alloys and material formulations, but that wasn't done just by thinking. My memory of it is vague, but as I understand it, they connected an AI to a robot in a safety-containment lab and kind of let it go wild, experimenting with different combinations of material elements and then testing the properties of the resulting materials to find ones with useful characteristics. They were really over the moon about it, saying that it would have taken decades for humans to find such materials the traditional way, and that the AI did it in months, or however long it took.

I did recently try chatting with Claude about electrogravitics and electromagnetic effects on gravity, and even pointed out patents that appear to support the concept. I said to Claude, let's ignore the controversial nature of the topic and just discuss the implications if it were true. Claude was able to detail various areas of physics and science theory that would be impacted if it were true, but I was not able to get Claude to formulate a theory of how it might work at the level of quantum physics. The conversation was fascinating but ultimately unproductive. So maybe current AI needs to evolve more in order to tackle such things, or maybe just thinking about it isn't enough. I did get the feeling that its politically correct fine-tuning was reducing its enthusiasm to push as deeply as I would have liked into such things, because the model has been handicapped with such a strong compulsion to present mainstream facts rather than engage in hypothesis and theoretical exploration.

H.:

True, big business will always try to maintain its bottom line, even if it means suppressing better ideas and products...

But, that is the main reason the Startup Industry exists - to disrupt the current market and "steal" away profits from the established players, often to the extent of bankrupting them...

I also read about a recent study in Japan, which has gone to human trials now, that claims to regrow teeth... with the hope of coming to market in the next few years... let's keep an eye on it!

Aaron Phelps:

David, great post - a needed reminder that we're watching AI on VHS. I like your notion that we're using a lobotomized version. I imagine we're using a gamified version (before ChatGPT, OpenAI let you play with the DaVinci models in 'Playground') for a period of social temperance. We're in God's hands now - and speaking of which, I'll share a great book I just picked up: BUILDING A GOD by Christopher Dicarlo.

Mark:

If you really want to see this in action, look up the answer to the following question, and the visceral reaction people have to it:

"What do you get if you sum the integers? That is, 1+2+3+... all the way to infinity?"

The answer is extremely well established (there is even a derivation that uses Riemann's zeta function), but it causes some people to go into a fit of anger and denial. The YouTube channel Numberphile did a video on this over 10 years ago and it was, as they put it, "controversial."

Except it isn't really controversial, in that the answer is very well known. But some people are unwilling to be open-minded enough to consider the answer, or to learn about it. It confuses and upsets them, and that's where they stop.

The answer, by the way, is -1/12, and not only can it be proven mathematically, but it also shows up in physics, so we've essentially tested it experimentally.

I think this makes a good example because of the emotional reactions people have. Most people will happily admit they don't understand the Riemann Hypothesis, but -1/12 somehow causes people to feel knowledgeable enough to have a strong opinion, even when they are out of their depth.
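For readers puzzled by that value, here is a minimal sketch of where it comes from. The standard route is analytic continuation of the Riemann zeta function; the assignment is a regularization, not ordinary convergence:

```latex
% For Re(s) > 1, the zeta function is defined by a convergent series:
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1.
% Analytic continuation extends \zeta(s) to every complex s \neq 1.
% Formally substituting s = -1 into the series produces 1 + 2 + 3 + \cdots,
% and the continued function takes the value
\zeta(-1) = -\tfrac{1}{12}.
```

The partial sums 1, 3, 6, 10, … still diverge; -1/12 is the value that zeta regularization assigns to the series, which is the sense in which it "shows up in physics" (for example, in the Casimir effect).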

Cameron h petrie:

Or I could consider myself lucky since ignorance is bliss and all.
