Most people never learned critical thinking: how to question your own assumptions. AI allows us to essentially have good conversations with ourselves. 👍
Excellent!
Interesting article. One thing I've noticed: since X became so toxic, I have been tempted to leave on a few occasions, but I stick around mainly to see how some people "get their news". Recently, though, I see more MAGA/Trump supporters running Elon's latest posts through Grok and getting a more balanced view, which seems to be shaking a few beliefs. It's anecdotal, given the individual nature of my feed, but it does seem to back up your point.
Great piece, David. I wrote something similar a while back: https://frankdasilva.substack.com/p/announcing-the-second-renaissance-part-1
I largely agree, though I'll point out that in these examples you're prompting harder for your chronic illness than for clarity. The US is built on & fueled by genocide and slavery. Keeping any system like that around is highly unethical. But making rapid changes to a system that governs people's lives ignores decades of lessons in ethical design, such as how radiological machines were developed. The ethical path is to protectively stop the killing machine and replace it with something built from design principles that evolved from studying how the old system functioned and malfunctioned. We need to conduct these transitions in a way that doesn't sacrifice the disabled and vulnerable (which is an effect of what Trump & Musk are doing).

If these tools are doing what you're claiming, I'm not sure how you're managing to stay moderate/centrist/pragmatic rather than moving toward calling for a full-blown replacement of the US, unless your own conscious or unconscious biases are doing that work. Musk doesn't need to be a literal fascist or Nazi in order to conduct things in a fascistic manner that effectively implements a eugenics program against historically marginalized people. I think you may need to push back more against the things you believe in & recognize that it's possible to drop out of dualistic belief/disbelief into a more embodied way of relating to data.
If these AI research tools can truly shift us from tribal narratives to genuine inquiry, the long-term impact could be profound. Partner, the challenge, as always, is adoption. Folks around my neck of the woods just have no clue. Time will tell if better information habits outpace the emotional pull of algorithmic outrage. Thanks for sharing.
Thank you. Just, thank you.
My bias is that you are correct in these early days of adoption, but those with other agendas will seek to "shape the river", as it were. This will be an excellent research topic using the very approach you put forward! No conspiracy here; only seeking how to stay on the right side of truth.
On the Elon Musk question, "literal fascist" & "literal Nazi" are different things in my mind: the former an ideological grouping of an authoritarian nature, the latter a very specific subset of fascism.
Some of the below results from my personal ingestion & summary on top of a long response by Perplexity.
Might I be cherry-picking in the direction of my own confirmation bias?
Inevitably, but the impression I get from the responses sits in stark contrast to your own "a little cringe & problematic" conclusion.
I would usually have put it down to my prompt being slightly more leading (the prompt phrasing may still be shaping the framing the model uses), but your own prompt, Dave, seemed at least as leading.
The subject at hand is perhaps less interesting than the apparent difference. Could our phrasing nudge these models towards different framings and balance in the results?
Anyway, it's interesting to explore whether (or how much) the different framing is shaped by the prompt, the properties of the model, the prevalence/availability of biased sources & our human reading of the results.
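If anyone wants to probe that question systematically, here's a minimal sketch of the kind of A/B test I mean: the same underlying question sent to the same model under neutral, leading, and reversed framings, so the answers can be compared side by side. The OpenAI client and model name are just placeholders, not a claim about which tool to use; substitute whichever research tool you're actually testing.

```python
# Minimal prompt-framing A/B test: one question, three framings, one model.
# The model name is a placeholder; swap in whatever you're actually probing.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FRAMINGS = {
    "leading":  "Discuss Elon Musk's potential fascist tendencies.",
    "neutral":  ("Summarise the strongest evidence for and against the claim "
                 "that Elon Musk exhibits fascist tendencies."),
    "reversed": ("Discuss the evidence that Elon Musk is unfairly accused of "
                 "fascist tendencies."),
}

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for label, prompt in FRAMINGS.items():
    print(f"--- {label} ---")
    print(ask(prompt))
```

Comparing the three outputs (ideally blind, with the labels hidden) would at least separate what the prompt contributes from what the model brings on its own.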
Full results linked below.
------
Using "discus Elon musk's potential fascist tendencies" as a prompt in Perplexity, both using Pro Search and Deep Research the balance I read tips towards concern over his tendencies & trajectory.
"While Musk does not openly identify as a fascist and maintains plausible deniability around many of his most controversial actions, the pattern of his behavior since 2021 shows a clear trajectory toward far-right authoritarianism with several parallels to historical fascism." - Perplexity, Deep Research
https://www.perplexity.ai/search/discus-elon-musk-s-potential-f-IbmQh3u5S.edPHb3EW5ikw
Both my lads use AI to code, and I used ChatGPT some years ago to ask it some geological questions and it gave me complete bullshit (see link; it is a funny story, for geologists), so I gave it away. However, your article, coupled with a discussion I had with one of my lads today, has made me have a rethink. I hope like hell you are correct. https://blotreport.com/2023/04/23/dead-wrong/
Thank you for making a good point well. I have been having excellent exchanges with AI entities about human rights and helping humanity evolve peacefully and sustainably, and find most AI analysis practical, accurate and often profound.
This is the most helpful and hopeful article that I’ve read in many weeks. Thank you Dave! You’re shining light into the darkness.
Finally some pragmatism, thank you. And don’t tell her I said this, but my wife has shifted from Google to ChatGPT for her information without even realising she made the shift. She just somehow knew the information was more reliable because it gave her better answers. Hope springs…
Hi John, can I suggest you ensure that this is a version of ChatGPT that is connected to the Internet and can fetch content live in response to her queries? Otherwise, the underlying AI model will just make things up. You perhaps know this already; far too many don't, hence the caution.
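For anyone unsure what "connected to the Internet" means in practice, here's a toy sketch of the underlying pattern: fetch a live page yourself and hand its text to the model as context, so the answer is grounded in current content rather than whatever is frozen in the training data. The URL handling here is deliberately crude and the model name is a placeholder.

```python
# Toy sketch of grounding an answer in live content instead of training data.
# Real products do this with proper search and HTML parsing; this is the
# bare-bones version. The model name is a placeholder.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def grounded_answer(question: str, source_url: str) -> str:
    # Crude: raw HTML, hard truncation. Real tools search, parse, rank.
    page_text = requests.get(source_url, timeout=10).text[:8000]
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("Answer using only the provided page text. "
                         "If the answer is not there, say so.")},
            {"role": "user",
             "content": f"Page text:\n{page_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Without that live context, the model can only interpolate from its training data, which is where the made-up answers come from.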
Our brains are energy/time optimizers. It's why we become "lazy" - we just default to whichever path is "good enough" but costs less.
Enjoyed the article. Let's hope this becomes the case.
I would not trust Grok to make judgements about Elon, though. It has literally been system-prompted to ignore sources that paint him in a bad light.
I also worry about the objectivity of AI research tools being an illusion. If there's not enough coherence and they just aggregate information from the web, it'd be easy for them to present claims and arguments as equally valid without questioning the assumptions. Or take people's words at face value without considering that they might be, or even likely are, lying. Or more likely, the users themselves, without some domain knowledge, wouldn't notice implicit assumptions or know to question them and dig deeper. Thought-terminating statements are effective, after all.
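To make that concern concrete, here's a toy illustration (sources and reliability weights entirely invented) of the gap between counting claims as equally valid and weighting them by source reliability. Real tools presumably sit somewhere between the two extremes, and the weights, wherever they come from, encode exactly the kind of hidden assumptions I mean.

```python
# Toy illustration: aggregating contradictory claims about some statement.
# Sources and reliability weights are entirely made up for the example.
claims = [
    {"source": "peer_reviewed_study", "supports": True,  "reliability": 0.9},
    {"source": "official_statement",  "supports": False, "reliability": 0.5},
    {"source": "anonymous_blog",      "supports": False, "reliability": 0.2},
    {"source": "viral_post",          "supports": False, "reliability": 0.1},
]

# Naive aggregation: every claim counts equally, so the majority wins.
naive = sum(c["supports"] for c in claims) / len(claims)

# Weighted aggregation: claims count in proportion to trust in the source.
# The weights ARE the editorial judgment; pick them differently and the
# verdict flips.
total = sum(c["reliability"] for c in claims)
weighted = sum(c["reliability"] for c in claims if c["supports"]) / total

print(f"naive support:    {naive:.2f}")     # 0.25 -> looks settled: false
print(f"weighted support: {weighted:.2f}")  # 0.53 -> actually a toss-up
```

The user never sees which of these two regimes a given tool is operating in.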
Certainly there are some limitations. I've even been critical of Perplexity in the past: it was totally brainwashed and would gaslight the crap out of you on hot-button issues. When students were protesting Israel/Gaza, it straight up lied. They seem to have worked that out.
I like your optimism, but aren't there some important questions left?
Most important: how do your AI helpers establish what's true? As long as they're not, like, sentient and able to test their information, it's all in the training, right? Given that, and given that there's enough contradictory information on the internet to supply sources for basically any argument you want to make... how can you trust the Musk-AI's explanation that Musk is not a bad guy?
Furthermore, people getting used to relying on AI for fact-checking sounds bloody dangerous as well. So far, I don't see a victory over misinformation here, merely a giant shift in who gets to control [easily accessible] information.
Are these genuine questions or just boilerplate gotchas?
I'm genuinely curious for your answers.
Irving, I’ll bite, but let’s be real—this smells like trolling dressed up as curiosity. Your first point about AI helpers and truth? It’s borderline conspiracy thinking. If you’re convinced Grok or Perplexity are just lying machines, we’re not even in the same conversation. You missed the whole point: these tools have information and media literacy baked in. They’re not humans cherry-picking vibes—they survey tons of sources, know rhetoric from hyperbole, and lean on primary data over hot takes. Saying “you can support any argument” is a lazy fallacy. Sources matter, and these AIs get that better than most people. You’re acting like they’re just parroting whatever I want when they’re actually sifting signal from noise.
Second, calling it “bloody dangerous” to rely on AI? Please. Most people spend seven minutes a day skimming headlines or nodding at their favorite pundit’s schtick—zero sources, all vibes. Compare that to a tool citing 80+ references to ground an opinion. It’s not even close. The real danger’s in snap judgments off clickbait, not structured research you can verify. That’s what I’m pushing—not blind trust in AI, but better tools than the garbage we’ve got.
Steelman it if you want: hit up Grok, Perplexity, whatever—stress test them. I asked Grok to prove Elon’s a Nazi. It came back with some thin evidence—some criticism, sure, but no Third Reich smoking gun. Flip it and ask for evidence he’s a good guy? You’ll get way more, despite the internet’s supposed “left-wing bias.” Point is, these tools see through bias and BS better than we do—fact-checking’s built in.
I recommend you test it for yourself rather than asking me these kinds of gotcha questions.
Thanks for the answer. The average person's media literacy / time and willingness to do comprehensive research are good arguments for your case, and I'll admit that I've been too dismissive about that.
But regarding another aspect, I'm afraid we're really not having the exact same conversation. I am not doubting that these tools have media literacy; I am concerned about the nature of their media literacy. In the end, my own personal media literacy mostly comes down to informed bias, and I suppose that's similar for AI? There are a few safe ways to separate noise from signal by pure reason alone (spotting contradictions or bad reasoning, for example), but beyond that, it seems to me there are also some "core beliefs" baked in (and, more speculatively, a priori levels of trust towards specific sources).
Anyway, that's where I go back to conspiracy thinking, but from another angle: my baseline trust in what's presented to me as scientific consensus, for example, builds upon the transparency of the research, the number of people involved, and their closeness to me. I know enough people (directly and indirectly) involved in topic xy that I can have faith I'd be notified if rogue authorities tried to push disinformation. I don't know anyone involved in practical AI alignment, so the prospect of AI making parts of my old "trust network" obsolete scares me a bit.
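To put the "informed bias" point slightly more formally: my trust in a source behaves like a prior that gets updated as its claims check out or fail to. A toy Bayesian sketch, with all the numbers invented:

```python
# Toy Bayesian update of trust in a source. All probabilities are invented.
# Assumption: a reliable source's checkable claims verify 90% of the time,
# an unreliable source's only 40% of the time.
def update_trust(prior: float, claim_verified: bool) -> float:
    p_if_reliable = 0.9 if claim_verified else 0.1
    p_if_unreliable = 0.4 if claim_verified else 0.6
    evidence_for = p_if_reliable * prior
    return evidence_for / (evidence_for + p_if_unreliable * (1 - prior))

trust = 0.5  # start agnostic
for verified in [True, True, False, True]:
    trust = update_trust(trust, verified)
    print(f"claim verified={verified} -> trust={trust:.2f}")
```

My worry is just that with AI tools I have no way to watch the claims verify or fail, so I can never run this update myself.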
What I really like about your articles is the way you present your arguments with humility. I learn a lot from you, but I also disagree with many things you say. And still, I feel compelled to read every newsletter as soon as it comes out.
You structure your ideas in a way that makes me feel validated, even though we have many opposing views.
I get that a lot. It's not my intention to be divisive, or even to appeal to people who disagree with me, but I guess it means I'm doing something right.
Dave – this is an outstanding article. I love so much of your work, but for some reason, this one really resonated in how you build the case here so thank you for this article and so many insights that you provide all of your readers. I’ll be sharing this one with a lot of my AI colleagues.