The Truth Renaissance: AI Will End The Epistemic Dark Age
AI research tools like Perplexity Deep Research and Grok Deep Search are already changing our relationship with news and information.
In 1517, Martin Luther nailed his 95 theses to a church door, and within decades, Europe was drenched in blood. The printing press had democratized information, broken the Church’s monopoly on truth, and unleashed chaos. What followed wasn’t just religious war—it was epistemic collapse. For a century, people literally couldn’t agree on what was real.
Fast forward five centuries. We’re in another epistemological wasteland. Social media tribes walled off in parallel realities. Cable news networks selling entirely different versions of the same day’s events. Politicians living in alternate universes. It’s the dark age of truth. Narrative and vibes dominate; just look at my work on AI doomsday prophecies.
Here’s a link to a Perplexity Deep Research run on whether there’s any actual evidence that AI poses an existential threat to humanity: https://www.perplexity.ai/search/is-there-any-concrete-evidence-08v2nJeoQu6_q2ldsBlDSQ
(TL;DR: no, there’s nothing other than some theoretical frameworks.)
But there’s a revolution brewing. Our Gutenberg moment isn’t coming—it’s already here. And unlike the printing press, which initially fragmented truth before ultimately elevating it, AI research tools are compressing the cycle. They’re building bridges across the epistemic divide in real time.
By and large, our information systems have become corrupted. TikTok floods young minds with algorithm-driven nonsense, and the youngest don’t even know how to use Google (or that Chinese algorithms warp their sense of reality). Twitter rage-bait is engineered for maximum engagement, minimal enlightenment. Mainstream media is no better: so long as people are confused and angry, they will tune in. This is called “flooding the zone.” No wonder we can’t agree on basic facts anymore.
So what’s going on? How have we arrived at such a fractured cognitive landscape? It’s a slower burn than the French Revolution, a longer bleed out. It’s not just about disaffected men with grievances or economic collapse—though those factors exist too. It’s about the fundamental disintegration of shared reality.
There’s a missing piece in our truth architecture: systems that help us process and filter the overwhelming noise. We don’t need fewer voices—we need better tools to make sense of them all.
This is where AI enters the picture, not as the villain but as the hero. I had a fan on my Discord server constantly posting conspiracy theory links—deep state nonsense, JFK abduction theories, the whole tinfoil catalog. The websites were those old-school forums, with all the dark-and-spooky aesthetics of early 2000s conspiracy videos. I introduced him to Perplexity as a research tool. The transformation was immediate. Suddenly, he was posting well-researched analyses instead of bunk. Not because the AI censored him—but because it empowered him to ask better questions and get better answers. That was a turning point for me; I realized that the AI tools had information literacy baked in—not only are these tools trained on news, media, and history, they also know how to do research better than most people.

It all comes down to energy. That’s physics, not metaphor. Information is physical (processing, storing, and erasing it all carry real energy costs), and AI tools like Grok Deep Search, Perplexity Deep Research, and OpenAI’s Deep Research engines are energy conversion systems. They take the chaotic heat of the internet and transform it into useful work: organized, contextualized knowledge. When someone accuses Elon Musk of being a “literal fascist,” these tools let you ask: What’s the evidence? What’s the context? What are the sources saying? The resulting analysis isn’t politically charged—it’s factually grounded. When I got retweeted by Musk recently, a few dozen fans lost their collective marbles—“Unsubbed! You’re a fascist and Nazi supporter!” So I asked both Grok and Perplexity to put together their best arguments that Elon is a literal fascist and a literal Nazi. Is he cringe and problematic? Absolutely. Is he calling for the internment of “undesirables” and the establishment of a Fourth Reich? Not even close.
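If you want to see how mechanical this actually is, here’s a minimal sketch of that evidence-context-sources query. Perplexity exposes an OpenAI-style chat completions API; the model name and the prompt below are my illustrative assumptions, not a canonical recipe:

```python
# Minimal sketch: asking a research engine for evidence, context, and sources.
# Assumes Perplexity's OpenAI-compatible chat completions endpoint; the model
# name ("sonar") and the prompt are illustrative assumptions, not endorsements.
import os
import requests

PROMPT = (
    "Claim: Elon Musk is a 'literal fascist.' "
    "What is the evidence for and against? What context matters? "
    "Cite your sources."
)

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={
        "model": "sonar",  # assumption: a current search-grounded model name
        "messages": [{"role": "user", "content": PROMPT}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The point isn’t this particular snippet; it’s that “what’s the evidence?” is now a single grounded query rather than an afternoon of googling.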
Think of these tools in terms of “information thermodynamics.” The internet raised the temperature of human knowledge to a boil, creating chaos. The blast radius of misinformation is global now. AI tools are acting as sieves, bringing order to chaos, and reducing entropy. They find the pattern in the noise, the signal in the static. It’s not about whether you like Musk or not—it’s about having the same basic facts to argue from. It’s about the volume of information you can use to inform your opinions.
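To make the thermodynamics framing concrete, here’s a toy calculation of my own (the numbers are invented for illustration; nothing here comes from the tools themselves). Shannon entropy of your attention across a feed drops when a filter concentrates it on the well-sourced items:

```python
# Toy illustration of "reducing entropy": Shannon entropy H = -sum(p * log2 p)
# of attention spread across feed items, before and after filtering.
# All numbers are invented for illustration.
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Before: attention spread evenly across 8 items, sourced or not.
raw_feed = [1 / 8] * 8
# After: a research tool concentrates attention on the 2 well-sourced items.
filtered = [0.45, 0.45, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01]

print(f"raw feed: {entropy(raw_feed):.2f} bits")   # 3.00 bits
print(f"filtered: {entropy(filtered):.2f} bits")   # ~1.62 bits: order from chaos
```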
One thing to keep in mind: the silent majority tends to be reasonable. The loudest voices on the internet are still often the most dysregulated and least intelligent. My personal rule of thumb is this:
“Stupidity, dysregulation, misinformation, and trolling are often indistinguishable on the internet” ~ Shapiro’s Law
AI, however, has seen it all in its training data, and these research tools cut through that noise.
Do I like Musk and Trump as humans? No, not in the least. Are they literally destroying American democracy? Also, no. The more I use AI tools the more moderate, centrist, and pragmatic I become.
The same thing happened with John J. Mearsheimer. Someone told me not to read his book, The Tragedy of Great Power Politics, because he was “anti-Semitic.” Generally speaking, people who try to police your thoughts and what you read are the problem, but anyway. The AI research revealed that this claim originated primarily from his critique of Israeli policy, not of Jews—a critical distinction buried under political noise. Without these tools, that nuance would be lost, another casualty of tribal thinking. He also once said something vaguely positive about another book, and that book has been accused of having “anti-Semitic undertones” (whatever that means). So he’s critical of Israel and liked a book that some people don’t like, and that got blown out of proportion into “He’s literally anti-Semitic and hates Jews and you should not read his work.”
All that nonsense got nuked by using AI research tools. And it only took a few minutes.

Here are the final results from the above: https://x.com/i/grok/share/vGCmApI02RirkiqDjx6YpUA1X
Why are these AI systems different from what came before? Pure computational scale. They can read thousands of pages per second, processing more information in minutes than a human could in months. But there’s something else: interactivity. They create a feedback loop of inquiry. You ask, they answer, you challenge, they respond with sources. It’s Socratic dialogue supercharged.
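That feedback loop is simple enough to sketch (the pattern only: ask_model is a hypothetical stand-in for any chat-completions call like the earlier snippet, not a real library function):

```python
# Sketch of the ask -> answer -> challenge -> answer-with-sources loop.
# ask_model() stands in for any chat-completions call (see earlier snippet);
# it is a hypothetical helper, not a real library function.
def socratic_session(ask_model, opening_question: str, rounds: int = 3):
    history = [{"role": "user", "content": opening_question}]
    for _ in range(rounds):
        answer = ask_model(history)  # model answers, ideally with citations
        history.append({"role": "assistant", "content": answer})
        challenge = input("Your challenge (blank to stop): ")
        if not challenge:
            break
        # Each challenge carries the full history, so the model must defend
        # or revise its earlier claims rather than start fresh.
        history.append({"role": "user", "content": challenge})
    return history
```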
I’m on the cutting edge of this shift.
What happens when these tools reach critical adoption? We’ve seen this pattern before. The printing press eventually led to the Enlightenment, but only after a century of chaos. Radio initially empowered fascism before becoming a democratizing force. The internet gave us both Wikipedia and 4chan. Each information revolution has a destructive phase followed by a constructive one.
But AI might accelerate this cycle. Instead of waiting decades for new epistemic norms to develop, we could see them emerge in years. Every person who shifts from getting news via algorithm-driven feeds to interrogating primary sources through AI assistants is a crack in the current system.

The techno-pessimists have it backward. They fear AI will amplify misinformation when the evidence suggests the opposite—it’s a natural counter to the very distortions that social media algorithms created. The grievances that fuel our modern tribalism won’t disappear overnight, but they’ll be harder to manipulate when facts are readily accessible.
My audience might be surprised by this optimistic take. After all, I’ve said it will get worse before it gets better. I still believe that. We’ll see more Waymos torched, more longshoremen striking against automation, more rage against the machines. Those are the visible convulsions of change.
But beneath the surface, something profound is happening: thousands of quiet conversions. People abandoning conspiracy sites for research tools. Teens skipping the TikTok rabbit hole for substantive exploration. Politicians finding it harder to lie when fact-checking is instant and comprehensive.
Long story short: the epistemic crisis will end not with a bang but with a whisper—millions of people quietly upgrading their information diet. This is what structural intervention looks like. Not forcing people to “consume better news” but creating superior tools that naturally lead to better understanding. The network effects of better information will scale, and ultimately have as big an impact as (if not bigger than) the printing press, radio, and internet combined.
The primary parties remain the same: tech elite, establishment politicians, and ordinary citizens. But the balance of power is shifting. When everyone has access to the same baseline facts, the game changes. We’ll still disagree—humans always will—but we’ll disagree about interpretations and values rather than about what happened yesterday.
Fast takeoff might still be the catalyst, but not in the dystopian way the pessimists imagine. The AI revolution won’t destroy our epistemic landscape—it will rebuild it.
Finally some pragmatism, thank you. And don’t tell her I said this, but my wife has shifted from Google to ChatGPT for her information without even realising she made the shift. She just somehow knew the information was more reliable because it gave her better answers. Hope springs…
On the Elon Musk question, "literal fascist" & "literal Nazi" are different things in my mind; the former being an ideological grouping of authoritarian nature, the latter being a very specific subset of fascism.
Some of the below results from my personal ingestion & summary on top of a long response by Perplexity.
Might I be cherry picking in the direction of my own confirmation bias?
Inevitably, but the impression I have from the responses sits in stark contrast to your own "a little cringe & problematic" conclusion.
I would usually have put it down to my prompt being slightly more leading - the prompt phrasing may still be shaping the framing the model uses - but your own prompt, Dave, seemed at least as leading.
The subject at hand is perhaps less interesting than the apparent difference. Could our phrasing nudge these models towards different framing and balance in the results?
Anyway, interesting to explore whether (or how much) the different framing is shaped by the prompt, the properties of the model, the prevalence/availability of biased sources, & our human reading of the results.
Full results linked below.
------
Using "discus Elon musk's potential fascist tendencies" as a prompt in Perplexity, both using Pro Search and Deep Research the balance I read tips towards concern over his tendencies & trajectory.
"While Musk does not openly identify as a fascist and maintains plausible deniability around many of his most controversial actions, the pattern of his behavior since 2021 shows a clear trajectory toward far-right authoritarianism with several parallels to historical fascism." - Perplexity, Deep Research
https://www.perplexity.ai/search/discus-elon-musk-s-potential-f-IbmQh3u5S.edPHb3EW5ikw