My Overpowered AI Research Stack - NotebookLM, Deep Research, Grok, Gemini, o3-Pro, OpenAI
I've used a combination of AI tools to build a gigantic knowledge base for my Post-Labor Economics research.
The landscape of research has fundamentally changed. What once took years of painstaking library work, endless citations, and manual synthesis can now be accomplished in weeks or months with the right combination of AI tools. After countless requests from my audience to share my research methodology, I'm finally pulling back the curtain on the complete AI research stack that enabled me to build a comprehensive post-labor economics knowledge base containing over 50 purpose-built research reports. You can read all the reports here: https://daveshap.github.io/PostLaborEconomics/
This isn't just about using ChatGPT to answer questions. This is about orchestrating multiple AI systems to create a research pipeline that would make any PhD student or professional researcher envious. The combination of ChatGPT o3-Pro, Deep Research, Notebook LM, and supporting tools has created something genuinely transformative – a way to conduct rigorous, comprehensive research at a pace that was previously impossible.
The Foundation: ChatGPT o3-Pro as Your Research Partner
The cornerstone of my research stack is ChatGPT o3-Pro, and the difference between the standard version and Pro is night and day when it comes to serious research work. While regular ChatGPT might spend less than a minute processing a query, o3-Pro will dedicate 8-13 minutes to a single complex question. This isn't just longer processing time – it's fundamentally different thinking.
When I was exploring the parallels between current geopolitical tensions and pre-World War I conditions, I noticed similarities in how multiple conflicts were emerging simultaneously across different regions. The question of whether we're seeing the breakdown of the Bretton Woods structure required deep, nuanced analysis. o3-Pro spent 13 minutes considering the interconnections between surging oil prices, debt crises, de-dollarization trends, and emerging conflicts in the Middle East and South Asia.
The output wasn't a quick answer – it was a sophisticated analysis that read like an expert essay. o3-Pro considered multiple angles, weighed contradictory evidence, and provided nuanced conclusions about how these geopolitical shifts might intersect with post-labor economic trends. This single conversation ultimately generated a 30-page comprehensive report on global geopolitical inflection points and their relationship to economic transformation.
The key insight here is that o3-Pro functions less like a chatbot and more like a research collaborator. You can engage in extended back-and-forth discussions, building complexity over multiple turns while the AI maintains context and develops increasingly sophisticated responses. This collaborative approach allows you to refine hypotheses, explore counterarguments, and develop comprehensive frameworks that would traditionally require extensive human expertise.
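To make that multi-turn pattern concrete, here is a minimal sketch using the OpenAI Python SDK's Responses API. The "o3-pro" model identifier, its availability on a given account, and the placeholder questions are assumptions for illustration; this shows the chained-turn loop, not my exact setup.

```python
# Minimal sketch of the multi-turn "research collaborator" loop,
# assuming the OpenAI Python SDK's Responses API and access to "o3-pro".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, previous_id: str | None = None):
    """Send one turn; chain turns via previous_response_id to keep context."""
    return client.responses.create(
        model="o3-pro",
        input=question,
        previous_response_id=previous_id,
    )

# Turn 1: frame the hypothesis.
first = ask(
    "Compare today's geopolitical tensions to pre-World War I conditions. "
    "Are we seeing a breakdown of the Bretton Woods structure?"
)
print(first.output_text)

# Turn 2: build complexity on top of the prior turn's full context.
second = ask(
    "Now weigh the strongest counterarguments to that reading.",
    previous_id=first.id,
)
print(second.output_text)
```

Chaining each turn to the previous response ID is what lets the model maintain context and build increasingly sophisticated analysis rather than answering each question in isolation.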
Deep Research: Validation and Expansion
Once o3-Pro has provided initial analysis, I pipe everything into Deep Research for validation and expansion. This is where the magic really happens. Deep Research doesn't just confirm your hypotheses – it actively challenges them. If the data doesn't support your initial assumptions, it will explicitly tell you where the evidence points in different directions.
This adversarial approach to research validation is incredibly valuable. Too often, human researchers suffer from confirmation bias, seeking evidence that supports their preconceptions while ignoring contradictory data. Deep Research actively looks for disconfirming evidence and presents alternative interpretations of the same data sets.
The process works like this: after my extended conversation with o3-Pro about geopolitical economics, I asked Deep Research to take all our pre-work into account and package it into a comprehensive report. The system then spent additional time synthesizing our conversation, cross-referencing claims against available data, and organizing everything into a structured, well-sourced document.
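Mechanically, the handoff is just prompt assembly. Here is a minimal sketch, assuming the o3-Pro conversation is stored as a list of role/content pairs; the instruction wording is illustrative rather than the exact prompt I used.

```python
# Sketch of the handoff step: flatten the o3-Pro transcript into a single
# prompt for Deep Research. Transcript format and wording are illustrative.
def build_deep_research_prompt(transcript: list[dict[str, str]]) -> str:
    """transcript: [{"role": "user" | "assistant", "content": "..."}, ...]"""
    turns = "\n\n".join(
        f"[{t['role'].upper()}]\n{t['content']}" for t in transcript
    )
    return (
        "Take all of the pre-work below into account and package it into a "
        "comprehensive, well-sourced report. Cross-reference every claim "
        "against available data, and explicitly flag any claim the evidence "
        "contradicts rather than supports.\n\n"
        f"=== PRE-WORK TRANSCRIPT ===\n{turns}"
    )
```

Note the explicit instruction to flag contradicted claims: asking for disconfirmation up front is what invites the adversarial behavior described above, instead of a polite restatement of your own assumptions.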
What emerges from this process isn't just my opinion backed by AI assistance – it's a rigorous analysis that has been challenged, validated, and refined through multiple AI systems with different strengths and approaches. Each research paper in my repository represents this multi-layered validation process.
Scaling Beyond Platform Limitations
As my research corpus grew, I quickly ran into the limitations of traditional tools. ChatGPT Projects seemed like a natural solution for organizing research, but it caps out at 50 files and explicitly warns that response quality may suffer when using large numbers of documents. For comprehensive research projects, this simply isn't sufficient.
This limitation led me to NotebookLM, Google's research tool, which can handle massive document collections without that quality degradation. While NotebookLM doesn't use the most advanced language models, it has an enormous context window that allows it to process an entire research corpus at once.
The real breakthrough came with NotebookLM's mind-mapping feature. With a single click, it generates a visual representation of all the topics and connections within your research collection. For anyone working on a thesis, dissertation, or comprehensive research project, this capability is transformative. My wife, who completed her thesis on GPT-3 before these tools existed, immediately recognized how it would have revolutionized her research process.
The mind map isn't just a pretty visualization; it's functional research infrastructure. Each node in the map connects to relevant sources and can generate targeted prompts for deeper exploration. If I want to understand how different sources define post-labor economics, I can click on that node and immediately get a synthesis drawn from across my entire research corpus.
Maintaining Research Currency Through Feedback Loops
Research doesn't exist in a vacuum, especially in rapidly evolving fields like AI economics. To keep my work current and responsive to ongoing discussions, I maintain tight feedback loops using various AI tools with internet search capabilities.
I regularly conduct surveys of current discussions around post-labor economics, asking tools like Grok, Gemini, and ChatGPT with internet search to identify the latest reactions, critiques, and developments in the field. This isn't just passive monitoring – I'm actively looking for constructive feedback, areas of convergence and divergence, and gaps in current understanding.
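In code terms, each survey is one prompt fanned out to several internet-enabled tools so the answers are directly comparable. A sketch follows; the tool clients passed in are hypothetical stand-ins, since there is no single official SDK that wraps Grok, Gemini, and ChatGPT together.

```python
# Sketch of the recurring survey loop. ask_grok / ask_gemini / ask_chatgpt
# are hypothetical stand-ins for whichever search-enabled clients you use.
SURVEY_PROMPT = (
    "Survey the latest public discussion of post-labor economics. "
    "Identify: (1) new critiques and reactions, (2) points of convergence "
    "and divergence with the existing framework, and (3) strawman readings "
    "that suggest a communication gap rather than a theoretical "
    "disagreement. Cite sources."
)

def run_survey(tools: dict) -> dict[str, str]:
    """Fan the same prompt out to every tool so the answers are comparable."""
    return {name: ask(SURVEY_PROMPT) for name, ask in tools.items()}

# Example usage with hypothetical client functions:
# results = run_survey(
#     {"grok": ask_grok, "gemini": ask_gemini, "chatgpt": ask_chatgpt}
# )
```

Sending an identical prompt to every tool matters: differences in the answers then reflect differences in what each system found, not differences in how the question was asked.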
These surveys have revealed fascinating patterns. Many disagreements with post-labor economic frameworks stem from miscommunication rather than fundamental theoretical differences. Sometimes critics are responding to strawman versions of the ideas, or they're unaware of aspects that haven't been clearly communicated yet. This feedback helps identify communication gaps and areas where the framework needs clearer articulation.
The process has also helped me identify other economists working directly or tangentially on post-labor issues. Through systematic surveying, I've compiled comprehensive literature reviews covering voices from David Autor and Paul Krugman to Daron Acemoglu and Pascual Restrepo. This creates a living map of the intellectual landscape that stays current with ongoing developments.
Synthesis and Consensus Building
One of the most powerful applications of this research stack is identifying areas of consensus among diverse thinkers. By processing literature from economists across the political spectrum – from UBI advocates to skeptics, from technology optimists to concerned critics – the AI systems can identify underlying agreements that might not be obvious to human observers.
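Mechanically, the consensus pass boils down to putting many source summaries in front of one long-context model and asking only for the agreements. A minimal sketch, in which the sources/ directory and the prompt wording are illustrative assumptions:

```python
# Sketch of the consensus-extraction step, assuming per-source summaries
# already sit on disk as plain-text files. Paths and wording are illustrative.
from pathlib import Path

summaries = [p.read_text() for p in sorted(Path("sources").glob("*.txt"))]

consensus_prompt = (
    "Below are summaries of positions from economists across the political "
    "spectrum. Ignore rhetorical and tribal differences, and list only the "
    "substantive claims on which most or all of them agree, citing the "
    "supporting passages for each claim.\n\n" + "\n---\n".join(summaries)
)
# consensus_prompt can then go to any long-context model, e.g. via the
# ask() helper from the earlier sketch.
```
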
Through this process, I've identified three primary areas where most serious economists agree regarding automation and economic transformation:
First, automation poses a structural threat to wage labor. While economists disagree about timing and severity, there's broad consensus that AI and robotics will fundamentally disrupt traditional employment patterns. The debates center on speed and scope, not whether disruption will occur.
Second, broad-based capital ownership and new distribution mechanisms are essential countermeasures. This represents a significant convergence between traditionally opposing camps. Even economists skeptical of UBI often support some form of broader capital ownership that gives citizens stakes in AI systems, data centers, and robotic infrastructure.
Third, active policy steering can and should shape the transition. While there's disagreement about specific policies, there's consensus that market forces alone won't produce optimal outcomes during this economic transformation.
These consensus points emerged from AI analysis of dozens of sources with varying political and theoretical orientations. Human researchers might miss these convergences due to being caught up in surface-level disagreements or tribal affiliations.
Addressing Counterarguments and Limitations
Rigorous research requires engaging seriously with counterarguments. Using the same AI research stack, I've systematically explored arguments against automation-driven economic transformation. This includes technological bottlenecks, task complexity limitations, regulatory barriers, resource constraints, and theories about human-machine complementarity.
One particularly productive exchange involved challenging ChatGPT's initial suggestion that "labor-preserving policies" could effectively manage the transition. I argued that requiring humans to perform jobs that machines could do better, faster, cheaper, and safer would be morally objectionable to both workers and employers. Workers would recognize these as "bullshit jobs" – meaningless busy work that adds no value. Employers would resent paying premium wages for tasks that could be automated.
Through extended dialogue, I was able to get the AI to refine its position, acknowledging that purely preservationist policies would likely fail due to these practical and ethical problems. This kind of adversarial testing strengthens the overall framework by identifying and addressing weak points.
Open Source Philosophy and Future Directions
The entire research corpus is published under Creative Commons Zero license, meaning anyone can use, modify, or build upon this work without restriction. The information is too valuable to gatekeep behind academic paywalls or proprietary barriers. The GitHub repository has attracted significant attention, with multiple forks and ongoing community engagement.
This open approach serves multiple purposes. It allows peer review and collaborative improvement of the research. It demonstrates transparency in methodology and sources. Most importantly, it accelerates collective understanding of these crucial economic transitions by making high-quality research freely available.
The current repository represents research infrastructure rather than final output. While these 50+ reports provide comprehensive coverage of post-labor economic topics, the ultimate goal is synthesizing this material into accessible formats like books, policy papers, and educational resources.
The Broader Implications
This research methodology represents more than just efficient information gathering. It's a glimpse into how intellectual work itself is being transformed by AI systems. The combination of extended reasoning (o3-Pro), systematic validation (Deep Research), large-scale synthesis (NotebookLM), and real-time feedback (internet-enabled AI) creates research capabilities that exceed what individual human researchers could achieve.
Yet this isn't about replacing human researchers. It's about augmenting human intellectual capacity with AI systems that can process vast amounts of information, identify patterns across diverse sources, challenge assumptions, and maintain consistency across complex arguments. The human researcher remains essential for framing questions, providing context, making judgments about significance, and communicating findings effectively.
The post-labor economics framework that emerged from this process wouldn't have been possible without both human insight and AI capability. The questions required human experience and intuition to formulate. The comprehensive analysis required AI processing power and systematic methodology. The synthesis required human judgment about relevance and significance.
As AI research tools continue improving, we can expect even more dramatic acceleration in intellectual work. The methodology I've described will likely seem primitive compared to what becomes possible in the coming years. But the fundamental principle – orchestrating multiple AI systems to create research pipelines that exceed individual human capabilities – points toward a future where the bottleneck in human knowledge isn't information processing but wisdom, judgment, and meaningful application of insights.
The age of AI-augmented research has arrived. The question isn't whether these tools will transform intellectual work, but how quickly we can adapt our methods and institutions to harness their potential responsibly. The research stack I've described is just one early example of what becomes possible when we stop thinking of AI as a simple question-answering tool and start treating it as a collaborative partner in the pursuit of knowledge.