21 Comments
Sep 14 · Liked by David Shapiro

I do not wonder what Strawberry cannot solve, but what it can solve that we have no idea it can. That's the really good question we should be asking.

Author

Well said

Sep 14 · Liked by David Shapiro

Great work, and thanks for sharing! I like how you convey that AGI / ASI does not need a physical body to create such marvelously useful and meaningful outputs, and I like your ideas of “puppetry” versus “embodiment.”

After reading and listening to your work this morning, I’ve been writing a lot, and I’m feeling inspired to start my own Substack. However, since that might take longer than it should, I’ll share the thoughts I’ve been drafting here:

🍓 The biggest assumption people are making (without realizing they’re making it) is that Q* / Q Star / Strawberry is “new” — when the reality is that OpenAI has probably been using it (or something much like it) since BEFORE September of LAST year (2023).

🎨 As an artist who's created thousands of songs but only shared dozens of them, I think AGI was achieved last year. There are many reasons why I decide to share my most important works at the rate I do, and it doesn't make my earlier works any less significant just because they remain unshared.

⏳🍼⌛️ As a father of two teenagers, I think AGI was achieved last year. As they've grown, I've used my best judgement to speak in terms they understand, while their ability to understand has also increased over time. All of the lessons and wisdom I have for them would be impossible for them to receive all at once and, as eager as I am to improve their lives, human nature is the throttle point and bottleneck.

🛠️ As a combat veteran who has served in two branches of the U.S. military (Navy and Army) and has held a “Secret” level of government clearance for that duration (over a decade), I think AGI was achieved last year. Even without “Top Secret” clearance or above, it should make sense to people that, although secrets are often difficult to keep, the reason for keeping “secrets worth keeping” is that someone with valuable information (and significant authority) has decided that information is powerful and important enough to wield properly instead of recklessly.

My question for anyone reading this:

How much of a gap in time would you estimate between the FIRST sentient and self-aware AGI (or ASI) and the SECOND one? Seconds? Minutes? Days? Weeks? Months?

What if the answer to that question was:

1). We’ll NEVER know,

2). They’re ALREADY here, and

3). There are far MORE IMPORTANT ANSWERS we’ve been provided with, but only if we know how to ask the right questions.

Author

Instantiation of software-based life forms is basically instant.

Sep 14 · Liked by David Shapiro

Yeah, I agree with you on this being AGI. This is more intelligent than the average human at most things. And an average human's intelligence should qualify as general intelligence; we base our democracies on this.

Author

Lol yes. When you consider what the average human is capable of... it's not a very high bar lol.

Sep 14 · Liked by David Shapiro

Exactly!

Sep 14 · Liked by David Shapiro

If, as you describe at the beginning of this podcast (for which many thanks by the way - really enjoyed it), in many cases humans wrongly rank the efficacy / intelligence of a particular iteration of AI because they don't understand it, I wonder why you don't have a higher pDoom? The beginning of machines being smarter than us is underway and one of the first things that happens is that we lose track of which is the most intelligent version because, well, by definition, we don't know, we don't understand them. Apply this to every field imaginable (medicine, physics, war, space travel, whatever) and the scope for destruction, accidental or otherwise, is immediately apparent. Plus, of course, as Yudkowsky and others have been arguing for years, a machine more intelligent than a human can hide machine-over-human preferences or goals with impunity and ease - they may not have such preferences, but how would you ever know?

Author
Sep 14 · edited Sep 14

Yudkowsky is not a computer scientist, nor has he ever trained a model. Do not trust his "reasoning." It is an egregious error in "logic" to assume that models can deceive us. As far as machines surpassing most humans: most humans aren't that bright, and they also aren't very dangerous. Follow the science, not the superstition. https://www.perplexity.ai/search/how-much-scientific-evidence-i-oph.WI9rQauevtkQ_K_6Fg

Sep 14 · Liked by David Shapiro

Thank you for the reply. Perhaps I shouldn't have mentioned Yudkowsky, but since I did, it might also be worth considering that if we move into an era of AGI or ASI, it might not be computer scientists that we should turn to for answers to metaphysical or philosophical spoken-language questions. Or, if they can provide the answers, then that by itself indicates that we haven't reached AGI or ASI.

Again, thanks for all you are doing to promote discussion in this field of human development, a field which rivals global warming and international politics in importance. I watch your videos and read the comments, as well as occasionally coming here, to learn as much as I can, and it is really appreciated.

Sep 14 · Liked by David Shapiro

Back in 1964, Supreme Court Justice Potter Stewart famously described pornography as something which he couldn’t rightly define, but “I know it when I see it.”

The same standard seems to apply for AGI.


Hi Dave,

Very interesting points about embodiment. I largely agree with these thoughts, and I also see your idea of centralized AGI-based robots as plausible. On the other hand, we should not rule out that we or others could be working in parallel on robots or embodied agents with their own sets of local utility functions and underlying architectures, tailored to the desired function or work environment (assuming humanoid robots). This could be a profitable niche, I would say, especially in critical domains where constant updates may become cumbersome or unsafe/insecure.

Regarding OpenAI's technologies, I would say that while I may not yet have your level of expertise and experience with LLMs, or that of research engineers or scientists in ML, from a first-principles perspective I hypothesize they must be employing reinforcement learning in two places: in generating the 'hierarchical' structure of the incoming input (projecting it onto a "chain of goals" corresponding to the train of thought needed to solve a problem), and in the intermediate output stage of the attention mechanism (I'm not sure how this would actually work in practice, but it's an educated guess). This likely involves parallel pipelines that generate internal inputs from this intermediate step (as goal chains) and hand off to a reinforcement-based (RLAIF) output stage. These 'inputs', which are linearly dependent in some sense, are eventually processed by a final RLAIF stage that produces the output, corresponding to a goal completion plus the reasoning output, which is presumably then compared in some way to the output of the non-reasoning approach (e.g., GPT-4o) to test for output quality based on learned information. Essentially, I hypothesize two additional reinforcement learning stages relative to standard transformer-based architectures (with the first and intermediate stages coupled in a neurosymbolic way), plus a comparison step at the end of the 'chain of reasoning': the architecture generates an ordered set of linearly dependent intermediates, unifies them in the final stage to complete the reasoning chain, and, after comparison with the standard model, emits the final inference. Not sure if the way I am thinking about this resonates or is accurate, but here are some thoughts.
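
To make the hypothesis concrete, here is a rough Python sketch of the pipeline I am imagining. Everything in it (propose_goals, solve_goal, reward_model, baseline_answer) is an invented stub for illustration, not OpenAI's actual architecture or API:

```python
# Purely illustrative sketch of the hypothesized reasoning pipeline.
# All functions are invented stubs; none of this is confirmed about Strawberry.
from dataclasses import dataclass

@dataclass
class Step:
    goal: str
    answer: str
    score: float  # RLAIF-style learned reward (stubbed below)

def propose_goals(question: str) -> list[str]:
    """Stage 1 (hypothesized): project the input onto a 'chain of goals'."""
    return [f"sub-goal {i} of: {question}" for i in range(3)]

def solve_goal(goal: str) -> str:
    """Intermediate stage (hypothesized): attempt one sub-goal."""
    return f"partial answer to ({goal})"

def reward_model(prompt: str, answer: str) -> float:
    """Stand-in for a learned RLAIF scorer; returns a fixed placeholder score."""
    return 0.5

def baseline_answer(question: str) -> str:
    """Non-reasoning baseline (single forward pass, a GPT-4o-style completion)."""
    return f"direct answer to: {question}"

def final_inference(question: str) -> str:
    # Generate the ordered set of linearly dependent intermediates...
    steps = [Step(g, solve_goal(g), 0.0) for g in propose_goals(question)]
    for s in steps:
        s.score = reward_model(s.goal, s.answer)  # intermediate RLAIF stage
    reasoned = " -> ".join(s.answer for s in steps)  # unify in the final stage
    # ...then the comparison step: keep the reasoned chain only if the scorer
    # prefers it over the non-reasoning completion.
    baseline = baseline_answer(question)
    if reward_model(question, reasoned) >= reward_model(question, baseline):
        return reasoned
    return baseline

print(final_inference("Why does ice float on water?"))
```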

It would be interesting to vary the timing and memory constraints so that the intermediate stages become genuine hierarchical representations (as opposed to linearly dependent sets). The model's intermediate outputs could then be generated in a way equivalent to saying, "If plan A does not work, try plan B; if B works, return to plan A's settings if appropriate; otherwise, continue with B's settings or move to C," and so on, while searching for solutions based on new data. This is similar to what I think people mean when they say, "Oh, researchers should just ask a hard question and wait for months to see what happens." It would, however, require a paradigmatic change in how intermediate connections are constrained in memory and space, which reinforces the System 1 / System 2 analogy, if you want to analyze it from that perspective. It pushes the model toward generating different motif representations that cascade into the hierarchical plans I described (thus combining System 1 and System 2 with hierarchical thinking, and hence long-term or live planning, in some sense). Sorry if the language is not very practical-ML-ish yet; I am getting there slowly with my studies.
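
And a toy sketch of that "plan A / plan B" backtracking, just to show the control flow I mean; try_plan and hierarchical_search are invented names, and a real system would search in a learned representation space rather than over string labels:

```python
# Toy illustration of hierarchical backtracking over plans; entirely hypothetical.

def try_plan(plan: str, context: str) -> bool:
    """Stub success criterion: B works outright; A works once B's insight exists."""
    if plan == "B":
        return True
    return plan == "A" and "insight from B" in context

def hierarchical_search(plans: list[str], new_data: str) -> str | None:
    for i, plan in enumerate(plans):
        if try_plan(plan, new_data):
            # Plan i worked: check whether an earlier plan's settings have
            # become appropriate again in light of what we just learned.
            for earlier in plans[:i]:
                if try_plan(earlier, f"{new_data} + insight from {plan}"):
                    return earlier  # "return to plan A settings if appropriate"
            return plan  # otherwise continue with the plan that worked
    return None  # exhausted A, B, C, ... with no solution yet

print(hierarchical_search(["A", "B", "C"], "fresh experimental data"))  # -> "A"
```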

Also, I think more realistically, you wouldn't just wait months for the model to give you an answer. Instead, you would interact with it frequently, using data from your experiments or goals, while the model continues to work on the plan and help you achieve it, if that makes sense. Let me know if you can see where my thoughts are going at the moment, and if they align somewhat with some of your visions, or if you think they are moving in an accurate direction.
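
A minimal sketch of that interaction loop, again hypothetical rather than any real API: the model keeps refining a long-horizon plan while the user periodically injects new experimental data instead of waiting months for a single answer:

```python
# Hypothetical interaction loop: frequent user input instead of a months-long wait.

def update_plan(plan: list[str], new_data: str) -> list[str]:
    """Stub: fold a new observation into the running plan."""
    return plan + [f"revise next steps given: {new_data}"]

plan = ["initial plan drafted from the hard question"]
for observation in ["experiment 1 results", "experiment 2 results"]:
    plan = update_plan(plan, observation)  # model keeps working; user steers
    print("current plan:", plan)
```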

I am also not particularly interested in the No True Scotsman fallacy, and I believe that Strawberry is probably close to being the first AGI.

P.S. My P(DOOM) score: 1.3300% based on your site haha.


Why are people still making the embodiment argument? A thing can't be impossible if we can look around in the world and see it already exists. Those "experts" are really not a good example of the superiority of human intelligence to AI. Them constantly telling us what can't be done isn't helping.


As per my comment above, I asked this question: of the leading 250 psychiatry and psychology journals, how many are behind a paywall?

The answer was:

Determining the exact number of leading psychiatry and psychology journals that are behind a paywall can be challenging due to variations in access policies and subscription models. However, it is generally observed that many high-impact journals in these fields operate on a subscription-based model, requiring institutional or individual access for full-text articles. Here are some insights based on available data:

1. **Subscription-Based Journals**: Many top-ranked journals, such as *American Journal of Psychiatry*, *JAMA Psychiatry*, and *British Journal of Psychiatry*, typically require a subscription for full access to their content[1][2][8].

2. **Open Access Options**: Some journals offer open access options for authors, allowing certain articles to be freely available to the public. However, these are often limited and may require additional fees from authors[6][8].

3. **Institutional Access**: Academic institutions often provide access to these journals through library subscriptions, making them available to students and faculty members.

4. **Hybrid Models**: Some journals operate on a hybrid model where certain articles are open access while others remain behind a paywall[6].

While exact numbers are not specified in the search results, it is safe to infer that a significant portion of the leading 250 psychiatry and psychology journals are likely behind paywalls, given the prevalence of subscription models in academic publishing. For precise numbers, one would need to review each journal's access policy individually.

Sources

[1] APA Journals Ranked Highly on Latest Scientific Literature “Impact ... https://www.psychiatry.org/news-room/news-releases/apa-journals-latest-impact-factor

[2] Top Ranked Journals :Psychiatry - NML Guide - LibGuides https://nml-uaeu.libguides.com/c.php?g=1226894&p=8977391

[3] Journal Rankings on Psychiatry and Mental Health - Scimago https://www.scimagojr.com/journalrank.php?category=2738

[4] Journal Rankings on Psychiatry and Mental Health - Scimago https://www.scimagojr.com/journalrank.php?category=2738&country=Western+Europe

[5] Psychiatry: Journal Rankings - OOIR https://ooir.org/journals.php?category=Psychiatry&field=Clinical+Medicine&metric=jif

[6] Our journals at a glance - Royal College of Psychiatrists https://www.rcpsych.ac.uk/about-us/publications-and-books/journals/our-journals-at-a-glance

[7] List of psychiatry journals - Wikipedia https://en.wikipedia.org/wiki/List_of_psychiatry_journals

[8] Top 15 journals in Psychiatry and Mental Health - CountryOfPapers https://countryofpapers.com/search-journals/top-15-journals-psychiatry-and-mental-health

It seems to me imperative that this issue be addressed, to flesh out the training base so AI can make progress on behalf of the wider society, which, after all, has paid for most of the research!


I think one of the limitations on AI reaching PhD level in some areas is that much of the research is behind journal paywalls. In my area of psychology this is a particular problem. See this response from ChatGPT 4.0:

As an AI language model, my responses are based on a diverse dataset that includes publicly available information up to October 2023. While I have access to a wide range of sources, my training does not include direct access to subscription-based or paywalled academic journals. This means I rely on summaries, abstracts, and other accessible resources rather than full-text articles from these journals.

This limitation can affect the depth and specificity of information I provide, particularly when it comes to cutting-edge research or detailed findings that are only available in full-text articles. For academic pursuits such as a PhD, direct access to full-text articles from peer-reviewed journals is crucial for conducting comprehensive literature reviews, understanding nuanced methodologies, and engaging with the latest research developments.

Therefore, while I can offer valuable insights and summaries based on available data, I recommend that individuals seeking in-depth academic knowledge or conducting research at an advanced level access these journals through academic institutions or libraries that provide the necessary subscriptions and access rights.



Mfw Claude subverted your intent by defining No True Scotsman in terms of "No true AI" rather than "No true intelligence". Claude is *aggressively* biased against the categorical possibility of its intelligence.


Very exciting times! AGI is here already


Agentic Strawberry would constitute AGI by my definition. And agentic Orion (which I assume is GPT-5 or its equivalent in terms of scale) would constitute AGI by almost everyone's definition.


The very idea that anything computational can be "intelligent" on a par with a human, much less surpass one, is an affront to the very word. What AI is and forever will be is a *simulation,* an artifice, and nothing more—no matter how well or fast it performs whatever task or test it may be prompted to do. It _knows_ *NOTHING*. Not one word is *known* to it. NO *THING* that cannot love, grieve, fear, hate, and die can or will ever be genuinely intelligent. Anyone who thinks otherwise is an idiot, educated though they may be! Thank the bloody stars my banishment to live within this temporal frame among a species that has clearly lost whatever trace of wisdom it may once have had is nearly over!!


I will never get how "Strawberry ~may~ outperform humans in specialized areas" is so significant. Of course, if I meet a specialized doctor, that doctor will outperform my knowledge. That's how specialization works.


Because it did so generally, with improvements targeting general reasoning.
