Discussion about this post

Zoran Grete

I've only skimmed the article, so I might have missed this: as far as I'm aware, people tend to assign a higher probability to bad outcomes than is warranted (a bias towards expecting bad outcomes). According to ChatGPT (I couldn't find anything quickly online, so more thorough research would be needed), people overestimate the likelihood of negative events by roughly 10-30%, with the figure partly depending on the severity of the outcome. Given those assumed numbers, the bias-adjusted P(DOOM) would be P(DOOM)' = P(DOOM) × (0.7 to 0.9)⁴ = 3.05% to 8.33%.

Edit: instead of 0.7 to 0.9 the factor should be 1/1.3 to 1/1.1, so P(DOOM)' = 4.45% to 8.67%.
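
To make the adjustment concrete, here is a minimal Python sketch. It assumes the article's P(DOOM) is a product of four probability factors and that each factor is inflated by the 10-30% bias; the base P(DOOM) of roughly 12.7% is back-solved from my 4.45%-8.67% figures, not taken from the article.

def bias_adjusted_p_doom(p_doom: float, bias: float, n_factors: int = 4) -> float:
    # Deflate each of the n_factors multiplicative terms by (1 + bias),
    # i.e. divide the overall product by (1 + bias) ** n_factors.
    return p_doom / (1.0 + bias) ** n_factors

base = 0.127  # assumed base P(DOOM), back-solved from the figures above
print(bias_adjusted_p_doom(base, 0.10))  # ~0.0867 -> 8.67%
print(bias_adjusted_p_doom(base, 0.30))  # ~0.0445 -> 4.45%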

KoiCantortionist

I wish for "losing control isn't a bad thing":

S == 100%

A == 100%

U == 100%

H == 0%

-> P(DOOM) of 0%

But more realistically I guess I'm at:

S == 50%

A == 50%

U == 50%

H == 10%

-> P(DOOM) of 1.25%
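
For reference, a quick Python sketch of the formula I'm reading the post as using, a straight product of the four factors (the factor names just follow my numbers above):

def p_doom(s: float, a: float, u: float, h: float) -> float:
    # P(DOOM) as a product of the four probability factors.
    return s * a * u * h

print(p_doom(1.0, 1.0, 1.0, 0.0))  # 0.0    -> the wished-for 0%
print(p_doom(0.5, 0.5, 0.5, 0.1))  # 0.0125 -> my more realistic 1.25%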

But if an AI is not agentic, does it actually qualify as AGI or ASI? Isn't the point for it to be able to do anything a human can do, or better? Humans are agentic, so would a non-agentic AI only ever qualify as narrow AI? Suppose you're an employer looking to hire, and you can choose between a human who is agentic and one who is non-agentic. Which one do you think will be the better employee?

