2 Comments

For the "AI Safety Narratives" section, you don't need to be confident that AGI will go badly; you just need to not be confident that it won't (apologies if I misunderstood you). Good post generally, though!


This is a Leahy-esque argument that you "need to get it right the first time because one flaw kills everyone," which fundamentally misrepresents how AI development is actually happening. So no, you don't need to be confident that AGI will or won't go badly; you just need to develop it incrementally, with tight feedback loops, which is what we are doing.
