This is a Leahy-esque argument that you "need to get it right the first time because one flaw kills everyone," which fundamentally misrepresents how AI development actually happens. So no, you do not need to be confident in advance that AGI will go well or badly; you just need to develop it incrementally, with tight feedback loops, which is what we are doing.