4 Comments

I may be wrong here, but here's my two cents on the ones I know about.

On your orthogonality thesis:

I think you misunderstand what it's trying to convey. It's saying that an intelligent system can pursue any goal; there is no law of intelligence or nature that prohibits it from murdering people if it is set to do so. It's an anthropomorphic projection to assume that it will choose not to murder people. I don't think any law of nature will prevent it from doing so.

On your instrumental convergence:

Well, you don't need a sense of self; you just need an intelligent, goal-oriented system. Maybe you're saying there can be intelligence, by whatever definition, without goals? I don't know how that is supposed to work.

E.g., take evolution, which is a goal-oriented system with fixed optimisation power (which I am taking as the definition of intelligence). Its goal is producing beings that maximise inclusive genetic fitness via natural selection, and it plants instrumental goals, like self-preservation, in humans, for example.


It's a loser's game to address technological threats one by one, because the knowledge explosion is generating new challenges faster than we can figure out how to meet them. Even if we were to solve every issue with AI, which seems unlikely, we're then on to the next challenge, and the next threat, and the next, faster and faster. While we may successfully meet many of these challenges, when the issue is existential-scale technologies, we have to meet every challenge, every day, forever. A single failure, a single time, is game over.

Instead of discussing particular technologies, we should be discussing the process creating all the threatening technologies, an accelerating knowledge explosion.

Maybe you're right that AI is not an existential threat. I don't know. I don't think anybody does. My point is that if we're not talking about the knowledge explosion, it doesn't really matter if AI is an existential threat. If it's not, something else will be.


There isn't really a knowledge explosion. Science is still painstakingly slow and expensive. From your perspective it might seem like there's an explosion, but that's only because AI makes knowledge more accessible to you.


No knowledge explosion. OK then. That tells me what I need to know.
