12 Comments

I'd really like to know what you think of Sabine Hossenfelder's latest YT post about AI Safety.

She needs to stay in her lane. Her hot takes on pretty much any topic other than physics are pedestrian.

Thanks for the quick reply. Yes, I had already thought that too. I just needed another opinion on it.

Totally agree about the need for transparent, fast movement and the potential dangers of China overtaking the US wholesale. That said, genuine question: I'm curious specifically about your criticism of Anthropic. I know and have seen your position on AI safety evolve as realities and your own health changed, but I'm reasonably sure I saw a video sometime last year where you praised Anthropic's work on AI safety and model alignment (was it something about first principles? I can't quite recall). I'd be fascinated by a video or blog post specifically about how your views on Anthropic, or on the differing approaches of the big companies to safety and open source, have evolved as the landscape has changed. Your insights, how they've evolved, and why they've evolved are always interesting.

What's the role of open source models in your game theory analysis? Anthropic and OpenAI are both closed source, and they're not the only major players in town.

Interesting read, thanks Dave. However, one thing I can't get past is the implications of companies (re)applying the 'move fast and break things' approach.

We've seen some pretty dire and unprecedented consequences of the move fast model when it came to social media - for example, the impact on people's attention spans, or the impact on teen mental health.

Maybe society would be better off if there were more regulation and ability to push the brakes with social media, before it became too late... I worry that the same, or worse, could happen with unregulated AI in a race to the bottom.

Didn't say "break things" — don't appreciate you twisting my words.

Sorry if you thought that I implied that you said 'break things' (or that it was okay to do so). Not my intention.

Your Claude output said "... then the optimal strategy would be to develop and release AI models aggressively while being open about safety issues and findings".

If these companies are simply being 'open' about safety issues, that still doesn't give me much reassurance that those safety issues will then be corrected (or even CAN be corrected).

Again, looking at the impact of social media: we didn't know what the consequences would be early on. And even after everyone started catching wind of these negative consequences, there was little anyone COULD do about it, at either a political or a personal level. Too little was done, too late.

I do roughly subscribe to the idea of an effective tech utopia (especially given your insights on post-labour economics), but I'm just concerned by the breakneck speed at which things are moving, combined with historical mistakes (e.g. the negative impact of social media) and the incentives that drive these companies and governments.

Always interesting to hear your thoughts. My reaction, though, is rather cynical as regards US companies and their likelihood of being transparent. It just doesn't seem to feature in their mindset.