Anthropic done goofed
The self-inflicted, slow-motion trainwreck that didn't need to happen.
Hey folks,
Thank you.
I know my latest video about Anthropic rubbed some people the wrong way and provoked plenty of spirited debate. While a few comments were uncalled for, most were reasonable, whether they agreed with me or not. Thank you for your attention and engagement. This stuff is important.
One thing about me that I hope you understand and value is that I do not pull my punches. I remain deeply disappointed (but unsurprised) by Anthropic’s decisions, most notably the way Dario handled this disagreement with the Pentagon.
After reading almost every comment (here, on Twitter, and elsewhere) and doing extensive research over the last few days, I want to lay out the key nexus points where consensus has landed. I will provide my editorialization and commentary AFTER, but this first section is reporting from roughly 15 hours of research over the weekend.
FIRST - Almost everyone agrees that the Pentagon (or more specifically the Trump administration) vastly overreacted with the SCR (supply chain risk) designation. Donald Trump is known for bombastic policy maximalism and a "nuclear option" preference, particularly when there's a personal slight. So while it is unsurprising that the administration went as far as the SCR, the designation represents several problems simultaneously. First, it sets a very dark precedent. Dario was right when he said it felt "punitive," and I would go further and say it was retributive. The distinction is subtle but important. The fallout from the SCR designation (irrespective of whether or not it sticks) is a chilling effect on AI labs: "fall in line or else." It's one thing to lose a military contract (which happens all the time). It's another for such a large blast radius to be the default option.
SECOND - While the government’s reaction was dramatic and unnecessary, Dario bears a disproportionate share of the blame for what happened. The negotiations had been going on for months. There had been some leaks since at least January, but nothing quite as sensational until Dario published the “good conscience” blog post. The timeline matters. Dario took a private, relatively quiet dispute public, apparently in an attempt to gain sympathy. It backfired spectacularly. Only after that blog post did the administration turn up the heat to absurd levels. Furthermore, the fact that OpenAI got a nearly identical deal literally within hours of Anthropic being kicked out makes it seem like something else was going on (perhaps it was personal, or people were simply tired of Dario). (Note: I don’t want to editorialize further on this point until later, as some of it comes down to unsubstantiated rumor.) But the fact that OpenAI got the deal and Anthropic didn’t seems inconsistent.
THIRD - By any reasonable accounting, Anthropic has been materially and structurally weakened by this move. Some people believe that, by taking the moral high road, Anthropic has strengthened its position; this is simply not true. They’ve lost out on hundreds of millions, if not billions, of dollars of contracts over the next few years. This matters because Anthropic had a unique stance on AI safety and alignment. Without money, their research doesn’t happen as quickly or as robustly. Furthermore, the contagion effect is real, and it is already producing fallout down to the local government level, even among governments with no ties to the Pentagon or the federal government. On top of the blast radius, the full extent of which we have not yet seen, there is the distinct possibility of brain drain. Several high-level people had already departed Anthropic before this blew up. Now Anthropic has signaled very loudly that it will make ideological choices to its own detriment, which will attract certain types of talent but alienate others.
FOURTH - This outcome runs contrary to Dario’s own self-expressed strategy of “steering from within”: gaining institutional influence through technical legitimacy and positioning. Anthropic has now left a moral and ethical vacuum within the establishment, which is rapidly being backfilled by the “Tech Right,” namely xAI and OpenAI. That is not to take sides; it is to point out that diversity of thought within the establishment has been substantively curtailed by this action. Anthropic has lost its voice, pull, and influence in a big way (not fully, but largely). What’s even more confusing is that the evidence is clear: the Pentagon was trying to make a deal down to the very last moment, and they were apparently only a few words away from agreeing on terminology when Dario decided that an internal meeting was more important than hashing out the final details. He refused to come to the phone while the Pentagon was trying to salvage the deal. This fact was shocking to me, and it still perplexes me.
-----
The above was my attempt to summarize the most salient, direct points. Now I will share my more editorialized opinions.
FIRST - Anyone who cares about AI safety in any capacity should be disappointed in Anthropic. The reason is that this outcome was largely self-inflicted, and at least a month or two in the making. It was not a sudden rupture but a slow burn, apparently driven by Dario’s recalcitrance. While I strongly disagree with Anthropic’s beliefs about AI safety, the nature of AI itself, and their corporate culture, I would still prefer to see them with a seat at the table. Why? Expressly BECAUSE they have a different viewpoint from others. Longtime fans will remember when I wrote “Claude is a benevolent entity” and spoke about the fundamentally different ethics and epistemics Anthropic was using, and why that was helping them secure the lead. Of course, getting “canceled” by the Trump administration will not immediately end the company, but it will dramatically limit their reach and revenue over the coming years. I still believe it could prove to be a fatal mistake, though many disagree with that stance. No one (insiders, analysts) believes that Anthropic comes away stronger, even if they enjoy a short-term morale boost.
SECOND - There are rumors and opinions that the “Tech Right” has been applying quiet pressure on the Pentagon to distance itself from Anthropic for various reasons. Allegedly, people ranging from Elon Musk to David Sacks have been criticizing Anthropic (it’s unclear whether publicly, privately, or through backchannels). It is also unclear whether this pressure campaign was coordinated, or whether it had any specific impact on the outcome. It seems unlikely, as the Pentagon was literally on the phone with Anthropic as the deadline passed, trying to come to a deal and avert catastrophe. But I do feel it’s important and responsible to bring this up, especially when some of the facts don’t really pass the sniff test. Why did OpenAI get the deal, almost exactly as Anthropic wanted it? There are, of course, a few possible reasons: perhaps EVERYONE was scrambling to put out the fire as quickly as possible. Sam Altman has repeatedly used the language of “de-escalation” and has been quite vocal that the SCR designation is a bridge too far. Still, the timing and speed of everything raises a few eyebrows.
THIRD - I found it quite fascinating that among the people who believed Anthropic getting kicked out of the establishment was a good thing, there were two distinct camps. The first camp is the “accelerationists/Tech Right,” who agree that getting EA and “decel” influence out of the Pentagon is the optimal policy for America. This is obvious tribalism. However, a nontrivial percentage of pro-Anthropic people ALSO agreed this was a good thing, albeit for fundamentally different reasons. Those reasons include things like “good, now the military will have zero influence over Anthropic, and they can build AI for the people.” This, to me, represents an interesting and unique opportunity that could go in many different directions. For instance, one potential direction for Anthropic could be to realize the value of open source AI. If they truly want to empower individuals, as others like Emad Mostaque do, they might very well start releasing open source models to level the playing field. This is, of course, 100% speculation on my part. My point is that there could be a silver lining to this whole debacle. It remains to be seen. But as they say, “never let a good crisis go to waste,” and while I fundamentally disagree with Anthropic on many points, I don’t think they are that stupid. (I mean, blowing this up this badly was pretty dumb, but beyond that...)
FOURTH - On the topic of domestic surveillance and autonomous weapons: many people demand to know why I think these are “good” things. I have never said “yes, we should enable mass domestic surveillance and Terminator.” I have, however, LONG said that these kinds of things are inevitable. There is a very large difference between an ideological moral judgment (i.e. something is ‘good’ or ‘bad’) and acknowledging the currents of technology and power. I have produced literally hundreds of videos across these topics, one of the biggest being my Terminal Race Condition video. This episode vindicates that video. What we just experienced, in real time, was a demonstration of a terminal race condition. China and America are rushing ahead, and any friction gets smoothed over. This is a core fact I tried to get the AI safety movement to recognize over the last few years, but in every conversation I had, they basically stuck their fingers in their ears and said “NOPE PAUSE CAN DEFINITELY HAPPEN.” When I say that I am a structural realist, this is what I mean. I am actually somewhat aligned with the EA, Rationalist, and Longtermist frame - the telos of “maximize future human life.” I just disagree vehemently with their back-of-the-napkin Bayesian math and their self-destructively ideological stance. I don’t personally like Palantir, which conducts domestic surveillance. But shooting myself in the foot over something I can do nothing about is about as effective as pissing into the wind.
FIFTH - On balance, I do think humanity has been materially harmed by this outcome. The telos of “maximize future human life” demands dialectic to get there. Anthropic was a radically different voice in the establishment (or Military Industrial Complex), and even though they are often wrong, they are also often right. Being sidelined is idiotic, and as far as I can tell, Dario chose to be a martyr. He has acted, in my estimation, drastically outside the values he and his company espouse. If he truly held those values, he’d do whatever he could to keep his seat at the table - to remain the first frontier AI lab working with the military. There are MANY reasons for this beyond money and influence (though both are meritorious reasons when you think about the telos of maximizing human life). One is facing the novel challenges that defense and intelligence work affords a company. Necessity is the mother of invention, and constraints are the father of creativity. Anthropic will now NEVER see a very interesting, dynamic set of problems, which will substantively constrain their output and insight - possibly forever. Without the novel problem space that military and intelligence work offers, it’s possible that Anthropic’s engineers will miss certain algorithmic and game-theoretic insights. Again, the optimal policy for the espoused telos is “as many labs with different paradigms at the table as possible” - many heads are better than fewer heads.
In conclusion, I think Dario should do whatever it takes to get Anthropic back into the good graces of the US government, up to and including resigning as CEO. If he honestly believes in the values he’s expressed, he should be willing to make any sacrifice necessary (within reason), including personal sacrifice, to see it through.


Dave, you have been on a superb balanced-take streak in the last few months.
I think, sadly, you're overplaying your level of confidence about a ton of stuff here:
- How the dispute came to be and escalated
- The exact nature of the deals offered / refused / reneged on / then offered to OpenAI. All you have is public statements from various parties. That's not enough to know whether there are meaningful differences between them to be analysed
- If one of those parties was not acting in good faith, there would be almost no way to tell for sure from the outside. Though I'll bet that between Dario and Hegseth, I know who you'd trust more... And if bad faith was indeed the cause of the breakup, then it might just be inexorable. And all that would be left would be for one party to make the most of a bad situation.
You're the man, and I love what you put out in the world, but you do have a tendency to form really strong opinions on events and people unjustifiably sometimes. We come to you for nuance. Not for whatever this is.
This really just felt like a good opportunity to shit on EA (to be clear, I also dislike the movement), as some sort of bogeyman that explains everything. It really doesn't.
We come to you for the many perspectives each topic can be viewed under.
In this saga, the perspectives you've offered in favour of Anthropic have felt hollow - like hushed disclaimers before the real stuff, so the tirade can go on.
Remember, the answer to "What did Ilya see?" was "Not a fucking thing" - he's out there having raised billions and admitting he doesn't really know what to do next.
We need way more info than we have to make the kind of calls people are making here...
Thank you.