OpenAI Strikes Back
Revenge of the Altman
Fifty-eight minutes. That’s how long it took for the most dramatic talent heist in recent AI history to complete its full arc. On Wednesday morning, Mira Murati posted on X that she had “parted ways” with Barret Zoph, her co-founder and CTO at Thinking Machines Lab. Less than an hour later, OpenAI’s CEO of Applications, Fidji Simo, announced that Zoph was coming home, along with two other former OpenAI researchers, the same researchers Murati had recruited away from OpenAI just months earlier when she left to start her own company.
You could be forgiven for thinking this is dysfunction. A $2 billion startup hemorrhaging its founding technical leadership before shipping a single major product. Allegations of “unethical conduct” from one side, firm denials from the other. A co-founder war playing out in real-time on social media while the entire AI industry watches. It certainly looks like chaos.
But here’s the thing. This chaos might be exactly what humanity needs.
The Drama
Let’s start with what actually happened, because the details are genuinely wild.
Thinking Machines Lab was supposed to be the serious challenger to OpenAI. Mira Murati spent years as OpenAI’s Chief Technology Officer, helping shepherd the development of GPT-4 and overseeing the company’s transformation from research lab to global phenomenon. When she left in late 2024, she took with her something more valuable than any codebase. She took credibility. Here was someone who had actually built the thing, now setting out to build something better.
The money followed. Andreessen Horowitz led a $2 billion seed round with participation from Accel, Nvidia, AMD, and Jane Street. The valuation reportedly climbed toward $10 billion. Murati recruited aggressively from her former employer, pulling researchers who knew the terrain, who had lived through the scaling wars, who understood what it actually took to train frontier models.
Barret Zoph was the crown jewel. A former VP of Research at OpenAI who had worked on vision models and core architecture, Zoph became Thinking Machines’ co-founder and CTO. Luke Metz, another OpenAI research veteran, joined as a co-founder. Sam Schoenholz, a senior researcher, came along too. The band was getting back together, just under a different label.
Then the band started breaking up.
Andrew Tulloch, another co-founder, left for Meta in late 2025. That was the first crack. But this week the dam broke entirely. According to reports, Zoph informed Murati on Monday that he was considering leaving. By Wednesday, she had fired him. The official line from Thinking Machines cited “unethical conduct,” with sources close to the company alleging he had shared confidential information with competitors. OpenAI’s response was swift and pointed. They didn’t share those concerns about Zoph. The hiring had been in the works for weeks.
Zoph, Metz, and Schoenholz all walked back through OpenAI’s doors. More departures are expected. In roughly 48 hours, Thinking Machines lost about half its founding technical leadership and roughly 12 percent of its total staff.
Murati moved quickly to stop the bleeding, appointing Soumith Chintala as the new CTO. Chintala co-created PyTorch at Meta and brings serious credibility. But the optics are brutal. The most anticipated AI startup of 2025 just watched its technical co-founders boomerang back to the very company they were supposed to be disrupting.
The System
Here’s where it gets interesting. Everything that just happened was completely legal, entirely predictable, and arguably inevitable given where these companies are located.
California has banned non-compete agreements since 1872. That’s not a typo. For over 150 years, the state has held that any contract restraining someone from engaging in a lawful profession is null and void. You can sign whatever document your employer puts in front of you. It means nothing. The moment you decide to walk out the door and join a competitor, there is literally no legal mechanism to stop you.
Recent legislation has strengthened these protections even further. SB 699, which took effect in 2024, extended California’s reach to void out-of-state non-competes that employers might try to enforce against California workers. AB 1076 now requires companies to actively notify employees that any non-compete clauses in their contracts are worthless. The state isn’t just permitting worker mobility. It’s aggressively protecting it.
This matters enormously for understanding the AI talent wars. OpenAI is headquartered in San Francisco. Thinking Machines is in the Bay Area. Anthropic, Google DeepMind’s significant presence, Meta’s AI research labs. All of them operate under California law. Which means the single most valuable asset in AI development, the researchers who actually know how to build these systems, can move between competitors with zero legal friction.
The result is a labor market unlike anything else in the technology industry. Recent data from ADP Research shows that boomerang employees now account for 35 percent of all new hires across industries. In tech specifically, the numbers are staggering. Sixty-eight percent of new hires in March 2025 were returning employees. The twelve-month average sits around 45 percent, up from a long-term average of roughly 30 percent dating back to 2018.
These aren’t anomalies. This is the system working as designed.
The competition for AI talent has reached historically unprecedented intensity. Sam Altman has described the current environment as the most intense talent war he has witnessed in his career. Meta has reportedly offered over $200 million in total compensation to recruit a single former Apple AI executive. Nine-figure signing bonuses have become almost routine for top researchers. Mark Zuckerberg maintains a personal list of researchers he wants to recruit and has, according to multiple reports, personally hand-delivered soup to targets he was trying to woo away from OpenAI.
The pool of people who can actually push the frontier on large language models is vanishingly small. Industry estimates suggest fewer than 1,000 researchers globally have the expertise to design and train cutting-edge systems. When you’re fishing in a pond that small, and when California law means anyone can leave at any time, the talent wars become existential.
This creates a particular irony in the Thinking Machines situation. The same legal framework that allowed Murati to leave OpenAI and recruit its researchers also allowed those researchers to walk right back. California’s pro-worker protections cut both ways. They enable startups to form by letting talent leave incumbents. They also enable incumbents to recapture talent by making retention agreements unenforceable.
Murati couldn’t have stopped Zoph from leaving even if she wanted to. No garden leave provisions, no cooling-off periods, no legal threats. He could badge out of Thinking Machines in the morning and badge into OpenAI in the afternoon. Which is essentially what happened.
The Bigger Picture
Now zoom out further. What looks like dysfunction at the company level starts to look very different at the ecosystem level.
There’s a word for workers who move from master to master, learning different techniques and absorbing different philosophies before eventually synthesizing everything they’ve learned. The word is journeyman. It comes from the French “journée,” meaning day, referring to skilled workers who were paid by the day as they traveled from workshop to workshop across medieval Europe.
The guild system that dominated European craftsmanship for centuries had a specific structure. Apprentices learned the basics under a single master. Journeymen then traveled, sometimes for years, working in different shops and different cities, learning how the craft was practiced elsewhere. Only after this period of wandering could they become masters themselves.
This wasn’t inefficiency. This was the knowledge distribution system of an entire civilization. A journeyman silversmith in fourteenth-century Florence might learn one technique from a master there, then travel to Venice and learn a completely different approach, then to Bruges for yet another perspective. When he finally opened his own shop, he carried a synthesis of traditions that no single workshop could have provided.
And crucially, no single workshop could hoard all the knowledge.
California’s non-compete ban has accidentally recreated something similar for the information age. When Barret Zoph spent years at Google, then moved to OpenAI, then co-founded Thinking Machines, then returned to OpenAI, each transition carried information. The legal protections around trade secrets prevented him from walking out with proprietary code or training data. But he carried something arguably more valuable. He carried tacit knowledge. How does Google think about scaling? What did OpenAI learn about reinforcement learning from human feedback that isn’t in any paper? What organizational mistakes did Thinking Machines make in its first year that a larger company should avoid?
All of that lived experience now sits in one person’s head, synthesized, and will inform whatever he works on next.
Multiply that by thousands of researchers circulating through dozens of companies. The knowledge of how to build powerful AI systems is becoming, slowly and chaotically, a collective human inheritance rather than any single company’s proprietary advantage.
This matters enormously for preventing dangerous concentration of power. Consider the alternative world where non-competes were enforceable everywhere. OpenAI in 2020 might have locked up its core team for five years. Anthropic might never have existed, since Dario and Daniela Amodei came from OpenAI specifically because they had concerns about the company’s direction. Google might have retained everyone who left for startups. Thinking Machines itself would never have been possible.
In that world, you would likely have one or two dominant players with insurmountable leads, and everyone else fighting over scraps. The winner of that race would have enormous, possibly dangerous, concentration of both capability and decision-making power over humanity’s future.
Instead, what we actually have is a genuinely multipolar landscape. OpenAI pushes frontiers. Anthropic pushes on safety while competing on capability. Google DeepMind brings massive resources and deep research traditions. Meta open-sources models and keeps everyone honest. Dozens of startups try heterodox approaches. Chinese labs operate somewhat independently.
No single entity controls the trajectory.
This pluralism functions as a kind of distributed immune system for AI development. When OpenAI makes a decision that some researchers find ethically questionable, those researchers can leave and join or start something with different values. That’s exactly what happened with Anthropic. When Google moves too slowly on productization, people leave to found startups that move faster. When startups can’t provide enough compute to do the research they want, people return to big labs that can.
The constant circulation prevents any single set of values from dominating. Someone who worked at Anthropic absorbs a certain way of thinking about AI safety, about constitutional AI, about the importance of being helpful and harmless and honest. If they then go to Meta, they bring that perspective into a different environment. Someone who worked at Google DeepMind has been immersed in a research culture that prizes theoretical rigor and peer-reviewed publication. Someone from a startup has learned scrappiness and speed.
As these people circulate, they cross-pollinate philosophies as much as techniques. The result is that no company becomes an ideological monoculture. OpenAI has people who came from safety-focused backgrounds. Anthropic has people who came from capability-focused backgrounds. Everyone is a little bit hybrid.
For a technology as consequential as artificial intelligence, this diversity might be one of the most important safeguards we have.
The Optimistic Read
So here’s how to think about the Thinking Machines situation, and the AI talent wars more broadly.
What looks like chaos at the individual company level is actually a well-functioning system for distributing knowledge and preventing dangerous concentration at the ecosystem level. Mira Murati left OpenAI, taking valuable experience with her, and tried to build something new. Some of that experiment’s lessons will now flow back to OpenAI through returning employees. OpenAI’s culture will absorb some of what Thinking Machines tried. Meanwhile, Murati and her remaining team have learned things that will inform whatever they do next.
The allegations and counter-allegations, the 58-minute turnaround, the drama playing out on social media. All of it is the surface froth on top of a deeper structural reality. Talent flows to where it can be most productive. Knowledge spreads as people move. No single organization can build a moat around the most important input in AI development, which is the people who know how to do the work.
The counterargument writes itself. Doesn’t all this churn slow things down? If knowledge keeps leaking, companies have less incentive to invest in fundamental research. If your best people can leave tomorrow, why invest in training them?
But the empirical evidence suggests the opposite. Silicon Valley, operating under California’s non-compete ban, has dramatically outperformed regions with enforceable non-competes in innovation output. The comparison with Boston’s Route 128 corridor is instructive. In the 1970s, Route 128 looked like it might become the center of the computing industry. It had MIT, Harvard, and a cluster of serious technology companies. But Massachusetts allowed non-competes. California didn’t. By the late 1980s, Silicon Valley had won decisively. Scholars have identified California’s legal framework as a key factor in that outcome.
The ecosystem benefits outweigh the individual firm costs. Yes, Google loses some researchers to startups. But Google also gains researchers from other companies, benefits from innovations made elsewhere, and operates in a more dynamic environment overall. There’s also a selection effect. The best people want to work where they have options. Top researchers choose the Bay Area partly because they know they won’t be locked in. Trying to hoard talent through legal mechanisms would drive away exactly the people you most want to retain.
Fifty-eight minutes. That’s how long it took for Thinking Machines to lose its CTO and for OpenAI to gain him back. It sounds like dysfunction. It sounds like the AI industry eating itself.
But step back far enough and it looks like something else entirely. It looks like a system designed to prevent any single organization from becoming the sole custodian of humanity’s future. The journeymen keep journeying. The knowledge keeps spreading. And we all retain optionality over what artificial intelligence becomes.
That might be worth a little drama.