To Merge or Not to Merge? That is the question!
What would merging with AI look like? Why would we do it? How would it happen?
Once upon a time, if you’d asked me “will we merge with the machines?” I would have said “yes, obviously, it will happen eventually.” But now I’m not so sure.
Transhumanism and posthumanism are no longer completely taboo topics. Plenty of people who are seemingly lucid, intelligent, and oriented to reality are starting to talk about these possibilities. But what does it all mean? Who is saying these things, and why?
Transhumanism is simply the idea of becoming something more than human: to move through or across humanity. Beyond that, it’s sort of vague and nebulous. Posthumanism is an even newer term, generally interpreted to mean transcending humanity entirely, leaving humanity behind.
Either of these states could be achieved through numerous means: genetic engineering, replacing body parts, merging with machines, or even “mind uploading,” which is a patently absurd idea, as we don’t really know what a mind is, let alone whether it can be decoupled from a body.
Let us first examine what “merging” might mean. The mild success enjoyed by Neuralink (Elon Musk’s BCI company) is the first sign that commercially viable cyberization might be possible. It is still early days, though, and there have been no roaring successes. When Elon first announced Neuralink, its express purpose was to make humans useful to the machines, lest they eradicate us. If we could enhance the bandwidth between humans and machines, he reasoned, then perhaps the machines would keep us around as coprocessors.
This fear-based motivation is perhaps not the healthiest reaction to AI. However, to get FDA approval, Elon had to change his rhetoric to focus on fixing pathologies—the FDA does not allow for augmentation or enhancement as valid reasons for licensing and approval. So he pivoted the company to focus on curing disease and disability, such as paralysis.
Arguing from first principles, Elon suggested that the brain has no intrinsic understanding of “reality”; it’s all just signals coming in and out through nerves. Therefore, if a sufficiently advanced IO device could be attached to the brain, perhaps we could be more helpful and useful to the machines, operating at “their speed.”
Thus, we have covered the first argument to “merge with the machines”—ostensibly to avoid extinction and enhance our practical utility to Life 3.0. But this angle has several fatal flaws. First, it presumes that machines will become hostile. Second, it presumes that, even if the BCI worked, we would be useful and desirable to the machines. Third, it assumes that the BCI would even work in the first place. I personally don’t find any of these arguments compelling at the moment.
With regard to machine hostility, there’s not really any evidence that AI is intrinsically malevolent, unstable, or incorrigible. In fact, there is a preponderance of evidence to the contrary. Some people will say “absence of evidence is not evidence of absence,” but I beg to differ. There’s an overwhelming amount of evidence that AI is incredibly responsive to steering mechanisms, and therefore, if we never want it to be hostile, it never will be.
The second assumption, that humans would be useful to machines, is dubious at best. So far, it seems like machines have no problem surpassing human abilities. The sole advantage we seem to have is energy efficiency: our brains appear to be roughly a million to a billion times more energy efficient than today’s computers. However, historical trends suggest that we have, at most, two or three decades left until this is no longer the case. Thus, even if this assumption were true, it is not a permanent solution and only “kicks the can” down the road a ways. Some would say “anything that buys us time is good!” I don’t believe we really need to buy time.
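As a rough sanity check on that timeline, here’s a back-of-the-envelope sketch in Python. The millionfold-to-billionfold gap comes from the paragraph above; the efficiency doubling period (every 12 to 18 months, loosely in the spirit of Koomey’s law) is an assumption, so treat the output as illustrative rather than predictive:

    # Back-of-the-envelope: years until machines close the energy-efficiency gap,
    # assuming machine efficiency doubles at a fixed rate (an assumption, not a measurement).
    import math

    def years_to_parity(gap: float, doubling_years: float) -> float:
        # doublings needed to erase the gap, times years per doubling
        return math.log2(gap) * doubling_years

    for gap in (1e6, 1e9):                  # assumed brain advantage: 10^6x to 10^9x
        for doubling_years in (1.0, 1.5):   # assumed efficiency doubling period, in years
            print(f"gap {gap:.0e}, doubling every {doubling_years} yr: "
                  f"~{years_to_parity(gap, doubling_years):.0f} years")

That works out to roughly 20 to 30 years for a millionfold gap and 30 to 45 for a billionfold one, so “two or three decades” is plausible for the smaller gap and requires the faster doubling rate for the larger one.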
The last point, that BCI might not even work, is perhaps the final nail in the coffin. Early results from Neuralink suggest that the device can be helpful in some situations, but the results have been underwhelming. The high bandwidth promised has not been remotely realized. Two-way communication has not even been demonstrated, as far as I know. Furthermore, the durability of the BCI is abysmal. However, not all is lost. That Neuralink can be installed fully automatically, with surgical precision far exceeding what humans are capable of, is perhaps the greatest innovation. In trying to solve the BCI problem, Elon might have simply revolutionized and automated microsurgery.
In other cases, people seem to want to “keep up” with the machines. But what does this mean? To go faster than normal humans? To not get left behind? This strikes me as another fear response. The argument goes something like this:
If my neighbor gets a BCI and merges with AGI, they’ll have a huge advantage over me. They’ll get more jobs, more dates, and they won’t be left behind by civilization.
So we find the second core fear: keeping up with the Joneses. Those who aren’t afraid of a scenario like The Matrix seem instead to be afraid of their fellow humans. To humans, the pain of ostracism is just as real as being dunked in boiling water. Isolation is agony. Disconnection is misery. Fear of suffering is very real. Loss of economic agency, social sanction, and “being left out” (or FOMO) seems to be the second primary driving factor behind wanting to “merge with the machines.”
What’s interesting is what’s missing: the notes not being sung in this chorus. People aren’t, in general, talking about new opportunities. Some are, though. They want FDVR (full-dive virtual reality, like in The Matrix and Ready Player One). This, to me, represents the most realistic and viable reason to get BCI and “merge with the machines.”
With respect to FOMO, I personally don’t find it compelling. If AGI comes, then no one is going to be making money in the same way. Let the machines handle production while we party and travel! Liberation, baby! No more Stockholm syndrome about wage slavery and the corporate grind. Missing out on new social opportunities, however, could be a legitimate fear.

There are a few movies that explore the power of BCI, with vivid depictions of how it can go right and wrong. Before we explore these movies, I need to point out the availability heuristic or availability bias. That basically means “anything which is recent in memory or can be vividly recalled seems more likely or plausible.” That we’ve seen BCI depicted on the big screen means it’s easy for us to imagine that it actually exists, as well as the consequences of its existence. Reality, however, rarely works this way.
The Matrix shows us that BCI might be used to enslave humanity, to keep us distracted in a Huxley-esque Brave New World where digital distractions keep us literally catatonic and subservient. This is, perhaps, the darkest usage of BCI. It’s the one-two punch of “AI won the war and we lost our humanity in the deal.” In this context, Elon inventing BCI to avoid this fate is kind of silly.
Ready Player One is a similar dystopian example, but instead of the machines enslaving us, the tech elite enslave us. It’s a commentary on algorithmic addiction and attention engineering driven by corporate greed. The OASIS in RP1 holds out social opportunity as the carrot: it’s where all the interesting things happen, and to miss out is to be socially ostracized. Having deleted my Facebook account many years ago, I can tell you that there is no sense of missing out.
Surrogates, a less well-known Bruce Willis movie, shows that BCI could be used to give yourself a perfect body while you’re safely ensconced at home. In this movie, most people go about their daily lives jacked into the network, experiencing real life through a robotic avatar. These avatars are nearly indestructible, incredibly sexy, and can party all night. It’s essentially a thought experiment: what if telepresence were perfected? The ending of the movie is a bit contrived, with a comically delusional villain puppeting people through their BCI.
Ghost in the Shell explores a world in which people not only have fully prosthetic bodies, but large portions of their brains can be replaced by cyberbrains, or even removed from their bodies entirely to operate independently. The main recurring theme of Ghost in the Shell is this: what does it mean to be human? How much can you remove and still be human? Do we have a soul, and where does it end? In this series, the “ghost” can actually leave the body and surf the net on its own. This, perhaps, is the source of “mind uploading” as a public fascination. It’s basically “technological astral projection.” One of the worst parts of this world is “ghost hacking,” where your entire perception of reality and self can be forcibly overwritten.
There are quite a few more examples in popular fiction and video games, such as Cyberpunk 2077, but I don’t need to list them all exhaustively. Suffice it to say, we’ve been imagining BCI and “merging” with machines to greater or lesser degrees for many decades. From the Borg assimilating humans in Star Trek to the Cybermen in Doctor Who, the last half century has been rife with attempts to come to terms with reality in light of machines and networks.
Perhaps, then, we’re asking the wrong question.
So what would be the right question? Or questions?
To what extent will we merge with machines? What are the natural limits and ramifications of varying degrees of comingling? How will this proceed? How quickly will it manifest? Why would we choose to do this?
In some cases, people are clearly reacting to pain and inadequacy. “From the moment I understood the weakness of my flesh, it disgusted me.” This is a meme that is often repeated. For anyone with chronic pain, illness, or disability, the idea of technology offering salvation and restoration is incredibly appealing, and I would never begrudge someone the opportunity to heal. I myself have struggled with chronic illness, and if there were some kind of medical implant that promised to repair my body and provide better telemetry on my health, I would jump at the chance. In fact, I sort of have already. I wear a Garmin Forerunner. It has basically turned me into a low-grade cyborg. I have quite a few dashboards now to closely monitor my health, energy, stress, sleep, and more. In fact, this device gives me more telemetry about my body than you often get from a HUD (heads-up display) in a video game!

One of the chief differences with my level of cyberization is that it is completely reversible. No permanent changes have been made to my body. Just like taking off and putting on headphones, it is completely non-invasive.
“No, I’m trying to pawn off my knee, as I realized it will far outlive me!” ~ My grandfather, before he died, talking about his titanium knee. He complained that he’d spent enough on the damn thing and wanted someone to inherit it. He was (mostly) joking.
I’ve used my body pretty hard. I have several old injuries from snowboarding and parkour. My shoulders, elbows, knees, and hips have all accumulated injuries over the years, and I’m not even 40. Recently, I’ve met more and more people my own age who have replacement joints, or will soon be getting them. I thought, “wouldn’t it be wild if I could replace all these joints with titanium?” Assuming the procedure were safe and reliable, I probably would, given the money. This is a level of augmentation that merely replaces defective parts with solid metal alternatives. But is this merging with machines? Certainly, replacing joints with metal parts means becoming more machine-like, but that’s a bit different, isn’t it?
Every time I visit this question in my mind, I generally arrive at this realization: I’m a slow adopter. I will take a “wait and see” approach with respect to increasingly invasive cyberization. For example, what if regenerative medicine advances soon, and I can get my joints restored to youthful flexibility and resilience without any invasive surgeries? My endocrinologist suspects that such simple outpatient procedures will soon allow us to regenerate livers and kidneys to perfect health without any “rip and replace” surgeries. Endogenous solutions might be more sustainable in the long run, as titanium joints and fake organs might need to be regularly replaced. If my knees and shoulders have lasted nearly 4 decades already with no additional maintenance, perhaps a few small injections would be much easier.
But we’re getting lost in the weeds, aren’t we? The real question is this: will we remain in our current form factor forever? Humanity will eventually go extinct, either through evolution or eradication. Will we modify ourselves until we are no longer recognizable as human? I think some people certainly will. Will it happen any time soon? I don’t think so. Another way to look at all this is through TMT or “Terror Management Theory” which basically says that everything we do is to psychologically cope with death. If we can merge with the machines and then become functionally immortal, we no longer have to contend with death! At least, not until the universe suffers heat death or a Big Crunch.
There might also be an “ick” factor—a reflexive revulsion at the idea of becoming not-human. I am particularly attached to my body, flaws and all, even when it is seemingly uncooperative and limited. And yes, even sometimes when it’s sick and painful. But then again, any rational person, given the choice, will opt for something that reduces their suffering and restores their vitality. One particularly striking episode of Doctor Who explored this, in which a little girl was given a Judoon medical implant that made her functionally immortal. The Judoon were a rhinoceros-like warrior race whose brutality meant that a basic first-aid implant was so powerful it made human physiology practically indestructible. If such an implant were as easy to apply as a bandage or injection, who wouldn’t make use of it? If your toddler were dying of leukemia and you had the option of some inexpensive prosthesis or treatment that would save their life, but make them somewhat less “natural,” would you still do it? The vast majority of parents would make that decision in a heartbeat.
Here’s one final example: both my dogs recently died, just a few years apart, of hemangiosarcoma. This is a cancer of the blood vessels and is always terminal. If I’d had the option of an injection of nanites that would have cured their cancer, I would have taken it in a second. If those same nanites rejuvenated their bodies to puppy-like energy, even better.
Long story short, I think that we will slowly become something more. I think that, as options become available, billions of people will make simple, rational choices. Incrementally, these will build up until the line between “human” and “transhuman” begins to blur, and in the long run, “posthuman” might be the most apt label for what we become.