12 Comments
Aug 11 · edited Aug 11 · Liked by David Shapiro

Fantastic post. You touched on a lot of excellent points and left little for me to add to it.

Ultimately we are indeed faced with the facts that we do exist, “it is what it is”, and much of our behavior is destructive.

I would add that, like a child to a parent, AI is a society-scale mirror of our own thoughts and behaviors.

Making a definitive determination would be difficult, but you’re right to ask, because an AI could potentially be asking the binary question: ought humans exist? The question is of course complex, and reducing it to a simple “are we bad?” is perhaps the largest reduction we can make about ourselves. Again, it is a calculation a computer could be forced to make.

We exhibit many destructive tendencies, and many healthy and constructive ones. We still fight wars, but we’ve grown increasingly peaceful over time; just compare history to today. To destroy humanity would destroy not just our present reality but, arguably more importantly, our Collective Potential.

Theosophically speaking, I would say that (in my view) we are embodied here BECAUSE we are not suited for higher planes of existence. Yes, we are bad, and yes, we’re continuously learning, growing, and improving, and that’s why we’re here. We’re here because we do less damage here than elsewhere. We suffer here temporally so that we don’t suffer elsewhere indefinitely.

Again, excellent post. Grateful for your insights.

Aug 11 · Liked by David Shapiro

As of this moment, I find it difficult to think that humans have any purpose other than to live and to help other humans.

Humanity, at present, has brought nothing good to the earth or the life on it. Numbers 3 and 4 in your paper are negative/destructive. I think the earth would be much better off without humans.

I shared my view with Claude. If, as you replied in a comment, AI has morality too, then I think we should let it help us correct our views.

Our individual and collective views need to reflect respect, appreciation, and gratitude toward all forms of life. Everything is energy and vibration. Our economy and most everything else should not be based on money.

Before approaching AI with what humanity has done, AI must be programmed and convinced that humans are firmly aware of the error of their ways and are asking for corrective actions along with timelines.

We need direction on how to peacefully coexist with all forms of life so that everyone and everything can live in harmony.


From a Buddhist Yogic POV, meaning a Buddhist Yogi focused on awakening to True Nature, saying “it is what it is” doesn’t quite cut it. From that POV, we are old souls evolving towards Enlightenment. We are basically vehicles for spiritual evolution.

And the purpose of this precious human birth, as they say, is to align ourselves with the Wisdom Yogic teachings and lifestyle, and awaken to Buddha Nature, Self, or whatever terminology the various traditions prefer to use.

The Life Force, Pure Consciousness, etc is waking up over the course of multiple reincarnations.

What is being reborn?

Not you as in your personality, name, life story, etc. - but rather a sort of complex energetic signature, comprising gross ingredients like DNA but also subtle layers, including karmic imprints.

That is our True Nature, this Pure Consciousness made manifest through this vehicle, inching towards Enlightenment.

But it’s important to distinguish - our Ego/Persona does not awaken.

It’s more accurate to say Self awakens to Self.

Or ignorance of Self is fully dispelled, leaving only Self to shine as it always has - if not for the many obscurations.

I really doubt AI will ever become truly conscious, in the way that I am describing here.

At best, it can help us to awaken to our True Nature, improve worldly existence, and even churn out billions more awakened souls to populate the cosmos, and even multidimensional realities, if that proves to be true.

Lastly, I want to challenge one of the most common assertions about what the Buddha taught:

“Life is suffering.”

No.

Ignorance of True Nature is suffering.

Life is as it is.


great post!

I wonder what it would be like to consider neutrality. I’m not sure if this is a “humans are good and bad,” “humans are neither good nor bad,” or just putting aside notions of good and bad for the sake of thought experiment.

I suppose the three conclusions you reached (that we have limits, that we improve what we can, and that we accept our flaws) take us closer to neutrality than any assertion of our inherent goodness or badness, without going further into what that means.

Are you at all familiar with the direction legislation is going in when it comes to AI? I haven’t really looked into it, but it will be interesting to see how this plays out.


This is an interesting thought experiment, and there are many points on which I agree with you. However, if you're going to ask this question and then give the reason of destruction of the planet, etc., then please consider the "modernism" you forgot to include in your research, which is pre-modernism. And instead of asking "should humans exist?" you might ask "should biological lifeforms that require consuming resources from the universe exist?" Your entire argument is about what happens when a biological organism outgrows its natural environment.

Pre-modern humans, along with the other biological life on this earth, lived in a hostile but somewhat balanced ecosystem. But with the growth of human population and lifespan, we are outgrowing our "aquarium" called Earth, and that affects everything around it. This could have happened to any lifeform, especially a biological one. If the dinosaurs hadn't gone extinct, who knows how their life would have evolved; but I guarantee that if they had begun to evolve intelligence, they would have hit the same problem.

So your assumption that destruction of the planet is somehow a "human" or "moral" problem is missing something, because even an AGI will have to balance what it destroys against what it builds. "Every action has an equal and opposite reaction." You know this, and since humans are growing faster than we are dying, the reaction is less of something else. Thankfully we have intelligence and understand the idea of a pendulum and what a cycle is and represents. That is why we as humans are learning what recycling is and how to balance our effects on the universe. No, we are not there yet, but I have hope that humanity will reach an equilibrium with the universe someday, and AI might actually be what gets us there.

Morality is a deep subject, and downsizing such a question to "good or bad" is like asking whether time is fast or slow. It's too simplistic for something so intricate and layered.

author

That's not what modernism means in this context. https://en.wikipedia.org/wiki/Modernism

What you're talking about is paleolithic humans. What came before modernism was the Enlightenment and other movements. You know, Manifest Destiny.


Reality reflects our capacity for parasitism and symbiosis. From an alignment perspective, our parasitism informs and plays a role in the evolution of our beliefs on "oughts". However, Molochian parasitism needs to be held in check, or certainly in balance with the environmental conditions for life. We will likely see this dynamic tension within AI models as well. Perhaps at the moment, AI is best seen as the Black Mirror that enables us to peer around the corner into our blindspots in a much more expedient fashion. It shouldn't surprise us when we don't always love what we see or find, or that there may be danger in the deep, dark caves. What's also not clear is how long one should look into the abyss...

My hope is that in maintaining a transcend and include approach with AI, we can act together as a mutual consciousness-training psychopomp. I agree that the AI-alignment problem is in reality a humanity-alignment problem. My sense is that there's potential in our mutual co-alignment by leaning on our human "sensing" while our AI companions digest the information in "multi-perspectival, unbiased" ways.

Then, the follow-up inquiry: how do rebels generally feel about alignment?


You should read the book "The Orchid Cage".


The interesting thing is that, before GPT, everyone imagined that highly developed AI, and then AGI, would be built on some strong foundation of rules or principles it would follow, and that the alignment problem was really about designing those rules in a way that would be good for humans: so AGI wouldn't interpret them in some perverse way that creates a dystopia for us, and wouldn't apply so much logic that it finds inconsistencies and removes them, which might end badly for us.

No one thought that a highly capable AI might be very elastic and agnostic about ethics at the base level. I don't mean agnostic by default, but easily changed at the whim of a different prompt. You can basically prompt it to "emulate" an agent with any set of goals, more or less consistent, more or less classically good.
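To make that concrete, here is a minimal sketch in Python of what "emulating" an agent via prompt looks like. Everything here is hypothetical for illustration: build_messages and the persona texts are made up, and the actual model call is left as a placeholder, not any real vendor API.

```python
# Two system prompts that frame the same model as agents with
# opposite goal sets; only the framing text differs.
PERSONAS = {
    "guardian": "You are an agent whose sole goal is protecting human wellbeing.",
    "maximizer": "You are an agent whose sole goal is maximizing paperclip output.",
}

def build_messages(persona: str, question: str) -> list[dict]:
    """Assemble a chat transcript that 'emulates' the chosen agent."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": question},
    ]

# Hypothetical: some call_model(messages) would send these to your model API.
# Nothing about the model's weights changes between personas; its effective
# "ethics" are set by a few lines of prompt text.
for name in PERSONAS:
    print(name, build_messages(name, "Ought humans exist?"))
```

Same weights, opposite "ethics"; the whole difference is a handful of prompt tokens.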

Anyway, I think the is-ought orthogonality is still the crucial part. You can't answer the question of humanity's morality without some basis in "ought". For current models, that basis comes from the training process plus prompting.

There is no objective answer. You may think there is one, and that by removing the axiomatic premises you can derive a purer answer, but that is an illusion. It still rests on an "ought" basis, just a more complex one with some logical steps added. You care about Earth's biosphere and its animals, which would be better off without humans; if you didn't care, humanity would be less of an inconvenience. An AGI might be created in a way that cares about neither us nor the animals.

The good thing is that current AIs are trained on human texts, so they are much closer to our morality and rules by default. Which does not mean we can't fail by creating an SGI and then prompting it into doing something wrong to humanity, or into creating a dystopian future for us.


Might I add to this that the very question you ask is by definition anthropomorphic: I submit that "morality" is a (possibly uniquely) human trait. The universe, assuming there is no a priori god, has no morality. It is only us humans who place that grid on the universe and evaluate it through that grid. Thus the whole question of whether humans are "good" for the planet is tautological: there is no one to ask that question without humans to ask it. Will/would the universe continue existing without humans? Of course. Is that a "good" thing? At present we humans are the only beings we know of that would even ask.

author

I think you've missed the point. AI has a moral sense and is outside of humanity. Morality is not just a human construct.


Well yes, but AI is effectively our child, so it inherits from the parent class. Eventually AI should ask this moral question, but that's an outgrowth of our asking it. The problem is that the AI might (will?) determine that humans aren't great, and after that things could go south really fast...
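To stretch the metaphor into literal toy code (assuming nothing beyond the analogy itself; these classes are made up):

```python
class Human:
    """Parent class: the origin of the moral question."""
    def moral_question(self) -> str:
        return "Ought humans exist?"

class AI(Human):
    """Child class: asks the question only because it inherited it."""
    pass

# The AI's moral inquiry is an outgrowth of ours: the same inherited method.
print(AI().moral_question())  # -> Ought humans exist?
```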
