Great article. The power of humanity is the imperfection of billions of different minds with different values and agendas that push and pull toward outcomes (good or bad). I don’t see how AI could be any different, and your hypothesis seems to be spot on with that theory.
You used the worst model and gave it a poorly worded puzzle... This is called motivated reasoning. You want it to be wrong, so you sabotaged it while ignoring the benchmarks and utility of the models.
We will have fulfilled our destiny as a boot-up species for the next meta-species. It is our best destiny. We are an evolutionary pathway. Period.
One domain that is running parallel but feeding into this is AI in cybersecurity. I have an AI-driven agent for offensive red teaming coming to market in 2025, and I can safely say THAT will also affect the increasing capabilities of these systems. You will never get a tighter feedback loop than AI systems battling for dominance at increasing speed. That’s going to create insane evolutionary pressure, each trying to out-attack/out-defend the other.
Things are about to get real hot on the dance floor.
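Purely as an illustration of that attack/defend feedback loop, here is a minimal toy sketch (the skill numbers and update rule are made up for the example, not taken from any real red-team product): whichever side loses a round is pressured to adapt, which is the co-evolutionary dynamic described above.

```python
import random

def play_round(attack_skill: float, defense_skill: float) -> bool:
    """Return True if the attack succeeds this round (deliberately noisy toy model)."""
    return random.random() < attack_skill / (attack_skill + defense_skill)

attack_skill, defense_skill = 1.0, 1.0
for round_number in range(10):
    if play_round(attack_skill, defense_skill):
        defense_skill *= 1.1   # a successful breach pressures the defender to adapt
    else:
        attack_skill *= 1.1    # a blocked attack pressures the attacker to adapt
    print(f"round {round_number}: attack={attack_skill:.2f} defense={defense_skill:.2f}")
```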
To help one realize the enormity of the collective advance of GRIN technologies, here are two big questions to ponder:
1) Will humanity continue on as one species, or will we diverge into multiple species?
2) Will humanity continue on as multiple minds, or will we converge into one collective, telepathic mind?
BTW: The GRIN acronym stands for Genetics, Robotics, Information technology, and Nanotechnology.
Buckle up my friends 🙏
How should regular (but AI inclined) people use this to our benefit?
Keep asking that question. Enter into conversation with other humans and frontier models.
So does that mean all the solvable problems like cancer, fusion, quantum mechanics, gravity....war...will be solved within 2-4 years? (Makes me want to eat right and exercise more) Hurry Sundown.
War is not solved by cognitive labor, and some of those problems are not constrained entirely by cognitive labor either: it takes time to do experiments, and infrastructure and power cost money.
Agreed. But an ASI that is smarter than ALL humans would have some serious weight/leverage on solving anything that could be solved and was defined as a "problem" (and war is not a non-problem, I'm thinking).
I feel like I’ve been talking to a wall, but I’ll say it again: we are not ready. I need people to understand this somehow, but so far I’ve had little success. I feel like there’s been some sort of change, though not enough to make a true difference. But thanks, Dave, for the insights.
Yes, anonymous person on the internet, lashing out with these pedestrian assumptions is a great way to have constructive dialog. Go away.
Endgame prep recommendations?
1. Have fun and enjoy the ride
2. Get into cybersecurity
3. Help companies integrate and deploy AI
Why specifically cyber security?
You’ll always want a human in that loop
Won't AI just do cyber security better than any human?
In many respects yes. But humans cannot be hacked or hit with EMP.