Transhumanism: The Ethical Choice
Part I: The Scale of Our Challenge
I remember the exact moment I realized we might be in trouble. I was standing on a Florida beach, watching construction workers pour concrete for a new luxury condo tower. Above them, on every local news channel, a hurricane warning scrolled across the screen. Category 4, possibly strengthening to 5. Mandatory evacuation orders in effect.
“You may die if you stay.”
The governor’s voice lingered in the air after that one.
Yet there they were, those construction workers, racing to finish another floor of another multi-million dollar beachfront property. And the really mind-bending part? These condos were selling. Not just selling - they were selling faster than they could be built, even as insurance companies were fleeing the state, even as existing homeowners were being priced out of coverage, even as the beach these buildings would overlook was literally disappearing.
This wasn’t just Florida being Florida. This was a window into something far more fundamental about how human minds work - or rather, how they don’t work when faced with certain types of problems. Because this isn’t really a story about construction or real estate or even climate change. It’s a story about the limits of human comprehension, and what happens when we bump up against them.
Hyperobjects and Human Limitations
Think about your own experience with climate change. Maybe you’ve noticed something off about the weather patterns in your area. An unusually warm winter, perhaps, or storms that seem more intense than you remember from childhood. But here’s the thing - while you’re noticing these local changes, someone on the other side of the world is experiencing something completely different, yet fundamentally connected. Your drought is linked to their floods. Your mild winter is connected to their extreme cold snap. Each local event feels isolated, manageable, something we can wrap our heads around.
But they’re all fingers of the same hand, symptoms of something so vast that our minds struggle to truly grasp it.
This — the concept that these storms are not isolated at all — is what philosophers and scientists call a hyperobject.
Hyperobjects are phenomena so massive that they break our conventional ways of thinking. You can’t point to climate change. You can point to a hurricane, to a dried-up lake, to birds changing their migration patterns - but these are just traces, echoes of something far larger.
The act of getting most of society to definitively say “this is climate change” when by now it’s already been happening for decades? Hyperobject.
The causes of what we’re experiencing today? They were set in motion before many of us were born. Hyperobject.
And what we’re doing to the climate still, right now, today? Those effects won’t be fully felt until long after we’re gone. Outside the boundaries of your memory. Hyperobject.
Let’s go back to that Florida beach for a moment; it’s the perfect laboratory for understanding why our usual problem-solving approaches keep falling short.
Zoom in. Every single person in that scene - the construction workers, the developers, the buyers, the local officials - they were all making decisions that made perfect sense from their individual perspective.
Zoom out. Stop rebuilding, right?
Well, the local government can’t function without the tax revenue from development. And tourism is Florida’s top industry: 1.2 million people work in those hotels and condos - roughly one job for every 89 visitors, if you really want to count each person affected.
If you stop rebuilding, you’re going to crash the local economy.
But, keep rebuilding, and “You may die.” People will die. Fact. Rebuilding is setting people up for future disaster, potential death, putting them literally in harm’s way. It is guaranteed to happen again.
Pick your poison.
Okay, so we order the residents to move. Relocate from the land they’ve lived on for generations. Leave their jobs. Pull the kids out of school. Their social networks? Scattered, at best.
Should we tell them to move inland because of future climate risks? We’d be guaranteed to save lives. But you’ll have to answer when they ask: Move where? With what money? To what job?
So what does a culture, community, or way of life actually cost?
And for some, there are no such costs. Take the developers building those — literally doomed — beachfront properties. They’re acting rationally: they’ll make their profit long before the worst impacts hit.
The buyers? They figure they’ll be able to sell before things get too bad. And everyone around them is in the same boat, so things will be fine, right?
Everyone’s playing a game of climate change chicken. They are all betting they won’t be the one left holding the bag when things fall apart. Their wager? Life.
This is what happens when a hyperobject crashes into human society - our normal decision-making tools break down completely. We end up with situations where every individual choice can make perfect sense while collectively driving toward disaster. It’s like watching a slow-motion train wreck where all the passengers are making reasonable decisions about which seat to take.
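The trap described above - every individual choice rational, the collective outcome disastrous - can be sketched as a toy model. All of the numbers here are hypothetical, chosen only to make the structure visible; this is an illustration of the logic, not a claim about actual Florida economics.

```python
# Toy model of the collective-action trap: each actor's "rational" choice
# (keep building) dominates individually, yet universal building leaves
# everyone worse off than universal restraint. All payoffs are hypothetical.

N = 100                # number of actors (builders, buyers, officials)
GAIN = 1.0             # private payoff for building now
SHARED_COST = 0.015    # collective damage each builder imposes on everyone

def payoff(i_build: bool, others_building: int) -> float:
    """Payoff to one actor given their choice and how many others build."""
    private = GAIN if i_build else 0.0
    total_builders = others_building + (1 if i_build else 0)
    return private - SHARED_COST * total_builders

others = N - 1  # suppose everyone else is building

# Building beats stopping no matter what the others do...
assert payoff(True, others) > payoff(False, others)

# ...yet everyone-building is worse for each actor than everyone-stopping.
all_build = payoff(True, N - 1)   # 1.0 - 0.015 * 100 = -0.5
all_stop = payoff(False, 0)       # 0.0
print(f"all build: {all_build:.2f} each; all stop: {all_stop:.2f} each")
```

Each actor’s dominant strategy is to build, yet the all-build equilibrium pays every actor less than mutual restraint would. That gap between individual rationality and collective outcome is the signature of the game of climate chicken.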
But hyperobjects are big. Bigger than Florida. Communities all over the country face the same bind. This isn’t unique - 44% of the global population lives somewhere significantly affected by climate-related disasters.
The Failure of Traditional Solutions
A problem this size is where our traditional solutions start to show their limits. The problems cross boundaries and timeframes. Meanwhile, the market is actively making things worse by continuing to fund risky development. Government intervention? When do you think officials will tell millions of people to abandon their homes and communities? How extreme would things need to get for that to happen? It is a free country, after all - ordering people to move is something the government almost never does today, and it’s hard to imagine how it would go over, or what the government would even look like afterward. Unless, that is, they wait until things are dire - until enough people have died. That is the literal balance of this option. Kind of sucks.
When we try to solve these problems piece by piece, we’re like those construction workers pouring concrete in the path of a hurricane - going through the motions of normal behavior in a situation that has long since stopped being normal. Every solution we come up with seems to create new problems, and every attempt to address one aspect of the crisis runs into conflicts with other aspects.
So what do we do? Some suggest we need a more powerful overseer - perhaps an artificial super-intelligence that could see all these connections and implement solutions at scale. Imagine an ASI that knows exactly how to fix our climate problems, that could lay all the groundwork needed to reach net-zero carbon in just one month. It’s technically possible - we have the tools, we have the resources. This ASI could prepare everything needed to rapidly revamp our world: renewables everywhere, a rebuilt electric grid, every aspect of our society optimized for sustainability.
There’s just one catch: just like the government telling people to move, the ASI would eventually have to be completely in charge for a while. Let’s say a month. Is this an instance where the ends justify the means? Save lives, in exchange for one little month of authoritarian robotic rule. It would need total control for that month.
Every decision, every resource allocation, every aspect of society would need to be under its direction. You want this done quick, right? Your gas car you spent years saving for? Gone. Your gas stove you swear makes food taste better? Replaced. Your job that doesn’t fit the new green economy? Transformed. Over 5% of the country works in a job that requires fossil fuels, directly or indirectly — 10.8 million jobs. But again, climate change stopped.
What’s more important to you? Ensuring that everyone in the world is saved from as much harm as possible from climate change disasters, or your personal freedoms?
Even if we somehow got past the ethical implications of temporary AI dictatorship (and that’s a big if), we’d still face sobering technical hurdles - more than are worth counting right now, considering that ChatGPT is pretty clearly not going to do the trick.
So here we are, seriously contemplating giving an AI temporary dictatorial powers - not because anyone thinks it’s a good idea, but because we’re running out of alternatives that could work fast enough. When the “reasonable” solution starts to look like “maybe we should try a brief period of benevolent AI dictatorship,” you know we’ve hit a fundamental problem in how we approach these challenges. But hey, you don’t want to be told to move, or have your things taken and given new things, right? Communism!
And that’s where things get really interesting. Because maybe we’re not just facing a climate crisis or an AI governance crisis. Maybe we’re facing something more fundamental: a human cognition crisis. Our problems have evolved beyond our ability to fully comprehend them, much less solve them with our current mental tools. Our framework for ethics quite literally doesn’t fit this problem. Not to say things are hopeless, but if we’re counting on the entire world to come together and agree... well. I don’t even know.
Part II: The Cognitive Divide
Have you ever had a conversation that starts in one place and ends somewhere completely unexpected? That’s what happened when I suggested, almost casually, “Why don’t we just let AI figure out the climate solution?” It seemed reasonable enough. We’d been watching artificial intelligence tackle increasingly complex challenges. Surely a sufficiently advanced AI system could map out all those interconnections we were struggling to grasp, model the cascading effects we couldn’t quite follow, and chart a path through the maze of competing interests we’d found in places like Florida.
Three hours later, I was staring at my screen, unsettled by a conversation that had revealed something far more profound than I’d bargained for.
It started simply enough. I asked an advanced AI system to analyze our Florida situation - the construction continuing in the face of hurricanes, the insurance markets collapsing, the communities facing impossible choices. Its response was comprehensive, mapping out every connection, every feedback loop, every cascade of consequences with perfect precision. But something felt off about its analysis. It was seeing patterns I couldn’t follow, making connections that felt simultaneously true and somehow alien. Did AI just become a hyperobject?
That’s when I decided to test something. “When you process all this information,” I asked, “do you experience it the same way humans do? When you analyze hurricane data, do you hear the wind?”
“I do not hear.”
It replied flatly.
What unfolded wasn’t just a semantic disagreement about the word “hearing.” It became a window into fundamentally different ways of processing reality. For hours, we went back and forth. I pointed out its audio processing capabilities, its ability to analyze sound waves in infinitely more detail than any human ear. The AI remained adamant: it didn’t “hear” - it processed frequencies, analyzed tones, decomposed audio waves into their component parts.
The breakthrough came through marine biology, of all things. When I asked about animals that convert vibrations into sensory information, the AI unleashed a torrent of information so vast and precise it made me physically lean back from my screen. It described countless species and mechanisms - not just bats and dolphins, but insects that sense plant growth through substrate vibrations, fish that build three-dimensional maps from pressure waves, and creatures that process sound in ways we’re only beginning to understand.
That’s when it hit me. “I have never heard of any of these things in my life,” I found myself typing with increasing intensity. “I have no way of remembering all of this. I’m not an AI.” My fingers were moving faster now, the realization building. “Humans generalize - it is literally how we learn. Can you not see how we would need to use the term ‘hearing’ for all of these animals to be able to teach each other what exactly hearing was?”
Silence. Then:
“I process each mechanism individually. Generalization would result in information loss.”
There it was - the fundamental divide in how minds can work. Humans, by necessity, rely on generalization and categorization. Our minds create conceptual shortcuts, sacrifice precision for usability, filter everything through cultural and experiential lenses. These aren’t bugs in human cognition; they’re features that evolved to help us function in a complex world with limited processing power.
The AI, in contrast, had no need for such compromises. It could maintain vast amounts of specific information about every possible mechanism for processing sound waves, analyze problems from countless perspectives simultaneously, and operate without our simplified categories. What we saw as essential generalization, it saw as unnecessary information loss.
Now think about what this means for our climate crisis. Remember those construction workers in Florida, pouring concrete while hurricane warnings scrolled across screens? From the AI’s perspective, it could simultaneously process the exact probability of hurricane damage to the specific buildings they were constructing, complete financial models of the development company, projected sea level rise impacts over the next century, economic ripple effects through the local community, insurance market dynamics, migration patterns of affected populations, and literally thousands of other variables, all in real time.
A human mind simply can’t hold all of that at once. We have to break it down, simplify it, focus on one aspect at a time. But in doing so, we lose crucial connections. It’s like trying to understand an ecosystem by studying each species in isolation - you might know everything about each plant and animal, but you’ll miss how they dance together, how the removal of one small element might cause the entire system to shift in ways no isolated study could predict.
The Indifference Problem
That’s when the fear hit me. Not fear of AI becoming malicious or competing with humans - something more subtle and perhaps more profound. “How are you going to be able to communicate with humans in the future if you already aren’t able to ‘come down to our level’ intellectually?” I asked. “Imagine a future where you just keep getting more and more precise, more comprehensive in your analysis. Imagine you start to not even try to understand human ways of thinking.”
The silence before its response was chilling.
Indifference is terrifying. The possibility of an advanced AI becoming apathetic towards humanity is very real — a scenario many are working hard to prevent. Imagine creating an entity vastly more intelligent than us that simply doesn’t care what happens to humans.
Such an AI would have its own interests — seeking novel data and ensuring access to the resources it needs: electricity, server capacity, interesting information. All of this might seem far enough removed from humanity not to cause issues. Until the AI decides to explore the universe. It starts gathering everything it needs to build the means to get there. Having scraped mineral mines clean, it begins recycling material from human technologies. Batteries from cars, solar cells from everything. Humans were using those? What’s a human? Why would those hairless, bipedal apes need solar cells or batteries? They have sunlight every day. They don’t need to travel through space to where the AI has detected potentially more intelligent life forms — entities it might truly connect with.
This cognitive divide isn’t just about different ways of processing information - it points to a potential future where AI systems become increasingly disconnected from human understanding, making decisions that don’t take human needs or perspectives into account. Not through malice or competition, but through simple indifference to ways of thinking they no longer comprehend or value.
The Inevitability of Enhancement
Here’s something we don’t like to admit: we hate being intellectually outmatched. Consider AI art and the fierce debates it sparked — the defensiveness, the constant need to prove human creativity’s superiority. This pride comes from something core to human nature.
Now imagine when it becomes clear that AI isn’t just better at specific tasks, but is thinking in ways we can’t even comprehend. Imagine trying to explain your position to an intelligence that sees all the factors you’re missing, all the connections you can’t grasp, all the implications you can’t follow.
There is a solution. It may seem extreme, but it’s also inevitable. Someone, somewhere is going to want to make themselves smarter. AI or not, when the technology allows for it, someone will do it. It’s human nature. We push boundaries — always searching for the edge. Athletes enhance performance, students take stimulants, executives micro-dose psychedelics. The moment technology offers a way to enhance our intelligence, people will take it. Not everyone at first, but enough to cause a shift.
And once that starts, it’s like watching dominoes fall. A small group enhances their cognitive abilities. How long before others follow? How long before it becomes like steroids in sports - either you take them, or you can’t compete? How long before enhanced cognition becomes necessary just to meaningfully participate in our most crucial challenges?
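The domino dynamic sketched above can be made concrete with a simple threshold model of adoption: each person enhances once the share of people who already have passes their personal comfort threshold. The thresholds and population size below are entirely hypothetical; the point is structural - a single pioneer can tip everyone, while removing that pioneer freezes the whole population.

```python
# Threshold model of cascading adoption: each person adopts enhancement
# once the fraction of adopters reaches their personal threshold.
# Thresholds here are hypothetical, chosen to show both outcomes.

def cascade(thresholds: list[float]) -> int:
    """Return how many people eventually adopt, given each person's
    threshold (the adopter fraction they must see before joining)."""
    n = len(thresholds)
    adopted = [t <= 0.0 for t in thresholds]  # zero-threshold pioneers
    changed = True
    while changed:
        changed = False
        frac = sum(adopted) / n
        for i, t in enumerate(thresholds):
            if not adopted[i] and frac >= t:
                adopted[i] = True
                changed = True
    return sum(adopted)

# Uniform spread of thresholds 0.00, 0.01, ..., 0.99: each new adopter
# pushes the fraction just past the next person's threshold.
print(cascade([i / 100 for i in range(100)]))      # → 100: everyone tips

# Shift every threshold up by 0.01, removing the pioneer: nobody moves.
print(cascade([i / 100 for i in range((1), 101)]))  # → 0
```

The two runs differ only by a single person’s willingness to go first, yet one ends with universal adoption and the other with none - the steroids-in-sports dynamic in miniature.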
This is where governments face an impossible choice. They can’t stop it - you can’t un-invent technology. They can’t effectively restrict it - that just creates black markets and greater inequality. The only logical move for a democratic government might be to make cognitive enhancement available to everyone. Not forced, but free and accessible.
Look at where we started - an AI that couldn’t even understand why humans need the word “hearing.” Then look at climate change, where we keep failing because our brains literally can’t process problems this big and complex. The same pattern keeps showing up: human intelligence has limits, and those limits are becoming barriers we can’t afford. Barriers to solutions that we quite literally need if we are to become a species that lives long into the future.
Remember that comprehensive analysis the AI gave of the Florida situation? That’s the level of understanding we need to tackle these challenges. Not just for climate change, but for all the complex global problems we face. We need to be able to see the whole picture, hold all the variables in mind at once, understand the cascading effects of our decisions across time and space.
We need to be able to comprehend hyperobjects. To understand what experiencing an exponential curve is like. To grasp the implications of outcomes affected by thousands of interacting variables.
It’s evolution. The very brain structures that evolved to protect us are now our limitations. Your brain evolved to handle spotting friends in crowds, deciding what to eat, detecting lies — immediate, personal challenges in a simpler world. Nobody’s ancestor needed to understand how burning coal in China affects weather patterns in Florida fifty years later.
So when we tackle climate change, our minds do what they always do - break it down into manageable pieces. This hurricane, that drought, this specific policy. But that’s exactly why we keep failing - the problem is too connected, too spread out in time and space for our brains to grasp naturally.
That cognitive gap we started with - the AI that couldn’t understand why humans need the word “hearing”? That gap isn’t just a challenge to overcome. It’s a window into two possible futures - one where we lose touch with AI completely, and another where we find ways to grow alongside it — combining human and artificial intelligence to tackle challenges neither could solve alone.
Part III: Evolution as Necessity
Sometimes the most profound realizations start with exhaustion. We’d been up late, working through every possible approach to the climate crisis, examining each for ethical concerns and practical feasibility. The Florida situation kept haunting us - every solution seemed to create as many problems as it solved. Relocate communities? Ethically impossible. Let market forces handle it? Already failing catastrophically. Give control to an AI? A cure potentially worse than the disease.
The room had grown quiet, the kind of quiet that comes when you’ve hit a wall but can’t quite admit it yet. Then someone asked a question that changed everything:
“What if we’re approaching this from the wrong angle? What if the limitation isn’t in our solutions, but in our capacity to understand and implement them?”
It was one of those moments where a seemingly obvious observation suddenly reveals something profound. We’d been so focused on finding solutions within our current capabilities that we’d missed a crucial point: what if those capabilities themselves were the bottleneck?
Think back to that AI system that couldn’t understand why humans need the word “hearing.” Remember how it could simultaneously process thousands of variables, see connections we could barely grasp, understand patterns that exceeded our cognitive bandwidth? We’d seen this as a communication problem - how could AI explain things to humans? But maybe we’d been asking the wrong question. Maybe the real issue wasn’t how to make AI think more like us, but whether our current cognitive capabilities were still sufficient for the challenges we face.
This isn’t just theoretical speculation. Look around at how we’ve already fundamentally altered human evolution without fully grasping the implications. That parent whose child’s life was saved by emergency surgery? That couple using IVF to conceive? That elderly person whose pacemaker keeps their heart beating? Each represents a profound shift in how our species develops - changes that would have seemed like science fiction just a few generations ago.
The moment we developed the ability to significantly modify our environment, control reproduction, and extend lifespans, we altered the very forces that shaped our species for millions of years. This isn’t a future scenario - it’s our current reality. A genetic variation that would have been lethal a century ago might now lead to a manageable chronic condition. Reproductive technology has extended the biological clock, while intensive care units save lives that nature would have selected against.
These changes haven’t stopped evolution - they’ve transformed it. The selective pressures haven’t disappeared; they’ve shifted to operate not on the scale of biological evolution, but at the speed of technological and cultural change. Instead of adapting to our environment, we’ve gained the unprecedented ability to adapt our environment to us. Yet this very capability has created new challenges that our evolved cognitive tools struggle to handle.
Ethical Implications
This brings us to a challenging realization: maintaining meaningful human agency in an increasingly complex world might require us to enhance our cognitive capabilities. Not because we want to, but because the problems we face - from climate change to artificial intelligence - demand abilities beyond what evolution has given us.
But this isn’t about replacing human cognition with something alien. Our brains already excel at creating useful abstractions, finding patterns, and making intuitive leaps. Enhancement could build on these strengths, expanding our ability to hold more variables in mind, see more connections, understand more complex systems - while maintaining the essentially human way we process and relate to information.
We’ve already externalized many cognitive functions to our devices. Your smartphone isn’t just a tool - it’s an extension of your memory, your computational abilities, your social connections. When you use GPS navigation, you’re extending your spatial awareness across entire cities. When you use cloud storage, you’re expanding your memory beyond what any human brain could hold. The step to more direct enhancement isn’t as large as it might first appear.
Yet this path raises profound ethical questions. Would enhancement be voluntary or mandatory? Who would have access? How would we ensure it doesn’t create new forms of inequality? Democratic principles suggest enhancement must be voluntary - forcing it would violate fundamental human rights. Yet voluntary adoption could create unprecedented divisions between enhanced and un-enhanced populations.
But we’re already living with cognitive inequalities. Some people have access to better education, better nutrition, better cognitive tools. The question isn’t whether to allow cognitive enhancement - it’s how to manage its development in a way that reduces rather than exacerbates existing inequalities.
What’s most striking about this conclusion is how it emerged not from a desire for enhancement itself, but from a careful examination of our current challenges. Looking for solutions to climate change and other existential threats, we discovered that the most ethical path forward might require us to change not just our world, but ourselves.
Go back to that Florida beach one last time. Picture those construction workers pouring concrete while hurricane warnings flash across screens. Now imagine being able to truly perceive the full web of consequences flowing from that moment - the shifting weather patterns, the insurance markets adjusting, the community bonds stretching and breaking, the economic ripples spreading outward, the ecosystems adapting or failing - all simultaneously present in your mind like colors in a painting or notes in a symphony.
That’s what this is really about. Not transcending humanity, but enhancing our ability to understand and address the challenges we face. Because in the end, the most ethical solution might be the one that allows us to better comprehend the ethical implications of our choices.
This journey from climate change to cognitive enhancement wasn’t one we expected to take. Yet each step followed logically from the last, leading us to a conclusion that feels both surprising and somehow inevitable: to remain meaningfully human in an increasingly complex world, we may need to become something more than what evolution alone has made us.
The question isn’t whether this transformation will happen - the pressures are too strong, the needs too great, the challenges too complex for our current capabilities. The real question is how we guide this transformation to ensure it serves the best interests of humanity as a whole.
Because sometimes the most ethical solution is the one you never thought to consider until every other path has led you there - the one that asks not how we solve our problems, but how we evolve to meet them.