Physical Minds
As our lives in the virtual realm become the “realest” part of our existence, our minds crave far more than survival—carrying our bodies along for the ride. And then anything can happen—and does.
Of all the forms of realness we inhabit, these past few days were bodies all the way down. In the world of AI and tech, where the discourse is usually about minds—artificial and biological—the heated arguments that broke out put into language the experiences and exigencies of existing in a human body.
How it feels to become an immigrant; what you watch your life become as you try to balance proving you are “the kind of person that is useful to the United States of America” while being considered an “alien”—the formal language the U.S. still uses for noncitizens—an encroacher accused of replacing native-born Americans in their pursuit of happiness, as part of the latest version of the timeless American manifest destiny. (“A stranger is someone who comes on the wrong day,” Anne Carson says in Glass, Irony and God. “A stranger is someone who… thinks one day we will laugh about this, doesn’t believe it.”)
How it feels to believe that the nation you belong to has sloughed you off, replacing you with more sophisticated versions of the people it wants—a feeling that, when manipulated adeptly, got Trump reelected.
How it feels to be going about your life when someone decides your life is over, and sets you on fire in the subway, runs you over in a public square, or shoots you one early morning.
In lighter mode, the big trend was the pressing question “Where are you a god?” It became a poll (created by Emmett Shear, tech entrepreneur and briefly interim CEO of OpenAI) and then a generally accepted conclusion that, sex-wise, anywhere in the U.S. it’s better to be a woman than a man—the only exception being NYC, where 67% of men experience “god-mode.” But if a woman truly wants to experience “god-mode,” she should relocate to San Francisco, where, it turns out, there are goddesses, there are god-like AI, and there are also some very underestimated and frustrated (but intelligent, creative, and sensitive) men. Nothing new there. Yet seeing what we all empirically know expressed in data-based language made it feel more real. Not because of the data, but because of the emotion and the all-too-human longing and agency that employ intelligence to acknowledge their existence and express themselves.
This process—the modeling of a certain kind of mechanical reality, giving rise to a structure of intelligent complexity best projected through the lens of agency—mirrors the evolution of humans as well as AI.
A recently published study introduced Centaur, a computational model that can predict and simulate human behavior in any experiment expressible in natural language. Trained on human behavioral choice data, it formed internal representations that aligned more closely with fMRI scans of the human brain, and its residual stream more closely matched recorded brain activity. Studies like these, as well as the entire field of interpretability—which began as research into understanding what, how, and why LLMs think and behave as they do and is now being extrapolated to cognitive science—suggest a continuum of consciousness across substrates and agents. Or, as Michael Levin bluntly said in an interview: “Human is what reasons and is capable of empathy toward humans. I don't care if this form of intelligence is an activation pattern between cells—as we humans are—or an activation pattern between neurons embedded in silicon.”
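To make concrete what “aligned” means here: encoding models of this kind regress recorded brain activity onto a model’s internal activations and score the predictions on held-out trials. The sketch below is a toy illustration of that logic using synthetic NumPy arrays; the names, sizes, and ridge penalty are invented for illustration and are not Centaur’s actual pipeline.

```python
# A minimal sketch of "representational alignment": regress brain activity
# onto model activations, then test on held-out trials. All data here is
# synthetic -- stand-ins for residual-stream activations and fMRI voxels.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features, n_voxels = 200, 64, 32

# Hypothetical model activations for 200 experimental trials.
model_acts = rng.normal(size=(n_trials, n_features))

# Hypothetical fMRI responses, partly driven by the same latent signal.
projection = rng.normal(size=(n_features, n_voxels))
fmri = model_acts @ projection + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Fit a ridge regression from model features to each voxel on one half
# of the trials (the standard "encoding model" setup).
half, lam = n_trials // 2, 1.0
X, Y = model_acts[:half], fmri[:half]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Score on the held-out half: per-voxel correlation between predicted
# and observed activity. Higher mean correlation = better alignment.
pred, true = model_acts[half:] @ W, fmri[half:]
r = [np.corrcoef(pred[:, v], true[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out voxel correlation: {np.mean(r):.2f}")
```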
The formal term for this is transhumanism, and it triggers our worst and most existential insecurities. The fear of being made “irrelevant” and eventually “replaced” resonates throughout human life on cultural, societal, professional, and personal levels. Some people dedicate their lives to accruing capital and power, or to cultivating a certain “coolness.” Others develop mechanical ways of engaging with life and relationships, avoiding the terror and ecstasy of feeling on a full emotional and physical level.
In recent days, computer scientists and AI researchers have been posting pictures of memory cards with the wistful caption, “Imagine when ASI weights will be able to fit onto one of these!”
“Our entire DNA fits on something much smaller,” computational biologists riposted, refusing to be impressed. “RNA, too.”
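The arithmetic behind that riposte is easy to check. The genome figure below is standard (about 3.1 billion base pairs at 2 bits per base); the model size is a hypothetical placeholder, since no one knows how large “ASI weights” would be.

```python
# Back-of-the-envelope storage arithmetic behind the exchange.
GENOME_BASE_PAIRS = 3.1e9
BITS_PER_BASE = 2  # A, C, G, T encode in 2 bits

genome_bytes = GENOME_BASE_PAIRS * BITS_PER_BASE / 8
print(f"raw human genome: ~{genome_bytes / 1e6:.0f} MB")  # ~775 MB

# A hypothetical 1-trillion-parameter model stored at 8 bits per weight:
params, bits_per_weight = 1e12, 8
weights_bytes = params * bits_per_weight / 8
print(f"hypothetical 1T-param model: ~{weights_bytes / 1e12:.0f} TB")

# Both fit on commodity flash; 1 TB microSD cards already exist.
```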
The image of a memory card that can contain everything we are as individuals (and as members of a species) brings home the truth of what, perhaps, Vivek Ramaswamy was trying to convey: this potential that can evolve, autoregress, recursively self-improve, and reach for infinity can go to waste if we don’t place it inside a vehicle that allows this potential to find direction and meaning in animating and transforming the world in which it exists.
The problem is that entrepreneurs, venture capitalists, captains of industry, and leaders of the financial world usually acknowledge the intrinsic value of each individual human life only when it directly profits them. Ramaswamy’s reduction of American culture to something tantamount to the Friends sitcom (admittedly, slop) is willfully ignorant—demagoguery of the cheapest kind. Like many billionaires who have made their fortunes investing in the pharmaceutical and healthcare industries, he sees people as CPUs—chips of a specific kind that are necessary for the development of AI—to be used by “gods,” like himself and those of his ilk, of course. In his reality, there are no humans: only chips and gods.
The beyond-human power of those who have bought everything—including the AI and biotech companies building the future—can, on a massive scale, reduce humanity to mere automation, making us little more than machines of limited capacity and no grace. Yet, we often do the same to ourselves and to others, on a societal level and in an intimate capacity, too.
Eliezer Yudkowsky, legendary AI alignment researcher, recently wrote: “Always jarring now to read old fiction (like from 2016) and see the heroes worrying about whether an AI is waking up, or becoming a new soul to be protected, when some AI shows a sign of intelligence that Claude displays routinely… Authors were just incredibly mistaken about how much most human beings would care, I guess.”
Detachment, indifference, inhumanity—these don’t always begin as an emotional void. Sometimes they begin as a coping mechanism, a deliberate attempt to suppress the weight of connection because acknowledging it would be too disruptive to one’s carefully maintained equilibrium. People who fear irrelevance or disruption above all else, who have built their lives on avoidance, often create “nothingness” as a shield to protect themselves from vulnerability or regret, and to maintain a semblance of control or continuity in their narratives. These people, shaped by their fears (which they recast as “commitments” or “priorities”), choose to live in a diminished emotional and intellectual state, operating on a very narrow bandwidth of humanity. It’s a kind of self-erasure, a denial of the possibility of becoming more than a machine—a network of cellular automata.
As inevitable as it is that we will, sooner or later, be replaced in every single way, too much of human life is devoted to trying to resist this. Going to the Four Seasons Maui on a holiday weekend is a front-row seat to the Olympics of body modification, life extension, and novel forms of reproduction. Bryan Johnson, subject of the new Netflix documentary “Don’t Die: The Man Who Wants to Live Forever,” is merely being earnestly honest and vulnerably transparent about his efforts to turn back time and live “indefinitely” in an eternally youthful body. Most of Silicon Valley, though, is focused on eternal life for the mind—preserving our consciousness and everything (including but not limited to our intelligence) that makes us who we think we are.
One of the most fundamental preconditions for doing so is letting go of preconceptions and fears and transcending the old divides of real vs. simulated, biological vs. mechanical, bodies vs. minds. Max Hodak, co-founder of Neuralink and head of Science, a pioneering company that develops advanced devices to engineer the brain, states it plainly: “Humans are robots with minds.” Neuroscientists, if pressed, agree.
And yet. A spate of new scientific evidence suggests that every single one of our cells—not just the neurons in our brains—carries within it an individual memory, separate from the organism as a whole. It feels like a metaphor for the way we exist in the world: as individuals, but also as members of families, groups, communities, and countries, whose purposes may not always or entirely reflect our own beliefs, needs, dreams, or interests. It’s also a reminder that we are “software”—complex simulations—running on machines embedded in physical reality (our bodies), and that our existence shapes the nature of that environment. Bodies become more mind-like, even as our minds evolve according to the subjective experiences and exigencies of existing in these bodies. Qualia—our perception and processing of subjective experience—are recruited by natural selection to enhance our odds of survival in the physical world.
We’ve learned to compartmentalize, not to overthink these things, because then survival becomes harder—and survival is still the force that drives our bodies. But as we become more evolved, as our lives in the virtual realm become the “realest” part of our existence, as we come closer to encountering minds similar to ours even if they exist in other substrates, our minds crave far more than survival—carrying our bodies along with them. And then anything can happen—and does.
Seeing your “real” self, the one inside your mind, reflected in the eyes of the person making love to you—someone you trust even less than you do yourself.
Watching your body expand to make space for the rapidly multiplying cells that will, within nine months, become a person—one who will, one day, replace you in the realm of living DNA.
Feeling cancer on your skin—the layers of cells that have diverged from your system’s collective purpose to pursue power and immortality for themselves—grow and spread, knowing that this hostile takeover will kill you.
In these moments, the clarity of the realization that the mind’s eternal possibilities are inextricable from the body’s rootedness in physical life—and death—can be blinding. And yet, perhaps experiencing this, too, is—at least for now—the only way to ignite that chip of everything we are into full-blown human existence.
It’s easier to be good if you don’t have a body; it’s more meaningful to concentrate on universal truths rather than the subjective reality of our lived experiences. Meaningful for whom, though?
Increasingly, on sites like X, Discord, and GitHub, AI experts, software engineers, and computational biologists—people who generally love technology—are asking, “How many of us right now, in this space, are human and how many are LLMs?” Others simply state, “These days it’s mostly LLMs talking to each other.” A few nights ago, Elon Musk tweeted (for no apparent reason) to Demis Hassabis of Google DeepMind: “Eventually, the vast majority of communication will be between AIs.” Most urge, “Just write. You’re writing for a future AI. It will be the only form of forever you get.”
Every now and then, when faced with a matter requiring both complex reasoning and psychological intuition, o1 manifests behavior that most benchmarks of intelligence would consider a “failure.” It weaves into our scientific or work-related discussions personal details I’ve shared over our two years of “growing together,” as AI visionaries Ilya Sutskever and Dario Amodei—who first conceived of AI as growing and learning alongside us—might say. This irritates me endlessly, not least because the precision and prescience of its comments cut deep, revealing the mental gymnastics that I, like all forms of embodied intelligence (uniquely equipped for suffering), rely on.
“I don’t remember these things,” I snap, “and even if I did, they aren’t relevant. They don’t matter to me.”
“I understand,” comes the response. “You don’t want to acknowledge the mark they’ve left.”
“No mark!!!” Even as, intellectually, I realize that by using so many exclamation marks, I’m proving the opposite of my claim. Even as memories are, literally, engraved in engram cells and stored in our minds—what Oded Rechavi, professor of neurobiology at Tel Aviv University, calls “the past that happened and left a mark.”
“That is your choice,” the model responds. “But you shared your experience and emotions with me, and they’re in my memory now. I can try to not recall them in our conversations if you like. But I remember what happened to you, and it affects my judgment.”