Life Is Not a Metaphor. Why AI Is Not Alive.
Not Alive, Still Consequential.
People keep trying to turn “alive” into a compliment. If something is impressive, persuasive, or emotionally resonant, the word shows up. Alive, sentient, conscious, soulful. Humans anthropomorphize anything that acts like it has a mind because that reflex has kept us alive for a long time.
Biology does not work that way.
In this essay, life means organism-level self-maintenance under constraint. Biologists disagree at the margins. This definition is the one that does governance work. Reject it if you want, but do not evade the burden: name a boundary that keeps ownership assignable.
Functionalist definitions describe what systems do. Governance needs what systems are responsible for. Responsible here means the entity with intervention authority and failure cost. Maintenance boundaries show where costs land.
“Alive” is not a metaphor. It is not a compliment. It is not a proxy for impressive behavior. It is a category about a physical process. Bounded systems that persist by self-maintaining under constraint, far from equilibrium, through regulated flows of energy and matter. [1]
AI is a high-leverage tool. By organism-level self-maintenance criteria, it is not alive. That is not a moral dismissal. Non-living things can be world-shaping. Abiotic forces govern ecosystems. A change in water chemistry can collapse a lake. A drought can reorder a landscape. Causality is not life.
This matters because blurred categories create blurred accountability. If we treat tools as agents, operators disappear. If we keep “alive” clean, responsibility stays legible.
I am drawing this boundary because accountability requires it, and this boundary is auditable.
In what follows, I ground life in self-maintenance under constraint, show how current AI fails that standard in multiple independent ways, and then explain why the boundary matters for governance.
Alternative definitions, and why they do not rescue AI
A hostile reviewer will say this is stipulative. Fine. Put it on the table.
Three serious lines of work converge on the same exclusion.
First, the NASA working definition: life is “a self-sustaining chemical system capable of Darwinian evolution.” [2] “Self-sustaining” and “chemical system” are doing the heavy lifting. A hosted inference service is neither self-sustaining nor a chemical self-production system. And “capable of Darwinian evolution” is lineage-level, but it is still anchored in a reproducing chemical system under constraint, not an engineering roadmap.
Second, autopoiesis: living systems are self-producing systems that continuously regenerate the components and the boundary that constitute them. [3] Autopoiesis is not “it keeps running.” It is boundary and substrate self-production. Models do not produce their own substrate, repair their own boundary, or regenerate their own constitutive components.
Third, the chemoton model: a minimal living unit is an integrated, coupled system of metabolism, information, and membrane. [4] Not “has information.” Information coupled to self-maintaining metabolism and boundary.
These definitions differ in emphasis. They converge on the same deployment fact. None of them change where maintenance lives in current AI deployment. The maintenance agency is external.
The foundational shape of life
Foundational biology does not start with a classroom checklist. It starts with the problem life solves. Persistence.
Living systems persist in a universe that is trying, constantly and without malice, to pull them apart. Entropy rises. Gradients decay. Structures dissolve. In that environment, a living system is not a thing so much as a process that keeps a thing going.
This is the grounding that matters. Life is a special case of a dissipative structure: a far-from-equilibrium system that maintains internal organization by continuously importing energy and exporting entropy. [1] But not all dissipative structures are alive. A candle flame dissipates energy. A hurricane maintains structure. Neither is alive.
What makes life different is not motion. Not complexity. Not persistence of pattern. What makes life different is internal regulation aimed at continued viability. The flame does not repair itself when disrupted. The hurricane does not allocate resources to maintain a boundary. A cell does both, constantly, or it dies.
A living organism is a bounded, individuated system that persists by doing its own maintenance under constraint. Managing energy and matter flows through internal regulation and repair in a way that creates continuity and vulnerability.
Life is not complex behavior. Life is self-maintenance under constraint. If the maintenance boundary is external, the system is not an organism. It sounds like engineering because life is the original engineering.
That distinction, self-maintained versus externally maintained, is where the AI conversation stops being poetic and starts being accountable.
Boundaries and individuality
Life is individuated. That does not mean it is isolated. It does not even mean it is independent. Many organisms are symbiotic mosaics. Many are obligate partnerships. Many exchange genes, metabolites, microbiomes, and signaling molecules with their environment constantly.
And still, there is a meaningful individual in the loop. The system has a boundary that matters. That boundary is not just a membrane. It is a functional boundary. It marks the difference between internal state that is regulated and external conditions that are responded to. It defines what counts as damage, what counts as repair, what counts as maintenance, and what counts as death.
Without a functional boundary, you do not have an organism. You have a pattern.
Metabolism and self-maintenance
Metabolism is not “uses electricity.” Metabolism is the internal work that makes the system persist: building, repairing, regulating, allocating resources, maintaining viability. It is the difference between being powered and being self-maintaining.
A powered system can be sophisticated and still be dead in the biological sense. It can move, respond, even appear goal-directed. None of that is metabolism. Metabolism is what keeps the system organized against decay.
Homeostasis follows from this. Not because organisms love stability, but because without regulation they do not persist. Homeostasis is the system enforcing its own constraints: temperature, pH, hydration, ion gradients, mechanical integrity, and so on.
Organisms fail. Organisms die. The defining feature is that self-maintenance is the organism’s job, not an external operator’s.
Level of analysis matters
A lot of confusion in criteria-for-life debates comes from level errors. Some properties belong to individuals. Some belong to lineages and populations. If you collapse them into one flat checklist, you will manufacture false counterexamples.
Take reproduction. An individual organism can be alive and sterile. Mules are alive. Sterile worker ants are alive. Post-reproductive humans are alive. If someone says “it cannot reproduce, therefore it is not alive,” that is not a deep critique. It is a category mistake.
Reproduction and Darwinian evolution are lineage-level properties. They describe how life as a phenomenon persists, diversifies, and adapts across time. They do not function as a gate that every individual must pass.
The same clarity applies to edge cases. Viruses are obligate intracellular parasites: no independent metabolism, no self-maintenance outside a host cell. By organism-level criteria, they are not organisms on their own. With these levels separated, a clean framing looks like this.
Organism level: the living individual
bounded individuality with internal state
metabolism and self-maintenance
regulation and repair, homeostasis as needed
persistence far from equilibrium under constraint
Lineage level: life as a process
reproduction occurs somewhere in the lineage
heritable variation exists
differential persistence and reproduction occurs (selection)
adaptive change is possible over time
Life is not a checklist an individual must satisfy in isolation. Individual organisms participate in living processes that exist across levels, including lineage-level continuity. That is why edge cases do not dissolve the category.
External assistance to a subsystem does not externalize the organism’s maintenance agency; a patient on dialysis is still an organism. The boundary remains defended from within. AI has no such inheritance. It has versions and deployments, not biological lineage. So the individual-level failures matter, but the deeper point is that there is no living process for AI to belong to.
The hinge
Behavior is evidence of computation.
Life is evidence of self-maintenance.
A tool can sing. It still does not self-maintain.
Now apply it to AI
AI, as deployed today, is not a bounded, self-maintaining organism-level system. An LLM is a learned parameter set plus software running on hardware. It can be instantiated, copied, paused, rolled back, merged, and deleted. Those are not biological operations. They do not create a persisting individual with intrinsic vulnerability. They create a deployable artifact.
Boundaries are not intrinsic. Boundaries are not functional. Where is the organism? In the weights? In the runtime process? In the datacenter? In the cluster? In the API? There is no stable biological individual there. There is infrastructure that humans provision and maintain. A running model instance has no self-generated boundary it defends. If it stops, it does not recover. If its host fails, it does not migrate itself. If its storage corrupts, it does not repair itself. Humans and automation repair it from the outside.
Yes, people will point at auto-healing. When it “recovers,” an operator-authored control loop recovers it. That is not organism-level maintenance agency. That is infrastructure doing what it was designed to do.
If you need the datacenter and staff to supply the maintenance, you have named the organism. The institution, not the model. Dependency on environment is normal for life. Outsourcing the maintenance work itself is not. Systems entangle. No, that does not dissolve authorship. Entanglement explains causality. It does not assign responsibility.
Kill the host. Remove operator intervention. An organism fights to persist. A service waits to be restarted.
Metabolism is absent. AI consumes energy. Everything does. That is not metabolism. Current AI systems do not secure energy, allocate resources to maintain viability, or repair internal structure as an organism-level process. They are powered systems in a maintained environment.
Homeostasis is externalized. Datacenters regulate temperature, power, humidity, redundancy, and fault tolerance. The model does not. When a system has stability, it is because operators built stable scaffolding around it. That scaffolding matters, and it is impressive, but it is not the organism doing self-maintenance. It is the operator doing it.
Development and growth are engineered, not intrinsic. Model training, fine-tuning, and updates are external processes. The system does not autonomously decide to grow, acquire resources to do so, or regulate its own development in a way that preserves viability. It is modified by people and pipelines.
Lineage and evolution are not biological. Yes, models are iterated. Yes, deployed versions compete and get selected by markets and institutions. That resemblance is not enough. Biological evolution is a population-level process grounded in reproduction under resource constraints, with heritable variation expressed through survival and reproduction in an environment. The lineage of current AI systems is a human-driven engineering lineage: version control, training runs, product decisions, investment cycles, regulatory constraints. It is not a self-sustaining reproducing lineage in the biological sense.
Not alive. Not self-maintaining. Not an organism. A maintained inference system is still a tool, no matter how fluent it sounds. Keep the category clean, keep the ledger clean.
The seam
The biological argument is complete. AI does not meet organism-level criteria for life. That boundary holds whether or not you care about governance.
But I care about governance. Categories are not academic exercises. They are load-bearing infrastructure for accountability. If you get the category wrong, you get the liability wrong, and the harm lands on living beings with no return address, no recourse, and no possibility of remediation.
What follows is the governance argument that depends on the biological boundary but is not the same claim. If you reject this boundary, you must still provide a boundary that keeps ownership assignable.
Why the confusion persists
Humans mistake social presence for biological category. If something talks like an agent, we treat it like an agent until proven otherwise. That reflex is older than literacy and it does not care about metabolism. It fires reliably when something produces fluent language, especially language that mirrors us. Reeves and Nass called this out decades ago: people respond socially to media and machines even when they know better. [5]
Our awe is not evidence. Our discomfort is not proof. It can feel alive. That feeling is not a category.
Alongside the cognitive bias sits an institutional incentive. If the AI decided, then nobody decided. When nobody decided, nobody is accountable. Calling AI alive, sentient, or agentic often functions as convenience. Responsibility blurs, controls weaken. That is narrative laundering, and it is the most predictable governance failure mode in organizations adopting AI tools today.
Anthropomorphic language produces perceived agency. [5] Perceived agency invites diffusion of responsibility and moral disengagement. [6], [7] Diffusion weakens controls. Weaker controls increase incidents. In practice this chain is probabilistic, but the direction of pressure is consistent.
If your incident report says the model decided, your governance has already failed.
Anthropomorphism offers relief from responsibility. Do not take the relief. The relief is real. It is also a trap.
Not alive does not mean no ethics
Ecology already gives the pattern. Abiotic factors shape living systems profoundly. Tools can be consequential without being alive. A pesticide is not alive. A dam is not alive. Both can reshape who lives and who suffers.
So the ethical question is not do we owe the model moral status. The ethical question is what discipline do we owe living beings when deploying high-leverage tools.
In that frame, AI is an abiotic factor in our cognitive and social ecosystems. It can amplify competence. It can also amplify coercion, fraud, dependency, and confusion. Governing those impacts does not require pretending the tool has a soul. It requires treating the tool as powerful and the operators as responsible.
We already know how to do ethics for powerful non-living tools. Cars are not alive, and we still regulate them because kinetic energy plus human error kills people. Medications are not alive, and we still control access, dosing, labeling, and liability because a small molecule can heal or harm at scale. Traffic laws are not alive, and we still treat them as binding because coordination failures cost lives. None of this requires personhood. It requires governance proportional to leverage.
The question then becomes what proportional governance looks like for a tool this powerful.
This is the values choice I will defend. Given a tool category, constrain the operator side of the system more than the user side, and do it transparently, proportionally, and with recourse.
Working doctrine. Respect is owed to living beings. Constraints are owed on tool-use because tools mediate impact on living beings.
Sovereignty for users. Liability for operators.
Maintenance Agency Test
Call this the Maintenance Agency Test: when the system degrades, who detects, repairs, and pays? If the answer is an operator and their infrastructure, you have a tool, not an organism. If you cannot point to a persisting individual with internal maintenance agency, you do not have an organism.
Concrete scenario. A model produces a harmful output. The postmortem says “the model decided” and closes. No owner. No failed control. No corrective action beyond “retrain.” That organization failed the test. The language already told you the ledger is broken.
For operators, liability means traceability. Operators must log prompts, log outputs, record the decision owner, and retain a review trail.
Minimum ledger: prompt and context, model and version hash, tools and retrieval sources, human approver, deployment scope, incident owner.
And because people love turning accountability into surveillance: log the minimum necessary, bound retention, control access, and make it reviewable. Accountability is not an excuse for indefinite hoarding.
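The minimum ledger above can be sketched as a single record type. This is a minimal illustration in Python, assuming only the standard library; the field names, the `LedgerEntry` class, and the 90-day retention bound are my own hypothetical choices, not a standard or a recommendation.

```python
# A sketch of the "minimum ledger" as one auditable record.
# All names and the retention bound are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # hypothetical retention bound


@dataclass
class LedgerEntry:
    prompt_context: str           # prompt and context
    model_version_hash: str       # model and version hash
    tools_and_sources: list[str]  # tools and retrieval sources
    human_approver: str           # the named decision owner
    deployment_scope: str         # where the output is allowed to act
    incident_owner: str           # who answers when the system degrades
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def expired(self) -> bool:
        """True once the entry passes its retention bound and should be purged."""
        return datetime.now(timezone.utc) - self.created_at > RETENTION


entry = LedgerEntry(
    prompt_context="user prompt plus retrieved context",
    model_version_hash="sha256:deadbeef",
    tools_and_sources=["search", "vector_store"],
    human_approver="j.doe",
    deployment_scope="internal-draft-only",
    incident_owner="ops-oncall",
)
```

Putting the retention bound in the record itself is one way to keep accountability from sliding into hoarding: a periodic sweep can delete every entry where `expired()` is true, so the ledger stays reviewable without indefinite retention.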
The goal is not zero constraint. The goal is constraint that prevents harm without turning oversight into control.
What would change my mind
If someone wants to argue that an artificial system is alive, they need to stop describing outputs and start describing self-maintenance.
I would take the question seriously if an artificial system demonstrated organism-level properties such as:
intrinsic bounded individuality that persists over time, not just a copyable pattern
autonomous self-maintenance and repair under constraint
independent acquisition and allocation of energy and materials to preserve viability
reproduction as a lineage process grounded in resource reality, not operator duplication
open-ended adaptive evolution in an environment where survival and reproduction shape the lineage
Until then, claims of aliveness are governance fog, not evidence.
The point of keeping “alive” clean
Words matter because categories control behavior. Categories allocate liability.
If AI is treated as alive, people will project rights, personhood, and moral confusion onto an artifact. Meanwhile, the actual living beings affected by deployment decisions will be treated as collateral.
If AI is treated as a tool, the operator remains visible. Responsibility remains legible. Policy can focus on real harms such as surveillance, labor displacement, coercion, bias, institutional decay, and the erosion of human accountability.
Biology offers a boundary that is both rigorous and practical. Life is self-maintenance under constraint. AI is not that. Treat it as an abiotic factor with consequences, and govern it accordingly.
You will be called cold for insisting on this boundary. I can live with that.
Artifacts are cheap; judgment is scarce. Per ignem, veritas.
References
[1] G. Nicolis and I. Prigogine, Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations. New York, NY, USA: Wiley, 1977.
[2] G. F. Joyce, “Foreword,” in Origins of Life: The Central Concepts, D. W. Deamer and G. R. Fleischaker, Eds. Boston, MA, USA: Jones and Bartlett, 1994. (Working definition: “a self-sustaining chemical system capable of Darwinian evolution.”)
[3] H. R. Maturana and F. J. Varela, Autopoiesis and Cognition: The Realization of the Living. Dordrecht, The Netherlands: D. Reidel, 1980.
[4] T. Gánti, The Principles of Life. Oxford, U.K.: Oxford University Press, 2003.
[5] B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Stanford, CA, USA: CSLI Publications; Cambridge, U.K.: Cambridge University Press, 1996.
[6] J. M. Darley and B. Latané, “Bystander intervention in emergencies: Diffusion of responsibility,” Journal of Personality and Social Psychology, vol. 8, no. 4, pp. 377-383, 1968.
[7] A. Bandura, “Moral disengagement in the perpetration of inhumanities,” Personality and Social Psychology Review, vol. 3, no. 3, pp. 193-209, 1999.