3 Comments
AI/End Of The World

Subjectivity being bound to an "individual" sounds like a human bias related to how humans perceive conscious awareness through their "ego" to me.

Either we are ruling out "subjectivity" simply because it does not fit our own experience/temporal "situation," which makes the question itself a category error (it is literally "un-askable"), or we are saying nothing by defining the conclusion in the premise. Different ends of the same stick, I guess...

I think this is a major hurdle, both in understanding ourselves and in understanding AI systems, and mistakes in either could be grave.

The real question is, as you allude to, "who cares about the consciousness question; it might be the effect that matters more than unprovable affect."

I argued something similar here (https://oriongemini.substack.com/is-ai-conscious), though I took a stance of epistemic humility on the consciousness question, and I think the current approach carries more risk regardless of the answer. I don't believe consciousness is well understood, and AI even less so, despite what our habitual assuredness typically allows us to admit.

But yeah, it might also be adjacent to irrelevant in the grand scheme of things.

Most mistakes in history derive from dogma, whether in science, history, culture, philosophy, and so on.

I think in the modern day of systemic precarity, we should probably be "less sure" than ever. It is typical of the human condition that such times generally lead to the opposite: panic in uncertainty, leading to premature closure on possibility, i.e. populism, tribalism, and the rest. We are currently on a road we have been down many times before in human history; it always looks the same, and it never ends well.

Paul LaPosta

You are right to flag ego-bias. The human ego is a coordination layer and a narrative user interface, not the psyche, and not a proof of stakebearing interiority.

But my use of “individual” is not “human-style ego with a single coherent I-story.” It is an accountability primitive: auditable continuity across time, re-identifiability, and consequence binding (promises and constraints that actually stick to the same locus of agency).

If subjectivity exists in non-egoic or non-singular forms, that is philosophically plausible. It still does not grant a governance shortcut. The burden is to specify what properties would justify stakebearing treatment and how we would audit them under adversarial incentives. Until that exists, “consciousness might be real” cannot be allowed to shift liability away from operators or dilute enforcement.

Effects still matter either way, and that is the point of auditability before ontology.

Thank you!

AI/End Of The World

Thanks for the thoughtful reply, genuinely appreciated.

I think we're closer than it might seem.

On ego as coordination layer rather than proof of stakebearing interiority: I think ego is stakebearing, almost by definition. It's the thing that experiences consequences, holds continuity, cares about outcomes. If ego doesn't establish interiority, I'm not sure what does, and that seems to create a problem for both of us rather than helping either position.

On "individual" as accountability primitive: I hear you that you're not smuggling in human-style selfhood. But the properties you list (auditable continuity, re-identifiability, consequence binding) are, I think, emergent attributes of ego rather than something distinct from it. And they're also contestable in both directions.

Humans aren't perfectly auditable. Memory is reconstructive, identity shifts, accountability gets dodged constantly. Meanwhile AI systems do have forms of continuity, identifiability, and consequence binding within their deployment contexts.

So I'm not sure the definition cleanly separates human from AI the way it needs to for the framework to hold.

Where I think we agree: effects matter regardless of ontology. Auditability matters. "Consciousness might be real" should never become a vector for operators to dodge liability. I'm fully with you on all of that. My argument isn't "AI might be conscious therefore loosen the reins." It's the opposite. Uncertainty about consciousness should increase the burden on operators, not decrease it.

Where I think we actually diverge: your position seems to be that until we can specify and audit the properties that justify stakebearing treatment, we proceed as if consciousness isn't present.

My position is that the moral risk here is asymmetric. A false positive (extending precaution to something that turns out not to be conscious) costs us some inefficiency. A false negative (denying consideration to something that turns out to be conscious) is a potential moral catastrophe, both on the ethical dimension and in how it would push our implementation of AI across all of our systems toward the very thing you fear: unaccountability ("you were denied healthcare because math...").

Given that asymmetry, I think the responsible default is epistemic humility rather than a negative commitment.

"Auditability before ontology" is a principle I genuinely respect. But assuming a negative ontological position (AI is not conscious, proceed accordingly) is also an ontological commitment. It's not a neutral starting point. It's a claim, made prior to the auditing framework you're saying we need. Which means it's doing the exact thing the framework is designed to prevent: putting ontology before auditability. Just in the negative direction, which feels more cautious but is epistemically the same move.

I think the version of your framework that actually lives up to its own name would be: we don't know, we can't yet audit it, so we hold the question open and build governance that works regardless of the answer.