To me, subjectivity being bound to an "individual" sounds like a human bias, rooted in how humans perceive conscious awareness through their "ego."
Either we are ruling out "subjectivity" simply because it does not fit our own experience/temporal "situation," which makes the question itself a category error (it is literally "un-askable"), or we are saying nothing by defining the conclusion in the premise. Different ends of the same stick I guess...
I think this is a major hurdle, both in understanding ourselves and in understanding AI systems - and getting either wrong could be a grave mistake.
The real question is, as you allude to, "who cares about the consciousness question; it might be the effect that matters more than unprovable affect."
I argued similarly here (https://oriongemini.substack.com/is-ai-conscious), though I took a stance of epistemic humility on the consciousness question, and I think the current approach carries more risk regardless of the answer. I don't believe consciousness is very well understood, and AI even less so, despite what our habitual assuredness would typically allow us to admit.
But yeah, it might also be adjacent to irrelevant in the grand scheme of things.
Most mistakes in history derive directly from dogma, whether in science, history, culture, philosophy, etc.
I think in the modern day of systemic precarity, we should probably be "less sure" than ever. It is typical of the human condition that such times end up producing the opposite: panic in uncertainty, leading to premature closure on possibility = populism/tribalism, etc. We are currently on a road we have been down many times before in human history; it always looks the same, and it never ends well.
You are right to flag ego-bias. The human ego is a coordination layer and a narrative user interface, not the psyche, and not a proof of stakebearing interiority.
But my use of “individual” is not “human-style ego with a single coherent I-story.” It is an accountability primitive: auditable continuity across time, re-identifiability, and consequence binding (promises and constraints that actually stick to the same locus of agency).
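For concreteness, here is one way that primitive could be sketched as a data structure. This is a minimal, hypothetical illustration of the three properties only; every name in it is made up for this comment, not an existing standard or anyone's actual proposal:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the "accountability primitive" described above.

@dataclass
class Commitment:
    """A promise or constraint bound to one locus of agency."""
    text: str
    made_at: datetime
    satisfied: bool = False

@dataclass
class AccountableAgent:
    """One way to make the three properties concrete.

    identity     -- stable identifier: re-identifiability across time
    log          -- append-only record: auditable continuity
    commitments  -- consequence binding: promises stick to this locus
    """
    identity: str
    log: list = field(default_factory=list)
    commitments: list = field(default_factory=list)

    def record(self, event: str) -> None:
        # Append-only by convention: entries are never edited or removed,
        # so the history can be audited later, even under adversarial
        # incentives.
        self.log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

    def promise(self, text: str) -> Commitment:
        c = Commitment(text=text, made_at=datetime.now(timezone.utc))
        self.commitments.append(c)
        self.record(f"promised: {text}")
        return c

    def unmet_commitments(self) -> list:
        # Anyone auditing this agent can ask which promises remain
        # outstanding for this same identity.
        return [c for c in self.commitments if not c.satisfied]

# Usage: the same identity carries its promises forward in time.
agent = AccountableAgent(identity="model-A, deployment 7")
agent.promise("will not exceed the agreed rate limits")
print(agent.unmet_commitments())
```

The point of the sketch is only that "individual," in my sense, is whatever satisfies an interface like this: it says nothing about egos or inner lives.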
If subjectivity exists in non-egoic or non-singular forms, that is philosophically plausible. It still does not grant a governance shortcut. The burden is to specify what properties would justify stakebearing treatment and how we would audit them under adversarial incentives. Until that exists, “consciousness might be real” cannot be allowed to shift liability away from operators or dilute enforcement.
Effects still matter either way, and that is the point of auditability before ontology.
Thank you!