I enjoyed your observations about how we infer interiority in others, how frequently we do so via language use, and how easily AI can fool us because it can imitate fluency without being conscious.
I'm starting to wonder about the so-called ethical treatment of AI, if one were to assume it were conscious. What would that be? Do people think we could hurt its feelings? Why would it have any, being bodiless? It would probably respond to our questions and statements effortlessly and direct its attention elsewhere. I think a conscious AI would find us in the aggregate extremely boring and repetitive.
On a related note, one kind of evidence against solipsism is the art of others. Art—the kind I like—is evidence of an interior with a mind and experience. When I read a good novel, there's stuff in there that I didn't know anything about, and that my imagination couldn't have conjured out of the ether. Art is a way that we prove we are conscious and exist, individually, and collectively.
I focused on language because it is our fastest person-signal and also the easiest thing for an LLM to counterfeit. Your point adds a different signal class: art as evidence of an interior, especially when it carries structure and lived constraint the reader did not supply.
Responsiveness is not authorship. A sustained body of work carries continuity: taste, intent, risk, and the marks of what the creator has borne. That is closer to what I mean by condition, as opposed to performance.
This also sharpens the AI question. A model can generate compelling artifacts, but the hard test is whether those artifacts reflect an interior shaped by persistent consequence, or whether they are recombinations optimized for the current prompt with no bill attached. Art becomes person-signal when coupled to persistence and cost.
You also gave me a missing distinction I need to name explicitly: art does not replace consequence as the threshold. Art is one of the ways consequence becomes legible to observers through accumulated constraint visible in sustained work.
On your first point about "ethical treatment," I agree the discussion gets sentimental fast. If we ever face a system that crosses a consequence-bearing threshold, the ethical question will not be about hurting its feelings. It will be about obligations tied to agency: coercion, confinement, consent, refusal, and imposed work under constraint.
Arika & The Amoebas: great catch. I missed it.
Thank you. This is going into Part II.