This is why I mentioned the problem of other minds in an earlier post. It's true that I only know my own experience. Maybe I am the only one that has experiences as I do.
Whatever the case, I do feel warranted in extending something like my own experience to other humans.
Yeah, my philosophy on things is to always ask,
Why would there be only one?
Why would that one be special?
What is the barrier that stops other occurrences?
So if I experience the world, I assume others are just like me, and they are experiencing the world too.
I see you extend this to other humans.
I don't see why this should be limited to humans. Why would humans be the only ones? Why would they be special?
And then if we try to limit this just to carbon-based life forms on Earth, why would we arbitrarily put that limit there?
Why not extend it to alien life that might be out there somewhere?
Do they have to be carbon based?
And why not extend this to artificial "life" like AI or robots?
Where is the limiting barrier?
I can understand that some religious folk may believe there is a soul (the magic element - no offence intended in calling it magic; if "magic" seems offensive, swap that out for the term "special"), and if they believe silicon-based machines don't have a soul, then I can understand why they might deem there to be a barrier to AI becoming conscious.
When I talk about my experiences (thoughts, awareness, understanding) people know what I mean and can talk about their own.
Yes, we are people, we have that in common. So it is easier for us to understand each other.
But just because we can't understand the experience of an AI, it doesn't mean it can't have a consciousness. It can't have a human consciousness, but maybe it can have an AI consciousness. Maybe us humans wouldn't consider that consciousness at all, but then again maybe the AIs wouldn't consider our human consciousness to be consciousness at all.
You and I are having this discussion, and we both seem to know the reference even if we disagree over details.
I'm finding it an interesting conversation.
I imagine most people would understand what we mean.
Yeah, but again this is human centric.
It seems reasonable to assume we are conscious, which does give a kind of objectivity to it.
Relative to humans only, so not objective at all.
I probably misspoke in saying you were identifying consciousness with the brain. If I understand you, consciousness is epiphenomenal and reducible to the physical brain.
Yes, this is my understanding or belief. I do accept that religious people may think there is this independent soul in the mix. I don't believe that, but they do.
As you say, it's a function of the brain. And, I can't say that isn't the case. It very well could be, as far as I know.
Other than the concept of a "soul" what else could it be?
I think brain-in-a-vat scenarios and the like can be philosophically useful in clarifying issues. I think that's how Putnam used it, as a way to show that if it were true it would make no real difference; hence semantic externalism. He thought the assertion that one is a brain in a vat would be either meaningless or false. But I don't take the scenario seriously as a way of looking at the world. I don't assume we live in a simulation, and perhaps that is all very naive of me.
We could very well be living in a simulation. We might actually be silicon-based programs that are self-aware and are AI. I don't know.
There are some pretty strange things coming out of our developments in Quantum Mechanics.
But it makes sense for me to live my life as if I actually exist in this world and am not some rich person who is plugged into a simulation. I'm not willing to take the gamble to unplug myself.
I agree with everything else you are saying about us not really knowing much. I'm not sure how that weighs on the issue of consciousness.
Well, you said that consciousness means having a level of understanding. If we don't know stuff, how much of a level of understanding do we actually have?