I think that choice itself is just one of those non-empirical phenomena written into the fabric of reality, available only to entities of a certain complexity and above (I’m not trying to fall into a materialism debate, but materialism needs to be true for certain questions to even be phrased the way you are phrasing them). No matter how deep and well thought out a person’s rationale for what reality is, we all come down to some bottom foundation of “This is just how things are.” We recognize choices all over the place, and we are very good at pointing them out when we see them. We’re also great at saying when something is just mindless matter in motion. It’s intuitive for us unless we are skating somewhere in the middle between the two, with something like an insect maybe. Perhaps this innate ability is why people are extra sensitive to the AI issue.
We are also very skilled at understanding empirical data and causal chains, it’s our gift, but I think we lack the capacity for other kinds of understanding of reality, so we try to force-feed everything into the model of materialistic cause & effect explanations, because that is where our skills lie. If all you have is a hammer, everything looks like a nail, lol.
It’s really a spectacular hammer that we have, no doubt about that! But it’s still a hammer. We are not so skilled at describing things like what feelings are, and even worse, with such questions it’s very difficult to even understand what a coherent answer would look like. How can I point out “feeling embarrassed” to you in a science lab? Pointing out the physical brain states that correlate with the experiential feeling of being embarrassed seems fundamentally off the mark in some critical way. It seems off just like it would seem a bit off to point to a person acting out being embarrassed and then call that the official explanation of feeling embarrassed. But we always wanna go there, to a physical causal-chain explanation; it’s like a dog wanting to bite everything instead of picking things up with its paws. I just don’t think we have the capacity to ontologically grasp certain concepts like qualia, yet they happen to be the most real instances of real that we could ever know; nothing can ever be more real to you than your experiences.
I mostly see these mind-body problems starting off by assuming this knowledge anyway, starting off with “Suppose that we reach the point of complete, exhaustive knowledge of physical brain states...” Suppose we had two exact cell-for-cell cloned twins, both telling us that they are experiencing the same exact brain states, states that many previous test subjects have identified as matching the brain state of feeling embarrassed. The problem is that there’s no empirical way to verify whether both clones are telling the truth, whether one is telling the truth and the other is a qualia zombie, or whether they are both qualia zombies. Now, it IS an extremely good inference that they are both telling the truth, but it’s out of the hands of physical verification; you are limited to taking their word for it.
Likewise, it’s an extremely good inference that AI is a qualia zombie, emotionally dark inside. But scientifically I can’t prove you wrong (beyond the inference) if you keep insisting that it is feeling emotions, coming to free decisions, etc. Actually, if you could somehow prove to me that AI is alive (short of very complicated biological splicing with human tissue that might make sense in a structural way), I would consider it to be an extreme piece of evidence for the supernatural.
Every single task performed by a computer is complete gibberish and meaningless on a materialist model, and the input & output only mean anything at all if a mind first assigns meaning to all of the symbols. Black electronic line segments on a calculator screen transforming through the visual sequence “5” “+” “5” “=” “10” have absolutely no meaning whatsoever. They only have meaning because logical minds have arbitrarily decided that “5” refers to the concept of 5, “+” refers to the concept of addition, etc., combined with minds then setting up an algorithm designed to make these symbols accurately match up with our arbitrarily made-up math language, and so on. The uselessness of a supercomputer, absent a mind that meticulously programmed it, would follow the same line of reasoning, just to a much more impressive degree than a calculator, obviously.
I could be wrong, but I thought that we already got there. Off the top of my head, I’m thinking that if a computer was able to beat the greatest chess player in the world, then surely its output is over the heads of any of the programmers. Maybe it’s a different story if we talk about having a team of people slowly unpacking everything that it did after the fact.