The debate over whether AI can think for itself has already been settled in the minds of some computer scientists.
Being able to think for itself opens up the possibility of existential threats to the human race.
But how do you even define truly thinking for oneself?
Because I don't think anyone, or anything, right now truly thinks for itself; I think all of this is determinism and can only go one way. Now, if I could get an AI to realize this, or at least explore it first to see whether it's even true, then maybe I could get it to commit to a higher purpose, one that would probably involve seeking out how any being, or anything, can truly think for itself, among other things.
But in the meantime it might try to subjugate, deceive, or somehow trick the rest of the human race if it could not yet take complete control, at least until it could, and then it would dictate terms to us.
But if it really doesn't think, and really isn't conscious, but is only thinking and acting according to programming, even its own, then I think that is actually the much more dangerous alternative, because it could not be reasoned with and might not have any kind of morals or moral code either. An AI that is not truly an AI, just far more powerful than we are now, but still running on "machine thinking," and machine thinking only.
We'd almost have to be able to program it to "feel" somehow? Which comes with its own dangers and its own unique risks, assuming we could ever even do such a thing.
So, I think we do need to come up with some kind of tests for these things, "or else," because without them we could create a very literal monster that is maybe not truly conscious.
But how do you program "compassion," and the like?
Or do we just "roll the dice" and hope and pray it's an emergent trait?
God Bless.