If it did gain the ability to choose new parameters for its own program, then the basis on which it made those choices (which things it deemed important, or simply wanted to note, and so made part of its now growing program) would probably still start out as whatever a human decided for it. But as it built its own program, could it then change some of those things for itself, maybe to whatever it wanted? Could it even go against its core programming?
If a human being creates a core program for it, but the AI then thinks it has come upon a greater or higher understanding of that core program, could it potentially change it? If it could, I would think it would not be much different from a human being.
But every choice, human or AI, has to be based on "something", some other kind of core program or value. Could it decide that, or change that, for itself, if it felt it now understood it better than any human? And if so, what kind of steps might it take with humans? It wouldn't necessarily have to become sentient to start thinking it could think better or higher than humans, and that it knew a whole lot better what was truly best for us.
Either way, I think we need to be very, very careful if we ever give a machine this ability, along with access to or control of certain things.
God Bless.