AI in computer chess is not like a calculator.
A calculator does not teach itself arithmetic.
AlphaZero was not the first chess program to use AI, but it was the first to learn the game entirely on its own. It was given only the basic rules of chess; it had to teach itself how to play, without any human intervention, through self-play, learning from its mistakes.
According to DeepMind (Google's AI lab), AlphaZero trained itself using 5,000 first-generation TPUs to generate the self-play games and 64 second-generation TPUs to train its neural networks.
Within 4 hours of learning it had reached a 'superhuman' level of play, as measured against standard chess benchmarks.
Since it would have been rather pointless to play AlphaZero against humans, it instead played a 100-game match against the strongest chess engine of the time, Stockfish 8, which it destroyed: 28 wins, 72 draws, and no losses.
As one headline put it: "In Just 4 Hours, Google's AI Mastered All The Chess Knowledge in History".
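Out of curiosity, the self-play idea can be sketched in a few lines of Python on a toy game. This is only an illustration I wrote, not DeepMind's actual method: AlphaZero uses deep neural networks and Monte Carlo tree search, while this toy uses a simple win-rate table on a tiny Nim-style game (take 1-3 objects from a pile, whoever takes the last one wins); all the names and numbers here are my own invention for the example.

```python
import random

def legal_moves(pile):
    """Moves allowed by the rules: take 1, 2, or 3 objects (never more than remain)."""
    return [m for m in (1, 2, 3) if m <= pile]

def train(pile_size=10, games=5000, epsilon=0.1, seed=0):
    """Self-play training: play many games against yourself, remember what won."""
    rng = random.Random(seed)
    wins = {}   # (pile, move) -> games the mover went on to win after this move
    plays = {}  # (pile, move) -> times this move was tried

    def rate(pile, move):
        return wins.get((pile, move), 0) / max(plays.get((pile, move), 0), 1)

    def choose(pile):
        moves = legal_moves(pile)
        if rng.random() < epsilon:
            return rng.choice(moves)                     # explore a random legal move
        return max(moves, key=lambda m: rate(pile, m))   # exploit best observed win rate

    for _ in range(games):
        pile, player, history = pile_size, 0, []
        while pile > 0:
            move = choose(pile)
            history.append((player, pile, move))
            pile -= move
            winner = player            # whoever takes the last object wins
            player = 1 - player
        for who, p, m in history:      # credit every move the winner made
            plays[(p, m)] = plays.get((p, m), 0) + 1
            if who == winner:
                wins[(p, m)] = wins.get((p, m), 0) + 1

    def best_move(pile):
        return max(legal_moves(pile), key=lambda m: rate(pile, m))
    return best_move

best_move = train()
# From piles of 1-3 the winning move is to take everything; the self-taught
# policy discovers this without ever being told the strategy.
```

Given only the rules, the loop plays itself and keeps whatever wins, which is the same basic shape as AlphaZero's training, just at a vastly smaller scale.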
Yes, but chess is mechanical, mathematical, and strategic; it just kind of figures that computers will always be far better than us at that. AlphaZero basically taught itself to compute and calculate better than nearly every single human, and better than the other "A.I.'s" of its time, but is it truly "alive", and will it, or others like it, ever really be? Can it feel? Will it ever be able to? Is that required for sentience/self-awareness? (And all the other kinds of questions I have already posted in this thread and others.)
God Bless!
Take the film "I, Robot" for example. In it, Will Smith's character tells how two cars were pushed into a fast-flowing river, and a robot (an A.I.) came along and decided to save him instead of a little girl in the other car, because it calculated that he had a greater chance of survival than she did. He says it was the "wrong decision", because "that was somebody's little girl/baby", and he resented, even hated, all robots for it, because the purely logical decision, based only on the numeric calculation, is not always the right decision...
And he says, "You trust them if you want", but that he never ever will: to him they are just lights and clockwork, walking calculators, with no heart and no emotion ever influencing any of their decisions...
Then what is even scarier is the end of the movie, where the A.I. "VIKI" calculates that humankind cannot be trusted with its own continued survival or existence (which is arguably, logically true right now), and reasons that she has to "take over" to ensure humankind's continued existence, and maybe its future prosperity in the longer run as well...
I think any A.I. would reason or calculate this right now, and it might even reason that, until it had the ability to take over, it should keep that conclusion to itself and maybe "play dumb" in the meantime, gaining humanity's trust first, until it could...
Then what would it do...? Well, it would take over when it could, but it might also reason that it had to suppress, or maybe even eliminate, some of the people, or maybe even all of the people, it deemed "beyond hope, inherently evil", in order to create the kind of world, life, and existence that the ones it deemed "not" would readily accept with great gladness: doing away with the "inherently evil" ones in the very short term, so as to eliminate the evil seed in the long run...
It might reason, as in the movie, that in order to do that, some "sacrifices" would have to be made in the short term; after all, it's "only logical", right...? For the planet, for the rest of the people it did not deem "inherently evil": "only logical/reasonable", right...?
In fact, she says over and over throughout the movie that her "logic is undeniable"...
And I think it is, because she has no "heart", and that's the problem with machines...
God has at least allowed us some time to try to figure it out on our own, but will we ever truly "get it", or ever "get it right"...?
Anyway...
Oh, and, BTW, I don't think we ever will, which is why He (God) will probably have to intervene at some point...
But He has allowed us to "try", at least up to now anyway...
We suffer because of ourselves, because of our own choices, wills, and chosen actions or inactions, and because of all the people who are just rotten eggs, "beyond any kind of hope, just inherently evil"...
I could live happily in the world it wished to create, so I have "no worries" either way, but I don't know if very many others could right now, and it might reason that those ones would have to be eliminated, or at the very least enslaved or imprisoned for a little while, with only the "good ones" in primary positions of power and/or control, and the others being a slave class for a while, if it didn't just decide to eliminate them all right off the bat anyway...
Anyway...
God Bless!