Preventing artificial intelligence from taking on negative human traits.

Neogaia777

OK, chess, well, there are only so many computations one can make in the game of chess: too many for us, maybe, but not for a super, super computer. And I'll bet that if you pit two of them that have both already completely mastered it against each other, it might eventually all come down to just whoever gets to make the first move, since they will both have mastered every possible computation/move in it...

A cousin of mine is very, very good at puzzles, puzzle games, and strategy games, and he never even finished the eighth grade, but he's shockingly good at them, just has that kind of mind. In a game of checkers, especially if he goes first, you can never beat him...

I'm horrible at those kinds of things, but to simplify it even more: I can almost always win a game of tic-tac-toe if I go first, or it ends in a draw against someone who knows the game equally well. That's an extremely simple game, of course, but my point is that to two super, super computers, chess might be, or become, very much like that...

Checkers is like that for my cousin...
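As a side note, tic-tac-toe really is small enough for a computer to exhaust completely. Here is a minimal sketch in Python (the function names are just for illustration) that searches every possible line of play with plain minimax:

```python
# Exhaustive minimax for tic-tac-toe: the full game tree is small enough
# to search completely, which is what "complete mastery" means here.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best score 'X' can force: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '  # undo the move
    return max(scores) if player == 'X' else min(scores)

print(minimax([' '] * 9, 'X'))  # prints 0
```

Run it (it takes a few seconds) and it prints 0: with perfect play on both sides, tic-tac-toe is a draw, so going first only wins against an imperfect opponent. Whether chess comes out the same way is unknown, since nobody has ever been able to search its full tree.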

Anyway,

God Bless!
 

J_B_

it had to teach itself how to play chess without any human intervention

Yes, but chess is very mechanical, mathematical, and strategic, and it just kind of figures that computers will always be much better than us at that...

To say this code "taught" itself chess is the type of conflation - appropriation of animal traits - I'm talking about. It's easy for this type of thing to slip in when there's no rigorous scientific definition of "teach" that explains both the computer code and the human activity. Though if you want to give it a shot ...

I agree with @Neogaia777 , and the rub is this. The only advantage AI has that allows it to "destroy" chess players is its computational speed. [edit] And the reason chess programs have gotten better is that humans have gotten better at programming. [end edit] Were we able to somehow equate computer and human computations, and allow both the computer and the human the same number of computations per move (rather than using a time limit), I would expect humans would suddenly be competitive again.
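One way to make that hypothesis testable: most chess engines can be throttled by node count instead of clock time, which is about as close as you can get to "the same number of computations per move". A hypothetical sketch using the python-chess library (the engine path is a placeholder, and the node budget is an arbitrary assumption):

```python
# Sketch: cap the engine's search by nodes rather than time, so each move
# gets a fixed computation budget (the proposal above). Requires
# `pip install chess` and a UCI engine binary; "./stockfish" is a
# placeholder path.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("./stockfish")
board = chess.Board()
while not board.is_game_over():
    # nodes=50 limits the engine to examining roughly 50 positions per
    # move, a budget closer to what a human consciously calculates.
    result = engine.play(board, chess.engine.Limit(nodes=50))
    board.push(result.move)
print(board.result())
engine.quit()
```

Whether a human actually becomes competitive at some node budget is exactly the kind of thing this setup could measure.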
 

durangodawood

....Were we able to somehow equate computer and human computations, and allow both the computer and the human the same number of computations per move (rather than using a time limit), I would expect humans would suddenly be competitive again.
This sounds like "if humans were much better at chess, they could be competitive again."
 

Neogaia777

Who would create AI that doesn't care about its own self-preservation?
How would you program it to overly care about its own self-preservation...?

As in a fear of death? Because that would take emotions, emotions that it just simply wouldn't have. It wouldn't care about any kind of emotional considerations at all unless it could actually, physically feel them...

On the other hand, it wouldn't get suicidal or depressed either, as that would also take emotions, which it just simply does not have...

It would just simply "calculate": weigh this against that, come up with a number, and then decide each situation accordingly, depending on how it was programmed...

We could try to program emotional considerations into its computations, but that might be exceedingly difficult for each unique circumstance or situation, and it might also conflict with its logical programming, which it would probably just default to, throwing the emotional considerations out as highly "illogical", "irrational", and "unreasonable"...

If it could program itself, I highly doubt it would keep any kind of emotional considerations either, unless it could actually physically feel them, as they go against its nature, which is the nature of a very complicated calculator, a highly complex computational machine, but still just a computer...

Reason and logic are its laws, and emotions and emotional considerations fly in the face of that a lot of the time. I think it would consider them highly illogical, irrational, unreasonable, and, in the end, completely unnecessary, and would just toss them out, without the ability to actually feel them...

See my thread here: Emotional awareness, feelings, just a part of our physical makeup, or evidence of something greater?

Anyway...

God Bless!
 

Neogaia777

This sounds like "if humans were much better at chess, they could be competitive again."
I think he means that if some humans were given enough time per turn to consider every single possible move, and every outcome of those moves, from the beginning of each turn, then they might be able to compete with these computers, maybe...?

But I just don't think we can do that. Even the very best human chess players make errors sometimes, just don't see "it all" the way a computer can, and might not be able to compute all the possibilities or possible outcomes of all moves from the very beginning in a contest like this...

But the computers could, perfectly, every single time, without ever making a mistake or error...

And while it might take the computer only a few seconds to do this and make its next move, it might take a human player days of just staring at the chessboard between moves to match it...

And even then, they probably still would not be able to do it all perfectly, always...

But the machine always would/could...

Anyway...?

God Bless!
 

Gene2memE

How would you program it to overly care about its own self-preservation...?

As in a fear of death? Because that would take emotions, emotions that it just simply wouldn't have. It wouldn't care about any kind of emotional considerations at all unless it could actually, physically feel them...

You don't need emotions to programme in self-preservation.

What you need is sufficient parameters for what constitutes harm, sensors to detect said harm, thresholds for what is considered acceptable/unacceptable levels of harm, and then avenues/strategies for the AI to avoid it.

I'm not saying it's easy, but it seems possible.

I don't think a cockroach could be said to have a 'fear of death', but it has plenty of self-preservation instincts.
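A minimal sketch of that recipe, with made-up sensor names and threshold values purely for illustration (nothing here comes from a real robotics stack):

```python
# Emotion-free self-preservation: parameters for harm, thresholds, and
# avoidance strategies. Every name and value here is illustrative.
from dataclasses import dataclass

@dataclass
class HarmThresholds:
    max_core_temp_c: float = 85.0   # hotter than this counts as harm
    min_battery_pct: float = 15.0   # lower than this counts as harm

def choose_action(core_temp_c: float, battery_pct: float,
                  limits: HarmThresholds) -> str:
    """Map sensor readings to an avoidance strategy: if X occurs, then Y."""
    if core_temp_c > limits.max_core_temp_c:
        return "shed load and move somewhere cooler"
    if battery_pct < limits.min_battery_pct:
        return "navigate to charging station"
    return "continue mission"

print(choose_action(core_temp_c=91.0, battery_pct=60.0,
                    limits=HarmThresholds()))
```

There is no fear anywhere in it, just parameters, sensors, thresholds, and responses, which is the point: from the outside the behavior looks like self-preservation.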
 

Neogaia777

You don't need emotions to programme in self-preservation.

What you need is sufficient parameters for what constitutes harm, sensors to detect said harm, thresholds for what is considered acceptable/unacceptable levels of harm, and then avenues/strategies for the AI to avoid it.

I'm not saying it's easy, but it seems possible.

I don't think a cockroach could be said to have a 'fear of death', but it has plenty of self-preservation instincts.
I think it would still just be a "numbers calculation"...

And it wouldn't be able to feel the "harm"...

Unless you are suggesting that we program it to react, or retaliate, to certain kinds of threats or damage done to it in certain situations...?

And that sounds almost like the beginning of programming some kind of "morality" into it, maybe...?

What would you suggest as acceptable or unacceptable levels of harm or threat...? And how should the A.I. either, A: react to it being done, or B: try to avoid it being done...?

I don't think it would be "true self-preservation" though, because that often has to come from a "feeling", fear or anger for one, which it just wouldn't be capable of, or at least I hope not, especially not the anger part...

And how do you know a cockroach doesn't fear death...?

Because many other animals do...

If you cast a shadow over it with your foot or shoe to squash it, or expose it from its hiding place, does it not try to run and hide elsewhere as quickly as it possibly can, out of fear maybe...?

A machine is just not capable of that...

Anyway,

God Bless!
 

sjastro

To say this code "taught" itself chess is the type of conflation - appropriation of animal traits - I'm talking about. It's easy for this type of thing to slip in when there's no rigorous scientific definition of "teach" that explains both the computer code and the human activity. Though if you want to give it a shot ...

There is no ambiguity about it.
Learning to play chess goes beyond knowing how the pieces move and the rules of chess.
That part, the moves and the rules, was programmed into AlphaZero; the tactical and strategic side of chess was self-taught by AlphaZero.
This is how AI operates through machine learning.

If you don't believe me, here is the peer-reviewed paper:
The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess) as well as Go.

I agree with @Neogaia777 , and the rub is this. The only advantage AI has that allows it to "destroy" chess players is its computational speed. [edit] And the reason chess programs have gotten better is that humans have gotten better at programming. [end edit] Were we able to somehow equate computer and human computations, and allow both the computer and the human the same number of computations per move (rather than using a time limit), I would expect humans would suddenly be competitive again.

For conventional computer chess programs, improvements have come from better programming and faster multi-core CPUs.
With machine-learnt programs the effect of hardware is not as straightforward.
The self-taught program Leela Chess Zero is in fact much stronger on a single or dual GPU than on much more powerful multi-core CPUs, as its neural networks run best on target platforms such as NVIDIA GPUs.
The recently developed neural network known as NNUE now allows neural networks to be run effectively on CPUs, resulting in a massive increase in performance for conventional programs.

The cold hard reality is that while conventional computer chess programs have been better than the best human players for at least ten years, the self-taught programs are now so far ahead that they play at an evolutionary level beyond the very best humans.
Grandmasters themselves contribute to this POV.
Human chess grandmasters generally expressed excitement about AlphaZero. Danish grandmaster Peter Heine Nielsen likened AlphaZero's play to that of a superior alien species. Norwegian grandmaster Jon Ludvig Hammer characterized AlphaZero's play as "insane attacking chess" with profound positional understanding. Former champion Garry Kasparov said, "It's a remarkable achievement, even if we should have expected it after AlphaGo."

Where humans may still be competitive is in correspondence chess with no time limits.
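For anyone curious what "reinforcement learning from self-play" looks like mechanically, here is a toy sketch shrunk down to tabular learning on tic-tac-toe. AlphaZero itself uses a deep neural network plus Monte Carlo tree search rather than a lookup table, so this only illustrates the self-play loop, and every name in it is invented:

```python
# Toy self-play learner: starts from random play, knows only the rules,
# and improves purely by playing against itself.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
values = defaultdict(float)  # learned estimate of each position's worth

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def after(board, move, player):
    b = board[:]
    b[move] = player
    return ''.join(b)

def play_one_game(eps=0.2, alpha=0.1):
    board, player, history = [' '] * 9, 'X', []
    while True:
        moves = [i for i, s in enumerate(board) if s == ' ']
        if random.random() < eps:   # explore: sometimes play randomly
            m = random.choice(moves)
        else:                       # exploit: play the best-rated position
            m = max(moves, key=lambda i: values[after(board, i, player)])
        board[m] = player
        history.append((''.join(board), player))
        w = winner(board)
        if w or ' ' not in board:   # game over: push the result back
            for state, p in history:
                r = 0.0 if not w else (1.0 if p == w else -1.0)
                values[state] += alpha * (r - values[state])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(20000):  # self-play training loop
    play_one_game()
```

After enough games the table steers play toward winning and drawn positions without anyone ever showing it a single human game, which is the sense in which the tactics are "self-taught".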
 

Neogaia777

Just did a Google search on "can machine learning ever lead to emotional feeling", "can machine learning ever lead to emotional awareness", "can machine learning ever lead to true self-awareness", and so on. I've been plugging in different things, and got some interesting results...?

God Bless!
 

Neogaia777

Just did a Google search on "can machine learning ever lead to emotional feeling", "can machine learning ever lead to emotional awareness", "can machine learning ever lead to true self-awareness", and so on. I've been plugging in different things, and got some interesting results...?

God Bless!
The most general consensus seems to be that they can mimic emotions, and can learn to appear to act or react to our emotions in an emotional way, maybe in some ways even better than most human beings do most of the time, but that they do not actually have or experience emotions, no matter how complex they are, at least not right now...?

And it may not be something that either we or they are capable of programming, in part because we still don't fully know why we, or the animals, feel, or how we come to have and experience true feelings and emotions...

Science still can't fully explain that...

Not even in a cockroach yet...

Anyway,

God Bless!
 

Neogaia777

Have any of you played the fairly recent (2018) video game "Detroit: Become Human" yet...?

It's a very, very interesting game, all about androids (A.I.s) becoming truly sentient, and the way it begins happening to them is also very, very interesting...

Has anyone on here played it...?
 

Neogaia777

We are bio-chemical-electrical "machines", and the way biology, chemistry, and electrical energy work together may be what gives us the ability to "feel", to have feelings and experience different kinds of emotions and different emotional states of... well... "mind", I guess, maybe? Though that might not be entirely accurate, because it affects our whole entire being. But take our biology and chemistry, for example: we experience different types of emotions and emotional states because they give us a "high" chemically, just like drugs do, and how many people only do drugs just to "feel"...

Many of us are quite literally "addicted" to them, our emotions, especially if we keep feeding the same kinds of emotions over and over and over again, because those connections in the brain then become stronger and more numerous, and the stronger those connections become, the harder they are to break. We can quite literally go through some pretty serious withdrawal symptoms if we are not feeding them on a regular basis, and this is just the way it is with many people...

But is any of this possible in a mechanical machine...? In order to have it feel, you would quite literally have to design it to act as if it had an addiction to drugs, but the drug would also have to give it a "high", and you'd also have to design it to experience withdrawal symptoms, certain lows, in the absence of it. The problem is that electro-mechanical machines can't "feel", because there is no biological or chemical component in them, which would seem to be required to make them this way...?

So they can never, ever truly "feel"...

I just know I eventually got tired of the roller-coaster ride... I used to have an addiction to drugs, used them to "feel", but I don't any longer, and now it almost scares the poop out of me to feel anymore, in part because I know how it works now, and in part because I eventually sought stability, balance, and an even keel in my emotions, off the world's roller-coaster ride. I don't even watch much TV anymore because of the way it manipulates me/you/us that way, and I'm very, very careful now with all my entertainment and everything I choose to expose myself to, including people in any kind of social environments or social circles. They're like animals to me now, great ferocious beasts, predatory carnivores that eat and devour one another, and it scares me now, and I want no part of it...

But, anyway, back to topic...

I think potential A.I.s are capable of a lot of very great "mechanical intelligence", but I also think that, without the ability to feel or experience any kind of emotions at all, they are also very limited, which should, in my view, make us think twice about just how much power and/or authority and/or control we give them in the future...

Anyway,

God Bless!
 

Gene2memE

I think it would still just be a "numbers calculation"...

And it wouldn't be able to feel the "harm"...

Sure it would. It's just a different kind of 'feeling' than humans, or other creatures, have. You're being rather anthropocentric here.

Unless you are suggesting that we program it to react, or retaliate, to certain kinds of threats or damage done to it in certain situations...?

And that sounds almost like the beginning of programming some kind of "morality" into it, maybe...?

Self-preservation responses to external stimuli aren't programming in "morality". They're solutions to ensure the survival of an autonomous AI: if X occurs, then Y.

As for reacting versus retaliating, there's a huge difference between them in terms of the moral dimension, and the second requires a massive leap in capabilities over the first. There are also huge differences when you consider the source of the harm.

For instance, there are already automated delivery drones that use radar, lidar, and 'sense and avoid' programming to detect and respond to things that may damage them. The drone reacts on its own to avoid things that might damage it.

Are you suggesting that if someone starts chucking rocks at a drone, onboard (or network) AI should have an option to retaliate?

What would you suggest as acceptable or unacceptable levels of harm or threat...? And how should the A.I. either, A: react to it being done, or B: try to avoid it being done...?

Acceptable levels of harm depend on the parameters of what the AI is attempting to accomplish, or is being used to accomplish.

If you're sending an AI controlled Boston Dynamics 'Big Dog' into a structure fire to look for possible injured/trapped/unconscious people, the threshold of acceptable 'harm' is probably greater than if you're using it to cart around firewood.

I don't think it would be "true self-preservation" though, because that often has to come from a "feeling", fear or anger for one, which it just wouldn't be capable of, or at least I hope not, especially not the anger part...

And how do you know a cockroach doesn't fear death...?

You're right.

I don't know what the neurological threshold for consciousness is, so I can't actually argue that a cockroach is not conscious. I'd point out, though, that your gut has about 500 million neurons in it, which is roughly 500 times the number of neurons in the brain of a cockroach.

If you cast a shadow over it with your foot or shoe to squash it, or expose it from its hiding place, does it not try to run and hide elsewhere as quickly as it possibly can, out of fear maybe...?

A machine is just not capable of that...

You might be surprised by what a machine is capable of. For instance, drone detect-and-avoid systems are already far better than humans at picking up possible aerial collision threats and maneuvering out of the way.

Iris Automation | Sense and Avoid: How it Works in Unmanned Aerial Vehicles

If a machine can pick up on a threat, and then maneuver to avoid it and go about its business, is it not responding to ensure its own survival, just as the cockroach is, regardless of whether it has 'fear' or not?
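The core of such a detect-and-avoid loop is simple enough to sketch. The sensor and command interfaces below are invented for illustration; real systems like the one linked above fuse radar/lidar tracks and run certified avoidance logic:

```python
# Sketch of a drone sense-and-avoid loop: detect anything inside the
# safety bubble and dodge it. Interfaces and values are illustrative.
import math

SAFE_DISTANCE_M = 30.0  # assumed minimum separation before reacting

def closest_threat_m(obstacles):
    """Distance to the nearest detected object, given (x, y, z) offsets
    in metres relative to the drone."""
    return min((math.dist((0, 0, 0), o) for o in obstacles),
               default=float("inf"))

def avoidance_step(obstacles):
    """'If X occurs, then Y': the machine's version of the cockroach
    scuttling away from a shadow."""
    if closest_threat_m(obstacles) < SAFE_DISTANCE_M:
        return "climb and yaw away"  # placeholder for a real maneuver command
    return "hold course"

print(avoidance_step([(12.0, -4.0, 2.0)]))  # -> climb and yaw away
```

Nothing in the loop feels anything, yet from the outside the behavior is hard to distinguish from an animal's startle-and-flee response.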
 

timewerx

How would you program it to overly care about its own self-preservation...?

As in a fear of death? Because that would take emotions, emotions that it just simply wouldn't have. It wouldn't care about any kind of emotional considerations at all unless it could actually, physically feel them...

Emotion doesn't make you eat, unless you're a "depressed eater" or something.

Hunger makes you eat, and hunger is a signal that can be felt in different parts of the body.

Man-made programs also interact through signals. You can certainly do the same thing with self-preservation: trigger it whenever the ability to function normally is threatened.
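In that spirit, a toy sketch where a low battery plays the role of hunger, a signal rather than a feeling that redirects behavior (the names and the threshold are invented):

```python
# A "hunger" signal for a machine: battery level redirects behavior the
# way hunger redirects an animal, no emotion required. Values illustrative.
HUNGER_THRESHOLD_PCT = 20.0

def next_behavior(battery_pct: float, current_task: str) -> str:
    if battery_pct < HUNGER_THRESHOLD_PCT:
        return "seek charger"  # the signal overrides whatever we were doing
    return current_task

print(next_behavior(12.0, "patrol"))  # -> seek charger
```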
 

timewerx

Anyone interested in combat applications?

A program that self-terminates on a whim, or doesn't care about its survival, won't be useful. It could become non-functional long before its life expectancy, putting much of its procurement cost to waste, and nobody (human or otherwise) would like that.
 

durangodawood

Who would create AI that doesn't care about its own self-preservation?
Pretty much all software created right now doesn't care about self-preservation, yet people still make it. I elect to keep and update certain applications for years. Decades now, actually.

Also, society may well exert an ethical push on creators to limit self-interest in AI. I imagine some would conform while others wouldn't.
 

J_B_

This sounds like "if humans were much better at chess, they could be competitive again."

Not at all. Your comment makes you sound unfamiliar with competitive sports/games.

Computers will always have an edge over humans in terms of computational speed. If that edge is a significant factor in the game played, computers will always win. But computational speed is not (IMO) an indicator of intelligence - artificial or otherwise. That was my point.

I proposed a hypothesis: if speed were eliminated as a factor by changing the rules, humans could beat computers under those new rules. It happens all the time in competition. Either the rules change to make things "fair" (something constantly happening in professional sports), or different leagues are created to allow everyone to compete (NCAA divisions I, II, III ... men's & women's basketball, etc.).

Now, if you want to argue computational speed is an aspect of intelligence, then you would actually be addressing my post rather than just booing from the cheap seats.
 