Preventing artificial intelligence from taking on negative human traits.

sjastro

Newbie
May 14, 2014
4,855
3,890
✟273,856.00
Faith
Christian
Marital Status
Single
I guess that's technically correct, but there must be hundreds or thousands(?) of person-years of human research: trial-and-error statistics, game theory, search-algorithm experience, etc., poured into distilling the present 'winning formula'/algorithm choices?

Exactly how that might manifest itself in the final product has sort of been lost in all the number crunching, I think(?)
We humans created AI, and its definition as described by computer science sets the objective for AI.
Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation".
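As a toy illustration of that definition (every name here is hypothetical, just a sketch): an 'agent' that perceives its state and picks the action maximizing its estimated chance of reaching its goal.

```python
# Minimal sketch of the "intelligent agent" definition above.
# All names are hypothetical and purely illustrative.

def choose_action(percept, actions, score):
    """Pick the action with the highest estimated goal-score."""
    return max(actions, key=lambda a: score(percept, a))

# Toy environment: walk along a number line until position 10 is reached.
GOAL = 10
position = 0
while position != GOAL:
    # Score an action by how close it brings the agent to the goal.
    position += choose_action(position, [-1, +1],
                              lambda pos, a: -abs(pos + a - GOAL))
print("goal reached at", position)
```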
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
Sure. AI came up with a new strategy. I've never disputed that, and though that may be your point, it didn't seem to be anyone else's; the focus was rather on the "novel" part - an attempt to imply a human would never come up with that idea, thereby proving AI is actually intelligent. It's that part I don't buy.
That wasn't my point. AI is intelligent by the definition I gave.

Thanks for the definition. A minimalist you are, then.
It's the definition of 'intelligence' in this context - it's what the 'I' in real-world AI means.

As @durangodawood said earlier, the ability to step outside the game. A computer that would look at flat space and think, "Hmm, what about curved space? What about time as a dimension that results in spacetime?" In the case of Chess, a computer that would try to cheat - intimidate, fatigue, distract its opponent.
Well anyone that knows about AI game players would know they're domain-specific problem solvers, using the rules of the game to solve the problem of achieving a win; they have no greater context, no metacognition, no understanding that they're even 'playing a game'.

For that, you'd need something far more sophisticated, with metacognition, detailed knowledge of the gaming world, and an understanding of human culture, values, and behaviour, in that context (which would, presumably, require a broader knowledge & understanding than just gaming).
 

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟101,755.00
Country
United States
Faith
Christian
Marital Status
Private
That wasn't my point. AI is intelligent by the definition I gave.

Yes, I realize that. I was speaking of what some others seemed to say.

It's the definition of 'intelligence' in this context - it's what the 'I' in real-world AI means.

Do you have a reference for that?

Well anyone that knows about AI game players would know they're domain-specific problem solvers, using the rules of the game to solve the problem of achieving a win; they have no greater context, no metacognition, no understanding that they're even 'playing a game'.

For that, you'd need something far more sophisticated, with metacognition, detailed knowledge of the gaming world, and an understanding of human culture, values, and behaviour, in that context (which would, presumably, require a broader knowledge & understanding than just gaming).

I wouldn't say everyone knows this, but I agree. Therefore, circling back to the OP, how do the morals of an AI fit into the conversation?
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
I'd like to point out that traditional chess engines surpassed the benchmark set by AlphaZero, and the strongest engines currently are those that use a combination of traditional and neural-network techniques.
Agreed; you don't need a neural network. A brute-force approach, with some memory of successful & unsuccessful paths to help tweak the position evaluation, can take you a long way.
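For instance, a minimal sketch of that approach, assuming a hypothetical game interface (legal_moves, apply, evaluate, is_over): plain minimax with alpha-beta pruning, plus a transposition table serving as the 'memory of paths'.

```python
# Sketch of the brute-force idea: minimax search with alpha-beta pruning
# and a transposition table. The game interface is hypothetical; a real
# engine would supply chess-specific versions of these methods.

CACHE = {}  # (state, depth, maximizing) -> score already computed

def search(game, state, depth, alpha, beta, maximizing):
    key = (state, depth, maximizing)
    if key in CACHE:                       # seen this position before: reuse
        return CACHE[key]
    if depth == 0 or game.is_over(state):
        return game.evaluate(state)        # static position evaluation
    best = float("-inf") if maximizing else float("inf")
    for move in game.legal_moves(state):
        score = search(game, game.apply(state, move),
                       depth - 1, alpha, beta, not maximizing)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if beta <= alpha:                  # prune: the opponent avoids this line
            break
    CACHE[key] = best
    return best
```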
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
Do you have a reference for that?
No, I'm just inferring it; but given the capabilities of real-world AI systems, what else could it mean?

I wouldn't say everyone knows this, but I agree. Therefore, circling back to the OP, how do the morals of an AI fit into the conversation?
Presumably, when AIs become sophisticated enough to make value judgements, we would like them to have and prioritise values that are, at least, not to our detriment. How that could be done is the subject of a lot of study.

Currently, values are effectively built-in; e.g. for self-driving cars, pedestrians are on the list of objects to avoid, and further down the line, pedestrian categories could, in principle, be ranked by value (assuming they could be discriminated) for Trolley Problem scenarios. But there are practical and ethical issues in ranking categories, e.g. valuing a younger life over an older one, or valuing the car's occupants over pedestrians, because life is far more complex than that; and, more generally, moral values are strongly culturally influenced, and so on.
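As a purely illustrative sketch of what 'built-in' values might look like (all names hypothetical; no real self-driving stack is remotely this simple):

```python
# Hard-coded ranking of object categories by avoidance priority.
# Purely illustrative: ranking pedestrian categories any further would
# raise exactly the ethical problems described above.

AVOIDANCE_PRIORITY = {
    "pedestrian": 3,  # highest: always avoid
    "cyclist": 3,
    "vehicle": 2,
    "animal": 1,
    "debris": 0,      # lowest
}

def rank_hazards(detected):
    """Order detected object categories, most valued first."""
    return sorted(detected,
                  key=lambda obj: AVOIDANCE_PRIORITY.get(obj, 0),
                  reverse=True)

print(rank_hazards(["debris", "pedestrian", "vehicle"]))
# ['pedestrian', 'vehicle', 'debris']
```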
 

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟101,755.00
Country
United States
Faith
Christian
Marital Status
Private
Presumably, when AIs become sophisticated enough to make value judgements, we would like them to have and prioritise values that are, at least, not to our detriment. How that could be done is the subject of a lot of study.

Currently, values are effectively built-in; e.g. for self-driving cars, pedestrians are on the list of objects to avoid, and further down the line, pedestrian categories could, in principle, be ranked by value (assuming they could be discriminated) for Trolley Problem scenarios. But there are practical and ethical issues in ranking categories, e.g. valuing a younger life over an older one, or valuing the car's occupants over pedestrians, because life is far more complex than that; and, more generally, moral values are strongly culturally influenced, and so on.

What entity is responsible for the actions of an AI-driven machine?
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
What entity is responsible for the actions of an AI-driven machine?
I think the designers, builders, and trainers (and sellers?) should be legally responsible, but only up to a point. The more complex and unpredictable the tasks the AI undertakes, e.g. autonomous driving, the less it is possible to anticipate all the possible failure modes, and so the less I think it is reasonable to hold the designers, builders, and trainers responsible for failures they could not reasonably be expected to anticipate.

So I think reasonableness is the test, and insurance should bridge the gap. I don't really think the situation is much different, in principle, from non-AI systems, just somewhat more complex.

E.T.A. There may come a point when certain AIs have individual identities and are behaviourally self-developed to a degree that we could consider the designers, builders, and trainers to be in the same position as parents who have less responsibility for their offspring as time goes on, in which case we could hold a 'mature' AI fully responsible in the same way as a mature adult.

What that means in practice is anyone's guess. I think the vast majority of AIs will be linked so that all AIs of the same type share the same knowledge & experience.
 

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟101,755.00
Country
United States
Faith
Christian
Marital Status
Private
I think the designers, builders, and trainers (and sellers?) should be legally responsible, but only up to a point. The more complex and unpredictable the tasks the AI undertakes, e.g. autonomous driving, the less it is possible to anticipate all the possible failure modes, and so the less I think it is reasonable to hold the designers, builders, and trainers responsible for failures they could not reasonably be expected to anticipate.

So I think reasonableness is the test, and insurance should bridge the gap. I don't really think the situation is much different, in principle, from non-AI systems, just somewhat more complex.

I don't see it as any different from any other saleable product.

E.T.A. There may come a point when certain AIs have individual identities and are behaviourally self-developed to a degree that we could consider the designers, builders, and trainers to be in the same position as parents who have less responsibility for their offspring as time goes on, in which case we could hold a 'mature' AI fully responsible in the same way as a mature adult.

What that means in practice is anyone's guess. I think the vast majority of AIs will be linked so that all AIs of the same type share the same knowledge & experience.

It makes for some nice Marvel movies, but I don't expect to see anything other than that in my lifetime.
 

sjastro

Newbie
May 14, 2014
4,855
3,890
✟273,856.00
Faith
Christian
Marital Status
Single
There are three types of machine learning.
Supervised and unsupervised learning involve human interaction, which gets back to the original post's concern about human bias in AI.

Reinforcement learning is based on the psychological reward system of encouraging a particular type of behaviour by the use of positive stimuli (rewards).
Paradoxically this system of learning produces AI devoid of any human bias.

AlphaZero and Leela Chess Zero are prime examples of reinforcement learning.
 

SelfSim

A non "-ist"
Jun 23, 2014
6,154
1,956
✟174,730.00
Faith
Humanist
Marital Status
Private
.. Paradoxically this system of learning produces AI devoid of any human bias.
Yet it looks to be 100% indistinguishable from how humans go about learning(?)

If this is so, then surely the process itself, (or 'the how'), is the bias there?
 

sjastro

Newbie
May 14, 2014
4,855
3,890
✟273,856.00
Faith
Christian
Marital Status
Single
Yet it looks to be 100% indistinguishable from how humans go about learning(?)

If this is so, then surely the process itself, (or 'the how'), is the bias there?
Reinforcement learning does not use existing data for training.
The bias could be in the data itself, as Amazon's data-trained recruiting program illustrates.
SAN FRANCISCO (Reuters) - Amazon.com Inc's AMZN.O machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
 

Ponderous Curmudgeon

Well-Known Member
Feb 20, 2021
1,477
944
65
Newfield
✟38,862.00
Country
United States
Faith
Non-Denom
Marital Status
Divorced
There are three types of machine learning.
Supervised and unsupervised learning involve human interaction, which gets back to the original post's concern about human bias in AI.

Reinforcement learning is based on the psychological reward system of encouraging a particular type of behaviour by the use of positive stimuli (rewards).
Paradoxically this system of learning produces AI devoid of any human bias.

AlphaZero and Leela Chess Zero are prime examples of reinforcement learning.
Where did the psychological reward system come from? Have we not just moved the bias?
 

sjastro

Newbie
May 14, 2014
4,855
3,890
✟273,856.00
Faith
Christian
Marital Status
Single
Where did the psychological reward system come from? Have we not just moved the bias?
Using chess as an example, supervised learning, which uses human chess games as the training data, results in human bias, as the AI program plays like a human.

In game theory, humans and computers play games to win in order to obtain some payoff.
For humans it could be the satisfaction of winning or monetary gain; in the case of computers it is the maximization of some numerical value.

In reinforcement learning, the chess moves and the rules of chess are the only parameters given to the program.
The program plays itself, but under a reward system: if it wins it gives itself a positive value, if it loses a negative value.
Since the payoff is to maximize this value, the program learns to favour the moves from games it wins and to avoid the moves from games it loses.
The program starts off playing purely random moves, but through reinforcement learning the win/loss ratio increases with time.
Since there is no human involvement in the training except for the chess moves and rules, the reward system produces a concept of chess strategy unique to AI.
There is now a reverse bias, with the top human chess players being influenced by AI strategies.
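To make that concrete, here is a toy sketch of the same scheme (illustrative only, and nothing like AlphaZero's actual sophistication), using Nim instead of chess so it fits in a few lines: the program is given only the moves and the rules, starts out random, and after each game credits +1 to the winner's moves and -1 to the loser's.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning. Assumed game: Nim with 5 stones,
# take 1 or 2 per turn, whoever takes the last stone wins.

value = defaultdict(float)   # (stones_left, move) -> learned value
EPSILON = 0.2                # chance of a random exploratory move

def pick_move(stones):
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)                       # explore
    return max(moves, key=lambda m: value[(stones, m)])   # exploit

for _ in range(20000):
    stones, player, history = 5, 0, {0: [], 1: []}
    while stones > 0:
        move = pick_move(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player          # took the last stone: wins
        else:
            player = 1 - player
    for sm in history[winner]:
        value[sm] += 1.0             # reinforce the winning moves
    for sm in history[1 - winner]:
        value[sm] -= 1.0             # penalize the losing moves

# With 5 stones the winning move is to take 2, leaving a lost position of 3:
print(value[(5, 2)] > value[(5, 1)])  # True after training
```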

A more in-depth view of reinforcement learning can be found in this video.

 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
Reinforcement learning... Paradoxically this system of learning produces AI devoid of any human bias.
It seems to me that this applies only to contexts where human bias is irrelevant. If a human is the judge for reinforcing AI behaviour in a context where human bias may be relevant, then any relevant bias that person has may be reinforced in the AI.
 

sjastro

Newbie
May 14, 2014
4,855
3,890
✟273,856.00
Faith
Christian
Marital Status
Single
It seems to me that this applies only to contexts where human bias is irrelevant. If a human is the judge for reinforcing AI behaviour in a context where human bias may be relevant, then any relevant bias that person has may be reinforced in the AI.
As shown in the video, the reinforcement learning of an AI chess program does not necessarily produce an immediate return when the program makes a move.
The program generally needs to look a number of moves ahead in order to evaluate the return.
This requires an inductive bias introduced by the programmer.
The inductive bias can be weak or strong depending on the algorithm used (Dynamic Programming, Monte Carlo, Temporal Difference, etc.) and affects the learning rate.
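As a sketch of the temporal-difference flavour of that (names and numbers purely illustrative): no reward arrives until the game ends, so each position's estimated value is nudged toward the estimate of the position that follows it, and the learning rate is one place the programmer's inductive bias enters.

```python
# TD(0)-style update over one game. Illustrative sketch only.

ALPHA = 0.1  # learning rate: part of the chosen inductive bias

def td_update(values, trajectory, final_reward):
    """values: position -> estimated return; trajectory: positions in order."""
    for current, following in zip(trajectory, trajectory[1:]):
        values[current] += ALPHA * (values[following] - values[current])
    last = trajectory[-1]
    values[last] += ALPHA * (final_reward - values[last])

# Toy usage: a three-position game ending in a win (+1). Over repeated
# games, the terminal reward propagates backward to earlier positions.
vals = {"opening": 0.0, "middlegame": 0.0, "endgame": 0.0}
td_update(vals, ["opening", "middlegame", "endgame"], final_reward=1.0)
print(vals)  # {'opening': 0.0, 'middlegame': 0.0, 'endgame': 0.1}
```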

From this perspective one can argue there is a ‘human bias’, but the ‘human bias’ I had in mind is consistent with the OP where AI decision making mirrors the weaknesses in human decision making such as the Amazon example where unsupervised learning was used.

AlphaZero and Leela Chess Zero were trained with an inductive bias, but their games are not human-like, as I can attest after being slaughtered by Leela Chess Zero. :(
Maia, on the other hand, was trained with supervised learning on human games, and not only plays like a human but makes all those blunders we humans are capable of making.
 

SelfSim

A non "-ist"
Jun 23, 2014
6,154
1,956
✟174,730.00
Faith
Humanist
Marital Status
Private
The very creation of AI, itself, is starting to look very much like an exercise in human engineers attempting reinforcement learning for themselves(?)
I mean, creating a 'successful' example of an almost de novo style of learning helps to highlight our own 'blunders' in how we go about learning something counterintuitive.

I'd love to see the knowledge acquired from Leela (etc) applied to learning how one could potentially exceed human thinking limitations in a field like say, QM.
Now that would be really something to see, eh?
 

sjastro

Newbie
May 14, 2014
4,855
3,890
✟273,856.00
Faith
Christian
Marital Status
Single
The very creation of AI, itself, is starting to look very much like an exercise in human engineers attempting reinforcement learning for themselves(?)
I mean, creating a 'successful' example of an almost de novo style of learning helps to highlight our own 'blunders' in how we go about learning something counterintuitive.

I'd love to see the knowledge acquired from Leela (etc) applied to learning how one could potentially exceed human thinking limitations in a field like say, QM.
Now that would be really something to see, eh?
There are plenty of examples of the use of AI in physics.

 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
As shown in the video, the reinforcement learning of an AI chess program does not necessarily produce an immediate return when the program makes a move.
The program generally needs to look a number of moves ahead in order to evaluate the return.
This requires an inductive bias introduced by the programmer.
The inductive bias can be weak or strong depending on the algorithm used (Dynamic Programming, Monte Carlo, Temporal Difference, etc.) and affects the learning rate.

From this perspective one can argue there is a ‘human bias’, but the ‘human bias’ I had in mind is consistent with the OP where AI decision making mirrors the weaknesses in human decision making such as the Amazon example where unsupervised learning was used.

AlphaZero and Leela Chess Zero were trained with an inductive bias, but their games are not human-like, as I can attest after being slaughtered by Leela Chess Zero. :(
Maia, on the other hand, was trained with supervised learning on human games, and not only plays like a human but makes all those blunders we humans are capable of making.
I wasn't thinking so much about rules-based games as about applications where human bias is likely, such as job candidate selection, crime prediction, and other societal contexts where complex value judgements concerning human competence, behaviour, etc., are involved.
 

sjastro

Newbie
May 14, 2014
4,855
3,890
✟273,856.00
Faith
Christian
Marital Status
Single
I wasn't thinking so much about rules-based games as about applications where human bias is likely, such as job candidate selection, crime prediction, and other societal contexts where complex value judgements concerning human competence, behaviour, etc., are involved.
I was referring only to reinforcement learning.
The examples you provided are of unsupervised learning, where data is used to train the algorithm.
I've mentioned the case of Amazon's recruitment algorithm, which discriminated against hiring women because the data was biased; the same problem can arise in crime prediction, where the higher arrest and incarceration rates for Australian Indigenous people or African Americans can lead to algorithms using racial profiling.
Predictive policing algorithms are racist. They need to be dismantled. – MIT Technology Review
 