Preventing artificial intelligence from taking on negative human traits.

SelfSim

A non "-ist"
Jun 23, 2014
6,172
1,963
✟176,122.00
Faith
Humanist
Marital Status
Private
That is part of the decision making AI has to make.
A driverless car is also expected to safely overtake, brake at a safe distance and know how to park which depends on the position of other vehicles and is random.
So I get that the plan (which might get encoded directly into some program) might have problems if it attempts to make explicit references to something which may well be random .. so wouldn't the programmer just avoid doing that by creating a different set of rules which aren't dependent on random things (as you say there: the position of other vehicles). Eg: Rule #1: always leave the immobile car 2 metres from any other two immobile (parked) vehicles(?) .. or something like that?
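A fixed-clearance rule like the one suggested could be sketched as a check that never references the random positions directly .. it only enforces a constant constraint against whatever positions happen to be observed. All names and the 2-metre threshold below are illustrative assumptions, not a real driverless-car API:

```python
# Minimal sketch of the fixed-clearance parking rule suggested above.
# Hypothetical names; the 2 m threshold is just "Rule #1" from the post.

MIN_CLEARANCE_M = 2.0  # Rule #1: keep 2 metres from any parked vehicle

def spot_is_acceptable(spot_edges, parked_vehicle_edges):
    """Return True if the candidate spot keeps the minimum clearance
    from every parked vehicle, wherever they happen to be.

    spot_edges / parked_vehicle_edges: (start_m, end_m) intervals along the kerb.
    """
    s_start, s_end = spot_edges
    for v_start, v_end in parked_vehicle_edges:
        # Gap between the spot and this vehicle along the kerb (0 if overlapping)
        gap = max(v_start - s_end, s_start - v_end, 0.0)
        if gap < MIN_CLEARANCE_M:
            return False
    return True

# The rule itself is fixed; only the observed positions vary.
print(spot_is_acceptable((10.0, 15.0), [(0.0, 7.5), (17.5, 22.0)]))  # True
print(spot_is_acceptable((10.0, 15.0), [(0.0, 8.5)]))                # False
```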
 
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
6,172
1,963
✟176,122.00
Faith
Humanist
Marital Status
Private
I think what I'm getting at here is that the complex responses which we humans apparently employ in order to navigate an evidently complex universe are more or less intuitive to us, yet we are (mostly) still following a fairly simple set of rules .. survival, laziness (energy conservation), avoidance of pain, bad smells, unpleasant images, etc.

I suppose then, if we are able to distinguish what those rules actually are in a given context (which involves a lot of hard work), then I would suppose those rules could be encoded into AI software(?)

The behaviours of that AI software, would then display similar (but nonetheless, slightly different) appearances as our own under those conditions(?) I don't think the individual responses would be predictable in advance, but the overall general behaviours would be very recognisable to us (intuitively so)?

Is that what we're talking about when it comes to AI 'intelligence' in this thread? (whilst trying to avoid the definitions bottleneck?)
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
So I get that the plan (which might get encoded directly into some program) might have problems if it attempts to make explicit references to something which may well be random .. so wouldn't the programmer just avoid doing that by creating a different set of rules which aren't dependent on random things (as you say there: the position of other vehicles). Eg: Rule #1: always leave the immobile car 2 metres from any other two immobile (parked) vehicles(?) .. or something like that?

My grad work was in machine control. When I was in grad school in the late 1980s (30+ years ago) I had a robot that could handle collision avoidance for randomly placed stationary objects. So, it's a problem that's been solved for many years.

Moving objects are more complicated. Moving objects whose trajectory can change at any moment are harder still. Still, in terms of logic & adaptation, the chess problem could be more challenging than self-driving cars. I'd have to dig into it more, but it wouldn't surprise me.
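For a flavour of why randomly placed stationary obstacles count as the "solved" case, here is a toy sketch: the algorithm never encodes the obstacle positions, it simply searches whatever map the sensors report. This is illustrative only, not the actual 1980s robot controller being described:

```python
# Toy stationary-obstacle avoidance: breadth-first search on a grid.
# 0 = free cell, 1 = obstacle. The obstacle layout can be arbitrary
# ("random"); the algorithm itself is unchanged.

from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = find_path(grid, (0, 0), (0, 2))  # routes around the wall of obstacles
```

Moving obstacles break this picture because the map is stale the moment it is searched, which is the extra difficulty noted above.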

But when I got into industry, I was frustrated to find the code is only one challenge. The more difficult challenges are the placement & durability of the sensors, and how easily they're fooled or compromised by something like getting covered in mud.
 
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
6,172
1,963
✟176,122.00
Faith
Humanist
Marital Status
Private
But when I got into industry, I was frustrated to find the code is only one challenge. The more difficult challenges are the placement & durability of the sensors, and how easily they're fooled or compromised by something like getting covered in mud.
So, there are inaccuracies creeping into sensor measurements, which may have more impact in the dynamic cases .. but there are other arrows in AI's quiver to overcome those inaccuracies, which we don't have (such as: pretty well infinite, perfect memory retention/retrieval, and pretty well instantaneous, perfect application of uninterrupted logical focus, etc.).
Swings and roundabouts there .. and nothing which comes to mind which would distinguish AI from human intelligence there .. (other than by way of us actually knowing how the two go about 'being intelligent', as the basis of saying the two are fundamentally different intelligences)?
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
So, there are inaccuracies creeping into sensor measurements, which may have more impact in the dynamic cases .. but there are other arrows in AI's quiver to overcome those inaccuracies, which we don't have (such as: pretty well infinite, perfect memory retention/retrieval, and pretty well instantaneous, perfect application of uninterrupted logical focus, etc.).
Swings and roundabouts there .. and nothing which comes to mind which would distinguish AI from human intelligence there .. (other than by way of us actually knowing how the two go about 'being intelligent', as the basis of saying the two are fundamentally different intelligences)?

There are computers available that could give AI some of those advantages, and those things are often brought to bear in chess AI, but not as much as you would think in autonomous vehicles. Cost, space, and heat loads become an issue. If you'd like to drive a tank-sized car where you are squeezed into a tiny seat and everything else is for the AI, it might have all the capabilities you envision. My colleagues and I were constantly battling over memory, clock cycles, and sensor channels for our competing goals in improving the vehicle.

I worked on clutch control, and a constant headache for us was that we could never get approval for a pressure feedback sensor. It meant we had to "tune" the system (essentially guess) because by the time other sensors detected something was happening, it would be too late for the controller to respond - and that was only a matter of tens of loops (a few tenths of a second) - but people in the vehicle could feel it if we got it wrong.
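The difference between the open-loop "tuning" described here and what a pressure sensor would have allowed can be sketched roughly as follows. Every name and number is made up for illustration; real clutch controllers are far more involved:

```python
# Without a pressure sensor, the clutch command replays a pre-tuned
# (essentially guessed) table and hopes conditions match the test bench.
# With a sensor, a simple feedback loop could correct errors every cycle.

TUNED_TABLE = {0: 0.2, 1: 0.5, 2: 0.8}  # engagement phase -> commanded pressure

def open_loop_command(phase):
    # No feedback: the guess is all we have.
    return TUNED_TABLE[phase]

def closed_loop_command(target, measured, gain=0.5):
    # Proportional correction each control cycle, using the measured pressure.
    return measured + gain * (target - measured)

# Tens of control loops (a few tenths of a second, per the post):
measured = 0.0
for _ in range(10):
    measured = closed_loop_command(0.8, measured)
# measured has now converged most of the way to the 0.8 target
```

The point of the anecdote stands either way: without the sensor, there is no `measured` value to correct against, so errors only show up when the passengers feel them.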

Still, sensor issues are something AI can't overcome. Imagine a game of blind man's bluff. If you can't sense it, you can't calculate what to do, no matter how much memory you have and how fast you are.

So, while chess AI may be the more technically challenging AI problem, the stakes are much higher in autonomous vehicles if you get it wrong.
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
I think what I'm getting at here is that the complex responses which we humans apparently employ in order to navigate an evidently complex universe are more or less intuitive to us, yet we are (mostly) still following a fairly simple set of rules .. survival, laziness (energy conservation), avoidance of pain, bad smells, unpleasant images, etc.

I suppose then, if we are able to distinguish what those rules actually are in a given context (which involves a lot of hard work), then I would suppose those rules could be encoded into AI software(?)

The behaviours of that AI software, would then display similar (but nonetheless, slightly different) appearances as our own under those conditions(?) I don't think the individual responses would be predictable in advance, but the overall general behaviours would be very recognisable to us (intuitively so)?

Is that what we're talking about when it comes to AI 'intelligence' in this thread? (whilst trying to avoid the definitions bottleneck?)

If I understand you, I would say the AI being discussed here is more impressive than what you describe.

Imagine being told the rules of baking: what happens when you mix various things, heat them, etc. Everybody "knows" the perfect cake involves mixing flour, eggs, sugar, oil, and chocolate. But the AI tells you something crazy, to mix ingredients that make no sense, heat it to temperatures that make no sense, etc. No one's ever done that before. But you do it and the result is the most amazing cake you've ever tasted.

It may be impressive, but I'm disputing that the AI is intelligent. The Sistine Chapel is impressive, but the paint brush isn't the artist.
 
Upvote 0

Oneiric1975

Well-Known Member
Apr 23, 2021
1,044
684
48
Seattle
✟15,282.00
Country
United States
Faith
Seeker
Marital Status
In Relationship
Just firewall those machines against this:

Galatians 5:19 Now the works of the flesh are manifest, which are these; Adultery, fornication, uncleanness, lasciviousness,
20 Idolatry, witchcraft, hatred, variance, emulations, wrath, strife, seditions, heresies,
21 Envyings, murders, drunkenness, revellings, and such like:

Ironically enough that was EXACTLY the training set I was going to use for my document classifier algorithm. Now I'll have to change it I guess.
 
Upvote 0

Strathos

No one important
Dec 11, 2012
12,663
6,531
God's Earth
✟263,276.00
Faith
Christian
Marital Status
Single
Politics
US-Democrat
I'd like to point out that traditional chess engines surpassed the benchmark set by AlphaZero, and the strongest engines currently are those that use a combination of traditional and neural network techniques.
 
Upvote 0

sjastro

Newbie
May 14, 2014
4,902
3,960
✟276,494.00
Faith
Christian
Marital Status
Single
You accuse me of obfuscation. I will simply note I am in good company. What I repeatedly ask you to do, and what you repeatedly refuse to do, is define your terms. For someone who claims to be a physicist, it perplexes me that you won't do it. Regardless, in this video Feynman is asked to answer whether machines are intelligent. Almost the first comment he makes is that in order to answer, intelligence must be defined. (see 0:28)


You deferred on that point. So, I took the liberty of doing it myself. Since, however, you declined to share, I likewise decline to share how I came to that conclusion ... though if you read my earlier replies, you'll see that actually I've already answered the question.

Good day, sir.

I asked you how a programmer can anticipate the random conditions a driverless car encounters and this is the response I get.
This is typical of your obfuscation in this thread but I will respond anyway.

1) I never claimed to be a physicist; in fact I have gone out of my way in other threads to point out my background is applied mathematics.
2) Your motivation behind posting the Feynman video is a classic example of the argument from authority fallacy.
3) The video is horribly outdated; it goes back to 1985, when AI was still in a rudimentary stage and AI in computer chess was decades away.
One can only wonder if Feynman would have the same opinion if he were alive today.
4) I respect expertise and if a peer reviewed paper claims there was no human involvement in the self training of AlphaZero I will accept it.
You accuse me of not explaining things in ‘my terms’.
I don’t have to explain in my terms because it is based on what the experts claim, which is why I have referred you to the peer reviewed paper in dealing with your questions.
5) Since you disagree with the experts who developed AlphaZero or in this case the role of AI in driverless cars, the onus is on you to answer the questions in order for me to understand the reasons behind your assertions.
Instead I get these blatant diversionary tactics.
 
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
6,172
1,963
✟176,122.00
Faith
Humanist
Marital Status
Private
It may be impressive, but I'm disputing that the AI is intelligent. The Sistine Chapel is impressive, but the paint brush isn't the artist.
This topic intrigues me because I lean towards a philosophy of Science which acknowledges (incorporates?) the human mind of the observer. (I think this approach is more consistent than, say, one which relies on philosophical realism, aka: the universe exists independently from our minds.) One conclusion that can be formed with the mind-dependency model is that what we see happening with human minds is a continual process of the mind exploring its own perceptions (and thinking capabilities).

So what you're leaning towards is sort of a rejection of the notion of a mind exploring itself, when we think about AI intelligence. I'm not at all sure this tracks towards an optimally consistent version of Science .. but I have to accept you may also be right in treating AI as something which exists independently from our human minds .. (which is interesting for me).
 
Upvote 0
This site stays free and accessible to all because of donations from people like you.
Consider making a one-time or monthly donation. We appreciate your support!
- Dan Doughty and Team Christian Forums

sjastro

Newbie
May 14, 2014
4,902
3,960
✟276,494.00
Faith
Christian
Marital Status
Single
I'd like to point out that traditional chess engines surpassed the benchmark set by AlphaZero, and the strongest engines currently are those that use a combination of traditional and neural network techniques.
I read somewhere traditional engines are using common Stockfish trained neural nets practically reducing them to Stockfish clones.
In which case it is Leela Chess Zero vs Stockfish clones.
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
I never claimed to be a physicist; in fact I have gone out of my way in other threads to point out my background is applied mathematics.

That is indeed my error.

One can only wonder if Feynman would have the same opinion if he were alive today.

I doubt he would. I specifically noted the purpose for the video, providing you a timestamp. It had no other purpose. If it is the case that physicists provide definitions but, for some reason, applied mathematicians are not so obligated, that is new information.

I don’t have to explain in my terms because it is based on what the experts claim, which is why I have referred you to the peer reviewed paper in dealing with your questions.

They are your terms in the sense that you are defending them. The paper never defines the term "learn" (or the more specific term "reinforcement learning"), which is what I asked for. I explicitly stated that providing an accepted definition from the AI field is fine.
 
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
6,172
1,963
✟176,122.00
Faith
Humanist
Marital Status
Private
They are your terms in the sense that you are defending them. The paper never defines the term "learn" (or the more specific term "reinforcement learning"), which is what I asked for. I explicitly stated that providing an accepted definition from the AI field is fine.
(Confession: I only just read the paper).
In the case of AlphaZero, I think those terms are defined by the algorithms they're using?
(Don't ask me to explain the principles behind them though ..)
Here's a sample of the first part:
Instead of a handcrafted evaluation function and move ordering heuristics, AlphaZero uses a deep neural network (p,v) = fθ(s) with parameters θ. This neural network fθ(s) takes the board position s as an input and outputs a vector of move probabilities p with components pa = Pr(a|s) for each action a, and a scalar value v estimating the expected outcome z of the game from position s, v ≈ E[z|s]. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search in future games. ...
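The quoted interface (p, v) = fθ(s) can be illustrated with a toy stand-in: a single linear "layer" with made-up weights mapping a board encoding to move probabilities and a value in [-1, 1]. This only shows the shape of the function; it is nothing like AlphaZero's actual deep network, and every name and number here is invented for illustration:

```python
# Toy illustration of the (p, v) = f_theta(s) signature from the paper:
# input s (board features) -> move probabilities p and value estimate v.

import math
import random

N_MOVES = 4  # toy action space

random.seed(0)
# One weight row per move logit, plus one row for the value logit.
theta = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_MOVES + 1)]

def f_theta(s):
    """s: board features (list of 3 floats). Returns (p, v)."""
    logits = [sum(w * x for w, x in zip(row, s)) for row in theta]
    move_logits, value_logit = logits[:N_MOVES], logits[N_MOVES]
    # p: softmax over moves, so the probabilities sum to 1
    m = max(move_logits)
    exps = [math.exp(l - m) for l in move_logits]
    p = [e / sum(exps) for e in exps]
    # v: tanh squashes the value estimate into [-1, 1] (loss .. win)
    v = math.tanh(value_logit)
    return p, v

p, v = f_theta([1.0, 0.0, -1.0])
# p sums to 1 across the moves; v lies in [-1, 1]
```

In the real system, "learning" means adjusting θ from self-play outcomes so that p and v become better predictors; that adjustment loop is what the excerpt's next sections describe.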
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
(Confession: I only just read the paper).
In the case of AlphaZero, I think those terms are defined by the algorithms they're using?
(Don't ask me to explain the principles behind them though ..)

Fair enough. It's perfectly fine if early on someone would say, "That's the term they used. I'm not an expert so I don't know how they would define it."

Once they dig in and insist the meaning is clear, I'm expecting that means they have a familiarity with the published work and don't need to say "go read the paper yourself" except out of spite. They should be able to explain it themselves.

The paper references work by Silver published in Nature as the source of "reinforcement learning". I don't have access to Nature to see what that paper says.
 
Upvote 0

sjastro

Newbie
May 14, 2014
4,902
3,960
✟276,494.00
Faith
Christian
Marital Status
Single
They are your terms in the sense that you are defending them. The paper never defines the term "learn" (or the more specific term "reinforcement learning"), which is what I asked for. I explicitly stated that providing an accepted definition from the AI field is fine.
:scratch:

Let me get this right so there is no confusion here.
Since the paper is not to your satisfaction because of vague terms such as "learn" you want me to fill in the gaps?
I don't pretend to be an expert on the subject, just as I am not an expert in neurosurgery and wouldn't dole out advice on brain surgery either.
I suggest you contact Deepmind and direct your queries there.
 
  • Like
Reactions: SelfSim
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
6,172
1,963
✟176,122.00
Faith
Humanist
Marital Status
Private
.. I'm expecting that means they have a familiarity with the published work and don't need to say "go read the paper yourself"
I think the people who wrote the paper know what the term means though .. and should be given the benefit of any doubts one may have, particularly where one doesn't understand their definition and where AlphaZero is clearly able to demonstrate its stuff!?
J_B_ said:
They should be able to explain it themselves.
Meh .. I think that might just be an ego based opinion, there(?) .. I mean .. especially, apparently, given that most of us don't yet understand the algorithms without expending a bucket of effort/time to do so ..(?)
J_B_ said:
The paper references work by Silver published in Nature as the source of "reinforcement learning". I don't have access to Nature to see what that paper says.
Bummer, eh? .. (I'm familiar with that particular feeling).
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
I think the people who wrote the paper know what the term means though ...

Yeah, I'm sure they know what they meant. If you look at my early requests, I was asking if there was rigor behind the term or if it was a metaphorical use in danger of conflation. Because there are actually 2 texts involved here: 1) the chess paper & 2) the article on morals

The rigor is possibly there in the chess paper, however, I can't get to their references to verify it.
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
Since the paper is not to your satisfaction because of vague terms such as "learn" you want me to fill in the gaps?

If you're going to defend it, say the meaning is clear, and accuse me of obfuscation for thinking it's not clear, then yes.

I don't pretend to be an expert on the subject.

That's the first time I recall seeing you say that. You needn't bother giving me an answer then.
 
Upvote 0

sjastro

Newbie
May 14, 2014
4,902
3,960
✟276,494.00
Faith
Christian
Marital Status
Single
If you're going to defend it, say the meaning is clear, and accuse me of obfuscation for thinking it's not clear, then yes.



That's the first time I recall seeing you say that. You needn't bother giving me an answer then.
The level at which this paper is being discussed in this thread only requires a basic understanding of English.
It states categorically there was no human involvement in the self training process.
You have been given every opportunity to state your case why this premise is wrong, and I have come to the conclusion you are engaging in nothing more than self-denial.

The other point is your criticism of the paper as not being clear.
Perhaps you are unaware that the readers of peer reviewed papers are generally in the same field as the authors, and that the detail in the paper is directed towards them.
They don't need to be given a crash course in reinforcement learning.

Furthermore there are fifty references given in the paper for those who do want more detail.
 
Upvote 0

SelfSim

A non "-ist"
Jun 23, 2014
6,172
1,963
✟176,122.00
Faith
Humanist
Marital Status
Private
It states categorically there was no human involvement in the self training process.
I guess that's technically correct, but there must be hundreds/thousands(?) of person-years of human research .. trial/error statistics, game theory, search algorithm experience etc .. poured into distilling the present 'winning formula'/algorithm choices?

Exactly how that might manifest itself in producing the final product, has sort of been lost in all the number crunching, I think(?)
 
Upvote 0