
Preventing artificial intelligence from taking on negative human traits.

Discussion in 'Physical & Life Sciences' started by sjastro, May 6, 2021.

  1. SelfSim

    SelfSim A non "-ist"

    +1,199
    Humanist
    Private
So I get that the plan (which might get encoded directly into some program) might have problems if it attempts to make explicit references to something which may well be random .. so wouldn't the programmer just avoid doing that by creating a different set of rules which aren't dependent on random things (as you say there: the position of other vehicles)? E.g., Rule #1: always leave the immobile car 2 metres from any other two immobile (parked) vehicles(?) .. or something like that?
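A fixed-clearance rule of that kind is easy to encode as a deterministic check. A minimal sketch (hypothetical function names; 1-D kerb positions assumed purely for simplicity):

```python
def parking_spot_ok(spot_center, parked_cars, clearance=2.0):
    """Return True if spot_center is at least `clearance` metres
    from every known parked car (positions along the kerb)."""
    return all(abs(spot_center - car) >= clearance for car in parked_cars)

def choose_spot(candidates, parked_cars, clearance=2.0):
    """Pick the first candidate spot satisfying the clearance rule."""
    for spot in candidates:
        if parking_spot_ok(spot, parked_cars, clearance):
            return spot
    return None  # no candidate satisfies the rule
```

The rule itself is deterministic, but its inputs (the positions of the other vehicles) are still random at runtime, which is the crux of the exchange.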
     
  2. SelfSim

    SelfSim A non "-ist"

    +1,199
    Humanist
    Private
I think what I'm getting at here, is that the complex responses which we humans apparently employ in order to navigate an evidently complex universe, are more or less intuitive to us, yet we are (mostly) still following a fairly simple set of rules .. survival, laziness (energy conservation), avoidance of pain, bad smells, unpleasant images, etc.

I suppose then, if we are able to distinguish what those rules actually are in a given context (which involves a lot of hard work), then I would suppose those rules could be encoded into AI software(?)

The behaviours of that AI software would then appear similar (but nonetheless slightly different) to our own under those conditions(?) I don't think the individual responses would be predictable in advance, but the overall general behaviours would be very recognisable to us (intuitively so)?

    Is that what we're talking about when it comes to AI 'intelligence' in this thread? (whilst trying to avoid the definitions bottleneck?)
     
  3. J_B_

    J_B_ Well-Known Member

    548
    +182
    United States
    Christian
    Private
    My grad work was in machine control. When I was in grad school in the late 1980s (30+ years ago) I had a robot that could handle collision avoidance for randomly placed stationary objects. So, it's a problem that's been solved for many years.

Moving objects are more complicated. Moving objects whose trajectory can change at any moment are even harder. Still, in terms of logic & adaptation, the chess problem could be more challenging than self-driving cars. I'd have to dig into it more, but it wouldn't surprise me.

    But when I got into industry, I was frustrated to find the code is only one challenge. The more difficult challenges are the placement & durability of the sensors, and how easily they're fooled or compromised by something like getting covered in mud.
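For context, collision avoidance among randomly placed stationary obstacles reduces to a path-planning search, solvable with classical algorithms and no learning at all. A minimal breadth-first-search sketch on a grid (a toy illustration, not the robot described in the post):

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS over a 2-D grid; grid[r][c] == 1 marks an obstacle.
    Returns a shortest list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk the chain back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))  # routes around the obstacle row
```

Because the obstacles are stationary, the whole "random placement" problem collapses into one search over the sensed map, which is why it was tractable decades ago.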
     
  4. SelfSim

    SelfSim A non "-ist"

    +1,199
    Humanist
    Private
So, there are inaccuracies creeping into sensor measurements, which may have more impact in the dynamic cases .. but there are other arrows in AI's quiver to overcome those inaccuracies, which we don't have (such as: pretty well infinite, perfect memory retention/retrieval, and pretty well instantaneous, perfect application of uninterrupted logical focus, etc.).
    Swings and roundabouts there .. and nothing which comes to mind which would distinguish AI from human intelligence there .. (other than by way of us actually knowing how the two go about 'being intelligent', as the basis of saying the two are fundamentally different intelligences)?
     
  5. J_B_

    J_B_ Well-Known Member

    548
    +182
    United States
    Christian
    Private
    There are computers available that could give AI some of those advantages, and those things are often brought to bear in chess AI, but not as much as you would think in autonomous vehicles. Cost, space, and heat loads become an issue. If you'd like to drive a tank-sized car where you are squeezed into a tiny seat and everything else is for the AI, it might have all the capabilities you envision. I and my colleagues were constantly battling over memory, clock cycles, and sensor channels for our competing goals in improving the vehicle.

    I worked on clutch control, and a constant headache for us was that we could never get approval for a pressure feedback sensor. It meant we had to "tune" the system (essentially guess) because by the time other sensors detected something was happening, it would be too late for the controller to respond - and that was only a matter of tens of loops (a few tenths of a second) - but people in the vehicle could feel it if we got it wrong.

    Still, sensor issues are something AI can't overcome. Imagine a game of blind man's bluff. If you can't sense it, you can't calculate what to do, no matter how much memory you have and how fast you are.

So, while chess AI may possibly be the more technically challenging AI problem, the stakes are much higher in autonomous vehicles if you get it wrong.
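The clutch example illustrates why the missing sensor mattered: without pressure feedback the controller runs open loop on pre-tuned guesses, and any drift in the plant goes uncorrected. A toy sketch with entirely invented plant numbers:

```python
def simulate(controller, steps=200, target=1.0, effectiveness=1.0):
    """Toy first-order 'clutch pressure' plant (all numbers invented).
    `effectiveness` models actuator wear: actual drive is effectiveness * u."""
    p = 0.0
    for _ in range(steps):
        u = controller(p, target)
        p += 0.2 * (effectiveness * u - p)
    return p

def feedback_controller(measured, target):
    # With a pressure sensor: correct the command on every loop.
    return measured + 2.0 * (target - measured)

def open_loop_controller(_measured, target):
    # Without the sensor: a command tuned in advance, never corrected.
    return target
```

With `effectiveness` at 1.0 both controllers settle on the target; drop it to 0.9 (simulated wear) and the open-loop command settles at 0.9 while the feedback version ends up noticeably closer to the target (a pure proportional term still leaves some steady-state error).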
     
  6. J_B_

    J_B_ Well-Known Member

    548
    +182
    United States
    Christian
    Private
    If I understand you, I would say the AI being discussed here is more impressive than what you describe.

    Imagine being told the rules of baking: what happens when you mix various things, heat them, etc. Everybody "knows" the perfect cake involves mixing flour, eggs, sugar, oil, and chocolate. But the AI tells you something crazy, to mix ingredients that make no sense, heat it to temperatures that make no sense, etc. No one's ever done that before. But you do it and the result is the most amazing cake you've ever tasted.

    It may be impressive, but I'm disputing that the AI is intelligent. The Sistine Chapel is impressive, but the paint brush isn't the artist.
     
  7. Oneiric1975

    Oneiric1975 Well-Known Member

    +665
    United States
    Seeker
    In Relationship
Ironically enough, that was EXACTLY the training set I was going to use for my document classifier algorithm. Now I'll have to change it, I guess.
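For readers unfamiliar with the term, a document classifier in its simplest form is just bag-of-words Naive Bayes. A stdlib-only sketch with an invented two-label corpus (nothing to do with the poster's actual training set):

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Counts words per label for
    Naive Bayes with add-one smoothing over a shared vocabulary."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return the label with the highest log-posterior for `text`."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

corpus = [("engine search evaluation", "chess"),
          ("pawn knight opening", "chess"),
          ("sensor brake lane", "driving"),
          ("lidar camera lane merge", "driving")]
model = train(corpus)
```

Real classifiers add better tokenisation and features, but the probabilistic core is this small.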
     
  8. Strathos

    Strathos No one important

    +6,289
    Christian
    Single
    US-Democrat
I'd like to point out that traditional chess engines surpassed the benchmark set by AlphaZero, and the strongest engines currently are those that use a combination of traditional and neural network techniques.
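The hybrid design described here, classical alpha-beta search with a pluggable evaluation that can be handcrafted, a neural network, or a blend of both, can be caricatured in a few lines. This is a toy game tree, not a chess engine, and the "net" is a stand-in function:

```python
def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Classical alpha-beta search; `evaluate` is pluggable, so it can
    be a handcrafted score, a neural net, or a blend of the two."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, evaluate, children))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff
    return value

# Toy two-leaf tree; the "hybrid" eval averages two scorers.
tree = {"root": ["a", "b"], "a": [], "b": []}
scores = {"a": 3.0, "b": 5.0}
handcrafted = lambda n: scores.get(n, 0.0)
net = lambda n: scores.get(n, 0.0) * 0.9 + 0.2   # stand-in for a NN
hybrid = lambda n: 0.5 * (handcrafted(n) + net(n))

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 hybrid, lambda n: tree.get(n, []))
```

The search machinery is entirely traditional; only the leaf evaluation changes, which is roughly how the current engine generation mixes the two approaches.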
     
  9. sjastro

    sjastro Newbie

    +1,966
    Christian
    Single
    I asked you how a programmer can anticipate the random conditions a driverless car encounters and this is the response I get.
    This is typical of your obfuscation in this thread but I will respond anyway.

1) I never claimed to be a physicist; in fact I have gone out of my way in other threads to point out my background is applied mathematics.
2) Your motivation behind posting the Feynman video is a classic example of the argument from authority fallacy.
    3) The video is horribly outdated; it goes back to 1985 when AI was still in a rudimentary stage and AI in computer chess was decades away.
    One can only wonder if Feynman would have the same opinion if he was alive today.
    4) I respect expertise and if a peer reviewed paper claims there was no human involvement in the self training of AlphaZero I will accept it.
    You accuse me of not explaining things in ‘my terms’.
I don’t have to explain in my terms because it is based on what the experts claim, which is why I have referred you to the peer reviewed paper in dealing with your questions.
    5) Since you disagree with the experts who developed AlphaZero or in this case the role of AI in driverless cars, the onus is on you to answer the questions in order for me to understand the reasons behind your assertions.
    Instead I get these blatant diversionary tactics.
     
  10. SelfSim

    SelfSim A non "-ist"

    +1,199
    Humanist
    Private
This topic intrigues me because I lean towards a philosophy of Science which acknowledges (incorporates?) the human mind of the observer. (I think this approach is more consistent than, say, one which relies on philosophical realism, aka the view that the universe exists independently from our minds.) One conclusion that can be formed with the mind-dependency model is that what we see happening with human minds is a continual process of exploring their own perceptions (and thinking capabilities).

So what you're leaning towards is sort of a rejection of the notion of a mind exploring itself, when we think about AI intelligence. I'm not at all sure this tracks towards an optimally consistent version of Science .. but I have to accept you may also be right in treating AI as something which exists independently from our human minds .. (which is interesting for me).
     
  11. sjastro

    sjastro Newbie

    +1,966
    Christian
    Single
I read somewhere traditional engines are using common Stockfish-trained neural nets, practically reducing them to Stockfish clones.
    In which case it is Leela Chess Zero vs Stockfish clones.
     
  12. J_B_

    J_B_ Well-Known Member

    548
    +182
    United States
    Christian
    Private
    That is indeed my error.

    I doubt he would. I specifically noted the purpose for the video, providing you a timestamp. It had no other purpose. If it is the case that physicists provide definitions but, for some reason, applied mathematicians are not so obligated, that is new information.

    They are your terms in the sense that you are defending them. The paper never defines the term "learn" (or the more specific term "reinforcement learning"), which is what I asked for. I explicitly stated that providing an accepted definition from the AI field is fine.
     
  13. SelfSim

    SelfSim A non "-ist"

    +1,199
    Humanist
    Private
    (Confession: I only just read the paper).
    In the case of AlphaZero, I think those terms are defined by the algorithms they're using?
    (Don't ask me to explain the principles behind them though ..)
    Here's a sample of the first part:
     
  14. J_B_

    J_B_ Well-Known Member

    548
    +182
    United States
    Christian
    Private
    Fair enough. It's perfectly fine if early on someone would say, "That's the term they used. I'm not an expert so I don't know how they would define it."

    Once they dig in and insist the meaning is clear, I'm expecting that means they have a familiarity with the published work and don't need to say "go read the paper yourself" except out of spite. They should be able to explain it themselves.

    The paper references work by Silver published in Nature as the source of "reinforcement learning". I don't have access to Nature to see what that paper says.
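For what it's worth, "reinforcement learning" does have a standard technical meaning: an agent improves its policy from reward signals alone, with no labelled examples, which is the sense in which AlphaZero's self-training involved no human input. A minimal tabular Q-learning sketch on a toy five-cell corridor (my own invented environment; the update rule is the standard one):

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a 5-cell corridor: start at cell 0,
    reward +1 only on reaching cell 4. Actions: 0 = left, 1 = right.
    No human labels anywhere: the policy emerges from reward alone."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy action selection
            a = rng.choice((0, 1)) if rng.random() < eps \
                else max((0, 1), key=lambda a: q[(s, a)])
            s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
            r = 1.0 if s2 == 4 else 0.0
            # standard Q-learning update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)])
                                  - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(4)]
```

After training, the greedy policy is "go right" in every cell, something no one told the agent; scaled up enormously, with self-play generating the experience, that is the mechanism the chess paper's references describe.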
     
  15. sjastro

    sjastro Newbie

    +1,966
    Christian
    Single
    :scratch:

    Let me get this right so there is no confusion here.
    Since the paper is not to your satisfaction because of vague terms such as "learn" you want me to fill in the gaps?
I don't pretend to be an expert on the subject, just as I am not an expert in neurosurgery and wouldn't dole out advice on brain surgery either.
    I suggest you contact Deepmind and direct your queries there.
     
  16. SelfSim

    SelfSim A non "-ist"

    +1,199
    Humanist
    Private
    I think the people who wrote the paper know what the term means though .. and should be given the benefit of any doubts one may have, particularly where one doesn't understand their definition and where AlphaZero is clearly able to demonstrate its stuff!?
    Meh .. I think that might just be an ego based opinion, there(?) .. I mean .. especially, apparently, given that most of us don't yet understand the algorithms without expending a bucket of effort/time to do so ..(?)
    Bummer, eh? .. (I'm familiar with that particular feeling).
     
  17. J_B_

    J_B_ Well-Known Member

    548
    +182
    United States
    Christian
    Private
Yeah, I'm sure they know what they meant. If you look at my early requests, I was asking if there was rigor behind the term or if it was a metaphorical use in danger of conflation, because there are actually two texts involved here: 1) the chess paper & 2) the article on morals.

    The rigor is possibly there in the chess paper, however, I can't get to their references to verify it.
     
  18. J_B_

    J_B_ Well-Known Member

    548
    +182
    United States
    Christian
    Private
    If you're going to defend it, say the meaning is clear, and accuse me of obfuscation for thinking it's not clear, then yes.

    That's the first time I recall seeing you say that. You needn't bother giving me an answer then.
     
  19. sjastro

    sjastro Newbie

    +1,966
    Christian
    Single
    The level at which this paper is being discussed in this thread only requires a basic understanding of English.
It states categorically there was no human involvement in the self-training process.
    You have been given every opportunity to state your case why this premise is wrong and I have come to the conclusion you are engaging in nothing more than self denial.

    The other point is your criticism of the paper as not being clear.
    Perhaps you are unaware the readers of peer reviewed papers are generally in the same field as the authors and the detail in the paper is directed towards them.
    They don't need to be given a crash course in reinforcement learning.

    Furthermore there are fifty references given in the paper for those who do want more detail.
     
  20. SelfSim

    SelfSim A non "-ist"

    +1,199
    Humanist
    Private
I guess that's technically correct, but there must be hundreds/thousands(?) of person-years of human research (trial-and-error statistics, game theory, search algorithm experience, etc.) poured into distilling the present 'winning formula'/algorithm choices?

    Exactly how that might manifest itself in producing the final product, has sort of been lost in all the number crunching, I think(?)
     