Preventing artificial intelligence from taking on negative human traits.

sjastro

Newbie
May 14, 2014
4,910
3,963
✟276,758.00
Faith
Christian
Marital Status
Single
Shoulda asked her out on a date!! :cool:
You expect me to ask this out on a date?

leela.jpg
I prefer my dates to be of the flesh and blood variety.
 
Upvote 0

durangodawood

Dis Member
Aug 28, 2007
23,566
15,704
Colorado
✟431,767.00
Country
United States
Faith
Seeker
Marital Status
Single
You expect me to ask this out on a date?

leela.jpg
I prefer my dates to be of the flesh and blood variety.
Once you try AI, you'll be burning for some machine learning.
 
Upvote 0

Neogaia777

Old Soul
Site Supporter
Oct 10, 2011
23,290
5,242
45
Oregon
✟958,691.00
Country
United States
Faith
Non-Denom
Marital Status
Celibate
I don't think it's a matter of computer beings (A.I.s) being "intelligent", but rather: are they really, truly, fully "conscious", do they act with their own "will", are they really "alive", etc...?

Or are they just building on "cause and effect", and learning or just writing new code that way, etc...?

Now, I don't even know if human beings are really, truly free in this respect yet, but I would consider a machine maybe even less so in this area or aspect, etc...

We can make illogical decisions; a computer intelligence really can't. And sometimes the less logical choice can be more right, and sometimes it absolutely must be chosen for the "more right", etc...

But it oftentimes takes emotion to be able to do that, to discern that, something machines just don't have, and maybe can't ever even take into account in their "calculations", etc...

And I think unless they someday can, they will always be more limited than a human being in that way, etc...

No matter how "logically or algorithmically intelligent" they become, etc...

Logic and cold algorithmics can only go so far; take Spock and the Vulcan-vs.-human logic in Star Trek, for example, etc...

Anyway,

God Bless!
 
Upvote 0

Neogaia777

Old Soul
Site Supporter
Oct 10, 2011
23,290
5,242
45
Oregon
✟958,691.00
Country
United States
Faith
Non-Denom
Marital Status
Celibate
"V'ger must join with its creator"

And when asked why a group of "many" humans would risk their lives for just one person: "Because sometimes the needs of the one outweigh the needs of the many or the few", etc...

Human "illogic", etc...

In my opinion, any A.I., no matter how logically or algorithmically intelligent it becomes, will always be missing that vital component of a free-will thinking, choosing being, without the ability to experience emotions or itself "feel", etc...

God Bless!
 
Upvote 0

durangodawood

Dis Member
Aug 28, 2007
23,566
15,704
Colorado
✟431,767.00
Country
United States
Faith
Seeker
Marital Status
Single
...Since you are the one who has issues with "teach" or "learn", it's up to you to demonstrate that the link is consistent with your definitions.
You can start by addressing this quote in the link.
The problem is that "teach" and "learn" in normal use have traditionally implied some receptive, understanding consciousness.

Tech people looove to take words from normal use and apply them metaphorically to their creations. Rarely do all the implications of the traditional meaning transfer over to the new application.

I think the mismatch is happening here. @J_B_ wants a demonstration of the traditional notion of learning. AlphaWhatever can't provide it.
 
Last edited:
  • Agree
Reactions: J_B_
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
The problem is that "teach" and "learn" in normal use have traditionally implied some receptive, understanding consciousness.

Tech people looove to take words from normal use and apply them metaphorically to their creations. Rarely do all the implications of the traditional meaning transfer over to the new application.

I think the mismatch is happening here. @J_B_ wants a demonstration of the traditional notion of learning. AlphaWhatever can't provide it.

Again, I wasn't nitpicking on "learn" just to nitpick about learning. Rather, I was pointing to the conflation of the normal meaning of the word with the tech metaphor (I suspect there is no technical definition of the term, but I can't be sure), which has now taken the next step to discussing the "morals" of an AI (or at least that was my impression).

My position would be the AI has no morals. It's an amoral machine. It's the programmers who do (or don't) have morals, and AI isn't in some special class, but inherits all the same legal burdens as any other software program. I think I said as much very early on, but somehow the message was lost.

Computers don't defeat chess masters. People defeat chess masters. grin.

While it's a major accomplishment for computer science, it has no significance for actually sitting at a board and playing the game. Allowing a computer to play a person is like allowing robots to run races in the Olympics - a complete misunderstanding of what sports/games are all about.

And I'm being honest here. I think tech geeks often don't get it - what sports & games are all about for us mere mortals. The Big Bang Theory is an obvious exaggeration of the geek mystique, but there is a grain of truth in the "You just don't get it" complaint. Proving your tech knowledge can defeat athletes & chess masters is not it.

But chess must now wrestle (or maybe is already wrestling) with what baseball (and other sports) have been wrestling with for some time. You can make a rule to keep computers off the playing field - people only (the Houston Astros were just caught in a big cheating scandal in that regard). But you can't stop computers from training those athletes behind the scenes and forever tilting the playing field.
 
Last edited:
Upvote 0

durangodawood

Dis Member
Aug 28, 2007
23,566
15,704
Colorado
✟431,767.00
Country
United States
Faith
Seeker
Marital Status
Single
...Computers don't defeat chess masters. People defeat chess masters. grin.
Well now you're downplaying the part of learning that the AI actually did. Consider the idea of a machine programmed with no strategy, just the rules and a directive to win, which then finds, assimilates, and deploys strategies superior to any humans had ever come up with. That is pretty astonishing, and it belongs in some kind of different category from ordinary human tool-making.

And yes, of course, no morally relevant "self" is there... yet. (On the flip side, we might be overrating free will and the human morally relevant self.)
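For what it's worth, the "rules plus a directive to win" setup being described can be sketched in a few lines. This is a minimal toy (tabular Monte-Carlo self-play on the game of Nim: take 1-3 sticks, whoever takes the last stick wins), a far cry from AlphaGo/AlphaZero, and every name in it is illustrative rather than taken from any real library:

```python
import random

ACTIONS = (1, 2, 3)

def legal_moves(sticks):
    # The only game knowledge given to the learner: which moves are legal.
    return [a for a in ACTIONS if a <= sticks]

def train(episodes=20000, alpha=0.1, epsilon=0.1, start=10, seed=0):
    rng = random.Random(seed)
    q = {}  # (sticks_remaining, action) -> estimated value for the mover
    for _ in range(episodes):
        sticks, history = start, []
        while sticks > 0:
            moves = legal_moves(sticks)
            if rng.random() < epsilon:    # explore occasionally
                a = rng.choice(moves)
            else:                          # otherwise play greedily
                a = max(moves, key=lambda m: q.get((sticks, m), 0.0))
            history.append((sticks, a))
            sticks -= a
        # The player who took the last stick wins. Walking the game
        # backwards, the win/lose signal alternates between the two
        # self-players; each visited (state, action) is nudged toward it.
        reward = 1.0
        for s, a in reversed(history):
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward - old)
            reward = -reward
    return q

def best_move(q, sticks):
    return max(legal_moves(sticks), key=lambda m: q.get((sticks, m), 0.0))
```

Nothing in `train` encodes the known optimal strategy (leave your opponent a multiple of four sticks), yet the learned table recovers it from nothing but legal moves and a win/lose signal. Whether that counts as the machine "having" a strategy, or only as humans recognizing one in its play, is exactly the question being debated here.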
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
Well now you're downplaying the part of learning that the AI actually did.

I am, and maybe too much for the sake of rhetoric. Don't overlook my acknowledgement that it is a significant achievement for computer science. I just dispute that it's an accomplishment of any other kind. I downplay it because I think people overplay it.

Consider the idea of a machine programmed with no strategy, just the rules and a directive to win, which then finds, assimilates, and deploys strategies superior to any humans had ever come up with. That is pretty astonishing, and it belongs in some kind of different category from ordinary human tool-making.

I've worked with AI throughout my career (even written AI code), and been constantly underwhelmed by what it accomplishes. As I said in one post, once you see how the sausage is made ...

It has its uses, but it's being oversold IMHO. I have personal experience with oversold tech and could prattle on with anecdotes.

But for our purposes here, I will note the common anthropomorphic tendencies people have. That's the phenomenon at work when referring to the computer "finding, assimilating, and deploying strategies". You're attributing human activities to a machine that is mindlessly executing lines of code (adaptive or otherwise).

Human chess players could easily learn new chess strategies based on what these programs do - no doubt of that. But it takes human observation to identify what's happening here as a strategy. I constantly battle managers who think the computer has a strategy. If so, we don't need engineers - maybe we don't need people at all. Just push a button and wait for the magic. Ha! Good luck!

[edit] We don't need people to play chess at all. Let's just set up two computers and watch them go at it. Won't that be fun!
 
Last edited:
Upvote 0

durangodawood

Dis Member
Aug 28, 2007
23,566
15,704
Colorado
✟431,767.00
Country
United States
Faith
Seeker
Marital Status
Single
....I will note the common anthropomorphic tendencies people have. That's the phenomenon at work when referring to the computer "finding, assimilating, and deploying strategies". You're attributing human activities to a machine that is mindlessly executing lines of code (adaptive or otherwise)....
Can I even say the machine "does" anything? Or is that word too anthropomorphic?
 
Upvote 0
This site stays free and accessible to all because of donations from people like you.
Consider making a one-time or monthly donation. We appreciate your support!
- Dan Doughty and Team Christian Forums

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
Can I even say the machine "does" anything? Or is that word too anthropomorphic?

Touché. Most of the old war horses I know are equally touchy about which terms are used and how they're used. Once the "but you said" conversation starts, you've already lost.

All the metaphorical language is fine when it's "just us girls", but once you let one o' them other kinds into the locker room, the air is ripe for misinterpretation.
 
Upvote 0

Gene Parmesan

Well-Known Member
Apr 4, 2017
695
547
Earth
✟36,853.00
Country
United States
Faith
Atheist
Marital Status
Married
If I build a bunch of robots, give them the capacity to do right or wrong, and bestow upon them personalities in the "image" of myself, then I should be ultimately responsible for any harm they commit, since I could have made them differently.
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
If I build a bunch of robots, give them the capacity to do right or wrong, and bestow upon them personalities in the "image" of myself, then I should be ultimately responsible for any harm they commit, since I could have made them differently.

Would I be reading too much into this if I detected some commentary on theodicy?
 
Upvote 0

J_B_

I have answers to questions no one ever asks.
May 15, 2020
1,258
365
Midwest
✟109,455.00
Country
United States
Faith
Christian
Marital Status
Private
No worries. I don't hold God responsible for anything at all, personally.

Maybe we at least agree the creators of AI are held responsible for what AI does?
 
  • Like
Reactions: Gene Parmesan
Upvote 0

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
Sure it would. It's just a different kind of 'feeling' than humans, or other creatures, have. You're being rather anthropocentric here.
The question is, what do we mean by "feeling"? IOW, can we define it clearly enough in this context, and without equivocation, to say what it would take for a machine to have feelings?

What usually happens in these semantic explorations is that the deeper one delves, the more human-related traits are invoked, until one ends up with a definition in terms of human characteristics that effectively excludes the non-human...

Are you suggesting that if someone starts chucking rocks at a drone, onboard (or network) AI should have an option to retaliate?
One argument is that if you give an AI too much leeway (cognitive flexibility) in terms of avoiding harm to itself, it might find neutralising the threat to be an effective response (hence Asimov's Laws of Robotics).
 
Upvote 0

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,258
8,056
✟326,229.00
Faith
Atheist
Further, let me ask you this: You have not denied that computational speed is the key to the chess AI's success, which is a tacit agreement. As such, I, the slow wit, have identified the weakness that would allow me to defeat the chess AI quick wit. Would the chess AI quick wit be able to do the same (identify my weakness) if a human didn't add that feature to its programming? I think not. As such, the chess AI quick wit would not object to my suggested rule change, and would have no reaction whatsoever to the fact that it now loses every game. It would simply chug along, doing its computations. I simply can't label that "intelligence".
I think it depends on the 'depth' of the AI. For example, AlphaGo came up with a move, called 'miraculous' and 'sublime' by the world's top players (some hyperbole, perhaps), that no one had envisaged, that no human player would have played, and that was not part of any explicit coding or strategy from its creators. AIUI there is an AI poker system that, without explicit programming or a strategy for it, learns to exploit its opponents' weaknesses and will even play poor moves in the short term in order to exploit this 'understanding'.

Now that AIs can learn any game (of a particular type) from scratch, rule changes can easily be accommodated - although the difference, currently, is that the AI cannot do this on the fly, whereas a human can. But I see no reason why an AI could not, in principle, do so, given an accumulation of experience of rule changes during games (as occurs in children's play); I think this would just require a higher order of complexity.
 
Upvote 0