If we had incredibly advanced robots that were just like humans...

elman

elman
Dec 19, 2003
28,949
451
85
Texas
✟54,197.00
Faith
Methodist
Marital Status
Married
"Thou shalt not make a machine in the likeness of a man's mind."

Not really what I was saying. If you did create a machine exactly like a man's mind, it would be some kind of intelligent machine, but not a robot. A robot implies control from outside the robot and a lack of ability to think for itself.
 
Upvote 0

keith99

sola dosis facit venenum
Jan 16, 2008
23,112
6,802
72
✟381,362.00
Gender
Male
Faith
Atheist
Marital Status
Single
The Bicentennial Man - especially the novel - really does turn this conundrum on its head. For those who haven't read it, it's about a robot that, through faulty programming, gets a will of his own (all robots in Asimov's novels are already sentient). He gets his own hobbies, a profession, owns his own property, etc., but longs to be human. He becomes an expert at prosthetics and cybernetics and eventually builds himself a completely biological body. At that point, there are many humans who are more artificial than he is.

By Gottservant's definition, he would at that point have to be defined as human. Of course, in the novel, he isn't - it isn't until he gives himself a finite lifespan, by making his brain slowly destroy itself, that the government formally declares him human, and the first bicentennial man - hence the title.

I think the novel aptly points out the problems with declaring that artificial beings are different. If we can create an artificial being, and if we can replace parts of ourselves with artificial prosthetics, then where do we draw the line? At what point does a human with prosthetics become a robot, and what makes a human different from a biological robot? Its origins? That's a meaningless distinction. Its soul? Before asserting that, we must show that the soul exists to begin with - otherwise that distinction is meaningless, too. So what, then?

If 'robots' get to this level it raises many problems. As long as Homo sapiens is unique in having rights, the line is clear. But with robots, where is the line drawn? And how can it be measured? Intelligence? Fast, powerful computers will soon be able to 'fake' this, e.g. outperform many humans on established tests. Emotion? Well, I guess in a way that does come in; at least there is no issue until some machine says 'I wish to live' when scheduled to be destroyed. Once a set of qualifying criteria is made for machines, what happens to Homo sapiens who fail those tests? I'm sure young enough children would! Or should organic beings be considered 'under construction/programming' until some age or other measure?

Reasonable thinking about this should also include the possibility of other races (e.g. space aliens) and of other kinds of artificial beings.

A few works of Science Fiction worth reviewing.

'The Moon Is a Harsh Mistress' (Mike) and 'Jerry Was a Man' (Jerry is an artificially enhanced ape). Oh, and for that matter 'Starship Troopers' (just where do neo-dogs fit? They are tightly bonded to their handlers, described as closer than marriage. If the handler dies, the neo-dog is always destroyed. But we meet a handler whose neo-dog died, and he comments that he thinks it might be kinder if handlers got the same treatment when the neo-dog dies). All of these are by Heinlein.

'Reason' (I think it could be 'Logic') by Asimov. Mainly about just where logic can lead with limited data, but also an examination of just who is superior: man, or possible future machines.

Lots of questions, and I'm afraid the answers may well depend on just when, and which of them, make it to the courts first.
 
Upvote 0

Deadbolt

Mocker and Scoffer
Jul 19, 2007
1,019
54
40
South beloit, IL
✟23,955.00
Faith
Humanist
Marital Status
In Relationship
Politics
US-Others
Not really what I was saying. If you did create a machine exactly like a man's mind, it would be some kind of intelligent machine, but not a robot. A robot implies control from outside the robot and a lack of ability to think for itself.
*shakes head sadly*

Are you telling me no one got the Dune Reference?
 
Upvote 0

anonymous1515

Senior Member
Feb 8, 2008
658
22
✟23,445.00
Faith
Seeker
I dunno. This thread raises a good question. If we create robots that, for all intents and purposes, have emotion and are autonomous, I think they should be treated as humans. After all - our brains are just a bunch of electrochemical impulses too.

What if they began manufacturing their own offspring? Now there's something to think about. Or if they started praying (creepy!). On what basis could we justify killing them?
 
Upvote 0

MoonlessNight

Fides et Ratio
Sep 16, 2003
10,217
3,523
✟63,049.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Private
Politics
US-Others
This post is 100% rambling, so be forewarned.

The trouble with the whole debate is that up to this point we have had a very clear-cut way of determining whether someone should be treated as a human: whether they are a human being or not. You can always answer the question, and the line is very clear.

If we add robots to the equation, the line gets very fuzzy. This is partially because just what constitutes a robot/artificial intelligence is very fuzzy. It's easy to make definitions that include things that we do not consider to be in those categories. For example, it is possible to define robots to include hard automated machines (like assembly lines), and it is possible to define artificial intelligence to include things like spell check programs (if we use a definition like "a computer that does things that if a human did them, we would say that the human used his or her intelligence to do so"). These are clearly not what we want when we talk about an artificially intelligent robot.

But if we truly do believe that AI will eventually get to the point that it could be considered in some sense "human," it is very easy to be too restrictive. My CS adviser had his own definition of artificial intelligence: "Artificial Intelligence is the stuff we can't get computers to do yet." His point was that solved questions in artificial intelligence quickly become mundane, to the point that they don't seem like AI anymore, regardless of the work needed to get there.

Now there are two ways to approach this question: how to treat robots from a legal standpoint, and how to treat them on a day-to-day basis. On a personal level I think there is something to be said for a sort of courtesy to robots; that is, even if you don't think robots are intelligent, or if you have good reason (perhaps from knowledge of their programming) to believe they are not, there's no need to be hostile to them without reason. It's a sort of "if there's room for doubt, play it safe" situation.

Legally speaking, however, we have to make a decision one way or the other. A mistake either way is bad. If robots really are human (in the future) in some respect, and we call them property, this opens up all kinds of potential abuse. But if they are just machines and we define them to be human, it weakens human rights. And there is the troubling fact that any legal code that can make nonhumans human can make humans nonhuman. What I mean is that it would come with a new legal definition of humanity, different from "biologically speaking, you are a human being," which has the potential to make some human beings nonhuman.

It gets even more problematic when you consider that, as I noted earlier, there is not so much of a line between robots and machines (or AIs and computer programs) as there is a spectrum. What this means is that different people are going to draw different lines. Theoretically, only one of these lines is correct, but it is unlikely that the law will get it right so we are pretty much doomed to make the wrong decision in the legal code.

Of course all of this is only a problem if you believe that it is possible for robots to be somehow human (eventually) in the first place. I suppose such a definition would have to be mainly behavioral, since that is the only thing that we can test with robots and have it line up with humanity. But if the robots we call human are the ones that act (sufficiently) like humans, I have to ask why we would want to make robots like that in the first place. They certainly wouldn't be better at the jobs for which we currently like to use robots. And if we really do think that such robots should be treated as humans we lose one of the major benefits of robots: it's not a big deal putting them in dangerous situations. So what I mean is that you certainly wouldn't want a human bomb squad robot, even if you could do it.
 
Upvote 0

brinny

everlovin' shiner of light in dark places
Site Supporter
Mar 23, 2004
249,106
114,203
✟1,378,064.00
Faith
Non-Denom
Marital Status
Private
Politics
US-Constitution
and they had artificial intelligence, and showed emotions like humans, but were in fact made up of circuits and wires (like the Terminator), is there any reason why we shouldn't treat them the same as normal human beings?

are they human?
 
Upvote 0

keith99

sola dosis facit venenum
Jan 16, 2008
23,112
6,802
72
✟381,362.00
Gender
Male
Faith
Atheist
Marital Status
Single
This post is 100% rambling, so be forewarned.

.....

Of course all of this is only a problem if you believe that it is possible for robots to be somehow human (eventually) in the first place. I suppose such a definition would have to be mainly behavioral, since that is the only thing that we can test with robots and have it line up with humanity. But if the robots we call human are the ones that act (sufficiently) like humans, I have to ask why we would want to make robots like that in the first place. They certainly wouldn't be better at the jobs for which we currently like to use robots. And if we really do think that such robots should be treated as humans we lose one of the major benefits of robots: it's not a big deal putting them in dangerous situations. So what I mean is that you certainly wouldn't want a human bomb squad robot, even if you could do it.

Doesn't that depend a lot on just what a robot can do? We use a lot of farm machinery; something that could actually make judgement calls would do this a lot better. On the bomb squad, or other dangerous work, there is a solution that raises even more questions. Think of a computer-descended robot. Is the personality in the memories? For a mechanical man these could be copied. A robot in bomb disposal could download before going out. Then 'death' just loses the last day.
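A minimal sketch of that backup idea, in Python (everything here is hypothetical; `RobotMind`, `snapshot`, and `restore` are names invented for illustration, not any real robotics API):

```python
import copy

class RobotMind:
    """Hypothetical robot whose 'personality' lives entirely in serializable memory."""

    def __init__(self):
        self.memories = []  # everything the robot has learned so far

    def experience(self, event):
        self.memories.append(event)

    def snapshot(self):
        # Copy the full memory state before a dangerous assignment.
        return copy.deepcopy(self.memories)

    def restore(self, saved):
        # Rebuild the mind from the last snapshot; only experiences
        # gained after the snapshot are lost.
        self.memories = copy.deepcopy(saved)

# Back up before the bomb run; if the body is destroyed, a
# replacement loses only the time since the snapshot.
mind = RobotMind()
mind.experience("ten years of service")
backup = mind.snapshot()
mind.experience("final bomb-disposal run")  # lost if things go wrong

replacement = RobotMind()
replacement.restore(backup)
print(replacement.memories)  # ['ten years of service']
```

On that model, 'death' really does reduce to losing whatever happened after the last download.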


BTW, there has been a test for this for a long time: the Turing test. I wonder just what percentage of humans pass it, however.
 
Upvote 0

toirewadokodesuka

Well-Known Member
Dec 15, 2007
602
23
✟862.00
Faith
Catholic
Marital Status
Single
and they had artificial intelligence, and showed emotions like humans, but were in fact made up of circuits and wires (like the Terminator), is there any reason why we shouldn't treat them the same as normal human beings?

go'bla'meeeeeeeeee, that's a random thought, rofl

... robiology: the study of robotic biology, robotic reproduction and robots co-existing with humans :cool:
 
Upvote 0

MoonlessNight

Fides et Ratio
Sep 16, 2003
10,217
3,523
✟63,049.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Private
Politics
US-Others
Doesn't that depend a lot on just what a robot can do? We use a lot of farm machinery; something that could actually make judgement calls would do this a lot better. On the bomb squad, or other dangerous work, there is a solution that raises even more questions. Think of a computer-descended robot. Is the personality in the memories? For a mechanical man these could be copied. A robot in bomb disposal could download before going out. Then 'death' just loses the last day.

Why bother giving a robot a personality? All it needs to do is its job. This may require some level of judgment, but that isn't the same thing as building a human personality.

The only jobs for which a personality would be necessary are customer relations and the like, but even in those cases it would probably be easier to have a facsimile of a personality, and there wouldn't be much benefit to having the robot actually be "human."

BTW there has been a test for a long time the Turing test. I wonder just what percentage of humans pass it however.

The problem with the Turing test is that it is imprecise, hard to implement, and rather arbitrary. Even Turing himself recognized that the test addressed a completely different question from the one he actually wanted answered, but thought that if a computer could pass it, one should treat it as intelligent out of courtesy to the possibility that it might be. But it is possible to imagine an intelligent computer that cannot pass the test (perhaps because it calculates too fast to be human and does not fake a human response time, or because its intelligence is sufficiently nonhuman, or because its communication abilities are limited), and it is equally possible to imagine a non-intelligent computer passing it. In that case the program would blindly mimic human responses without comprehension; this could work if the control human is sufficiently odd, or the tester isn't thorough enough, or the questions are just unlucky. There is also the "Chinese Room" argument, which holds that it could be possible to create a machine with apparent human proficiency in a language without understanding a lick of what it is saying.
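To make the "blind mimicry" objection concrete, here is a toy sketch in Python; the canned replies and keyword matching are invented for illustration, and no serious chatbot is this crude, but the principle is the same:

```python
# Toy "Chinese Room" style responder: human-seeming replies produced
# by pattern matching over a lookup table, with no comprehension at all.
CANNED_REPLIES = {
    "weather": "Lovely out, though I hear rain is coming.",
    "name": "People call me Sam. And you?",
    "feel": "A bit tired today, honestly. Long week.",
}
FALLBACK = "Hmm, interesting. What makes you say that?"

def mimic(question: str) -> str:
    """Return a plausible reply by keyword match, not understanding."""
    lowered = question.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in lowered:
            return reply
    return FALLBACK  # deflect anything the table doesn't cover

print(mimic("What's your name?"))       # "People call me Sam. And you?"
print(mimic("Explain quantum logic."))  # a deflection, yet plausibly human
```

An inattentive tester chatting with something like this could be fooled for a while, which is exactly why passing the test does not, by itself, demonstrate intelligence.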

But getting back to my other points: why should we even bother designing AIs that pass the Turing test? In most cases such abilities are completely superfluous to their function, unless that function is "fool a human into thinking that you are also human." But the motivations behind making that sort of robot are a bit troubling.
 
Upvote 0

quatona

"God"? What do you mean??
May 15, 2005
37,512
4,302
✟182,802.00
Faith
Seeker
Why bother giving a robot a personality? All it needs to do is its job.
Good point.
I used to be under the impression that their lack of emotions and feelings was what we value most about computers and robots and the way they do their jobs.
 
Upvote 0

TheGreatBongChicken

Follow porceliene Chicken
Mar 12, 2004
1,144
3
38
Middle of nowhere, NY
Visit site
✟24,295.00
Faith
Pentecostal
Marital Status
Single
Politics
US-Others
The Bible teaches that "the life is in the blood".

Robots = no blood = no life = no equality.

(I'm speaking briefly because I don't want to slow any discussion down).
You know, sort of playing the devil's advocate here...

Are angels alive? Do they have blood? Does God have blood Himself? Is He alive? Should He be treated less than us?

I don't really disagree, but I just don't know if that's the strongest argument... I think that verse is talking about us: we need blood to live. That said, I don't believe robots would be "life", because I believe life would need to be biological. What would a robot be, then? Should it be treated as equal to human beings? I think if it's OK to kill a "fetus" (which does in fact feel pain), then aside from monetary expenses we can't expect consequences for destroying machinery. I agree with the others that, regardless, it's a bad idea to make artificial intelligence of that capacity. It could turn on us anyway!

-James
 
Upvote 0

randomman

Regular Member
Jun 11, 2007
381
5
✟23,041.00
Faith
Muslim
Marital Status
Single
Considering, in simple terms, the choice to believe or to do good vs. evil:

For an atheist, humans have complicated brains, and their choices are governed by the mechanics of these brains and how they develop.

For the religious, humans have free choice regardless of the complexity of the brain.

So for a religious person, the creator of the robot is responsible for its actions, and for the atheist the robot is responsible for its actions.
 
Upvote 0

quatona

"God"? What do you mean??
May 15, 2005
37,512
4,302
✟182,802.00
Faith
Seeker
Considering, in simple terms, the choice to believe or to do good vs. evil:

For an atheist, humans have complicated brains, and their choices are governed by the mechanics of these brains and how they develop.

For the religious, humans have free choice regardless of the complexity of the brain.

So for a religious person, the creator of the robot is responsible for its actions, and for the atheist the robot is responsible for its actions.
Doesn't follow.
 
Upvote 0