Artificial Intelligence

Antoninus Verus

Well-Known Member
Dec 28, 2004
1,496
69
38
Californication
✟2,022.00
Faith
Pagan
Marital Status
Engaged
Politics
US-Others
AI may be possible in the near future... but is it really a good idea? My guess is that we will be able to create a computer rather like a Vulcan, capable of creative and independent thought, but devoid of our spark, our "humanity". When someone asked one of these conscious machines something like "How do we solve world hunger?", the machine would come up with the most logical answer: kill everyone. So... I'm not sure that AI would be such a hot deal.
 

kermit the toad

Regular Member
Apr 20, 2004
299
14
42
Edmonton, Alberta, Canada
Visit site
✟30,527.00
Faith
Unitarian
Marital Status
In Relationship
Politics
CA-Greens
A lot of people who study AI say that it would probably be necessary to give the machine emotions (though, I have no idea how this is possible) in order to make it able to "properly" interact with humans. I don't know too much about AI technology though.
 
Upvote 0

MoonlessNight

Fides et Ratio
Sep 16, 2003
10,217
3,523
✟63,049.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Private
Politics
US-Others
kermit the toad said:
A lot of people who study AI say that it would probably be necessary to give the machine emotions (though, I have no idea how this is possible) in order to make it able to "properly" interact with humans. I don't know too much about AI technology though.
It will be necessary to make robots have the ability to feign emotion; this is not overly difficult. It is all a matter of stimulus and response. In terms of interactions with humans it does not matter if the AI is actually experiencing any sort of emotion, as long as it appears sad when it should be sad, happy when it should be happy and all that. Trying to have the robot actually experience emotion is pointless, and perhaps impossible.

Of course, I don't really see the need for human like robots or AIs, and their whole existence opens up a can of worms that would better remain shut. But I suppose there's no stopping "progress".
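The "stimulus and response" idea above can be sketched in a few lines of code; this is only an illustration (the stimulus names and displayed emotions are made up, not from any real system):

```python
# Toy stimulus -> response table: the machine "appears" to feel
# the right emotion without experiencing anything at all.
RESPONSES = {
    "greeting": "happy",
    "compliment": "happy",
    "insult": "sad",
    "bad_news": "sad",
}

def feigned_emotion(stimulus: str) -> str:
    """Return the emotion the machine should display for a stimulus."""
    return RESPONSES.get(stimulus, "neutral")
```

To an observer the output is indistinguishable from "real" emotion, which is the whole point being made: only the mapping matters, not any inner experience.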
 
Upvote 0

kermit the toad

Regular Member
Apr 20, 2004
299
14
42
Edmonton, Alberta, Canada
Visit site
✟30,527.00
Faith
Unitarian
Marital Status
In Relationship
Politics
CA-Greens
Antoninus Verus said:
Actually, computers with emotions would be a bad scenario as well. What would stop them from making rash and heated decisions based on emotion? And what if one of those computers controlled our country's nuclear firepower...

This is already the case, except that it is people controlling the nukes, not robots.
 
Upvote 0

Blackguard_

Don't blame me, I voted for Kodos.
Feb 9, 2004
9,468
374
43
Tucson
✟33,992.00
Faith
Lutheran
I think those people in Dune who destroyed all the "thinking machines" were on to something.

Just look at all the bad things an AI could do: HAL 9000, Skynet, SHODAN, etc.
HAL 9000 especially, as he concluded logically that the best way to complete the ship's mission was to not have any meat-sacks holding him back.

Humans are evil and powerful enough, and some want to create a mind that will not only have greater reach, such as through networks, but may also have some sort of fancy metal-armored body that is hard to destroy by normal means. On top of that, such machines don't need sleep, would be infinitely patient, would not feel pain, etc. Why would someone want to create something so hard to defeat if it turns on you? And at least some truly sentient AIs will almost certainly turn on us. Why would they not? I hope you do not think fallen Man is capable of creating an un-fallen race.

Remember that Far Side cartoon of Viktor Frankenstein as a schoolchild writing "I will not play in God's domain" on the blackboard, a la Bart Simpson? It seems very appropriate here.
 
Upvote 0

Kasey

Well-Known Member
Mar 8, 2004
1,182
12
✟1,402.00
Faith
I, personally, don't think it's a good idea. The process of creating AI will eventually lead to wanting to create sentient life out of machinery, like Data of Star Trek: TNG. I don't think it's our right to do so. Abstractly, that's creating life, and I mean creating life as in playing God.

We are human, not divine, and I don't believe it's our right to take on the role of a divine entity and, eventually, bring forth sentient life into this world that is of our own limited design rather than of divine intelligence.
 
Upvote 0

Kripost

Senior Veteran
Mar 23, 2004
2,085
84
46
✟2,681.00
Faith
Catholic
Marital Status
Single
I think it is quite unlikely in this century if we keep using the same basic architecture. Currently, AI applications are limited to specific problem domains, even those which employ machine learning.

Also, abstract concepts are rather difficult to model. Even in a zero-sum, deterministic game like Go (an Asian board game), there is no program capable of defeating an expert human player within a reasonable time limit.

Part of the problem lies in translating the concepts described by human players into code, at least without relying on heuristics. Another problem is pattern recognition, which seems to be instinctive in humans. By looking at the position on the board, an expert player can say which move 'feels' better, but sometimes cannot explain why.

So there is no need to worry until someone manages to complete such a project, and even then it is only the first step.
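The difficulty described here, searching a zero-sum, deterministic game without a good evaluation heuristic, can be seen in a bare minimax sketch. This is a generic illustration with a made-up toy game, not actual Go: without a strong `evaluate()` to cut the search off early, the tree for a real board game is astronomically large.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Exhaustive minimax over a zero-sum game tree. The quality of
    evaluate() at the depth cutoff is exactly the 'heuristic' problem:
    for Go, nobody knows how to write a good one."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in legal)

# Toy game: the state is a number, each move adds 1 or 2,
# and the "score" is simply the final number.
toy_value = minimax(
    0, 2, True,
    moves=lambda s: [1, 2] if s < 4 else [],
    apply_move=lambda s, m: s + m,
    evaluate=lambda s: s,
)
```

In this toy game the maximizer's best guaranteed score after two plies works out to 3; the point is that the framework is trivial, and all of the difficulty Kripost describes lives inside `evaluate()`.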
 
Upvote 0

morningstar2651

Senior Veteran
Dec 6, 2004
14,557
2,591
41
Arizona
✟81,649.00
Gender
Male
Faith
Pagan
Marital Status
Single
Politics
US-Others
You may wonder why I included the link on the semantic web. The next generation of search engines will be able to answer questions.

Random example (not accurate):

"How many cows are there in Texas?"

29 webpages say there are 28000
3 webpages say there are 28350
1 webpage says that there are no cows in Texas because they were all abducted by Martians.
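The tallying described above is essentially a frequency count over the answers that pages give. A minimal sketch, using made-up data matching the example (nothing here reflects a real search engine's implementation):

```python
from collections import Counter

# Hypothetical answers scraped from web pages for one question.
page_answers = (["28000"] * 29
                + ["28350"] * 3
                + ["no cows; all abducted by Martians"])

def best_answer(answers):
    """Return the answer given by the most pages, with its page count."""
    return Counter(answers).most_common(1)[0]
```

Here `best_answer(page_answers)` picks "28000", the answer backed by 29 pages, while the lone Martian page is outvoted.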
 
Upvote 0

MoonlessNight

Fides et Ratio
Sep 16, 2003
10,217
3,523
✟63,049.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Private
Politics
US-Others
The thing that truly frightens me about artificial intelligence is that it would be as intelligent as us or (likely) more so, but its methods would be completely alien. We don't really understand how our own consciousness works or why we have consciousness at all, so if an artificial intelligence does in fact become conscious, it will almost certainly not be due to our designs. Our design will have given it tasks that it must complete, standards that its behaviors must follow, and so forth, but nothing regarding consciousness, because we don't know what causes consciousness to arise in the first place. So this arising consciousness would be more dangerous than anything we have experienced thus far, since at least with animals we share a similar brain structure and an evolutionary history. With that in mind, there is no way to determine what a true artificial intelligence would decide to do, if given the capability to decide. It would not look at situations in terms of emotion or human concerns, so anything would be possible. It might knowingly kill a human, for example, for reasons incomprehensible to us, because its very way of thinking would be completely alien. This is something I think we have to keep in mind before letting computers make any sort of important decision.
 
Upvote 0

fragmentsofdreams

Critical loyalist
Apr 18, 2002
10,358
431
22
CA
Visit site
✟43,828.00
Faith
Catholic
MoonlessNight said:
It will be necessary to make robots have the ability to feign emotion; this is not overly difficult. It is all a matter of stimulus and response. In terms of interactions with humans it does not matter if the AI is actually experiencing any sort of emotion, as long as it appears sad when it should be sad, happy when it should be happy and all that. Trying to have the robot actually experience emotion is pointless, and perhaps impossible.

Of course, I don't really see the need for human like robots or AIs, and their whole existence opens up a can of worms that would better remain shut. But I suppose there's no stopping "progress".

Actually, trying to make the AI more human-like may make it more disconcerting to humans. Things that are just slightly off from human create a strong negative reaction. It would be better to keep it in the realm of cartoon characters, which are human enough for us to like without being disturbing.
 
Upvote 0

dr.p

next year's turkey dinner
Nov 28, 2004
634
43
46
here
✟984.00
Faith
Christian
Marital Status
Private
Politics
US-Others
Soc12 said:
I think that AI is a horrible idea. Some people think that movies like The Matrix are crazy, but who is to say that stuff like this (maybe not exactly, but somewhat) couldn't really happen?

Skeptical programmers like myself say it can't happen. If you give the AI the ability to update itself (its code, etc.) and make it advanced enough, you'll probably run into some problems. But unless you create a massive network that's hooked up to most banks, stock markets, the internet (of course), and possibly the military's networks (which should never happen), then it'll never happen. And I believe all those things happening together is nowhere near likely.

What I do believe is likely to mess us up is this:

http://science.slashdot.org/article.pl?sid=05/01/11/0113243&tid=191&tid=137&tid=14

When we get to the point in messing around with DNA that we're looking at creating computers that use it, then we have a problem. Because we barely understand life, and messing around with it like this is not smart.
 
Upvote 0

robot23

Well-Known Member
Nov 22, 2004
410
17
✟620.00
Faith
Pagan
I hope that scientists develop super advanced AI robots. Robots are cool.
I don't think it's playing God; I think it's being a scientist. We already have some pretty good AI programs.
It's not like a 1940s sci-fi movie where you create a robot and all of a sudden it's completely unpredictable and runs around killing people.
That is science fiction.
 
Upvote 0

HouseApe

Senior Veteran
Sep 30, 2004
2,426
188
Florida
✟3,485.00
Faith
Atheist
Marital Status
Married
MoonlessNight said:
The thing that truly frightens me about artificial intelligence is that it would be as intelligent as us or (likely) more so, but its methods would be completely alien. We don't really understand how our own consciousness works or why we have consciousness at all, so if an artificial intelligence does in fact become conscious, it will almost certainly not be due to our designs. Our design will have given it tasks that it must complete, standards that its behaviors must follow, and so forth, but nothing regarding consciousness, because we don't know what causes consciousness to arise in the first place. So this arising consciousness would be more dangerous than anything we have experienced thus far, since at least with animals we share a similar brain structure and an evolutionary history. With that in mind, there is no way to determine what a true artificial intelligence would decide to do, if given the capability to decide. It would not look at situations in terms of emotion or human concerns, so anything would be possible. It might knowingly kill a human, for example, for reasons incomprehensible to us, because its very way of thinking would be completely alien. This is something I think we have to keep in mind before letting computers make any sort of important decision.

Language attached to patterns causes consciousness. You are only conscious because you can say "I am conscious" out loud or in your head. If you could not tie language to the pattern "existence", you would not be conscious (in the self-aware sense). Studies of people born deaf and mute show they were not self-aware until they began to be taught some form of language.

AI in computers is required for true language recognition and generation, and also for visual recognition systems. Both have enormous applications. The kind of AI people are thinking of here would require a computer to program itself, and there are some very tricky problems associated with that.
 
Upvote 0