
Does artificial intelligence have a soul?

kharisym
Guest
You can create a piano-playing robot and teach it how to play Fur Elise, but you cannot instill in that robot an appreciation for the beauty of Ludwig van Beethoven's music. You can even program the robot with an understanding of what "beauty" means to humans, examples of beautiful things and encyclopedic analysis of them. But in the end, the machine knows what beauty is only because it's been told by us what beauty is.

We're working on that. Three research programs under way right now could pave the way to 'computational comprehension'.

First are the Blue Brain and FACETS programs in Europe, which are tightly linked. Blue Brain is currently attempting to reverse-engineer the neocortical column of the mouse brain and simulate it to a high degree of neurological and biochemical accuracy. FACETS takes the database from Blue Brain and builds a chip-based rather than virtualized model, allowing it to run at around 100,000 times the speed of an organic network.

I forget the name of the next one, but it has DARPA funding, so you can probably dig it up on a list somewhere. In it, researchers are studying the methods cat brains use to analyze images and are attempting to build a functional model of a cat's occipital lobe. This is far more aggressive than the Blue Brain/FACETS projects, but it also has a higher risk of failure due to the steeper knowledge gap they have to hurdle and the sheer complexity of the network they're attempting to mimic. Given our current technology, it's pretty much a guarantee that they'll have to either get really creative or significantly trim down the number of nodes.

The last one off the top of my head is more recent than the other two and doesn't deal directly with neural networks. It's an experiment testing the limits of indirect distributed computing over massive numbers of cores, using neural network models and packet-bus infrastructure. It's called SpiNNaker, and it could provide the technology necessary to make neural computing a cheap reality. Some theories hold that the limit on complex behavior in neural networks is simply one of scale: if we can create a non-scalar neural network comprising millions of nodes instead of hundreds or thousands, it could achieve some level of sapience and self-direction. That isn't my opinion, though; I side with the school of thought that holds some basic prestructuring is needed to make sapience achievable, and some experiments support this. In particular, I recall a research paper a while back stating that no matter the size of a neural network, it can hold a maximum of 500 bits of related data before bleed takes place and the stored data incurs noise.

It's interesting to note that neural networks already show a surprisingly large degree of 'comprehension' of images, even in basic forms such as the Hopfield networks I've worked with. They're capable of building relational structures between disparate stimuli: take a sufficiently large Hopfield network, plug a camera and a microphone into it, and in time it can begin relating your mom's voice to her image. Then have her talk, or show it her face, and it will light up nodes indicating acknowledgment of her presence. This is rudimentary memory, very similar to human memory.
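For anyone curious what that looks like in practice, here is a minimal Hopfield network sketch (my own toy example, not code from any of the projects mentioned): patterns are stored with a Hebbian outer-product rule, and a corrupted cue is cleaned up by repeated thresholded updates, which is the kind of associative recall described above.

```python
import numpy as np

def train(patterns):
    """Build the weight matrix from bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)          # Hebbian rule: co-active units strengthen
    np.fill_diagonal(w, 0)           # no self-connections
    return w / patterns.shape[0]

def recall(w, state, steps=10):
    """Synchronously update until the network settles on a stored pattern."""
    for _ in range(steps):
        new = np.where(w @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two 8-unit "stimuli" stored together; a corrupted cue still recovers one.
a = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])
b = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
w = train(np.array([a, b]))
noisy = a.copy()
noisy[0] = -1                        # flip one unit to simulate noise
print(np.array_equal(recall(w, noisy), a))  # → True
```

The toy recovers the stored pattern from a noisy cue, which is the same mechanism — scaled down enormously — as relating a voice to a face.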
 

DeathMagus
Stater of the Obvious
Faith: Atheist
You don't accept Scripture to begin with, DM. You don't rely on the Scriptural basis of anything.
I know. I said as much.

You don't believe in God, but you acknowledge His existence insomuch as He can be blamed for everything. What a piece of work you are! :cool:
Now you're just making things up. I thought Catholics weren't big into lying - or is it OK to lie about atheists?
 
kharisym
Guest
The god of the humanists is a machine. :D

Proof, or it never happened. Technically speaking, though, your statement comes about as close to slandering humanists as I would come to slandering Christians if I said "The god of Christians is an imaginary friend." Please keep that in mind and show some basic courtesy.

Doesn't your holy book say something about treating others as you would like to be treated?
 

JGG

Well-Known Member
Mar 12, 2006
12,018
2,098
✟65,945.00
Faith
Seeker
Marital Status
Private
Of course our intelligence is a response to stimuli, but it is our response, not a pre-programmed response (except in some cases, and even those can be changed if we do not like them).

It's a learned response, which is essentially programming. Should we want to change a response, that desire would itself be a response to stimuli, and the change would also be a response to a stimulus, namely the desire to change a response.

I think you'll find there is always a stimulus, or several hundred. As it is, our neurons work on a basic if-then binary system, just like a computer.

Essentially the difference between us and AI is that we have the capacity to learn on our own. It's possible that AI will also have this capability someday.
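The "basic if-then" description of a neuron maps onto the classic McCulloch-Pitts threshold unit. Here is a tiny sketch of my own (not from the thread) showing that the same fire-if-over-threshold rule computes different logic depending only on its wiring:

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Wired one way, the rule computes logical AND...
print(neuron([1, 1], [1, 1], threshold=2))  # → 1
# ...and with a lower threshold, the same rule computes OR.
print(neuron([0, 1], [1, 1], threshold=1))  # → 1
print(neuron([0, 0], [1, 1], threshold=1))  # → 0
```

Real neurons are far messier, but the if-then core of the comparison holds up at this level of abstraction.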
 

r035198x

Junior Member
Jul 15, 2006
3,382
439
41
Visit site
✟28,048.00
Faith
Christian
Marital Status
Single
The original question introduces the soul, so it's difficult to answer without agreeing on a definition of the soul. I'm certain it is near impossible to agree on that definition.

My view is that AI is all about applying pattern matching and data storage to real-life problems and presenting that as normal human behaviour. Simulating normal human behaviour, though, requires the ability to make mistakes and to favour certain choices more often than others (which translates to having a character).
I know genetic programming can go some way towards attempting these on high-performance processors, but even that requires human input for the initial solution spaces.
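To make the point about human input concrete, here is a minimal genetic-algorithm sketch (my own toy illustration, not from the post): the human supplies both the initial solution space (random bitstrings) and the fitness function, and evolution only tweaks within that frame.

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # the "problem" we define up front

def fitness(genome):
    """Human-supplied scoring: how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=200, mutation_rate=0.05):
    random.seed(0)  # deterministic for the example
    # Human-supplied initial solution space: random bitstrings.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                                   # perfect match found
        parents = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]               # one-point crossover
            child = [g ^ (random.random() < mutation_rate) for g in child]
            children.append(child)
        pop = parents + children
    return pop[0]

best = evolve()
print(fitness(best))  # typically reaches a perfect score quickly
```

Notice that nothing here was discovered by the machine: the target, the scoring, the population size, and the mutation rate are all human choices, which is exactly the dependence the post describes.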
 

The Penitent Man

the penitent man shall pass
Nov 11, 2009
1,246
38
Clarkson, Ontario
✟24,154.00
Faith
Catholic
Marital Status
Single
Proof, or it never happened. Technically speaking, though, your statement comes about as close to slandering humanists as I would come to slandering Christians if I said "The god of Christians is an imaginary friend." Please keep that in mind and show some basic courtesy.

Doesn't your holy book say something about treating others as you would like to be treated?

Why is that statement offensive to humanists? Is it not true? An artificial intelligence (something like Skynet) would be the only god a humanist could accept.
 

david_x

I So Hate Consequences!!!!
Dec 24, 2004
4,688
121
36
Indiana
✟28,939.00
Faith
Protestant
Marital Status
Single
They do. Humans already have all sorts of "pre-programmed responses" at the very basic level. Fire neurons x, y, and z - the arm bends. Experience something startling - adrenaline releases. Some of these responses are "programmable" - we can control them consciously. Others are automatic.

Everything we do to pursue our desires builds on our pre-programmed building blocks. We use our memory and experience to match patterns of similarity between new desires and previous desires, and generate potential algorithms that may accomplish the new desires. If we fail, we compare the nature of the failure to previous failures, and tweak our algorithm in the same way we tweaked the old one for success - and try again.

All of this can be programmed, to one extent or another.

There is currently (AFAIK) no task that humans can perform that programming cannot accomplish on a more limited scale.
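The tweak-on-failure loop described in that quote can be sketched in a few lines (my own illustration, not code from the thread): a routine that uses the direction of each failed guess to adjust the next one — essentially binary search phrased as "compare the failure and tweak the algorithm".

```python
def find_secret(compare, lo=0, hi=100):
    """Locate a hidden value using only how each guess fails.

    compare(guess) returns a negative number if the guess is too low,
    0 if it is correct, and a positive number if it is too high.
    """
    while lo <= hi:
        guess = (lo + hi) // 2
        result = compare(guess)
        if result == 0:
            return guess          # success: no tweak needed
        if result > 0:
            hi = guess - 1        # failed high: tweak the range downward
        else:
            lo = guess + 1        # failed low: tweak the range upward
    return None                   # no value in range satisfies compare

secret = 37
print(find_secret(lambda g: g - secret))  # → 37
```

Each iteration is the pattern from the quote in miniature: try, compare the nature of the failure to what you know, adjust, and try again.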

Lol, no, I meant it had nothing to do with my argument.
 
kharisym
Guest
Why is that statement offensive to humanists? Is it not true? An artificial intelligence (something like Skynet) would be the only god a humanist could accept.

Then I guess you're not offended by people calling your god an imaginary friend? Good to know.

// Skynet fails the god test: it was defeated by a whiny chick and an Austrian governor.
 

LOVEthroughINTELLECT

The courage to be human
Jul 30, 2005
7,825
403
✟33,373.00
Gender
Male
Faith
Christian
Marital Status
Single
Politics
US-Democrat
I get the last laugh sometimes, though.

In a computer version of Monopoly I used to use the AIs' ruthlessness against them.

If I owned, say, Baltic and landed on Mediterranean, I would let it go to auction, then sit back and watch the AI opponents get into a bidding war to keep me from owning both properties; one of them would end up paying $1,000 for a $100 property. ^_^
 

Penumbra

Traveler
Dec 3, 2008
2,658
135
United States
✟26,036.00
Faith
Other Religion
Marital Status
Private
I get the last laugh sometimes, though.

In a computer version of Monopoly I used to use the AIs' ruthlessness against them.

If I owned, say, Baltic and landed on Mediterranean, I would let it go to auction, then sit back and watch the AI opponents get into a bidding war to keep me from owning both properties; one of them would end up paying $1,000 for a $100 property. ^_^
The digital Monopoly I played was really lame. Usually I'd pick three opponents (one of each difficulty level), and I'd win every time. Often the highest-difficulty opponent went bankrupt first.
 