AI derail from Knowledge Thread

TScott

Originally Posted by TScott

Sure, LISP syntax has become a standard in AI, but you must understand then that LISP is using what it calls "atoms", which are representative tokens, in its simulation of semantics.
KC said:
Seems like code for "it's not how human brains do it so it's not real".

Not real AI.

Or, more to the point, not Strong AI.

Artificial intelligence has different levels. An example of the lowest level would be the thermostat in your home. It wants (because you programmed it) to keep the temperature around its sensor at a specified (by you) temperature. It senses the temperature, and if it is too low or too high it sends a signal to the HVAC equipment to add heat to or remove heat from the area. When it senses the temperature is at the specified level, it sends a signal to the equipment to stop.
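To make that concrete, here is a minimal sketch of such a control loop in Python (the setpoint, deadband, and string signals are invented for illustration, not any real thermostat's interface):

Code:
# Minimal thermostat loop: sense, compare, signal.
def thermostat_step(current_temp, setpoint=20.0, deadband=0.5):
    """Return a signal for the HVAC equipment based on the sensed temperature."""
    if current_temp < setpoint - deadband:
        return "heat"   # too cold: tell the equipment to add heat
    elif current_temp > setpoint + deadband:
        return "cool"   # too hot: tell the equipment to remove heat
    return "off"        # at the specified level: tell the equipment to stop

for temp in [18.0, 19.7, 20.0, 21.2]:
    print(temp, "->", thermostat_step(temp))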

Artificial intelligence has been a quest since the first computer was invented. Science fiction writers have long proposed machines that could think just like humans, and somewhere along the way this concept was dubbed Strong AI.

In 1950, Alan Turing developed a thought experiment, a test for Strong AI, in which there were three participants: two humans and one computer. A human judge sat in one room with a terminal through which she could communicate with the other human and the computer. She could ask them questions through her keyboard and they would answer through her printer. If she could not tell from this conversation which was the computer, then the computer, according to Turing, would pass as strong AI. And for decades this became the standard. LISP and other tools were developed (although, as I said in the other thread, LISP became sort of a standard template for language-recognition programs).
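The setup is easy to sketch in code. Here is a toy version in Python (the judge, the canned respondents, and the two-question "conversation" are all stand-ins for illustration; the real test involves open-ended dialogue):

Code:
import random

# Toy imitation game: a judge questions two unseen respondents
# and must name the computer from their answers alone.
def human_reply(question):
    return "I would have to think about that."

def machine_reply(question):
    return "I would have to think about that."  # indistinguishable by design

respondents = {"A": human_reply, "B": machine_reply}
machine_label = "B"

for question in ["What is your favorite memory?", "Do puns amuse you?"]:
    for label, reply in respondents.items():
        print(label, "answers:", reply(question))

# If the answers give nothing away, the judge can do no better than chance.
guess = random.choice(list(respondents))
print("Judge names", guess, "- correct!" if guess == machine_label else "- fooled.")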

About 30 years later a philosopher named John Searle challenged the Turing Test with what became known as the Chinese Room thought experiment. In this test a human sits in a room into which messages can be passed through a slot. The messages are written in Chinese, and the human does not understand any Chinese at all. The human has a look-up table against which he can compare the Chinese writing, and it gives answers that the human can copy onto a piece of paper and pass out the slot. The human is answering the questions without really knowing what is transpiring. Searle believes this is what computer programs do. They are given words, and groups of words in a certain syntax, that they can compare to arrays in their memories. Through simple "if-then" logic they can formulate answers.
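Searle's room reduces to a few lines of code. A minimal sketch in Python (the two table entries are invented for illustration):

Code:
# The Chinese Room as a program: match incoming symbols against a
# look-up table and copy out the stored reply. Nothing here
# "understands" Chinese; it only compares and copies.
lookup_table = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "It is fine."
}

def room(message):
    # Simple if-then logic: if the symbols match an entry, pass the answer out the slot.
    if message in lookup_table:
        return lookup_table[message]
    return "对不起，我不明白。"  # default: "Sorry, I do not understand."

print(room("你好吗？"))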

Searle's experiment was attacked by the Computationalists in the AI field because of its impracticality, and he defended it by simplifying his argument into three axioms: computer programs are based on syntax; human mentality is based on semantics; syntax is insufficient for semantics.

IMO strong AI is becoming more and more a reality, but not through the Computational school; instead, through the Connectionist school. IOW strong AI will not be accomplished with programming alone, but through hardware design such as neural networking, a relatively new field on which, coupled with fuzzy-logic programming, much of our industrial robotics is being based.
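For anyone who hasn't met the connectionist building block: here is a single artificial neuron sketched in Python (the weights, bias, and inputs are invented for illustration; real networks learn these values rather than having them programmed in):

Code:
import math

# A single artificial neuron: a weighted sum of inputs squashed by a sigmoid.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # graded "firing rate", 0 to 1

# No single weight "means" anything by itself; behavior emerges from the
# ensemble, which is what makes this approach sub-symbolic rather than rule-based.
print(neuron([0.9, 0.1, 0.4], weights=[1.5, -2.0, 0.7], bias=-0.3))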

Will a robot ever be able to accomplish true human AI, where it can enjoy subjectivity and be totally creative? I guess my feeling is: why would we develop such a thing?
 

Genersis

Hmm. Interesting post.

Concerning your two questions:
For the first, I'd say yes. Eventually.

As for your second question... I don't see much reason to make AIs that have minds that work exactly like ours.
Though obviously there would be benefits to building AIs which are BETTER at problem solving than us, or at other tasks.

To create AIs exactly like us (with emotions, empathy, EVERYTHING) without giving them a body or means to express themselves, etc., and still expecting them to serve us, would be cruel IMO.

...
I hope I'm not completely missing the point of this thread. :o
 

Stoneghost

People like Searle are the reason why, sixty years after the invention of computers, we're saying AI is 10-20 years away without ever having produced any results. The rationalist assumption about 'higher' thought processes is just that, an assumption. It has no merit and no objective basis to support its validity. We don't even have a model to explain the intelligence of an ant, much less a human, much less a way to model that in a binary system. Yet somehow these [expletive deleted] still run around trying to shove their cognitive-psych BS down everyone's throat. Thousands of people have spent their lives on AI and we have nothing to show for it. That is because we are doing it wrong; the paradigm is wrong. Insanity is doing the same thing repeatedly and expecting different results.

Intelligence is a product of a biological organism; the biological components are what provide us the context to begin to model such a meaningless word as intelligence. We have spent decades trying to use products of intelligence, such as semantic logic, to define intelligence. This is backwards. That's like trying to build a tree out of a house. Yes, trees and houses are related, but the relationship is one-way. If you try to build a tree out of a house you're going to end up with a pile of wood, and if you're an AI researcher you're going to point to the pile of wood as proof you are on the right track.

Fear, pleasure, homeostasis, the interaction between organisms and their environment, and the effect of that environment over time (evolution) provide the context to understand "intelligence." Within an organism, neuroanatomy, activation patterns, molecular dynamics of synapses, spatial and temporal dynamics of neural networks, and to a lesser extent hormone activity and regulation, gene activation, even cellular respiration are what provide the context to understand an intelligent system in action. This attempt to abstract intelligence away from biology and try to create it using its own products hasn't worked, won't work, and needs to die.
 

Stoneghost

TScott said:

Will a robot ever be able to accomplish true human AI, where it can enjoy subjectivity and be totally creative? I guess my feeling is: why would we develop such a thing?
Exactly. We wouldn't. That would make no sense. That is an infeasible goal, in part related to what I said in my previous post, and it is a pointless goal. We already think like us; why would we want something else that thinks like us? What we need is something that allows us to leverage the abilities of computers directly, and more importantly, I think, something that allows computers to leverage our abilities. What that is I don't know, but at least it would be useful.
 

TScott

No, you're missing nothing at all. This thread is to discuss Strong AI.

Many of the neuroscientists and associated philosophers tell us that developing a conscious machine will help us understand more and more about our own mind and how it works. I suppose that is true, as we really know very little about our own minds. I just don't think that rationalization has been enough to stoke much funding.

However, I think that a certain amount of caution is required here, not just for the idea you have espoused, that we would risk being cruel in building a subservient conscious machine, but also because certain cultural "dangers" could be involved.

All the "computer gone mad" science fiction aside, it has been acknowledged for over half a century by computer scientists that there could be pitfalls in building a super intellegent machine. But regardless of any sinister results it is hard to imagine what it would be like, what would the ramifications be and how would it, or would it change our entire civilization. (Few predicted where we would be today with the world wide web, and it's ramifications) The idea of an intellegence "explosion" has been around for a long time. In the 60s a mathemetician named Jack Goode wrote about what would happen if machines became slightly more intellegent than humans. He believed that they would begin to improve on their designs and that those improvements whould become exponential to the extreme to where there would be a sort of "event horizon" to where the intellect of machines would explode into what he euphemistically called a singularity. Since then there has been a great deal of writing among mathematicians and other nerdy types involving this Technological Singularity. If you google that term I'm sure you'll find plenty of details.
 

Genersis


I have encountered the idea of the singularity before. A few times, actually.
To be honest, I have no idea what the consequences of the singularity could be, and I doubt most would, considering the machines would be forming ideas quite possibly beyond our comprehension.

I've always held the idea that before we give machines that level of intelligence, and before we give machines autonomous control over systems which could pose a danger to human beings, we'd need to have some kind of empathy in place in these strong AIs. They need to understand others' feelings, and what pain is.
How that would work... well... I haven't the foggiest, but this is all hypothetical musing.
 

Paradoxum

TScott said:

Will a robot ever be able to accomplish true human AI, where it can enjoy subjectivity and be totally creative? I guess my feeling is: why would we develop such a thing?

Because it would be really cool and an amazing achievement? Talking to an intelligent robot would be interesting.
 

TScott

KC said:

So what I'm getting is that machines use different mechanisms for understanding language than humans do. And that they aren't all that good at it. But how does that make it a simulation rather than simply a different approach?
You can call it a different approach if you like. I call it simulation because I think a machine is simulating "understanding".

AI programs such as those written in LISP use a symbol or token (LISP calls it an atom) to represent an entity in the real world. The level of a symbol is too high to enable us to achieve a good model of understanding comparable to the human mind. It is too rigid. The rules for the symbol are made too tight. There is no accounting for semantics and no subjectivity. Our minds operate at the lowest of levels, which allows us to assign intrinsic meaning, or intrinsic semantics if you will, to things we experience: things we see, hear, smell, etc.
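To show what symbol-level representation looks like, here is a Lisp-style property-list sketch in Python (the atoms and facts are invented for illustration):

Code:
# Symbol-level AI: an "atom" is an opaque token with a property list.
# Note that every property bottoms out in more tokens; nothing in the
# table says what a bark sounds like or what fur feels like.
properties = {
    "DOG": {"isa": "ANIMAL", "legs": 4, "sound": "bark"},
    "ANIMAL": {"isa": "THING"},
}

def get_property(atom, key):
    # Rigid rule-following: look up the atom, else chase "isa" links upward.
    while atom is not None:
        entry = properties.get(atom, {})
        if key in entry:
            return entry[key]
        atom = entry.get("isa")  # inherit from the parent concept
    return None

print(get_property("DOG", "sound"))  # -> bark
print(get_property("DOG", "isa"))    # -> ANIMAL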

IMO true AI does not have to operate all the way down to the level of a neuron, but it should operate at some sub-symbolic level, below the semantic level, the idea being that some semantic properties will emerge. I have read that some systems have been built along these lines and that they are working up to allowing machines to experience basic subjectivity.
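One way to picture "below the semantic level": represent concepts not as opaque tokens but as vectors of graded features, so that similarity of meaning emerges from overlap instead of being declared by a rule. A toy sketch in Python (the features and numbers are invented for illustration):

Code:
import math

# Sub-symbolic sketch: concepts as feature vectors rather than atoms.
features = {              # (furry, barks, purrs, metallic)
    "dog":   [0.9, 0.9, 0.0, 0.0],
    "cat":   [0.9, 0.0, 0.9, 0.0],
    "robot": [0.0, 0.1, 0.0, 0.9],
}

def similarity(a, b):
    # Cosine similarity: a matter of degree, not an exact symbol match.
    dot = sum(x * y for x, y in zip(features[a], features[b]))
    norm_a = math.sqrt(sum(x * x for x in features[a]))
    norm_b = math.sqrt(sum(y * y for y in features[b]))
    return dot / (norm_a * norm_b)

print(similarity("dog", "cat"))    # relatively high: overlapping features
print(similarity("dog", "robot"))  # low: little in common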

The thing is, KC, this is mostly just my opinion (shared with others in the Connectionist school of AI, to be sure). People who work in the field of what I call symbolic AI, or Computational AI, believe they can achieve semantics and subjectivity with their programs, so it's not like I am necessarily right here. The fact is we really don't understand enough about our own consciousness and how our own mind achieves subjectivity. We may be 50 years out from really understanding how our own mind really works.
 

Genersis

KC said:

So what I'm getting is that machines use different mechanisms for understanding language than humans do. And that they aren't all that good at it. But how does that make it a simulation rather than simply a different approach?

Mainly because the AI itself has no idea what it's saying beyond hearing A and knowing it should reply B.
It doesn't actually understand what A or B actually mean.

...

Maybe even I'm not getting it. :o
 

TScott

Genersis said:

Mainly because the AI itself has no idea what it's saying beyond hearing A and knowing it should reply B.
It doesn't actually understand what A or B actually mean.

...

Maybe even I'm not getting it. :o
Yes, you're getting it.
 

KCfromNC

TScott said:

You can call it a different approach if you like. I call it simulation because I think a machine is simulating "understanding".

Much like children's minds are only simulating understanding if they don't do it as well or in the exact same manner as an adult.

TScott said:

The fact is we really don't understand enough about our own consciousness and how our own mind achieves subjectivity. We may be 50 years out from really understanding how our own mind really works.

Yep, and this lack of understanding shouldn't be used as any sort of evidence one way or another, which is why I objected to it in the first place.
 

KCfromNC

Genersis said:

Mainly because the AI itself has no idea what it's saying beyond hearing A and knowing it should reply B.
It doesn't actually understand what A or B actually mean.

What do you mean by "actually" understand, and how can you show that a random human "actually" understands instead of just says B in reply to A?
 

TScott

KC said:

Much like children's minds are only simulating understanding if they don't do it as well or in the exact same manner as an adult.
No. That isn't it at all.



KC said:

Yep, and this lack of understanding shouldn't be used as any sort of evidence one way or another, which is why I objected to it in the first place.
No. You are objecting because you do not have any understanding of the subject. Your objections show this, and instead of engaging in a discussion on the subject you appear to be cherry-picking the internet for contradictions, as you did with LISP, which it turned out you didn't understand at all. This isn't discussion, KC. I have worked in electrical engineering for 40 years, have had extensive training in neural networking, and have worked all those years in the field of advanced industrial control. I'm not a neuroscientist; however, it has been an interest of mine since my college days.

All of this doesn't mean that my points are valid and yours aren't. But what it means in this discussion is that you should consider my points instead of tossing them aside without discussion. I'm taking a lot of time to research what I'm saying to make sure it is accurate. It's rude for you to just willy-nilly toss unsupported opinions out as rebuttal.

The lack of understanding about the human mind doesn't mean there is no understanding; it means that the mind's operation is so complex that it is extremely difficult to find answers about how it works. That doesn't mean we know nothing.
 



Archaeopteryx


Maybe not semantical. Autonoetic.


Perhaps the problem is being approached in the wrong way. Instead of assuming that learning will lead to experiencing (and re-experiencing), we should focus on feeling. I have heard of efforts being made to develop robots that are capable of more than just learning their environment, but "feeling" it also.
 