
Unsatisfactory Scientific Explanations?

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
let's take this "neural net" type of machine one step further, just to show the limitations of which i speak.
let's suppose the input was visual instead of typewritten text, and for simplicity we will limit the input to english.
the computer is now faced with, not only parsing the words, but now has to contend with ambiguous writing styles.
IOW, people do not always write legibly.
when we start adding other languages, the task most likely becomes impossible.
can the machine cope with this?
not without some kind of instruction.
We already have systems that are not explicitly programmed to recognise written text (e.g. handwritten text), but do so through training - multiple examples are presented, and the system responses are rated for accuracy (positive & negative reinforcement). In this way, the system learns how to recognise a variety of writing styles - and can make a good fist of recognising a new writing style it has not previously encountered.
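The kind of training described here can be sketched with the simplest possible learner - a single artificial neuron (a perceptron). This is only an illustrative toy, not any of the handwriting recognisers mentioned: nothing in the code states the rule being learned (here, logical AND); it emerges from repeated positive and negative weight adjustments over examples.

```python
def step(x):
    # threshold activation: fires (1) when the weighted sum reaches 0
    return 1 if x >= 0 else 0

# training examples: (inputs, desired output) for logical AND
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights, initially "blank"
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    for (x1, x2), target in examples:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out        # positive/negative "reinforcement"
        w[0] += lr * err * x1     # strengthen/weaken connections
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in examples])
# → [0, 0, 0, 1]
```

The same loop with different examples learns a different rule - the code never changes, only the training data.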
i guess what i'm really saying here is that a bare computer with no program at all is useless, a paperweight.
so, what is really needed?
sensors, both audio and video, and the corresponding decoding logic.
these 2 items alone are simply beyond anything we can come up with.
audio seems to be making some progress, but video, and its interpretation, is beyond what we can deal with.
Siri, Cortana, Alexa, Google Now, etc., are all based on learning systems; video processing involves an order of magnitude more processing, but will be with us soon enough. It's no longer a question of how, but when.
the 2 best known examples of AI that i know of are:
1. asimo, designed and built by honda.
and
2. big dog, designed and built by boston dynamics.
there are others of course, 2 others:
1. deep blue, the chess playing machine by IBM
and
2. the machine that played jeopardy.
On that list, only IBM's Watson, the Jeopardy winner, could really be considered a domain-specific AI - and I doubt many in the field would be happy with that. Asimo and Big Dog are robots with limited autonomy, and Deep Blue was just a big number cruncher.
all of the above display "intelligence", but to have true consciousness will require something other than what we currently have.
Intelligence is generally taken to be generalised problem-solving capability; on your list, only Watson barely reaches the threshold, and only in preconfigured language searching, filtering, & sorting.
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
... I refer to ideas like "one particle knowing where the other is" and Schrodinger's Cat (where the cat is both dead and alive until one observes it) which ultimately means an object can both exist and not exist at the same time. These inconsistencies in thought (and other examples of this) exist precisely because the current world view surrounding our science is wrong. ...
It may be worth pointing out that the former is a misplaced anthropomorphic metaphor - which leads to just the kind of misunderstanding you highlight (I assume you don't really think it's literal!), and the latter is a misplaced reductio-ad-absurdum, intended to show the limitations of a particular quantum interpretation (it was later shown to be as spurious as the interpretation it mocked).

So neither example suits the point you're making. Just sayin'.
 

whois

rational
Mar 7, 2015
2,523
119
✟3,336.00
Faith
Non-Denom
Of course, you can't just throw an unstructured bunch of neurons together and expect it to learn any computational task efficiently. The system I described is programmed to emulate a neural network - it has explicitly coded instructions to behave like a network of 2.1 million interconnected neurons - structured (layers and connectivity between layers) according to a proposed model for human-like language acquisition. To this extent, it is structured for language acquisition. There is no explicit code for language acquisition, but the network structure is particularly well suited to it. If you train it appropriately, it will learn the basics of a language reasonably well. If you train it poorly, it will perform poorly.

What it does demonstrate is that a neuromorphic model based on, but far less sophisticated than, that in a human brain, can achieve recognisably similar informational processing without explicit programming; which was my point.
i have no doubt that computers can give the appearance of learning.
all that is needed are the instructions to do it.
the difference between human and machines is ability.
your example above might model the acquisition of language quite well, but the very same set of "neurons" couldn't play a game of chess, or analyze a photograph.
why, they simply lack the instructions to do so.
the next question is, could they acquire these instructions on their own.
the instant analysis of a photograph is simply beyond, maybe far beyond, current computing technology.
i will give computers credit for their ability to be tireless, their logical approach to problems, and their incredible speed, but there are some things that they simply cannot do.
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
i have no doubt that computers can give the appearance of learning.
all that is needed are the instructions to do it.
It's the same kind of learning done by neurons in the brain; the only instructions are the training words and phrases, just as in the brain.
your example above might model the acquisition of language quite well, but the very same set of "neurons" couldn't play a game of chess, or analyze a photograph.
why, they simply lack the instructions to do so.
This model isn't intended to be a whole brain, it's structured for language. You could probably train it to play simple games, and make simple discriminations in images (lines, patterns, simple shapes), but it wouldn't be very good. The same applies to the language areas in your own brain, e.g. Wernicke's area, Broca's area, etc; they're structurally optimised for language processing. Other parts of your brain are used for chess and image analysis; we have electronic systems that can do those tasks too.
the next question is, could they acquire these instructions on their own.
Which instructions, specifically? Neural networks don't have explicit instructions, they're trained by examples; that's why they're called learning systems.
the instant analysis of a photograph is simply beyond, maybe far beyond, current computing technology.
It depends on the task, but yes, in some areas the human eye and brain (though definitely not instant) are still more reliable than electronic systems. Electronic systems are catching up though; even smartphones now have cameras with autofocus, auto ISO, aperture, colour balance; face and smile recognition, and apps to read and translate text and even speech on the fly (badly, but it's very early days).
 

whois

rational
Mar 7, 2015
2,523
119
✟3,336.00
Faith
Non-Denom
This model isn't intended to be a whole brain, it's structured for language.
well see, the same set of neurons in the brain takes care of ALL of its processing.
your very simple example could never do the following:
start a game of chess.
be given a photo and told to ad lib on the red object.
read a letter and describe what was being said.
then go back to the chess game where it left off.
it's simply not doable, and i don't think it ever will with current technology.
You could probably train it to play simple games, and make simple discriminations in images (lines, patterns, simple shapes), but it wouldn't be very good.
i don't consider chess a simple game.
computers are very good, i would say excellent, at logical things, but are severely lacking at abstract concepts.
Which instructions, specifically? Neural networks don't have explicit instructions, they're trained by examples; that's why they're called learning systems.
current technology REQUIRES instructions, there is no way out of this.
from the moment you turn your computer on until you turn it off, it is executing instructions.
the same with your hand held calculator, the microchip in your microwave, ALL current technology requires these instructions.
you need at least a basic course in computers to understand why.
i have an outboard modem/router that i hardly ever turn off, even this thing is executing instructions every single second it's on, thousands of them.
like i mentioned above, if these instructions aren't provided by a program then they are hard coded into the machine at manufacture.
the first can be changed by loading in a different program, the second can never be changed.
It depends on the task, but yes, in some areas the human eye and brain (though definitely not instant) are still more reliable than electronic systems. Electronic systems are catching up though; even smartphones now have cameras with autofocus, auto ISO, aperture, colour balance; face and smile recognition, and apps to read and translate text and even speech on the fly (badly, but it's very early days).
not only are they more reliable, they completely outstrip the processing density of any computer known.
just the ability to turn your head 180 degrees, pick out a random object, and tell me about it is completely beyond current technology.
 

Ratjaws

Active Member
Jul 1, 2003
272
37
69
Detroit, Michigan
Visit site
✟24,722.00
Faith
Catholic
It may be worth pointing out that the former is a misplaced anthropomorphic metaphor - which leads to just the kind of misunderstanding you highlight (I assume you don't really think it's literal!), and the latter is a misplaced reductio-ad-absurdum, intended to show the limitations of a particular quantum interpretation (it was later shown to be as spurious as the interpretation it mocked).

So neither example suits the point you're making. Just sayin'.

Frumious,
I appreciate your points but I suspect you've missed mine... and this is that the very reason they must use such metaphors and reductio-ad-absurdum descriptions is because there is something incoherent in how reality is being interpreted. The world view (philosophy) being used by too many scientists is flawed and needs to be changed to one that avoids such problems. Putting form back in matter does just this. Atomism and dynamism do not. In fact there are other components to the popular view that need to be addressed, such as Cartesian bifurcation and the idealist school of thought that has crept into the scientific understanding of nature at the so-called fundamental level.

Rene Descartes was instrumental in establishing the foundations of modern science, in that he, as a mathematician-philosopher, tried to simplify what the scientist works with so it would fit his mathematical leanings. In doing so he stripped all accidental categories (quantity; quality; relatives; somewhere; sometime; being in a position; having; acting; and being acted upon) from reality, especially that of quality, in order to leave simple quantity which could then be addressed mathematically. Yet empirical science deals with the other categories, albeit to a lesser degree... and especially qualities such as color. So in effect Descartes gave us a world bereft of color. Yet who sees colorless balls? The problem is if it cannot be quantified then to Descartes it was unimportant. Unfortunately this view has crept into modern science with the consequent problem of math being overused to the point of it becoming the only trustworthy tool. It's a problem especially found in the branch of physics which deals with the infinitesimally small and the cosmically large. This is why I hear so much talk in this discussion about computers as the means to judge the truth of a given concept. For example global warming is said to be supported by the data of scientific research, but only when that data is inserted into computer programs whose designers have a world view that is not necessarily in accord with reality. Just because a computer predicts it will rain does that mean it will?

We have to come back to reality as our measure of truth, realizing even work with computers begins with our five senses, and ends with our mind interpreting the data. Just as we can trust our mind in making coherent the facts of scientific research, we can also trust it to abstract from reality metaphysical truth. Since empirical science works directly with the nine accidental categories, especially quantity and quality, and not what is substantial to the being we study, we need metaphysical research to help us comprehend what we are really "seeing" at the so-called fundamental or atomic level of nature. We need sound philosophy to limit our theories so we don't cross over into absurd ideas like time travel, multiverses, particles constantly being "created" and particles that possess knowledge.
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
well see, the same set of neurons in the brain takes care of ALL of its processing.
The language areas of the brain only process language. The brain has many areas, each structured according to its function.
your very simple example could never do the following:
start a game of chess.
be given a photo and told to ad lib on the red object.
read a letter and describe what was being said.
then go back to the chess game where it left off.
it's simply not doable, and i don't think it ever will with current technology.
Of course it couldn't - it's a simplified model for language acquisition, not a whole brain. Nobody claims a minimal language processing network can do everything a human brain can do, that's a straw man.
i don't consider chess a simple game.
Me neither, that's why I said it could probably handle simple games; i.e. not chess.
computers are very good, i would say excellent, at logical things, but are severely lacking at abstract concepts.
That depends how you structure and use them. Again, neural networks are more appropriate to abstraction than conventional digital processing. Many artificial neural networks have been constructed that can generate or deal with abstract concepts. The Annabell network itself is one.
current technology REQUIRES instructions, there is no way out of this.
from the moment you turn your computer on until you turn it off, it is executing instructions.
the same with your hand held calculator, the microchip in your microwave, ALL current technology requires these instructions.
That's true inasmuch as a neural network implementation on a digital computer must involve instructions to behave like a neuron, but those instructions (in principle) will be common to any neural network, whatever its trained function. Specificity of function lies in the connectivity of those virtual neurons and the training given. In other words, the only instructions involved are those to emulate a neural network; the particular function of the network is at a higher level of abstraction. It would be possible to create a hard-wired artificial neuron without any microprocessors involved, but it would be very complicated and you'd have to make and connect millions of them; quite impractical. They are virtualised using computers because you can make and connect as many as you like with the same code; the key point is that the instructions are for emulating the neurons, not for carrying out the specific task of the network; that is a matter of initial configuration and learning/teaching by example.
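As a toy illustration of that separation (a sketch, not the system under discussion): the only instructions below implement generic neuron behaviour - a weighted sum passed through a nonlinearity - while the function actually computed (here XOR, via hand-picked weights) resides entirely in the connectivity and weight values.

```python
import math

def neuron(inputs, weights, bias):
    # generic neuron: weighted sum passed through a sigmoid nonlinearity
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weight_matrix, biases):
    # a layer is just the same neuron code applied with different weights
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# hand-picked (hypothetical) weights that make this 2-2-1 network
# compute XOR; the identical neuron/layer code with other weights
# would compute a different function entirely
hidden_w = [[6.0, 6.0], [-6.0, -6.0]]   # hidden neuron 1 ~ OR, neuron 2 ~ NAND
hidden_b = [-3.0, 9.0]
out_w = [[6.0, 6.0]]                     # output neuron ~ AND of the two
out_b = [-9.0]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = layer(x, hidden_w, hidden_b)
    y = layer(h, out_w, out_b)[0]
    print(x, round(y))   # → 0, 1, 1, 0 down the rows
```

In practice the weights would come from training rather than being set by hand, but the point is the same: the instructions define neurons, not the task.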

It is a case of functional abstraction. The function of a neural network is abstracted from the instructions that emulate the neurons. Consider the cellular automaton Conway's Game of Life running on a computer; it's a 2D grid of squares or 'cells', each of which can be black or white ('alive' or 'dead'). The cells are all identical and all follow identical rules to change their colour according to how many 'alive' immediate neighbors they have. The instructions to the computer to apply the identical rules to each cell in turn are the only instructions involved. Yet if you set up a particular pattern of 'alive' cells on the grid and let the computer repeat those rules on each cell, you see patterns of cell activity that move and interact (you can even set it up so the interacting patterns emulate a computer, or Game of Life itself). There are no instructions to make patterns, the particular patterns and their subsequent behaviour depend only on the initial configuration.
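That description can be sketched in a few lines (an illustrative minimal implementation): the only coded rules are the identical per-cell neighbour rules, yet a 'glider' pattern travels across the grid - behaviour stated nowhere in the instructions.

```python
from collections import Counter

def step(alive):
    # alive: set of (x, y) live cells; the identical rule for every cell
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in alive
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # birth on exactly 3 live neighbours; survival on 2 or 3
    return {cell for cell, n in neigh.items()
            if n == 3 or (n == 2 and cell in alive)}

# a "glider": nothing in the rules mentions movement, yet after 4
# generations the same five-cell shape reappears shifted by (1, 1)
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # → True
```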
you need at least a basic course in computers to understand why.
I've been building and programming computers since the late 1980s. I've had plenty of courses in that time, from assembler programming to design techniques.
not only are they more reliable, they completely outstrip the processing density of any computer known.
The brain certainly far outstrips the density and efficiency of current technology, but it's equally certainly much less reliable. Neural networks are inherently fuzzy systems, and brains are no exception. From perception to memory, our brains are characterised by their unreliability; fortunately they also have the adaptability, resilience and redundancy to cope, and most of the time, it doesn't really matter.
 

whois

rational
Mar 7, 2015
2,523
119
✟3,336.00
Faith
Non-Denom
Of course it couldn't - it's a simplified model for language acquisition, not a whole brain. Nobody claims a minimal language processing network can do everything a human brain can do, that's a straw man.
this is exactly what i'm talking about.
the whole brain consists of just one set of neurons, no matter if they are grouped or not.
this brain, a single set of neurons, takes care of ALL the processing done by the brain.
this is where the above "intelligent neural nets" just falls flat on its face, your language learning net can't play chess or analyze a photograph.
also, these nets you speak of sound like a weighted system, and we've had this concept for years.

it's going to take a fundamental breakthrough to emulate the intelligence of humanity
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
frumious,
maybe i missed the technical aspects of these systems you are describing.
can you provide links to the language used and the machine it runs on?
Which system are you referring to?
If you mean 'Annabell', the language learning network, here is the published study paper: A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language, and here is the open-source documentation and code (C++) on Github.
It will run on any reasonably spec'd development computer capable of compiling and running C++ with OpenMP v3.0 or higher. Full details are in the manual.
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
this is exactly what i'm talking about.
the whole brain consists of just one set of neurons, no matter if they are grouped or not.
this brain, a single set of neurons, takes care of ALL the processing done by the brain.
this is where the above "intelligent neural nets" just falls flat on its face, your language learning net can't play chess or analyze a photograph.
The brain is many networks connected together. But, as I said, no-one is claiming this simplified model can emulate a whole brain; it's not intended to - it's not even supposed to emulate the human language areas, it's an exploratory language learning system. That it does a surprisingly good job suggests that some problems in information processing previously thought almost intractable - such as language acquisition - are easier than expected once you have an appropriate model.
... these nets you speak of sound like a weighted system, and we've had this concept for years.
Sure, neural networks are a kind of weighted system, and they've been around quite a while (since the first complex nervous systems evolved), but this one is surprisingly capable for an artificial system.
it's going to take a fundamental breakthrough to emulate the intelligence of humanity
Possibly... but I wouldn't put money on it; I suspect the fundamentals are already in place, and it's a matter of mapping, scaling, and technology. Despite the knowledge coming out of the European Blue Brain Project (Human Brain Project) and the US BRAIN Initiative, I suspect that, for commercial reasons, artificially intelligent systems are likely to be restricted to domain-specific tasks, at least in the medium term, although such systems may well be transparently linked from a user perspective, so they present as a single AI with broad capabilities.
 

whois

rational
Mar 7, 2015
2,523
119
✟3,336.00
Faith
Non-Denom
Which system are you referring to?
If you mean 'Annabell', the language learning network, here is the published study paper: A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language, and here is the open-source documentation and code (C++) on Github.
It will run on any reasonably spec'd development computer capable of compiling and running C++ with OpenMP v3.0 or higher. Full details are in the manual.
okay, i thought i might have missed something but it seems i didn't.
from the link:
This perspective has led to the implementation of a class of cognitive architectures called symbolic [35]
-ibid.
symbolic code, AKA tokenized code, has been around for years.
as far as i know, ALL high level code is tokenized.
i guess you might even say assembler code is tokenized.
in this respect, i don't see how annabell can be anything more than the already existing chess learning programs.
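as a minimal illustration of tokenization in the compiler sense (whether this matches what the paper calls "symbolic" is a separate question), Python's own standard-library tokenizer splits a line of high-level code into symbolic tokens before anything executes:

```python
import io
import tokenize

# tokenize a line of high-level code into its symbolic tokens
src = "total = price * 1.2"
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# e.g. NAME 'total', OP '=', NAME 'price', OP '*', NUMBER '1.2', ...
```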

from the link:
Symbolic architectures can realize high-level cognitive functions, such as complex reasoning and planning. However, the main issue of such architectures is that all information must be represented and processed in the form of symbols pertaining to a predefined domain.
-ibid.
what is the equivalent "predefined domain" of the brain.
what does this even mean?

from the link:
The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, meaning of words, role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process
-ibid.
proposed system, this implies it hasn't been achieved, which could be irrelevant.
most importantly, the above seems to suggest that annabell should be able to learn ANY AND ALL languages.
has this been accomplished?

the most telling part though is, the paper neglects a large overhead in computing power.
for example, the machine it runs on, the high level code (that must be written by humans BTW).
and more importantly, this "net" must emulate the brain, which i have serious doubts it will ever do.

yes, i will give credit to the astounding aspects of computers, their tireless ability to sort through vast mountains of data, their incredible speed, their almost foolproof reliability.
OTOH, computers fail miserably at abstract concepts.
computers will never "think".
computers will never have a consciousness.
these will require a fundamental breakthrough in computer technology.
 

Ratjaws

Active Member
Jul 1, 2003
272
37
69
Detroit, Michigan
Visit site
✟24,722.00
Faith
Catholic
Gracchus,
Ratjaws said:
Gracchus,
There is difference and it is not just complexity, nor simply chemical reaction or change of energy states.
Gracchus said:
And what is the difference?

I've laid this out in other posts that I assume you've read but allow me to clarify further. I see that matter comes in two types: inanimate and animate. Living matter is the latter category, which can be broken down threefold: plant life, animal and human persons. I've gone into the difference between these groups but suffice to say these differences are there for everyone to see. Those differences cannot be without cause (form) nor can the latter category have a material cause. It is their form that gives life to the matter that composes their body. Complexity, chemical reactions and energy states are all results of this form acting in matter.

Ratjaws said:
Of course what is substantial in a maple wood plank or chair is the same even while their accidental components differ. If this were not true we would not distinguish plank from chair.
Gracchus said:
What is different is not the type of matter, it is the patterns in which it appears.
Ratjaws said:
Yet you gloss over the fact that they are not living matter despite the fact they came from a maple tree that was alive. Simple observation informs us that the tree shows life by growth and assimilation of nutrients while a plank or chair lack these characteristics of life.
Gracchus said:
Yes, but assimilation and growth are chemical reactions.

Yes there may be patterns but this does not exclude their cause. Again if you take the material a rock is composed of, a plant, animal and person, and break them down you find they are identical at their fundamental level. This is precisely what our material science tells us. Yet something causes this clump of matter to be rock, another clump to be plant, another animal and yet another to be person. It's the form that causes these patterns in their matter. The instruments of science cannot "see" this form but our mind can by abstracting from sense information. This act of our intellect is simple yet profound and it is just as certain as our mind laying hold of truth at the empirical level of reality. The matter of a rock cannot take in food but similar matter of a living being can. Rocks don't grow as living beings do yet they are composed of the same matter at the most fundamental level. We can account for these profound differences only if forms are taken into account. The chemical reactions have their cause not in the matter itself since this same matter, if it were rock, would not have the reactions, but in the form that acts upon the matter.

Ratjaws said:
The key word in what you said is "orderly," the implication is that life has dynamic order while non-living being possesses a non-dynamic order.
Gracchus said:
"Pure" HOH (water) is actually a dynamic process, involving shifts between H2O, H3O+ (hydronium ion), OH- (hydroxide ion), and even trace amounts of H2O2 (hydrogen peroxide). The Oxygen is continually losing and gaining Hydrogen atoms. MOst people would probably deny, however, that water is alive. It is a different set of reactions. Chemicals react to form more stable molecules. But energy flows can reverse reactions. "Life" is simply one set of chemical reactions. It follows the same laws of physics and chemistry as non-life.

Of course it does not violate physical law but by dynamic I mean animate matter has powers that inanimate matter does not. These powers, whether they be vegetative, sensitive or intellective, are determined by the form in the particular "clump" of matter we are looking at. And yes life and non-life both have chemical reactions going on inside but their powers are vastly different. How therefore can the "same laws of physics and chemistry" cause these different kinds of powers? It must be attributed to their forms that differ. For the energy of chemical reactions is, as Einstein made plain, convertible with the matter of those chemical reactions. Matter and energy are equivalent (E=mc^2). Furthermore it is invisible form that causes matter and energy to possess these properties of convertibility.

Ratjaws said:
The problem for you is to explain why the different kinds of order and you cannot do it with the scientific method alone.
Gracchus said:
Why not? How are the "different kinds of order" different?

You yourself have expressed the answer to this question when you used terms like "different patterns" and "chemical reactions." This is the order I speak of. Something is behind it as its cause. Form is the cause of each kind of matter and the order it has. Matter and energy CANNOT account for either their own powers or order, nor can they account for their own existence. Matter is not existence itself nor is energy.
 

Ratjaws

Active Member
Jul 1, 2003
272
37
69
Detroit, Michigan
Visit site
✟24,722.00
Faith
Catholic
Gracchus said:
Which data have I misinterpreted?

You have proclaimed that there are "different kinds of order" but you have not defined the differences.

And "disorganized" matter also becomes organized when subjected to flows of energy. Consider that flowing water can render unsorted rock detritus into boulders, cobbles, pebbles, gravel, sand and clay.

Aristotle was a very smart man, but rather deficient, by modern standards, in physics, chemistry, biology, and even mathematics.

No! Modern science does however have different methods and standards from ancient Greece.

You keep saying it, as if repeating unsupported and ... problematic ... assertions can make it true.

What is "immaterial" is the patterned flow of energy following the observed laws of thermodynamics. Science has long since discarded the idea of "vitalism". Matter is matter, and life is a shifting pattern of matter.

Have you considered that what you desire to be real may not be necessary? "Desire" don't feed the bulldog. Wishing and wanting doesn't make it so.

I regard "metaphysics" as the study of the non-perceptible. To quote Laplace, "I have no need of that hypothesis." "Metaphysics" cannot be demonstrated, or at least I have never seen such.

You have not demonstrated any difference much less a profound difference. It is as if you were saying that I was making an error because I was not taking into account the political relations between fairies and leprechauns. I have no reason to consider such imponderables.

As I have pointed out, flows of energy (or matter) can create orderly, even dynamic, patterns from disorder.

Here is a question: What contributions has "metaphysics" made to anything but emotional states? It makes you feel good because you can fill your ignorance with conjecture and imagination that cannot be confirmed. Other than that, what good is it?

Gracchus,
What exactly is disordered matter or energy? It must be something since you can refer to it. Yet it is not ordered... or organized. My point here is that even disordered or disorganized matter has existence and therefore a cause. How do you account for it? I account for it by pointing you to matter that lacks form. More accurately the form in it remains in potency. It must be activated to be "seen" or sensed as any particular kind of matter we are familiar with. In metaphysics we call this undifferentiated matter prime matter. Once something acts on the matter, like something hot, or an instrument or tool, the form becomes active and apparent to our senses. Without form you go around in circles telling me that matter and energy do their thing. You write volumes on how this order looks but again you neglect cause.

You may regard metaphysics as the study of the non-perceptible as Laplace did but you merely state the obvious. Your quote simply means it is unimportant to your world view, yet it is a flawed perspective that cannot explain why matter and energy exist at all. It cannot explain why matter and energy have power and order. It cannot explain disorder because your world view works with only what is perceptible in matter. You have been taught to accept only the accidens, that which is in constant flux, and ignore what is substantial in material being. You further reduce your view of our world by insisting that empirical science is the only intellectual tool that can give us valid knowledge. Yet as modern science continues to ignore its source... the physics before physics... metaphysics, it does so at the price of less clarity. Modern physics has become less and less intelligible as time has moved on. Just look at the number of "particles" now said to be in existence despite no one ever seeing even an atom (no, we see traces of light on a plate). In fact modern science has run off the beaten path of reality by asking us to believe that a "particle of matter" can be in two places at once or that it can have knowledge. There are many other examples that defy logic and our everyday experience.

For instance, modern physics speaks of time travel as if it were a worthy goal to pursue... yet! ...it has not come to grips with what time is. All our limits to thinking about this subject have been thrown off because we ignore the metaphysics that can keep us on the straight and narrow. Time for the metaphysician is simply a measure of change. Thus when one says they can go back or forward in time by manipulating matter or energy, use of light or wormholes, they simply manifest how the scientific method has gone astray, overreaching its bounds. How can one go forward or backward in changeable being? Likewise for the concept of space. Those who teach modern science have not come to grips with what space is. They make the mistake of thinking we can compress and expand it. With the right technology we can act as if space were not an obstacle to extremely long distance travel. We also speak of space as though there can be "parallel universes". This is confirmed by referring to mathematical formulas and computer simulations as if either is reality itself. If one defines space as a place holder for being, they err. Space to the metaphysician is not an entity. It does not exist. Space is conceived of precisely because being is extended. Time and space therefore are not beings in reality but beings of reason. They are concepts in our mind we use to think and talk about changeable being. So in effect, if one tries to enter a parallel universe they are playing a mind game. If one intends to travel through time they again have moved into their inner universe of thought and have left the real world. Think for a moment: if another universe were to exist, what laws would it have? If the laws were different than ours, how would we know of its existence? If they were the same, why would we consider it another universe, since the word itself, "universe: one verse," connotes the totality of all existing things?
No, to one who studies philosophy these ideas fall into the arena of idealism, where what's in one's mind is of utmost importance in determining reality. On the contrary, from my perspective truth is conformity of our mind to reality.

This is the kind of error one who dismisses metaphysics falls into. Metaphysical reality is not imponderable as you say, but an extension of scientific knowledge. Metaphysics undergirds empirical knowledge. When done right it violates no physical law; instead it puts proper limits on our mind and aids us in conforming ourselves to the real world. In quoting Laplace, that metaphysics concerns what is non-perceptible and cannot be demonstrated, you are half correct. True, the knowledge metaphysics gives cannot be demonstrated by the scientific method, because it deals with what is beyond it. Yet it remains valid knowledge. It is perceptible only in a way different from the scientific method. Both scientific and philosophical knowledge require an act of the senses to initiate them. The empirical kind ends with sense information, while the philosophical goes a step further, to knowledge about being that is much deeper. Scientific knowledge stops at the nine categorical accidens (primarily at quantity and quality), while metaphysical knowledge ends at essential being. It does so by abstracting from the senses information that falls into the substantial category of being. It gives us knowledge of what is immutable and unchangeable in being. On the other hand, empirical science utilizes a straightforward sensitive act that informs our intellect of what is mutable and changeable in being.
 

FrumiousBandersnatch

Well-Known Member
Mar 20, 2009
15,405
8,144
✟349,292.00
Faith
Atheist
...from the link:
This perspective has led to the implementation of a class of cognitive architectures called symbolic [35]
-ibid.
symbolic code, AKA tokenized code, has been around for years.
as far as i know, ALL high level code is tokenized.
i guess you might even say assembler code is tokenized.
in this respect, i don't see how annabel can be anything other than already existing chess learning programs.
You've misunderstood it: Annabell is not a symbolic architecture. The introduction is explaining that the constraints of a symbolic architecture make the subsymbolic processing of the neural network architecture they use preferable, particularly when the input may consist of large amounts of 'noisy' data (i.e. real-world conditions).
what is the equivalent "predefined domain" of the brain.
what does this even mean?
A predefined domain would be some restricted area of interest for which a reasonably complete list of symbols could be assembled for a symbolic system to manipulate. A human brain doesn't have predefined symbolic domains, it has many functional domains (and corresponding functional areas); Annabell's functional domain is language; neither the brain nor Annabell are explicitly symbolic systems - although you could argue that symbolic processing is emergent in both.
...proposed system, this implies it hasn't been achieved, which could be irrelevant.
It is irrelevant - the code is there for you to try for yourself, if you don't believe the example interactions they provide are authentic.
most importantly, the above seems to suggest that annabel should be able to learn ANY AND ALL languages.
has this been accomplished?
Yes, in principle, Annabell should be able to learn any natural language; I seriously doubt it has the capacity to learn all languages, although it might be able to learn two similar Indo-European languages; it's an interesting question. I don't know if they've tried any other languages - probably not yet; you could always ask them - or try it for yourself.
the most telling part though is, the paper neglects a large overhead in computing power.
for example, the machine it runs on, the high level code (that must be written by humans BTW).
Well yes; resources are required to process information; why is this 'the most telling part'? Your own brain uses roughly 20% of your total energy requirements, and about two thirds of that is for neural signalling (the other third is for maintenance & repair). The code written by humans in this context is the neural network emulation - analogous to the genetic & developmental instructions that direct the structural arrangement of the brain's language areas.
more importantly, this "net" must emulate the brain, which i have serious doubts it will ever do.
Why 'must' it emulate the brain? It's an exploratory language acquisition model, no more than that. There are other projects whose ultimate aim is a brain emulation (I linked a couple previously), but this isn't one of them.
.. computers fail miserably at abstract concepts.
What, like learning a language from scratch? ;)
computers will never "think".
Define 'think'.
computers will never have a conscious.
I wouldn't put money on it, but you're in good company:
There is no reason anyone would want a computer in their home.
Ken Olson, president, chairman and founder of Digital Equipment Corp. (DEC)
Heavier-than-air flying machines are impossible.
Lord Kelvin, British mathematician and physicist, president of the British Royal Society
Fooling around with alternating current is just a waste of time. Nobody will use it, ever.
Thomas Edison
The energy produced by the breaking down of the atom is a very poor kind of thing. Anyone who expects a source of power from the transformation of these atoms is talking moonshine.
Ernest Rutherford
-----
...these will require a fundamental breakthrough in computer technology.
Already addressed.
 

whois

rational
Mar 7, 2015
2,523
119
✟3,336.00
Faith
Non-Denom
You've misunderstood it, Annabell is not a symbolic architecture. The introduction is explaining that the constraints of a symbolic architecture make the subsymbolic processing of the neural network architecture they use preferable, particularly when the input may consist of large amounts of 'noisy' data (i.e. real-world conditions).
what would you consider to be a real world analogy of "noisy data"?
A predefined domain would be some restricted area of interest for which a reasonably complete list of symbols could be assembled for a symbolic system to manipulate. A human brain doesn't have predefined symbolic domains, it has many functional domains (and corresponding functional areas); Annabell's functional domain is language; neither the brain nor Annabell are explicitly symbolic systems - although you could argue that symbolic processing is emergent in both.
wouldn't you call a "noun" or a "verb" a sort of symbol?
It is irrelevant -
i thought it might be.
the code is there for you to try for yourself, if you don't believe the example interactions they provide are authentic.
it isn't that i don't believe it.
learning types of programs have been around for a long time.
some of them are quite remarkable in their ability to arrive at correct answers just by asking you specific questions.
medical diagnosis programs immediately spring to mind.
chess playing programs are even more remarkable, even to the point of beating the grand master.
the fact still remains that these programs require, one, a human to write them, and two, a large overhead of computing power, and three, all of these are essentially text based.
in all reality, even given all of their achievements, computers are as "smart" as a brick.
Yes, in principle, Annabell should be able to learn any natural language; I seriously doubt it has the capacity to learn all languages, although it might be able to learn two similar Indo-European languages; it's an interesting question. I don't know if they've tried any other languages - probably not yet; you could always ask them - or try it for yourself.
the human mind does, easily.
this brings up something else too.
it's been shown that after a certain age, if a person hasn't learned to talk, it most likely won't, ever.
plus, humans show a propensity for learning language at a young age, but find it difficult to learn another in later years.
i don't think computers will display these types of reactions.
this implies that we haven't got it (annabel) right in this regard.
Well yes; resources are required to process information; why is this 'the most telling part'? your own brain uses roughly 20% of your total energy requirements, and about two thirds of that is for neural signalling (the other third is for maintenance & repair). The code written by humans in this context is the neural network emulation - analogous to the genetic & developmental instructions that direct the structural arrangement of the brain's language areas.
it was a size comparison along the lines of processing density.
Why 'must' it emulate the brain?
again, to demonstrate we can achieve the processing density of the human mind.
we aren't even close to this.
the grand master beating chess machine deep blue required 64 cores, 64 standard architecture machines.
even if we took the conservative approach, this would require 64 square feet at 6 kilowatts.
i can't even imagine the size of the jeopardy playing machine.
What, like learning a language from scratch? ;)
so far, a specific language from scratch.
but, like i mentioned above, this brings up some interesting questions in regards to learning a language.
once this technology is perfected, it seems it would be able to master ALL languages.
it seems astounding, but remember, a computer is about as smart as a brick.
even you must realize that, if you understand the technology.
Define 'think'.
the best answer i can give is "the day to day wanderings of the human mind"
I wouldn't put money on it, but you're in good company:
without a fundamental breakthrough, i would.
current technology will never achieve the processing density of the human mind.
 

FrumiousBandersnatch
what would to consider to be a real world analogy of "noisy data"?
In this context (text input), it would be things like misspelled words, misplaced or random characters, non-grammatical constructs, etc. Anything that doesn't fit the language schema being trained.
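To illustrate with my own toy examples (these are not from the Annabell paper), here's the kind of variation 'noisy' text input covers, and why a brittle exact-match system fails where a trained network can still cope:

```python
# Hypothetical examples of 'noisy' text input - variants a robust
# language-learning system should still cope with.
clean = "where is the red ball"
noisy_variants = [
    "where is teh red ball",    # misspelled word
    "where  is the red ball?",  # stray whitespace / punctuation
    "where is red ball",        # non-grammatical (dropped article)
]

# A brittle exact-match system rejects every variant outright:
print([v == clean for v in noisy_variants])  # [False, False, False]
```

A subsymbolic system trained on many such examples learns a tolerance for this variation rather than requiring an exact fit to a predefined schema.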
wouldn't you call a "noun" or a "verb" a sort of symbol?
Yes, that's why I said that you could argue that symbolic processing is emergent in brains and networks like Annabell; but they're not symbolic architectures - they have no inherent symbol processing capabilities.
chess playing programs are even more remarkable, even to the point of beating the grand master.
Chess playing programs are, in the main, entirely conventional evaluative deterministic algorithmic systems. Neural networks are unsuitable for that sort of application - that's why a (relatively) simple number cruncher like Deep Blue could beat the finest chess playing mind of its time. You need to understand the fundamental difference between conventional algorithmic computing and the way neural networks function. You can emulate a neural network on a conventional system, just like you can emulate Windows on a Mac; but when considering the function of the emulated system, the underlying computational substrate (conventional digital computer and Mac OS X respectively) is irrelevant (and they are themselves emulations of emulations, by the microcode in the processor chip itself).
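To make the contrast concrete, here's a toy sketch of the deterministic evaluate-and-search style of computation chess engines use (the 'game' and evaluation function here are stand-ins I've made up for illustration, not anything from Deep Blue):

```python
# Toy minimax search - the deterministic, exhaustive style of
# computation behind conventional chess engines (illustrative only).
def minimax(state, depth, maximizing, moves, evaluate):
    """Search the game tree to a fixed depth; return the best achievable score."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)          # leaf: apply the evaluation function
    if maximizing:
        return max(minimax(c, depth - 1, False, moves, evaluate) for c in children)
    return min(minimax(c, depth - 1, True, moves, evaluate) for c in children)

# Stand-in 'game': a state is a number, a move adds 1 or 2,
# and the evaluation is simply the number reached.
moves = lambda s: [s + 1, s + 2] if s < 5 else []
best = minimax(0, 3, True, moves, lambda s: s)
print(best)  # → 5
```

Every step is an explicit, hand-written rule; nothing is learned. A neural network works the other way around: its behaviour comes from trained connection weights, not from an enumerable search procedure like this.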
the fact still remains that these programs require, one, a human to write them, and two, a large overhead of computing power, and three, all of these are essentially text based.
Not necessarily text based; you'll find neural networks behind most advanced speech recognition systems (e.g. Siri, Cortana, Google Now, etc).
it's been shown that after a certain age, if a person hasn't learned to talk, it most likely won't, ever.
plus, humans show a propensity for learning language at a young age, but find it difficult to learn another in later years.
i don't think computers will display these types of reactions.
Yes, that's right; the brain continues its structural development into the early 20s, but most is done in infancy, so it's analogous to incremental development of an artificial neural network while in use. In practice, artificial neural networks are released as fully configured architectures.
this implies that we haven't got it (annabel) right in this regard.
Why? Annabell is an exploratory tool; I suspect it has exceeded its developers' expectations. Again, it's not intended to emulate a brain.
it was a size comparison along the lines of processing density.
...again, to demonstrate we can achieve the processing density of the human mind.
we aren't even close to this.
That's right - although if Moore's Law continues to hold, it may only be another 3-5 years. The challenges are not in processing density or hardware (you can virtualise large neural nets on relatively low-power computers), but in architecture and connectivity.
the grand master beating chess machine deep blue required 64 cores, 64 standard architecture machines.
even if we took the conservative approach, this would require 64 square feet at 6 kilowatts.
i can't even imagine the size of the jeopardy playing machine.
As I already said, Deep Blue was a conventional algorithmic number cruncher. Watson is a massively parallel mixed architecture knowledgebase system - it probably uses neural network filtering at some point - it seems to have some of everything (see The AI behind Watson).
it seems astounding, but remember, a computer is about as smart as a brick.
even you must realize that, if you understand the technology.
I understand the technology, but I don't really know what you mean - and I don't think you do either. A computer is just computational substrate - stuff that can process information; neurons in the brain are just computational substrate too - they must be organised, connected together, and trained, in order to be 'smarter than a brick'. A neural network emulation on a computer must be organised, connected together, and trained, in order to be 'smarter than a brick'. A brain may be orders of magnitude more complex than any artificial neural networks we have today, but the underlying principles are the same.
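A minimal sketch of that point, using nothing beyond the classic perceptron update rule: a 'neuron' with zeroed weights computes nothing useful, and only becomes 'smarter than a brick' once it is connected to inputs and trained (here, to compute logical AND):

```python
# Minimal perceptron (one artificial neuron) trained on logical AND.
# Untrained, its weights are zero and its output is useless;
# training 'organises' it into a functioning unit.
def predict(weights, bias, x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1   # start with a 'blank' neuron

for _ in range(20):                      # a few training epochs
    for x, target in data:
        err = target - predict(w, b, x)  # perceptron learning rule:
        w[0] += lr * err * x[0]          # nudge weights toward correct output
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(w, b, x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

The same principle, scaled up enormously in units, connections, and training data, is what's at work in systems like Annabell; the substrate (simulated here in a few lines of Python) is incidental.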
current technology will never achieve the processing density of the human mind.
I agree; the technology of 5 or 10 years from now probably will - but as I said, that's not the real challenge.
 

Justatruthseeker

Newbie
Site Supporter
Jun 4, 2013
10,132
996
Tulsa, OK USA
✟177,504.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Widowed
Politics
US-Others
We already have systems that are not explicitly programmed to recognise written text (e.g. handwritten text), but do so through training - multiple examples are presented, and the system responses are rated for accuracy (positive & negative reinforcement). In this way, the system learns how to recognise a variety of writing styles - and can make a good fist of recognising a new writing style it has not previously encountered.
Siri, Cortana, Alexa, Google Now, etc., are all based on learning systems; video processing involves an order of magnitude more processing, but will be with us soon enough. It's no longer a question of how, but when.
On that list, only IBM's Watson, the Jeopardy winner, could really be considered a domain-specific AI - and I doubt many in the field would be happy with that. Asimo and Big Dog are robots with limited autonomy, and Deep Blue was just a big number cruncher.
Intelligence is generally taken to be generalised problem-solving capability; on your list, only Watson barely reaches the threshold, and only in preconfigured language searching, filtering, & sorting.


Learning systems are already pre-programmed by an intelligence to learn - to adapt. We are not disagreeing that a computer can recognize handwritten symbols - but without that Intelligent Designer writing the code first, that computer, as someone said, would be simply a paperweight.

You can prove me wrong by taking a computer, wiping all code from the system, and showing me it learns to run itself. Or better yet, take that code and program it to randomly write lines of code to its operating system, and let's see how long it takes to improve itself. I say within moments it will shut down and no longer operate, as those random bits of code introduce errors into the operating system.

You may of course put in an error subroutine which changes the random code back to the original when it doesn't work right.

We can all see the effects that random code has. I am not claiming there is no evolution (mutation), just that it has nothing to do with improving anything.

https://www.google.com/search?q=bir...tKTJAhUW8GMKHQL0CuMQ_AUIBygB&biw=1920&bih=969
 

[serious]

'As we treat the least of our brothers...' RIP GA
Site Supporter
Aug 29, 2006
15,100
1,716
✟95,346.00
Faith
Non-Denom
Marital Status
Married
The comparison between neural nets and biological brains is useful to a degree, but the assumption that they should be equivalent if they work on generally the same principles is misguided.

Let's, instead, look at differences between various brains and computers.

Both brains and computers work with logic gates. The logic gates of brains are slow, though. They can only fire dozens to hundreds of times per second (let's call the upper bound 200 Hz). Computers, on the other hand, can operate at orders of magnitude greater speed. So how can a 2,000,000,000 Hz computer lose out to a 200 Hz brain? The brain is massively parallel. We have on the order of 100 trillion synapses in the human brain. No computer even approaches this.
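The arithmetic behind that comparison can be sketched out (using only the rough, order-of-magnitude figures above, not precise measurements):

```python
# Back-of-envelope comparison using the order-of-magnitude figures above.
brain_rate_hz = 200        # upper-bound firing rate per synapse
brain_synapses = 100e12    # ~100 trillion synapses
cpu_rate_hz = 2e9          # a 2 GHz processor
cpu_parallel_ops = 1       # roughly one operation at a time, serially

brain_ops_per_sec = brain_rate_hz * brain_synapses   # 2e16
cpu_ops_per_sec = cpu_rate_hz * cpu_parallel_ops     # 2e9

# Despite each 'gate' being ten million times slower, the brain's
# massive parallelism wins by about seven orders of magnitude overall.
print(brain_ops_per_sec / cpu_ops_per_sec)  # → 10000000.0
```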

Computers do lots of different jobs, though. Aren't some better suited to massively parallel processing? Yes. Video cards, for example, operate at much slower speeds, but can process vastly more data in parallel. We can use the faster CPU to process that data instead, but there is a big performance hit because it has to do the work more linearly. We can likewise simulate the even more massively parallel brain on a computer, but we take a performance hit, just like we do using a CPU as a GPU.

But can we simulate a simpler brain or a part of a brain? Yes. We can. We can pretty much exactly replicate the functions of a number of simple organisms and, as discussed earlier, we are working on parts of more complex brains.
 

FrumiousBandersnatch
Learning systems are already pre-programmed by an intelligence to learn - to adapt. We are not disagreeing that a computer can recognize handwritten symbols - but without that Intelligent Designer writing the code first, that computer, as someone said, would be simply a paperweight.
Yes, of course. The point is just that a learning system can learn a language by example without any prior linguistic coding.
I am not claiming there is no evolution (mutation) just it has nothing to do with improving anything.
Improvement is a subjective term. If a mutation allows bacteria to better survive antibiotics, it's not an improvement from our point of view, but if the bacteria had a point of view, they would disagree; if a mutation makes a yeast help brew a better flavoured beer, it's debatable whether that's an improvement for either the yeast or us; brewers & beer drinkers might say so, and the yeast would be bred in huge numbers but most would be killed afterwards - swings & roundabouts.

If a mutation in a plant makes its flower more visible or attractive to a pollinating insect, or a mutation in the insect makes it better at recognising the flower, or stronger in competition with mates, or able to hide better from predators, it can be viewed as an improvement for the mutated organism.

Nobody is saying that mutations are necessarily advantageous; current opinion is that of mutations in a population, the vast majority are neutral; of the remainder, most are maladaptive, and only a few are advantageous - but those few can and do make a difference where reproductive advantage is involved. There are also quite a few that work both ways - disadvantageous in some conditions and advantageous in others (e.g. thalassemia, heterozygous sickle-cell trait, etc).
 