kharisym
You can create a piano-playing robot and teach it how to play Für Elise, but you cannot instill in that robot an appreciation for the beauty of Ludwig van Beethoven's music. You can even program the robot with an understanding of what "beauty" means to humans, examples of beautiful things, and encyclopedic analyses of them. But in the end, the machine knows what beauty is only because we've told it what beauty is.
We're working on that. Three research programs under way right now could pave the way to 'computational comprehension'.
First are the Blue Brain and FACETS programs in Europe, which are tightly linked. Blue Brain is currently attempting to reverse-engineer the neocortical column of the mouse brain and simulate it to a high degree of neurological and biochemical accuracy. FACETS takes the database from Blue Brain and builds a chip-based model instead of a virtualized one, allowing it to run at around 100,000 times the speed of an organic network.
The next one I forget the name of, but it has DARPA funding, so you can probably dig it up on a list somewhere. They're studying the methods cat brains use to analyze pictures and are attempting to build a functional model of a cat's occipital lobe. This is far more aggressive than the Blue Brain/FACETS projects, but it also has a higher chance of failure, given the steeper knowledge base they have to hurdle and the sheer complexity of the network they're attempting to mimic. With our current technology, it's pretty much guaranteed that they'll either have to get really creative or significantly trim down the number of nodes.
The last one off the top of my head is more recent than the other two and isn't directly dealing with neural networks. It's an experiment testing the limits of indirect distributed computing over massive numbers of cores, using neural network models and packet-bus infrastructure. It's called SpiNNaker, and it could provide the technology necessary to make neural computing a cheap reality. Some theories hold that the limit to complex behavior in neural networks is simply one of scale: if we can create a neural network comprising millions of nodes instead of hundreds or thousands, it could achieve some level of sapience and self-action. That isn't my opinion, personally; I side with the school of thought that holds basic pre-structuring is needed to make sapience achievable, and some experiments support this. Namely, I recall a research paper from a while back stating that no matter the size of a neural network, it can hold a maximum of around 500 bits of related data before bleed takes place and the stored data incurs noise.
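As a hedged aside on that capacity point: the classical result for fully connected Hopfield-style networks (due to Amit, Gutfreund, and Sompolinsky) is that a network of N nodes can reliably store only about 0.138·N random patterns before recall errors set in. This doesn't reproduce the specific 500-bit figure from the paper I'm recalling, but it illustrates the same idea, that storage degrades past a scale-dependent limit. The function name below is my own:

```python
# Rough illustration (my own sketch, not from any of the projects above):
# the classical Hopfield capacity estimate says a fully connected network
# of n_nodes units reliably stores about 0.138 * n_nodes random patterns
# before "bleed" (recall errors) appears.

def hopfield_capacity(n_nodes, ratio=0.138):
    """Approximate count of reliably storable random patterns."""
    return round(ratio * n_nodes)

for n in (100, 1000, 1_000_000):
    print(n, "nodes ->", hopfield_capacity(n), "patterns")
# e.g. 1000 nodes -> 138 patterns
```

The point being: adding nodes raises capacity only linearly, so scale alone may not buy the kind of qualitative jump the pure-scale camp is hoping for.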
It's interesting to note that neural networks already show a disturbingly large amount of 'comprehension' of images, even in basic forms such as the Hopfield networks I've worked with. They're capable of building relational structures between disparate stimuli: take a sufficiently large Hopfield network, plug a camera and a microphone into it, and in time it can begin relating your mom's voice to her image. Then have her talk, or show it her face, and it'll light up nodes indicating acknowledgment of her presence. That's rudimentary memory, very similar to human memory.
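To make the Hopfield behavior concrete, here's a minimal pure-Python sketch (my own toy example, not code from any of the projects mentioned): store one bipolar pattern with the Hebbian outer-product rule, corrupt it, and watch the network recover the original from the degraded cue. That completion-from-partial-input is the same mechanism that would let a bigger network "recognize" a face from a voice:

```python
# Toy Hopfield network: Hebbian training plus synchronous recall.
# Patterns are lists of +1/-1 values.

def train(patterns):
    """Build the weight matrix from the outer products of the patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-connections
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Repeatedly update every node toward the sign of its input field."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
w = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]                 # flip one bit: a degraded stimulus
print(recall(w, noisy) == pattern)   # prints True: the memory is restored
```

A real camera-plus-microphone setup would just be this at much larger scale: the joint audio/visual pattern is stored, and presenting either half pulls the whole association back out.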