Yes, in principle. In practice, it's tricky - even the simplest life we can observe today has had 3.4 billion years of evolution. We don't know what the earliest, simplest replicators were like, and we're not certain of the environment they appeared in. But there's been some very promising work in recent years on a variety of hypotheses and plausible early-Earth environments.
We may be fairly close to producing a simple replicator that can evolve, but there could be a probabilistic stumbling block: although life appears to have got started relatively soon after conditions were suitable, that's 'relatively soon' on geological timescales. It's possible, even likely, that the precise conditions needed are rare, and even in ideal conditions, replicator assembly may be extremely unlikely. However, when you have a whole planet of environments and hundreds of millions of years of reactions, even the extremely unlikely becomes probable. In the wider scheme of things, for those still sceptical about the probabilities, there is a whole universe with trillions of potentially suitable planets, so even literally astronomical odds against become near certainties.
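To put toy numbers on that intuition (purely illustrative assumptions on my part, not estimates from the literature):

```python
import math

# Illustrative assumptions only - not real estimates:
p = 1e-14        # assumed chance of replicator assembly per site per year
n = 1e6 * 1e8    # hypothetical 10^6 candidate environments x 10^8 years

# P(at least one success) = 1 - (1 - p)^n, which for tiny p is ~ 1 - exp(-p*n)
print(1 - math.exp(-p * n))  # ~0.63; scale up either number and it approaches 1
```

The point is just that a minuscule per-trial probability gets multiplied across a colossal number of trials.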
So scientists working in the lab may have the odds heavily stacked against them.
Any idea as to what kind of intuitions the spirit may provide?
Anything else it provides? One can't help feeling that if its only involvement is some of our intuitions, it's pretty close to not involved at all...
Still not sure what you're asking here. By 'patterns of activity', I'm talking about various paths of neuronal signalling within and between neuronal networks that can be broadly categorised or distinguished - in much the same way as tracking someone's movements over an extended period can reveal 'patterns of activity' in those movements. Consciousness is characterised by localised clusters of neurons signalling within the cluster and also with other clusters in widely dispersed areas of the cortex and midbrain, typically in synchronised rhythms and over characteristic timescales. These dispersed areas include those known to contribute to specific aspects of conscious experience, e.g. sense of self, bounds, location, ownership, agency, planning, visualising, etc. When consciousness is absent, this widescale synchronised signalling is typically absent.
I wouldn't say 'merely' - subjective experience is crucial to who and what we are.
OK. I think it's more plausible that the idea of having a soul developed from early superstitious and animistic beliefs, partly prompted by our hyperactive agency detection device (HADD), and also by the feeling that something seems to leave the body at death. Of course, it's a hypothesis from indirect evidence, but it's plausible, falsifiable, and more parsimonious than invoking an additional unexplained, unobserved entity.
The analogy did not involve dualism. If you find that distracting, take a watch - is it any less of a watch because it's stopped and needs winding? It depends on your definition of 'watch': it's less of a timekeeper, but most people would say it's a stopped or run-down watch.
I think the individual is more than the mind, and I actually said, "It's a set of communicating processes, dynamic patterns of neural activity in the brain."
As I already said, it's up to you what you think personhood or individuality involves. I have only said that consciousness involves a particular mode or set of those dynamic patterns of neural activity.
'Skillful' means having the ability to do something well, or having expertise. I think it's redundant in a definition of knowledge, as is 'innately'. I'd agree that "Knowledge <about the world> emerges from having the innately skillful ability to determine truths about the world." But even so, that's not a definition; how it emerges is not a definition of what it is.
That's simply not the case. Skill isn't necessary to gain knowledge; as long as relationships between items of information (e.g. facts or truths about the world) can be identified, however unskilfully, then knowledge is acquired.
Making a system non-deterministic means introducing randomness, which increases uncertainty, making knowledge unreliable.
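A toy sketch of the point (not a model of any real system): a deterministic evaluation is repeatable, so it can be checked; inject randomness and the conclusions vary from run to run:

```python
import random

def deterministic_eval(evidence):
    # same evidence in, same conclusion out - repeatable, hence checkable
    return sum(evidence) / len(evidence)

def nondeterministic_eval(evidence, jitter=0.5):
    # the same evaluation with randomness injected: conclusions now vary
    return deterministic_eval(evidence) + random.uniform(-jitter, jitter)

evidence = [0.9, 1.1, 1.0, 0.95]
print({deterministic_eval(evidence) for _ in range(5)})               # one value
print({round(nondeterministic_eval(evidence), 3) for _ in range(5)})  # five different values
```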
AI systems that learn are increasingly not directly programmed, but are designed as neuromorphic networks that learn much as biological brains do, by example, i.e. by trial and error. You might be interested in ANNABELL, an AI that can learn the rudiments of a language and communicate using it tabula rasa, i.e. starting without dictionary, syntax, or grammar. It learns much as a baby does, by interaction with a trainer. Original paper and links to software & examples here.
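This is nothing like ANNABELL's actual neural architecture (see the paper for that), but the general idea of tabula-rasa learning by interaction can be sketched in a few lines - the learner starts with no associations at all, tries responses, and the trainer's feedback determines which ones stick:

```python
import random
from collections import defaultdict

# stimulus -> response -> learned association strength (starts empty: tabula rasa)
weights = defaultdict(lambda: defaultdict(float))

def respond(stimulus, options):
    # use the strongest learned response, otherwise explore at random (trial and error)
    known = weights[stimulus]
    if known and max(known.values()) > 0:
        return max(known, key=known.get)
    return random.choice(options)

def train(stimulus, response, reward):
    # the trainer's feedback strengthens or weakens the association
    weights[stimulus][response] += reward

options = ["ball", "cup", "dog"]
for _ in range(20):  # repeated interaction with the trainer
    guess = respond("round toy", options)
    train("round toy", guess, 1.0 if guess == "ball" else -0.5)

print(respond("round toy", options))  # -> 'ball' once training has taken hold
```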
Unless you have a strange definition of 'truth' (a fact about the world?), an AI can check for itself whether it has learned something true about the world by the same means that we do - repeated measurement/observation, and/or verifying predictions based on what has been learned.
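For example (a toy sketch - measure() is just a stand-in for whatever sensor or experiment is available):

```python
import random
import statistics

def measure():
    # hypothetical noisy observation of some quantity (e.g. local gravity)
    return random.gauss(9.81, 0.05)

samples = [measure() for _ in range(100)]  # repeated measurement/observation
learned = statistics.mean(samples)         # the learned 'truth'

# verify a prediction based on what has been learned against fresh observations
fresh = statistics.mean(measure() for _ in range(20))
print(f"learned={learned:.3f}, prediction error={abs(fresh - learned):.4f}")
```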
Huh? It's a definition - it doesn't have a truth value. You can argue how useful or coherent it is, or whether it's broadly in line with other definitions of the word, but justification and belief aren't relevant. It just says, "This is what I mean when I use this word".
Sure, she can have knowledge of the things she imagines. If, instead of imagining items of information, she uses items of information about the world (facts), she can have knowledge of the world; and a mathematician can gain new knowledge about mathematics by manipulating mathematical information in his head - likewise a physicist with physics (e.g. Hawking).
No; my off-the-cuff definition was, "...information about the relations between certain items of information, that can be mapped onto similar items of information in similar contexts". 'Assertion' is a confident and forceful statement of fact or belief. They're entirely different.
That's rather a poor example, but if it was something that I could obtain information about, and evaluate the probabilities for, I could give you an answer based on that evaluation. The fact that my evaluation is deterministic actually makes it more reliable than an evaluation that is partly non-deterministic (random).
I really don't see your point. It sounds like you're conflating determinism and fate, but I'm not sure.
Consider an autonomous car; it can be given a goal (destination) and constraints & rules (stay between the lines on the road, obey road signs, avoid collisions, etc.), and it can then travel to the destination, following those constraints, by acquiring facts about the world and applying its knowledge of the constraints and rules to them to determine how to proceed.
One could even say it acquires that information more skillfully than a human, because it has more types of sensors, which are more sensitive than ours. Whether it gains and applies knowledge from that information more skillfully than a human is debatable - as of now, judged by outcomes, they're typically more skillful in normal conditions but less skillful in abnormal conditions (because they're less flexible in behaviour).
Perhaps the significant difference, relevant to knowledge, is that an autonomous car doesn't have metacognition; it may have metaknowledge, i.e. it may know what it knows and what it doesn't know in its knowledge domain, but it doesn't (yet) know that it knows; and it doesn't have an understanding of what it knows, in the sense that it can't abstract and apply the relational principles underlying its knowledge - but then, it doesn't need to.
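To make that goal/constraints/rules structure concrete, here's a minimal sketch (all names and values invented):

```python
def sense():
    # stand-in for camera/lidar/GPS input: facts acquired about the world
    return {"lane_offset": 0.3, "obstacle_ahead": False, "sign": "speed_50"}

# knowledge of the constraints and rules
RULES = [
    lambda w: "steer_left" if w["lane_offset"] > 0.2 else None,  # stay between the lines
    lambda w: "brake" if w["obstacle_ahead"] else None,          # avoid collisions
    lambda w: "limit_50" if w["sign"] == "speed_50" else None,   # obey road signs
]

def decide(world):
    # apply the rules to the acquired facts to determine how to proceed
    return [action for rule in RULES if (action := rule(world)) is not None]

print(decide(sense()))  # -> ['steer_left', 'limit_50']
```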
Because that dynamic structure, made out of water and carbon and some other things, has sensors that feed it data about the world, storage for information and knowledge, and a sophisticated processor that can pattern-match incoming data with the stored information, and use stored knowledge to make (fairly) reliable deductions and inferences from it.
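In crude schematic form (invented toy data, obviously nothing like the brain's actual implementation):

```python
# stored information: patterns previously learned from the world
stored_info = {"dark clouds": "rain likely", "clear sky": "rain unlikely"}

# stored knowledge: a learned relation that licenses an inference
stored_knowledge = {"rain likely": "take an umbrella"}

def process(sensor_data):
    # pattern-match incoming data with stored information...
    matched = stored_info.get(sensor_data)
    # ...then use stored knowledge to draw a (fairly) reliable inference
    return stored_knowledge.get(matched, "no action needed")

print(process("dark clouds"))  # -> 'take an umbrella'
```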
I didn't say they were wrong; I echoed what you said, "you might think that your points have validity, but that is just the perception that was determined by your material substrate."
In other words, what we think about the validity of our respective points is determined by the information and knowledge in, and functioning of, our brains, which are all the result of the interaction of our unique genetic inheritances with our unique life experiences. I don't know about you, but I want my decisions and choices to be based on my accumulated life experience, what I've learned about the world - that's what makes them uniquely my decisions and choices.
I don't think it is self-defeating - I don't see how you reach that conclusion; self-defeating how?
The quote about free will not being a choice was an Isaac Bashevis Singer witticism. It encapsulates the idea that, in a deterministic world where we don't have detailed conscious access to all the determinants of our mental states, our uncoerced and unconstrained decisions and choices will inevitably feel like the exercise of free will.
My view is that subjectively, we do have free will, if free will means being able to make decisions and choices according to our personal preferences, desires, goals, etc. However, objectively, those preferences, desires, goals, etc., are the result of complex deterministic processes arising from our life experiences, etc.
If free will doesn't mean being able to make decisions and choices according to our personal preferences, desires, goals, etc., I'd be interested to know what it does involve, in practical terms.