As for false dichotomy, I don't see what you mean.
The false dichotomy is in only offering an either-or when there are other choices, such as the one I suggested.
Yes, God could do what you said, if He wanted little clumps of matter that could plan and act for the sake of planning and acting. I'd be hard-pressed to even guess at any purpose to that, though.
One could always play the GWIMW (God Works In Mysterious Ways) card... But that's the point, there is no underlying purpose in evolution - it occurs as a result of natural selection acting on reproducing populations (ultimately driven by the statistical tendency of entropy to increase, driving the maximization of energy dissipation, and so generating complexity).
Purpose is what we associate with our goals and intentions, and because of our tendency to over-attribute agency, we also have a tendency to interpret the world in terms of purpose (e.g. Dennett's 'intentional stance').
I referred to the agency of God as mysterious, since it would be uncaused and eternal. But still, here on earth, intentionality is mysterious, at least in its origin. Plants act - they grow toward light (among other things) - and I don't think we'd say they do it intentionally, because that requires a mind.
The various tropisms of plants are mostly simple chemical responses, but people often use the intentional stance when describing them, even when they know no thought is involved, because actions that produce a benefit can be viewed as having the achievement of that benefit as a purpose.
This is the problem some people have with interpreting the results of evolution - repeated selection of variants with a reproductive advantage leads to the development of strong traits that give the appearance of having reproductive advantage as a purpose.
And mind is mysterious, or if you don't like the word "mystery" we can say "not figured out yet".
I think that's more because it's not a well-defined term - people can't agree on precisely what they mean by it, or they use it in different ways in different contexts. It's mostly used as an abstraction for the operation of the brain in creatures that have them, i.e. thinking, and particularly in creatures that can be construed as having some level of subjective experience (though this is speculation by analogy).
I'd point you to any common dictionary definition of "inference". Only a reasoning being using logic can draw an inference from an association. I'm sure you don't think Pavlov's dog made a reasoned decision to salivate.
Pavlov's dogs were conditioned by association. Perhaps one should distinguish between implicit and explicit logic - for example, when the shadow of a bird passes over a mouse, it will freeze or run for cover because it associates the shadow with danger from above; there is an implicit logic in that response - e.g. "if shadow then danger" - but such logic is typically hard-wired (or acquired through conditioning). To me, this is the foundation of inference in its simplest limit - it takes a single premise and produces a conclusion; it's so simple that it doesn't need explicit logic, the logic of association is sufficient. Anything more sophisticated requires explicit logic with transitive relations.
Implementing explicit logic involves a level of abstraction, typically where the associations are produced through the operation of a general-purpose logic engine (i.e. information processing) that can process a series of premises and produce a logical conclusion. If 'only a reasoning being' can do this, then an inference engine counts as a 'reasoning being'. I'm OK with that. I'm also fine with restricting the concept of inference to the explicit logic I described above.
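The distinction can be sketched in code. Here's a minimal, hypothetical forward-chaining engine (names and rule format are my own illustration, not any particular system): a one-rule run is the mouse's hard-wired association, while chaining rules transitively is the explicit logic I mean.

```python
# A minimal sketch of a forward-chaining inference engine (illustrative only).
# Facts are atoms; rules are (premise, conclusion) pairs; the engine
# repeatedly applies rules until no new conclusions appear.
def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)  # draw the conclusion
                changed = True
    return facts

# The mouse's hard-wired association is the single-step limit;
# the second rule needs transitive chaining to fire.
rules = [("shadow", "danger"),      # if shadow then danger
         ("danger", "take cover")]  # if danger then take cover
print(infer({"shadow"}, rules))
```

With only the first rule, this is just association; with both, the engine derives "take cover" from "shadow" via a premise it was never directly given - which is what I'd call explicit inference.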
I've cited a study which suggests that not every aspect of consciousness is produced by the brain.
If you mean the one you quoted as saying, "The subjective experience is thus inconsistent with the neural circuitry", this was simply confirming that there is no stable high-resolution neural image map - our subjective experience, as is so often the case, is misleading. I explained that the apparent image we perceive is a predictive artefact rather than a real-time perceptual image.
What occurs is that the poor-resolution peripheral detail is filled in on the fly, as needed, by visual saccades, giving the impression that the whole visual field is high-resolution (this kind of technology is being incorporated into VR headsets to increase apparent resolution while reducing computational requirements). The parts of the visual field that are not in foveal focus are filled in with a rough expectation or extrapolation, as occurs with the blind spot. Jeff Johnson, in 'Designing with the Mind in Mind', explains it quite well:
"The resolution in your peripheral vision is roughly equivalent to looking through a frosted shower door, and yet you enjoy the illusion of seeing the periphery clearly. … Wherever you cast your eyes appears to be in sharp focus, and therefore you assume the whole visual world is in focus.
If our peripheral vision has such low resolution, one might wonder why we don’t see the world in a kind of tunnel vision where everything is out of focus except what we are directly looking at now. Instead, we seem to see our surroundings sharply and clearly all around us. We experience this illusion because our eyes move rapidly and constantly about three times per second even when we don’t realize it, focusing our fovea on selected pieces of our environment. Our brain fills in the rest in a gross, impressionistic way based on what we know and expect. Our brain does not have to maintain a high-resolution mental model of our environment because it can order the eyes to sample and resample details in the environment as needed (Clark, 1998).
For example, as you read this page, your eyes dart around, scanning and reading. No matter where on the page your eyes are focused, you have the impression of viewing a complete page of text, because, of course, you are.
But now, imagine that you are viewing this page on a computer screen, and the computer is tracking your eye movements and knows where your fovea is on the page. Imagine that wherever you look, the right text for that spot on the page is shown clearly in the small area corresponding to your fovea, but everywhere else on the page, the computer shows random, meaningless text. As your fovea flits around the page, the computer quickly updates each area where your fovea stops to show the correct text there, while the last position of your fovea returns to textual noise. Amazingly, experiments have shown that people rarely notice this: not only can they read, they believe that they are viewing a full page of meaningful text (Clark, 1998). However, it does slow people’s reading, even if they don’t realize it (Larson, 2004)."
There are others, such as studies on brain plasticity.
I'm familiar with brain plasticity - how does it support the idea that not all consciousness is produced by the brain? I'd like to see those studies.
But since you mentioned "brain as receiver" - although that's not my view, it's a good analogy for challenging your view somewhat - that is to say, your argument is like saying: if I mess with a radio, I can affect its output, therefore there's no such thing as radio waves - which would be false.
Yes, it would be false. But I think the brain deserves to be more than a radio - a TV makes a more, er, visual analogy; if, by messing with the TV, you could change the gender of a newsreader, or the layout of the studio, or the news itself, or change the plot of a play or its actors, or change the schedule of programmes, etc., i.e. change the content of what was playing, that would be strong evidence that what you were watching was not a broadcast and what you were watching it on was not a TV.
I'd think an expectation of seeing anything would have to be based on having seen it before.
It can be much broader than that - for example, you expect to see similar things in similar contexts, i.e. expectation by association. The brain does much of its work by pattern-matching and associative retrieval. This is why pareidolia is so common.
I guess I'll try an appeal to emotion now.
Do you ever worry that we're going to destroy humanity-as-we-know-it?
I think it's inevitable that humanity-as-we-know-it will be 'destroyed' by our actions; either by deliberate or accidental transformation. I think it's unlikely that we'll cause our complete extinction, but there are plenty of ways we could decimate the population and destroy our current way of life. On the other hand, it's equally - if not more - likely that nature will do it for us. I don't think we'd do well in a
Carrington-level or bigger solar storm, or if a supervolcano blew (e.g. Yellowstone), or volcanism like the Deccan Traps, or a Chicxulub-scale or bigger asteroid impact, or a nearby gamma-ray burst, supernova, or hypernova, etc.
How would you feel if science hits a brick wall and can proceed no further, as it may have done with quantum mechanics? Trajectories don't continue forever, and it's entirely conceivable that there could be a post-science world.
Quantum mechanics research and discovery continues apace. Science will only stop when there's no more to discover, or no more scientists. In my opinion, the latter is far more likely than the former.
At least one person has to have the goals of the company; ideally every person should. That's why it's called a "company".
A person having the intended goals of the company doesn't mean the company necessarily follows those goals. Corporate agency has been a subject of philosophical debate for years - see, for example, Pettit's 'Group Agency', and Mulgan's 'Corporate Agency and Possible Futures' (particularly the section 'The Present Debate About Corporate Agency'). The issue in the current debate is not whether groups can act with an apparent agency and goals that the individual members don't have or support, but how such group agents should be treated.
To me, a single living cell splitting itself in two seems goal-seeking and purposeful, not to mention a protein folding. But I guess it's a matter of interpretation, as you say. Or common sense.
Yup; if you look really closely, you'll see it's a complex sequence of chemical cascades regulated by gene activity (more chemical cascades).
My concern is that on the naturalistic view it cannot be "true". 2 + 2 = 5 would appear axiomatic to us if it were useful.
Science doesn't test for 'truth', it tests for consistency with observation. If you test the predictions of a belief or claim and they are not consistent with what you observe, the belief or claim is wrong. 2 + 2 = 4 is the result of applying the axioms of mathematics; it's not itself axiomatic - it's correct by definition. The axiomatic consistency of mathematics is what makes it useful.
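As an illustration, the way 2 + 2 = 4 falls out of the axioms can be sketched with a toy Peano-style encoding (the encoding here is my own illustration, not 'the' axioms of mathematics):

```python
# Toy Peano-style naturals: ZERO and a successor wrapper.
# Addition is defined by the usual two axioms:
#   a + 0    = a
#   a + S(b) = S(a + b)
ZERO = ()
def succ(n):
    return (n,)

def add(a, b):
    return a if b == ZERO else succ(add(a, b[0]))

two = succ(succ(ZERO))
four = succ(succ(succ(succ(ZERO))))
assert add(two, two) == four  # 2 + 2 = 4 follows from the definitions alone
```

Nothing here is 'discovered' by testing the world - the final assertion is forced by the two defining clauses of `add`, which is what 'correct by definition' means.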
We are animals, biological organisms. Do you think that an amoeba or a chicken can "do" anything true? If not, what makes you think human reasoning can be true when it's just another thing humans "do" to survive? We eat food, but eating is not true, and doesn't lead to any truth. Why is reasoning different? It seems you place reason on the same pedestal I do, except without grounds for doing so.
Truth is usually taken to be accordance with fact or reality; your usage seems peculiar. Human reasoning can be demonstrably correct, i.e. it can make true statements in formal axiomatic systems, because statements in such systems are (generally) provably correct; i.e. they can be shown to be correct using the axioms.
In other words, I'd feel like what I actually am?
Kind of - the way I see it, the conscious self isn't a separate entity, it's a useful part of the whole, but its perceptual reality is not what it seems to be, it's constructed and tweaked so that it can function effectively, and the processes behind it are mainly hidden (various illusions and perceptual anomalies give clues that all is not quite as it seems).
But my question, re-worded, was, "what evidence could support my view"? (My view that "I" am somehow "real".) How would you falsify my view?
You are 'real'. But just as the experience of a phantom limb is real, the causal processes behind your experiences of the world are not necessarily what they appear to be. I'm not sure your view is falsifiable - if it is, it should provide a testable prediction or predictions that could falsify it. Does it?
I ask because everything you've mentioned I could just as easily chalk up to "those are the means by which God set up the "I" to operate". None of this experimental evidence actually goes very far to support your philosophical view.
Of course you could; the God hypothesis can account for anything. If you do that, you ultimately end up with the claim that God set everything up so it looks exactly as if He didn't set anything up; or maybe you end up with a deist God that set up the initial conditions of the universe and let it run. Quite a few believers do just that. Science just carries on studying how the world behaves - you can attribute that behaviour to whatever you like, as long as it's consistent with what we observe when we test hypotheses.
Not sure what you mean by my philosophical view here - if you're referring to my atheism, my lack of belief in God, and all supernatural, paranormal, magical, etc., phenomena, is a result of there being no plausible evidence in support of such phenomena. I don't need evidence to support a lack of belief, but to support a belief (or, by my values, to have confidence that it's the best available model).
Your examples and your definition above are fine, it's just that emergence lacks explanatory power. It's not unlike "God did it". You ask me exactly how little gods make decisions, and I can't tell you. I ask you exactly how consciousness emerges, and you can't tell me.
Sure, it's the 'hard problem' of consciousness - why we have subjective experience at all, why there is 'something it is like' to be us. But my view is that this will remain inexplicable precisely because of its subjectivity. However, we can explain how decisions can be made, we know how sensory, short-term, and long-term memories are made and processed, and where this happens. We already have empirically based models for much of what the brain does, including various aspects of consciousness. Currently, they're fairly limited, but we've only had the tools to study the fine details for a few years.
Today I heard a lecture that described how it's now possible to suppress traumatic memories and/or trigger positive memories in contexts with traumatic memory associations, by shining red or blue light into the brain to stimulate or suppress the relevant memory pathways. This has been done with mice, and the implications for treatment of PTSD, flashbacks, and other debilitating traumatic recall conditions are obvious (as, of course, are the ethical issues).
The point is that our understanding of brain function and pathways is reaching the point where we have the possibility of directly intervening to rectify debilitating mental problems with the same specificity that a pacemaker regulates an errant heartbeat. We'll be able to drop the crude bludgeon of drug treatments with their side-effects for precision mental engineering.