I think this question of causal primacy is fundamental to our conversation. You think that the substrate causes the emergent property; that the objective, measurable realities cause subjectivity; that the parts cause the whole; that the mechanistic material causes the organism. This is precisely the inference that I see as unjustified.
Emergence is described as "...the condition of an entity having properties its parts do not have, *due to interactions among the parts*." - Wikipedia (my italics). It's the interactions of the parts that are causal; without the parts, there are no interactions and so no emergence.
I do want to at least point out that there is a battle of terms going on underneath this, for no one denies that substrates are more causally primary than that which emerges from them. Yet this terminology is based on a modern understanding. Aristotle would not have used the terms that way. He would have seen the substance as more primary than the material cause in an ontological sense. That is to say that the material plays only a small role in the causal explanation of the entity. Ergo: contrary claims will identify things like the substrate differently.
I don't understand the constant emphasis on primacy - there are various relationships between the entities we're considering which can be viewed in different ways in different contexts.
Intent in the context of our conversation about consciousness seems to rest on the idea of intentionality and active directedness. When I write a letter I actively direct the muscles in my arm and hand to control the pen in such a way that it produces visible English words. Intentionality is inevitably present in this act. This aspect of intent may become important in the context of our conversation about consciousness, objectivity, and subjectivity because it gets at the active/passive distinction. I think that is perhaps the core problem with the qualitative difference between objectivity and subjectivity: passivity and activity.
Intent isn't necessarily active; you can have the intent to do something but not do it. Intent is, effectively, a minimal plan to make an attempt (which may involve planning) to achieve a goal. I don't really see how passivity and activity are particularly relevant to objectivity and subjectivity - perhaps you could elaborate or give some examples.
... isn't this just to say that there is an important phenomenological way in which subjectivity is more primary than objectivity?
Sure, in as much as phenomenology is, broadly, the study of the contents of consciousness & subjective experience; one might also say that phenomenology is the appropriate way to study the interacting patterns of neural activity in the brain from the subjective viewpoint.
In your chain of communication the very first thing is subjective experience. That doesn't directly contradict the claim that objective realities such as neurons cause subjectivity, but it complicates it considerably.
The idea was to emphasise the fundamental dichotomy between the subjective and objective views and the inaccessibility of the subjective to the objective, not the relevant causality. If we can only communicate in terms of shared objective experiences, it is equally possible to suggest that the objective experience (whether interoceptive or exteroceptive) 'comes first'.
I agree there is a correlation between patterns of neural activity and consciousness. Whether they are two sides of the same phenomenon, I do not know.
The evidence from neuroscience suggests that they are. As far as we can tell, consciousness develops in infants as their brains become competent, fragments in old age as dementias destroy the brain, divides when the corpus callosum is divided, and every observable and reportable aspect of it that I know of can be modified by interference with specific areas of the brain in specific ways. It appears to have evolved where a high level of behavioural flexibility has a selective advantage, but is built on and dependent on the relatively inflexible modular heuristics of earlier evolutionary times.
Or they might ask, "Does X produce consciousness?" I'm honestly not even sure how rigorous the scientific approach is in this case. The human being is so complicated... Suppose the scientist identifies 1,000 correlates to consciousness (which are always present when the subject is reportedly conscious). What relation do those 1,000 correlates have to consciousness? Is it causal? A sufficient condition? If the scientist identifies these 1,000 correlates in a comatose patient has he demonstrated consciousness? Are these 1,000 correlates the entirety of the ontological correlates to consciousness? A tenth? A hundredth? A millionth?
We use the behavioural correlates of consciousness to judge our fellow man conscious every day. In fact, consciousness has been identified in some patients thought to be in a persistent vegetative state, by measuring their brain activity - first discovered when one such patient was scanned and showed high levels of cortical activity in the areas involved with semantic processing and sentence analysis in response to speech. Consciousness was confirmed by asking the patient to visualise either playing tennis or walking around her home as 'yes' and 'no' indicators. Subsequent questions consistently produced two distinct patterns of activity which corresponded to correct yes/no answers about the patient's life, etc. This procedure has been repeated for a number of patients previously thought to be vegetative.
If consciousness is fully caused by objective, "passive" entities, then intent is emergent and ultimately reducible to these objective correlates (or, in this case, causes). But then the active is reducible to the passive, and this in turn means that a human action such as intending is reproducible by forces outside the agent, denying the idea that the agent is the locus of his action. There ends up being really no difference between the things I do and the things that are done to me.
I don't agree - if I understand your point - the emergent phenomenon, human agency & its intent, are emergent from the interactions of neurons in the brain that are not passive and not 'outside' the agent, any more than the birds in a large swooping and twisting flock are passive and outside the flock entity.
(Apparently this is spiraling into the determinism/agency question, which often implicates free will. Avoid it if you can.)
I have a simple description of free will that I currently find satisfactory - unsurprisingly, it involves the dichotomy between the subjective and objective viewpoints.
Actually I don't see utility, agreement, and discourse as primary here. I would rather see reality and understanding as primary. I would rather you yourself come to a complete and incommunicable understanding of reality than limit yourself for the sake of utility and communication.
OK.
And yet there may be some deeper reason why someone sees the objective as primary. Earlier you implicated causality (and ontology) in that claim, which goes beyond utility. I doubt the average emergentist would be content to rest their system on utility. Of course, there may also be deeper reasons why someone sees the subjective as primary.
As I said before, the idea of primacy is contextual - I'm suggesting that when considering the objective and subjective views, people tend to start with the objective and what it means for the subjective view because it's our common language, it's all we can agree on. As Wittgenstein said, there's no private language.
In terms of causality, it seems to me that the substrate is primary as explained above. At the emergent level, the patterns of interactions of the substrate elements have their own causal relationships and behavioural rules, but they're ultimately dependent on substrate element activity.
I suspect part of the problem is the common implicit reification of consciousness as if it is an entity in its own right. As I see it, consciousness is a process; it starts and stops and can be interrupted and interfered with; being informational, it has more in common ontologically with Conway's Game of Life than it has with flocks, swarms, and shoals, yet even a trivially simple substrate like CGOL is capable of supporting universal computation and potentially, given suitable inputs and effectors, physical influence.
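The CGOL comparison can be made concrete. The entire substrate rule set fits in a few lines, yet it supports patterns like the glider: a self-propagating process that is not identical with any particular set of cells, since after each period every cell it occupies is different. The sketch below is my own illustration (the `step` function and coordinates are not from any particular library):

```python
# Conway's Game of Life on an unbounded grid, with live cells stored
# as a set of (x, y) coordinates. The substrate rules are trivial,
# yet the glider - a pattern, not a cell - propagates across the grid.

from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live
    # neighbours, or 2 live neighbours and is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The canonical glider (y increases downward):
#   .X.
#   ..X
#   XXX
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the same shape reappears, shifted by (+1, +1):
# the "entity" has moved, though no individual cell did.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The point of the illustration is the one argued above: the process is real and has its own behavioural regularities (the glider reliably travels one cell diagonally per four generations), yet it is ultimately dependent on, and nothing over and above, the substrate-level rule applications.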
I would venture that the human being is magnitudes more complicated than the observable solar system.
I assume that's a rhetorical assertion - the Earth alone, even absent humans, contains ecosystems with more organisms than there are cells in the human body.
Supposing the mythological was not a sufficient tool to measure the stars, is the modern scientist's computer a sufficient tool to measure consciousness?
If brains function as they appear to, i.e. through the biological interactions of brain cells - without exotic quantum or mystical influences - then I see no reason, in principle, why not. We're not technically anywhere close yet, but brain simulation projects like the Human Brain Project and the Blue Brain Project are aiming to eventually simulate the human brain, and although they're not aiming to produce consciousness, it seems likely that it will eventually be within their compass - they are considering the ethical implications.