Yes, given that multiverse models tend to be motivated by a desire to save determinism...
I don't agree that this is the case; there is a persistent claim that multiverses are motivated by sundry needs, such as the need to explain fine-tuning or to save determinism, but the fact is that they're
predicted by current physical theories. They can perhaps be recruited post hoc to serve some explanation, but they're motivated by the physical theories. For example, in Tegmark's taxonomy, the level 1 multiverse (cosmological) is a logical consequence of a spatially infinite universe that follows the Cosmological Principle (homogeneous and isotropic on large scales); the level 2 multiverse (inflationary) is a logical consequence of the theory of
eternal inflation; the level 3 multiverse (quantum) is a consequence of the simplest no-collapse interpretation of quantum mechanics (Everettian 'Many Worlds'); and similarly for other varieties.
Beliefs and desires are teleological and would not be "causing" behavior in the same sense that brain chemistry does.
I see teleological explanations as entirely reasonable descriptions of causal goal-seeking behaviour. I'm not keen on the Dennettian 'intentional stance' for non-goal-directed processes such as evolution, because it implies teleological development; but where a plan has been made and is pursued, it seems entirely appropriate - and that applies to autonomous robots that formulate and pursue plans of action, too.
This is actually an interesting problem which leads quite nicely back to the original post, minus the AI red herring and implicit Creationism. What does it mean for beliefs and desires to be causal? These are intentional states--we could say that they represent attitudes towards information about the world. The traditional problem of Cartesian dualism is obviously explaining how the immaterial mind affects matter, but I would be hesitant to let materialism off the hook here, since it is unclear to me how intentional states can compel physical behavior. A muscle spasm produces motion, but what is the causal link between a desire and a motion? The options seem to be:
1) Reductive materialism. Intentional states such as beliefs and desires can be reduced to material interactions and information processing (granting for the moment that computation can be made sense of on a genuinely materialistic ontology, which I would deny). If intentional states merely supervene upon specific physical states, then they themselves are superfluous to behavior, and it is unclear why they exist at all if we would function in exactly the same manner without them. It seems to me that you would need to retreat into panpsychism to make sense of this approach.
2) Eliminative materialism. Intentional states don't exist. We only believe that we have beliefs. (Just in case people doubted that materialists were capable of YEC level dogmatism.)
3) Non-reductive materialism. The mental is emergent from the physical but cannot be reduced to it. But if mental states cannot be reduced to physical ones, then we are back to wondering what a desire actually is and how it can play a causal role in physical processes. Non-reductive materialism is actually one of my favorite theories of mind, but I do not see how it avoids the problems of dualism simply by labeling itself differently.
Hopefully I have made myself sufficiently clear, since I think this is a better framework for a discussion of intentionality than the initial question of AI was.
I think intentional states exist as a high-level description of certain kinds of materialist causality (e.g. goal-directed behaviours). At a low (primitive) level our systems are hard-wired (programmed, if you like) by natural selection to seek stimulation of the pleasure and reward circuitry, which is wired to respond to feeding, reproduction, etc. Even relatively simple brains are dynamic and can modify their connectivity (hence behaviour) through learning (positive and negative reinforcement). More sophisticated brains can model their world and themselves and use prior experience as a basis for modelling behavioural outcomes, choosing favourable outcomes and planning routes to these outcomes. The underlying drives have the same basis, but may be diverted and sublimated in various ways, through learning (social, cultural, & religious indoctrination, and so on), etc.
So what I'm saying is that our motivations and drives derive from evolution by natural selection; creatures that seek food, attack or flee from threats, seek mates, co-operate with kin, etc., are more likely to survive to reproduce and pass on those traits. Sophisticated brains build multiple levels of behaviour on top of this, and devise high-level conceptual abstractions to describe those behaviours.
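The reinforcement-learning picture above can be made concrete with a toy sketch - purely illustrative, not a claim about real neural mechanisms, and all names and parameters here are my own invention: an agent with two possible actions updates its learned action values from reward feedback, so its behaviour drifts towards the rewarded option.

```python
import random

def run_agent(steps=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Toy positive/negative reinforcement: behaviour shifts towards reward."""
    rng = random.Random(seed)
    values = {"forage": 0.0, "ignore": 0.0}  # learned action values

    def reward(action):
        # "forage" usually pays off; "ignore" never does.
        return 1.0 if action == "forage" and rng.random() < 0.8 else 0.0

    for _ in range(steps):
        # Mostly exploit the best-known action, occasionally explore.
        if rng.random() < epsilon:
            action = rng.choice(list(values))
        else:
            action = max(values, key=values.get)
        r = reward(action)
        # Reinforcement: nudge the action's value towards the reward received.
        values[action] += alpha * (r - values[action])
    return values

print(run_agent())  # "forage" ends up valued far above "ignore"
```

Nothing here "wants" food, yet at the level of description of its behaviour, "it seeks food" is an apt and compact summary - which is the sense in which I mean intentional descriptions are high-level accounts of causal processes.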
Yes, but the fact that the majority of decisions are unconscious is irrelevant if conscious, deliberative decisions ever take place. It does not matter how much of our behavior is unconscious or automatic - you still need to account for the existence of System 2 decisions, or declare conscious, deliberative thought entirely and always illusory, which seems like an extreme and anti-empirical position.
Conscious, deliberative thought is clearly real enough (I highly recommend Kahneman's book, "Thinking, Fast and Slow"); it's simply a different mode of thinking - slow, sequential, effortful, and requiring the full focus of attention (arguably the distinguishing element of consciousness). I think a problem in such discussions is that consciousness is often seen as a unitary, high-level executor, a separate aspect of the mind, but experimental evidence suggests that it's composite, comprising various aspects of the sense of self and the self-model, feelings (interoception), attention, awareness, and so on, with notifications from unconscious processes. The evidence is that unconscious processing tends to be localised in the brain, conscious awareness involves wide-scale activity across the brain, and the roots of agency are unconscious.
I would have a problem with anyone except an idealist declaring temperature and pressure illusory. What is from one point of view a vast number of molecules moving around with random velocities is from a different perspective a gas with temperature and pressure. How are molecules real but pressure not real? And if we wish to reduce reality, can we not go even further? I am not at all uncomfortable denying the existence of molecules and particles and taking the relations without relata approach.
Exactly, this is my point. Temperature and pressure are emergent phenomena with their own physical rules and descriptive language that vastly simplify dealing with the behaviour of gas molecules in bulk. Similarly, we have high-level conceptual abstractions for describing human behaviours that are reducible to composites of lower-level behaviours, themselves reducible to causal interactions between neurons, which are reducible to complex organic chemistry, and so on. We use concepts and language appropriate to the level of emergence or abstraction we're dealing with, and it's a mistake to mix descriptive levels.
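The temperature example can be made concrete with a small simulation (a sketch with illustrative parameters): individual molecules have only masses and velocities, yet "temperature" drops out as a bulk statistic over many of them, via the standard kinetic-theory relation ⟨½mv²⟩ = (3/2)kT for an ideal monatomic gas.

```python
import math
import random

K_B = 1.380649e-23   # Boltzmann constant, J/K
M = 6.6335209e-26    # mass of one argon atom, kg
T_TRUE = 300.0       # temperature used to generate the velocities, K

def sample_velocity(rng):
    # Each Cartesian velocity component of a molecule at temperature T is
    # Gaussian with variance kT/m (Maxwell-Boltzmann distribution).
    sigma = math.sqrt(K_B * T_TRUE / M)
    return [rng.gauss(0.0, sigma) for _ in range(3)]

def emergent_temperature(n_molecules=100_000, seed=1):
    """Recover T purely as a statistic over many random molecular velocities."""
    rng = random.Random(seed)
    mean_ke = sum(
        0.5 * M * sum(c * c for c in sample_velocity(rng))
        for _ in range(n_molecules)
    ) / n_molecules
    return 2.0 * mean_ke / (3.0 * K_B)  # invert <KE> = (3/2) k T

print(emergent_temperature())  # close to 300 K
```

No single molecule "has" a temperature; the concept only exists at the level of the ensemble - which is the sense in which the high-level description, with its own rules and language, is the appropriate one for bulk behaviour.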
I don't think it's coherent to invoke the concept of illusion when discussing aspects of reality that are by their nature subjective and experiential, since the notion of an illusion presupposes the existence of a phenomenal state that matches up to reality. If none do, then the illusory is indistinguishable from the non-illusory.
I broadly agree, although there are situations where the subjective experience misrepresents phenomenal reality in some way, e.g. phantom limb pain, which is real enough, but illusory with respect to the apparent source.