> A scale error? You mean like how a difference of one part in a thousand can rapidly lead to very large changes?

Most common descriptions of what the "butterfly effect" is telling us are themselves phrased in terms that automatically assume determinism is true, and that it then breaks down in practice solely because of the impossibility of specifying the initial data accurately enough. This claim is everywhere.
What we do not see, however, is any recognition that no real-world application deals in, or even attributes meaning to, 'complete information' about the initial state. As soon as one recognizes that information is, by its nature, an interval of uncertainty, one recognizes that real-world chaos applications are about what happens to intervals, not about what happens to points. And what happens to intervals is that they map into statistical tendencies. In the presence of chaos, those statistical tendencies evolve through three temporal domains (a numerical sketch follows the list):
i) an early domain where the entire (initially small) interval exhibits essentially one and the same behavior;
ii) an intermediate domain where the interval breaks up into regions of clearly different statistical behavior; here one might try to increase the chances of a desired outcome by making small, controlled perturbations within the initial interval that steer it toward one of these statistically different regions;
iii) a long-term domain where the entire interval again yields essentially identical behavior, which can only be modeled as completely random over that interval.
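Here is a minimal numerical sketch of these three domains. It assumes the fully chaotic logistic map x → 4x(1−x) as a stand-in for any chaotic system, with an ensemble of points playing the role of the 'interval'; the base point 0.3, the width 1e-9, and the ensemble size are purely illustrative choices:

```python
# Minimal sketch: an "interval" of initial conditions evolving under the
# fully chaotic logistic map x -> 4x(1-x). All parameters are illustrative.
import numpy as np

n_points = 10_000
x = 0.3 + 1e-9 * np.linspace(0.0, 1.0, n_points)  # initial interval, width 1e-9

for step in range(61):
    if step % 5 == 0:
        print(f"step {step:2d}: interval width = {x.max() - x.min():.3e}")
    x = 4.0 * x * (1.0 - x)  # one iteration of the map

# Early steps: the width stays tiny and every point does the same thing
# (domain i). Around steps ~20-30 the width passes through intermediate
# scales and the ensemble fragments into distinct regions (domain ii).
# Thereafter the width saturates at the size of the attractor: the ensemble
# fills it, and only its statistics remain meaningful (domain iii).
```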
Making the initial interval smaller (reducing the "uncertainty" in the initial data) only changes the durations of these domains, not their basic nature, and even the durations change only slightly: because divergence under chaos is exponential, the time horizons grow only logarithmically with the reduction in initial uncertainty, even for huge reductions. In particular, if domain (ii) gives way to domain (iii) before the scale of some initial perturbation can grow, via its Lyapunov exponent, into the large-scale difference of interest (like tornadoes), then language about the perturbation 'changing the outcome' is nonsense.
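To make the logarithmic scaling concrete, a back-of-the-envelope sketch: the standard estimate for the time an uncertainty ε needs to reach a scale L under exponential divergence is t ≈ (1/λ) ln(L/ε). The value λ = ln 2 below (the per-iteration Lyapunov exponent of the r = 4 logistic map) is only an illustrative stand-in, not anything specific to weather:

```python
# Back-of-the-envelope: time for an uncertainty eps to grow to scale L
# under exponential divergence, t ~ (1/lam) * ln(L / eps).
import math

lam = math.log(2)  # illustrative Lyapunov exponent (per step)
L = 1.0            # large scale of interest

for eps in (1e-3, 1e-9, 1e-15):
    t = math.log(L / eps) / lam
    print(f"eps = {eps:.0e}: reaches scale L after ~{t:.0f} steps")

# ~10, ~30, ~50 steps: a trillionfold improvement in the initial data
# (1e-3 -> 1e-15) only ~quintuples the horizon. The durations shift a
# little; the three-domain structure does not.
```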
Many descriptions of the 'butterfly effect' completely miss the point. The problem is not that the initial data is 'insufficiently precise': even if you improve it a millionfold, the long-term behavior is still random, with identical statistical tendencies, and thus unaffected by small perturbations within that interval. A sketch of this follows.
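Under the same illustrative assumptions as before (the r = 4 logistic map; the widths 1e-3 versus 1e-9, a millionfold difference, and the other parameters are arbitrary choices), two ensembles started from very different interval widths end up with the same long-run statistics:

```python
# Sketch: shrinking the initial interval a millionfold does not change the
# long-run statistics. Two ensembles under the r = 4 logistic map, compared
# after the transient by their histograms. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def long_run_histogram(width, n=50_000, steps=500, bins=20):
    x = 0.3 + width * rng.random(n)   # initial interval of the given width
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    hist, _ = np.histogram(x, bins=bins, range=(0.0, 1.0), density=True)
    return hist

h_wide = long_run_histogram(1e-3)    # "poor" initial data
h_narrow = long_run_histogram(1e-9)  # a millionfold "better"
print(np.max(np.abs(h_wide - h_narrow)))  # small: same invariant density
```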
So exactly when did the powerlessness of a butterfly to affect long-term weather patterns turn into the power of butterflies to affect long-term weather patterns? (This is the scale error I'm referring to.) It happened when a mistake was made, and that mistake is now so widespread that it is almost impossible to correct.