I don't think they are saying that these constants are so many decimals long in their measurements. They are saying that variations many decimal places down would change the outcome. In other words, if something were exactly 10, it could be varied by 0.00000000001 to make it 9.99999999999. It is these very small variations they are talking about when they speak of fine tuning.
There are many great scientists who have commented on the fine tuning of the universe and have proposed massive fine tuning of certain constants. Steven Weinberg, a noted atheist, said the following:
He goes on to describe how a beryllium isotope with a minuscule half-life of 0.0000000000000001 seconds must find and absorb a helium nucleus in that sliver of time before decaying. This occurs only because of a totally unexpected, exquisitely precise energy match between the two nuclei. If this did not occur there would be none of the heavier elements: no carbon, no nitrogen, no life. Our universe would be composed of hydrogen and helium. But this is not the end of Professor Weinberg's wonder at our well-tuned universe. He continues:
One constant does seem to require an incredible fine-tuning — The existence of life of any kind seems to require a cancellation between different contributions to the vacuum energy, accurate to about 120 decimal places.
This means that if the energies of the Big Bang were, in arbitrary units, not:
100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000,
but instead:
100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001,
This is perhaps the most ridiculous fine tuning argument I've ever heard. Not only is the vacuum energy not known to any reasonable precision, it isn't even consistently predicted by the mathematics. In fact, different branches of physics predict values of the vacuum energy that disagree by roughly 100 orders of magnitude. I didn't think these arguments could get more wrong, but to claim that the least mathematically precise value in physics is tuned to 10 decillion times the precision of the most accurately measured constant is, quite simply, insane. Or, to utilize an unreasonable number of zeros as you have, in arbitrary units the vacuum energy is predicted to be somewhere between:
100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
and
1
Read up on it, it's pretty interesting:
https://en.wikipedia.org/wiki/Vacuum_catastrophe
Now, as far as estimates of what effect the OBSERVED value would have, we end up with only about one order of magnitude of sensitivity according to Wikipedia (though that particular line doesn't have a citation and I don't have time to dig at the moment). Either way, this argument is catastrophically wrong.
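To give a sense of the scale of that disagreement, here is a minimal sketch using the rough order-of-magnitude figures commonly quoted for the vacuum catastrophe; the exact exponents are illustrative assumptions on my part, not measurements:

```python
# Rough scale of the "vacuum catastrophe": the naive QFT zero-point
# estimate vs. the observed vacuum (dark) energy density. Both figures
# are the commonly quoted order-of-magnitude values, nothing more.

import math

observed = 6e-10    # observed vacuum energy density, J/m^3
predicted = 1e113   # QFT estimate with a Planck-scale cutoff, J/m^3

ratio = predicted / observed
print(f"prediction / observation ~ 10^{round(math.log10(ratio))}")
# -> prediction / observation ~ 10^122
```

That is not a quantity anyone can claim is "tuned to 120 decimal places"; we can't even pin down its leading exponent.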
there would be no life of any sort in the entire universe, because as Weinberg states:
the universe either would go through a complete cycle of expansion and contraction before life could arise, or would expand so rapidly that no galaxies or stars could form.
http://www.reasonablefaith.org/transcript-fine-tuning-argument
Repeats the earlier claims without citation. This is a transcript of a YouTube video, nothing more.
Kind of rambling, but at least it cites Brandon Carter as a source. The trouble is that Brandon Carter posits the weak anthropic principle rather than any form of fine tuning: "we must be prepared to take account of the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." And the strong anthropic principle: "the universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." (The Latin bit translates as "I think, therefore the world is such [as it is].") No universe can be observed but those for which some observer may be formed, and no observer can be formed except at such a time and place within that universe that observers may be formed. The actual source of the precision claimed by the linked PDF I have yet to discover.
It's a sad day when the most relevant scientific source presented is a philosophy journal.
It's a sadder day when it isn't even a link to the paper, or even the abstract of the paper, but rather just a title of a paper.
The site at least has a link to the paper (the fact that you linked to a place that linked the paper, rather than linking the paper yourself, suggests you haven't read it, by the way). If you haven't, as I suspect, please provide me with a page number where such figures are supported:
http://www.commonsenseatheism.com/wp-content/uploads/2009/09/Collins-The-Teleological-Argument.pdf
I hardly have time to read someone else's link to an 80-page piece of philosophy if you lack the time to review your own citation of that same source.
This is also known as the flatness problem and the critical mass in the cosmic inflation model of the universe
http://www.physicsoftheuniverse.com/topics_bigbang_accelerating.html
Not a primary source, but at first glance it appears to be a moderately competent science blog. What I don't see is any reference to the flatness problem, and the references to critical mass appear in the context of being invalidated by observations of the expanding universe.
From your link:
"The cosmic inflation model hypothesizes an Omega of exactly 1, so that the universe is in fact balanced on a knife’s edge between the two extreme possibilities. In that case, it will continue expanding, but gradually slowing down all the time, finally running out of steam only in the infinite future. For this to occur, though, the universe must contain exactly the critical mass of matter, which current calculations suggest should be about five atoms per cubic metre (equivalent to about 5 x 10-30 g/cm3).
This perhaps sounds like a tiny amount (indeed it is much closer to a perfect vacuum than has even been achieved by scientists on Earth), but the actual universe is, on average, much emptier still, with around 0.2 atoms per cubic metre, taking into account visible stars and diffuse gas betweengalaxies. Even including dark matter in the calculations, all thematter in the universe, both visible and dark, only amounts to about a quarter of the required critical mass, suggesting a continuously expanding universe."
You will note that we are not even close to such a critical mass: visible matter alone falls short by more than an order of magnitude, and even with dark matter included we reach only about a quarter of it. A density that misses its target by a factor of four can hardly be fine-tuned to umpteen decimal places. Once again, your own source betrays your argument.
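For what it's worth, the blog's "about five atoms per cubic metre" figure is easy to reproduce from the standard formula rho_c = 3*H0^2/(8*pi*G). A quick sketch; the H0 of ~70 km/s/Mpc is my assumption, since the blog doesn't state which value it used:

```python
# Reproduce the ~5 hydrogen atoms per cubic metre critical density
# quoted above, from the standard formula rho_c = 3*H0^2 / (8*pi*G).

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
Mpc_in_m = 3.086e22    # metres per megaparsec
H0 = 70e3 / Mpc_in_m   # Hubble constant, ~70 km/s/Mpc, converted to s^-1
m_H = 1.67e-27         # mass of a hydrogen atom, kg

rho_c = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
print(f"critical density: {rho_c:.1e} kg/m^3")        # -> ~9.2e-27
print(f"= {rho_c / m_H:.1f} hydrogen atoms per m^3")  # -> ~5.5
```

The point stands either way: the observed density sits well below that target, which is the opposite of being balanced on it to many decimal places.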
I take it back: the philosophy one might not be your best source. I would argue that Wikipedia is probably a better source than anything else you've yet presented. It states the problem but, sadly for you, it also presents the solution, had you kept reading:
Inflation
The standard solution to the flatness problem invokes cosmic inflation, a process whereby the universe expands exponentially quickly (i.e. the scale factor a grows as e^(λt) with time t, for some constant λ) during a short period in its early history. The theory of inflation was first proposed in 1979, and published in 1981, by Alan Guth.[15][16] His two main motivations for doing so were the flatness problem and the horizon problem, another fine-tuning problem of physical cosmology.
The proposed cause of inflation is a field which permeates space and drives the expansion. The field contains a certain energy density, but unlike the density of the matter or radiation present in the late universe, which decrease over time, the density of the inflationary field remains roughly constant as space expands. Therefore the term ρa^2 increases extremely rapidly as the scale factor a grows exponentially. Recalling the Friedmann equation

(Ω^-1 - 1) ρa^2 = -3kc^2 / (8πG),

and the fact that the right-hand side of this expression is constant, the term |Ω^-1 - 1| must therefore decrease with time.
Thus if |Ω^-1 - 1| initially takes any arbitrary value, a period of inflation can force it down towards 0 and leave it extremely small - around 10^-62 as required above, for example. Subsequent evolution of the universe will cause the value to grow, bringing it to the currently observed value of around 0.01. Thus the sensitive dependence on the initial value of Ω has been removed: a large and therefore 'unsurprising' starting value need not become massively amplified and lead to a very curved universe with no opportunity to form galaxies and other structures.
This success in solving the flatness problem is considered one of the major motivations for inflationary theory.[3][17]
In other words, it seems that the inflationary epoch would naturally adjust any arbitrary Omega to the levels we see today.
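You can even see the mechanism numerically. Here is a minimal toy sketch, assuming (as the quoted passage does) that the energy density stays roughly constant during inflation, so |1/Omega - 1| falls off as 1/a^2; the e-fold counts are just the conventional ballpark:

```python
# Toy illustration of inflation flattening the universe: with the energy
# density roughly constant and the scale factor a growing as e^(lambda*t),
# the deviation |1/Omega - 1| falls off as 1/a^2.

import math

deviation = 1.0  # arbitrary order-1 initial deviation |1/Omega - 1|
for efolds in (10, 30, 60):
    growth = math.exp(efolds)  # a grows by a factor e^N over N e-folds
    print(f"after {efolds:2d} e-folds: |1/Omega - 1| ~ "
          f"{deviation / growth**2:.1e}")
# After 60 e-folds an order-1 deviation is squashed to below 10^-52, so
# the starting value simply doesn't matter: no tuning knob required.
```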
Now, given the time I've already spent on this reply, instead of Gish galloping a bunch of links that are either irrelevant or that flat-out contradict your claim, perhaps you could take the time to review your own argument and present a single source you feel best establishes a finely tuned constant of your choosing.