
What is Fine Tuning in General?

Uber Genius

"Super Genius"
How was it ruled out, by whom?
Citation needed.



How was chance ruled out and by whom?
Citation needed.



That depends on your ability to answer the previous two questions.



Citation needed.
And let's not forget that you also claim that "chance" is ruled out as well.

I haven't seen any support for that claim either.

Wow. Again, it becomes clear you don't realize why it is called the fine-tuning problem.

So you haven't even read the Wikipedia page on this problem.

Citations aren't needed; you need to click through to the links provided and actually read the responses!

You haven't invested 2 minutes.

INVESTMENT OF 2 MINUTES NEEDED!

The cosmological constant, Λ, is fine-tuned to roughly one part in 10^122.

The ratio of the strength of electromagnetism to the strength of gravity for a pair of protons is fine-tuned to roughly one part in 10^36.

Now remember that there are currently 31 independent constants and conditions. Their probabilities are multiplied to evaluate the "chance" hypothesis.

Just those two above give 1 chance in 10^158.

If I were to mark a sub-atomic particle somehow, shoot it out into the universe at random, and then let you pick a particle at random, your chance of picking the marked one would be 1 in 10^86.

Not much of a chance, right? But it is still 72 orders of magnitude more likely than those two constants falling within their narrow life-permitting ranges.

And that is only 2 of the numbers! Remember, 10 orders of magnitude is ten billion times less likely. So it is 72 orders of magnitude less likely than you picking the marked sub-atomic particle out of the universe at random on your first try.
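To make that arithmetic concrete, here is a minimal sketch in Python (the exponents are the ones quoted above, taken as given, not derived from any physics):

```python
# Orders-of-magnitude arithmetic for the two quoted odds.
# Exponents are taken from the post above, not derived here.
lambda_exp = 122        # cosmological constant: 1 part in 10^122
em_gravity_exp = 36     # EM-to-gravity ratio: 1 part in 10^36
particle_exp = 86       # marked-particle pick: 1 in 10^86

# Independent probabilities multiply, so their exponents add.
combined_exp = lambda_exp + em_gravity_exp
print(f"combined odds: 1 in 10^{combined_exp}")   # 1 in 10^158

# Gap between the combined odds and the particle pick.
print(f"{combined_exp - particle_exp} orders of magnitude less likely")  # 72
```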

Now you could have figured all of that out by going to Wikipedia and typing "fine-tuning".

I'm not going to do any more of your homework.

INVESTMENT OF 2 MINUTES NEEDED!
 

Yekcidmij

Presbyterian, Polymath
The cosmological constant, Λ, is fine-tuned to roughly one part in 10^122.

Doesn't this confuse precision with probability? That seems to be the fundamental problem with this. For the cosmological constant you only have 1 observation. There is no way to know the distribution of possible values, or any statistical characteristics for that matter. You implicitly must assume a uniform distribution across all real numbers from negative to positive infinity for the argument to work (a "distribution" that cannot even be normalized). Of course, that would beg the question: why a uniform distribution? The central limit theorem would suggest a Gaussian distribution, for example.
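To illustrate the point, here is a sketch with invented numbers (not anyone's published calculation): the probability assigned to the same narrow window changes by well over an order of magnitude depending on which distribution of possible values you assume.

```python
# Invented example: probability of landing in the same narrow window
# under two different assumed priors. All numbers are placeholders.
from scipy import stats

lo, hi = 0.99, 1.01                    # a made-up "life-permitting" window

# Assumption 1: uniform over a made-up finite range [0, 100].
p_uniform = (hi - lo) / (100.0 - 0.0)

# Assumption 2: Gaussian centered on the observed value, unit spread.
gauss = stats.norm(loc=1.0, scale=1.0)
p_gauss = gauss.cdf(hi) - gauss.cdf(lo)

print(f"uniform prior:  P = {p_uniform:.2e}")   # 2.00e-04
print(f"gaussian prior: P = {p_gauss:.2e}")     # ~7.98e-03
```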

This same issue will recur with any of the "fine tuned" constants. Not to mention, calling them "constants" on one hand and then immediately treating them as variable later in the same sentence might be problematic too; is it a constant or not?
 

Uber Genius

"Super Genius"
Aug 13, 2016
2,921
1,244
Kentucky
✟64,539.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Married
Politics
US-Libertarian
There is no way to know the distribution of possible values, or any statistical characteristics for that matter.

So how Brandon Carter and dozens of other astrophysicists since him have developed the fine-tuning problem is straightforward. In order to calculate the probability of a constant’s being such that it leads to a life-supporting universe, we need to calculate the ratio between the range of life-permitting values and the range of values it might have, whether life-permitting or not. We can assess the range of life-permitting values by holding the laws of nature constant while altering the value of the constant which plays a role in that law. So, for example, we can figure out what would happen if we decrease or increase the force of gravity, and we discover that alterations beyond a certain range would result either in large-scale objects’ ceasing to stick together or else collapsing. That will give us an idea of the range of strength of the gravitational force that is compatible with life forms of any kind.
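A minimal sketch of the ratio just described, with placeholder numbers (the actual ranges are exactly what is in dispute in this thread):

```python
# The ratio described above: width of the life-permitting range divided
# by the width of the assumed range of possible values.
# All numbers are invented placeholders, not published bounds.

def fine_tuning_ratio(life_lo, life_hi, poss_lo, poss_hi):
    """Fraction of the assumed possible range that is life-permitting."""
    return (life_hi - life_lo) / (poss_hi - poss_lo)

# A constant whose life-permitting window is one millionth of the
# assumed comparison range:
print(fine_tuning_ratio(0.999999, 1.000001, 0.0, 2.0))   # ~1e-06
```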

Robin Collins is best at these details.







This same issue will recur with any of the "fine tuned" constants. Not to mention, calling them "constants" on one hand and then immediately treating them as variable later in the same sentence might be problematic too; is it a constant or not?

It seems you, like others out here, haven't invested time studying the basic problem.

"Constants" and initial conditions are indeed constant, but that doesn't help us answer why they are so precisely tuned to produce a life-permitting universe. The question isn't if they can be variable. The question is do the laws of nature create these constants the way energy can be related to mass via the speed of light squared.

So certain fine-tuned constants have fallen off the list as it was discovered that the other forces "necessitate" their tuning; counting them would in effect be double counting. Luke Barnes does the best job of covering these points in one location.

Luke Barnes' blog

Finally, it seems that on your method nothing can be determined to be fine-tuned. We want a method that can differentiate between chance, necessity, and fine-tuning. So, option A: you just figured out something that has stumped the best astrophysicists in the world for the last 75 years, without even studying the problem or its definitions. Option B: study the data and the three inferences, then form a justification for chance or necessity based on the preponderance of the evidence, such that your inference (whatever that turns out to be) has the most explanatory power.

Again, Collins, Barnes, and FIT at Oxford are helpful scholarly approaches, and all are peer-reviewed.
 

Yekcidmij

In order to calculate the probability of a constant’s being such that it leads to a life-supporting universe, we need to calculate the ratio between the range of life-permitting values and the range of values it might have, whether life-permitting or not.

So how do you know the range of possible values? This is the issue I raised in my last post.

We can assess the range of life-permitting values by holding the laws of nature constant while altering the value of the constant which plays a role in that law. So, for example, we can figure out what would happen if we decrease or increase the force of gravity, and we discover that alterations beyond a certain range would result either in large-scale objects’ ceasing to stick together or else collapsing. That will give us an idea of the range of strength of the gravitational force that is compatible with life forms of any kind.

This doesn't give you a range of possible values. This goes to my previous complaint - you must assume the data in question follows a uniform distribution. But you have 1 "observation." How can you get a range of possible values from only 1 data point? You don't know any of the statistical characteristics necessary to talk about probability, and I see no justification as to why anyone should assume a uniform distribution of possible values. For example, maybe the data in question follow a sharply peaked (leptokurtic) distribution, such that even 5% deviations from some mean value are rare?
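A sketch of that possibility with invented numbers: under a sharply peaked, heavy-tailed distribution (scipy's Student-t used purely as a stand-in for such a shape), a ±5% window around the mean carries far more probability than a wide uniform assumption would assign it.

```python
# Invented illustration: the same +/-5% window under a peaked
# distribution vs. a wide uniform assumption.
from scipy import stats

lo, hi = 0.95, 1.05                          # +/-5% around a mean of 1.0

peaked = stats.t(df=2, loc=1.0, scale=0.05)  # heavy tails, sharp peak
p_peaked = peaked.cdf(hi) - peaked.cdf(lo)

p_uniform = (hi - lo) / 100.0                # uniform over a width of 100

print(f"peaked prior:  P = {p_peaked:.3f}")  # ~0.577
print(f"uniform prior: P = {p_uniform:.5f}") # 0.00100
```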

Also, I don't see a necessary connection between the "range of possible values" and the "range of life-permitting values," and I don't know that it's safe to assume that there is one.

It seems you, like others out here, haven't invested time studying the basic problem.

Meaningless drivel.

Finally, it seems that on your method nothing can be determined to be fine-tuned.

That would be part of the issue, yes. I don't know that you have the required data to conclude fine-tuning occurred at the level of universal constants using statistical methods.

We want a method that can differentiate between chance, necessity, and fine-tuning.

I don't know how you could do that given the observables we have.
 

Uber Genius

"Super Genius"
Aug 13, 2016
2,921
1,244
Kentucky
✟64,539.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Married
Politics
US-Libertarian
So how do you know the range of possible values? This is the issue I raised in my last post.
Here is the approach to establishing the range of values for the priors.

http://planck.caltech.edu/pub/2013results/Planck_2013_results_16.pdf

See section 2 for the method of establishing the priors. These methods are also discussed in Carter; Barrow and Tipler; and Rees.

You seem to be using someone like Richard Carrier's approach, which is why I brought up familiarity with the FTP.
you must assume the data in question follows a uniform distribution. But you have 1 "observation."
Yep, this is Richard Carrier's frequentist approach.

Luke Barnes says the following:

"
Cosmology is impossible: Cosmology is the study of the universe as a whole. What is the prior probability that the universe would have a baryon density parameter in the range
latex.php
? Carrier would have cosmologists count the number of other known actual universes with
latex.php
, and compare that to the total number of other known actual universes. Thus the prior probability is 0/0. If we feed that probability into Bayes’ theorem, we discover that the probability of any cosmological hypothesis given any cosmological data is undefined. Thus, cosmologists should pack up and go home.

(What cosmologists actually do is assign a uniform probability distribution over some range in
latex.php
, thus rejecting finite frequentism."
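A sketch of the contrast in Barnes's quote (counts and ranges invented for illustration): finite frequentism over a sample of zero other universes yields an undefined 0/0 prior, while the working alternative is a uniform prior over an assumed parameter range.

```python
# Finite frequentism: prior = (universes in range) / (known other universes).
# We have observed zero other universes, so the ratio is 0/0.
in_range, total = 0, 0
try:
    prior = in_range / total
except ZeroDivisionError:
    prior = None      # undefined; Bayes' theorem has nothing to work with
print(f"frequentist prior: {prior}")

# What cosmologists do instead (per the quote): assume a uniform
# distribution over some range of the parameter. Numbers invented.
omega_lo, omega_hi = 0.0, 1.0   # assumed possible range
d_omega = 0.01                  # width of the interval of interest
print(f"uniform prior: {d_omega / (omega_hi - omega_lo)}")   # 0.01
```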


You don't know any of the statistical characteristics necessary to talk about probability...

To learn why Carrier's approach (and your view) destroys science, read:

Probably Not – A Fine-Tuned Critique of Richard Carrier (Part 1)

Meaningless drivel.

Again, option A: you just discovered that a field of physics with thousands of research articles, international conferences, and 42 years of study is a hoax!

Option B: you are defining probability in such a way that it destroys all inference (after all, on your frequentist assumption our background priors are 0/0, and so probability cannot be defined at all given Bayes' theorem).

I don't know that you have the required data to conclude fine-tuning...

Well, the conclusion is up to the individual. The data are out there and not questioned by scientists or statisticians. But one would consider various inferences to the best explanation of those data. The two we hear about as alternatives are the weak anthropic principle and a near-infinite number of multiverses. Both are certainly held by larger numbers in this field than the design inference. But most will tell you that is because they don't want the design inference to be true.

So you're more than welcome to adopt either inference and reject actual fine-tuning. Of course, Einstein and Eddington were honest about the FTP and the cosmological inference pointing to a creator. And they weren't happy about it.

So we don't have a data, probability, or definitional question about fine-tuning being a feature of our universe. We do have misrepresentation of it by New Atheist charlatans like Carrier, Krauss, and perhaps Stenger, who falsify the argument much the way Young Earth Creationists misrepresent data about the speed of light being variable.

See this link for detailed discussions about the philosophical tricks the New Atheists and their ilk are producing as red herrings.
Luke Barnes' Blog
 

Yekcidmij

(Quoting Uber Genius's post above in full.)

Good post. I won't argue for a frequentist approach over a Bayesian one. I will say that Bayesian vs. frequentist is by no means a settled issue in academia, as there are good criticisms of both:

http://www.stat.columbia.edu/~gelman/research/published/badbayesmain.pdf
https://www2.stat.duke.edu/courses/Spring10/sta122/Handouts/EfronWhyEveryone.pdf
Critique of Bayesianism
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.451.6137&rep=rep1&type=pdf
http://www.phil.vt.edu/dmayo/conference_2010/Albert Bayesian Rationality and Decision Making A Critical Review.pdf
...among others

I don't know that I would try to use an argument that is contingent on being right about the frequentist/Bayesian debate. Such an argument would seem far from obviously correct and very unconvincing. But this goes back to a previous point - you don't have the required data to draw the conclusion you wish. If your argument depends on settling Bayesianism vs. frequentism, then you seem to be short on the required information.

An unimportant side note: I don't read Richard Carrier; not sure why he was brought up. I certainly wouldn't have turned to him for issues in math.
 

TagliatelliMonster

Well-Known Member
(Quoting Uber Genius's reply above in full.)

As Yekcidmij has pointed out, you seem to be confusing precision with probability.

Instead of trying to impress me with large numbers, why don't you just answer my question?

How and by whom was it demonstrated that these values could have been different or that chance is ruled out?

Even if I were to accept your confusion between precision and probability: if there is ANY chance, no matter how small, then how can you say that chance is ruled out?

After all, only a probability of zero means that something is impossible.
Not only that, a probability calculation is only useful if you actually know how many trials you potentially have, which is yet another thing that is unknown at this time.

If X has a probability of 1 in 10^120, and you get 10^1000 trials, not only will X occur, it will actually occur many times!
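A quick sketch of that arithmetic in log-space (the exponents are the ones above; the numbers involved are far too large for ordinary floats):

```python
# With p = 10^-120 and n = 10^1000 trials, work with exponents directly.
log10_p = -120
log10_n = 1000

# Expected number of occurrences: n * p = 10^(1000 - 120) = 10^880.
log10_expected = log10_n + log10_p
print(f"expected occurrences: 10^{log10_expected}")

# P(at least one) = 1 - (1 - p)^n ~= 1 - exp(-n*p); with n*p = 10^880,
# exp(-n*p) is indistinguishable from zero, so the probability is ~1.
print("P(at least one occurrence) ~= 1")
```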

You don't know any of this. You simply lack the required data to make any of these calculations. You would have to know and understand the process of how a universe forms. You would have to know how many trials it gets. You would have to know if there are other universes as well.

As it stands, you have a set of exactly 1. And you have no clue where that 1 came from or how it originated. So it is quite baffling how you can sit there and pretend to "know" these things.

Having said that, it also seems that many in the field don't really consider Barnes to be very honest in his opinion pieces.

For example: https://arxiv.org/ftp/arxiv/papers/1202/1202.4359.pdf

In any case, I no longer expect any answers from you. If you could actually support the premises of this argument (from ignorance), you would have done so already.

So yeah... have fun further positing your assumed conclusion.
 

Uber Genius

"Super Genius"
Aug 13, 2016
2,921
1,244
Kentucky
✟64,539.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Married
Politics
US-Libertarian
(Quoting Yekcidmij's post above in full.)
Fair enough on the Richard Carrier presumption. And agreed, he is not the "scholar" you are looking for. But he has popularized the "frequentist" approach, which destroys a significant number of cosmological inferences.
 

Uber Genius

"Super Genius"
Aug 13, 2016
2,921
1,244
Kentucky
✟64,539.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Married
Politics
US-Libertarian
Having said that, it also seems that many in the field don't really consider Barnes to be very honest in his opinion pieces

Vic Stenger is probably NOT the source for honesty. His publication on fine-tuning put the likelihood that a life-supporting universe would arise at 1 chance in 4.

Barnes isn't his problem; the FTP scholars I have highlighted are his problem. None of them agrees with his method. Further, his popular-level work is NOT in his field of study. It is in Luke's.

Stenger's work is similar to Larry Krauss's. Both do research that adds to the overall body of knowledge at the scholarly level, and then publish books that don't accurately represent problems that lie outside their field of study but intersect it enough to fool the general public.

So Barnes, the expert, called Stenger (not an expert in fine-tuning) out for misrepresenting the FTP. Stenger responded by trying to escape the fact that no FTP research in 45 years has defined the problem the way he has.

Stenger is not the only one to receive criticism from Barnes. Barnes has dozens of articles on his site criticizing theists, agnostics, and atheists alike who misuse the FTP, including Hugh Ross and William Lane Craig. So unlike Stenger, Barnes seems to level his criticism without regard to worldview.

Stenger sure seemed to get a lot of help on his rebuttal; I wonder if any of those individuals were familiar with the FTP.
 

TagliatelliMonster

(Quoting Uber Genius's post above in full.)

As I predicted... still no answer to my question.
 

Uber Genius

"Super Genius"
Aug 13, 2016
2,921
1,244
Kentucky
✟64,539.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Married
Politics
US-Libertarian
As I predicted... still no answer to my question.
Over a dozen links, with 3+ hours of video and over 2 dozen technical articles, demonstrating P2, which has been the centerpiece of the FTP for over 40 years. I provided CERN's link to their calculation of each of the attributes in an award-winning technical paper, which you clearly didn't read.

As I predicted, you are still too lazy to click a link or educate yourself on the basics of this problem.

 

TagliatelliMonster

(Quoting Uber Genius's reply above in full.)


When you answer my "who and when" question by telling me to go watch 3+ hours of videos and study over 2 dozen technical articles, you're really not answering my question.

That's rather like saying "here's a haystack, go find the needle".
 

Uber Genius

"Super Genius"
Aug 13, 2016
2,921
1,244
Kentucky
✟64,539.00
Country
United States
Gender
Male
Faith
Non-Denom
Marital Status
Married
Politics
US-Libertarian
When you answer my "who and when" question by telling me to go watch 3+ hours of videos and study over 2 dozen technical articles, you're really not answering my question.

That's rather like saying "here's a haystack, go find the needle".

Like Stenger and the other New Atheists, you prefer fallacy and ad hominem over research and rationality.

I gave you one link, and told you to look up the FTP anywhere.

But you wouldn't even read a two-paragraph description of the FTP.

Then I gave you more... you didn't even click through.

Then more... too lazy to click a link.

More... your response: "I'm not going to read 2-dozen articles, that's absurd."

You haven't read two dozen words.

You're posting on a problem that has been around for 40 years, and you can't even state why the atheist physicists studying it think it is a problem.

Lazy. It is clear you are more interested in propaganda than in spending 5 minutes to find out why all the physicists in this area think it is a problem.

Sticking one's head in the sand is not how we gain knowledge about the external world.


TagliatelliMonster

(Quoting Uber Genius's reply above in full.)

I didn't ask you to explain the "fine tuning problem". I also didn't ask you for a bunch of resources on the "problem".

I asked you a simple question directly related to a single claim about said problem.
A question whose answer should not be longer than a couple of sentences.

But whatever.

Like I said, I don't expect any actual answer from you.

Bye bye now
 

Nihilist Virus

Infectious idea
I'm satisfied that fine tuning is not an issue.

It is known that relativity is not compatible with quantum mechanics, and that both fields of study must be invoked to describe an event that occurs on a small scale yet involves a very large amount of mass. In particular, the Big Bang occurred in a region of space smaller than an electron (because that was all the space in the universe, and thus all the available space) and involved all the matter in the universe - an amount of mass so large that relativity cannot be ignored, unless relativity is simply unequipped to describe that much mass.

So, in sum, we have an event that must be described either by physics that we don't yet have, or by an amalgamation of physics that are currently incompatible. This means that any conclusion or computer simulation is simply not reliable, and this is a known, undisputed fact. Yet the majority of theists cite the current scientific conclusions and claim that fine-tuning is the only alternative to chance, completely ignoring the fact that this is still an open question.

Therefore I find theists to be generally dishonest with regard to this topic. My conclusion stated here is due entirely to my own research, because no theist has openly admitted what is happening here. The theists who are honest simply haven't bothered to work this issue out.

I welcome a theist to challenge my conclusion but I am done with actively seeking for an answer to this issue.
 