If you had ever answered the technical questions I raised, maybe you would have the credibility to make the above remark about what I know. I have cited numerous issues that call an over-simplistic dating into question. It's the error bounds I am arguing about.
So I assume you can answer at least the last one I asked, since you dare claim I don't know a thing?
In calibrating a half-life, on which uranium-lead dating is based, how do you know you are capturing all the radiation to count? You don't. It's a known issue, and one of many. I study far more widely than my detractors. On calibrating half-lives, I am guessing it is YOU who doesn't know a thing about it. At one point I worked on modelling nuclear fuel manufacture and hotspots, so I do know a thing or two about measuring counts.
It's true you all agree with each other.
I quoted Hawking on his ultimate conclusion about model-based reality. You should read it.
Scientism has taken over from science.
Wise philosophers could have told him that centuries ago, on the question of what you can know.
You clearly do not know a thing or two about measuring counts, such as how to differentiate between a macroscopic and a microscopic system.
Let’s start off with a very simple example, measuring the air pressure in a tyre.
How do you think the air pressure is measured with a gauge: does it measure the average pressure exerted by the molecules, or the individual effect of each molecule striking the inside of the tyre?
Hopefully you answered the former, which is a macroscopic system described by a statistical distribution, whereas the latter is a microscopic system.
The same principles apply to radiometric dating: you are dealing with a macroscopic system, typically composed of trillions of radioactive atoms.
Radioactive decay is probabilistic in nature: each atom's lifetime follows an exponential distribution, which is what gives rise to exponential decay.
For very large numbers of atoms the decay count follows a Poisson distribution, where the probability P(n) of n atoms decaying in time t (with kt ≪ 1) is:
P(n) = (kN₀t)ⁿ exp(−kN₀t) / n!
where N₀ is the population or sample size and k is the decay constant.
Furthermore, since radioactive decay is probabilistic, the half-life t₀.₅ = ln(2)/k is the median of that exponential lifetime distribution: half the atoms will have decayed after one half-life.
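Both claims are easy to check numerically. A quick sketch of my own (not from the post above, and the value of kN₀t is purely illustrative):

```python
import math

# Check two claims: the decay count is Poisson with mean lam = k*N0*t,
# so its standard deviation is sqrt(lam); and the half-life ln(2)/k is
# the median of the single-atom exponential lifetime distribution.

lam = 25.0  # expected number of decays k*N0*t; illustrative value only

# Build the Poisson pmf P(n) = lam**n * exp(-lam) / n! iteratively.
pmf = [math.exp(-lam)]
for n in range(1, 101):
    pmf.append(pmf[-1] * lam / n)

mean = sum(n * p for n, p in enumerate(pmf))
var = sum((n - mean) ** 2 * p for n, p in enumerate(pmf))
print(round(mean, 3), round(math.sqrt(var), 3))  # 25.0 5.0 = sqrt(25)

# Median atom lifetime: the time t at which 1 - exp(-k*t) = 0.5.
k = 1.55125e-10  # U-238 decay constant in yr^-1, as quoted below
t_half = math.log(2) / k
print(round(1 - math.exp(-k * t_half), 6))  # 0.5: half the atoms decayed
```

The sqrt(lam) spread is exactly why the relative counting error shrinks as 1/√n for large counts.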
Now for the nonsense in your post. Firstly, mass spectrometers are highly efficient, near 100%, for decay counting, and even if there were a radiation counting error of, say, 5% (which is treating the system as microscopic rather than macroscopic), the exponential decay curve can still be fitted statistically and the half-life calculation is not meaningfully altered.
Secondly, it is not the half-life being calibrated but the decay constant.
When calibrating the decay constant for U-Pb, as an example, the clock is reset by melting the sample to expel any Pb daughter atoms. Mass spectrometers are sensitive enough to detect the Pb daughter atoms produced by decay even when the elapsed time t is very short.
To give you an idea of what this time frame is, use uranium metal as the calibration sample instead of ores.
Decay constant k of U-238 is 1.55125×10⁻¹⁰ yr⁻¹.
The maximum practical sample weight for a thermal ionization mass spectrometer (TIMS) is ~1 μg, and the molar mass of U-238 is 238 g mol⁻¹.
The number of U-238 atoms N₀ = (1 μg / 238 g mol⁻¹) × 6.022 × 10²³ atoms/mol ≈ 2.53 × 10¹⁵.
For a detection threshold based on 1% precision in k, we require Δk/k ≤ 0.01.
The uncertainty in Pb counts is Δn ≈ √n since the standard deviation σ of a Poisson distribution is √n.
The relative uncertainty is therefore Δn/n = √n/n = 1/√n ≤ 0.01 ⟹ n ≥ 10⁴ atoms.
The number of atoms n that have decayed in time t is given by:
n = N₀(1 − exp(−kt)) ≈ kN₀t for kt ≪ 1.
t = 10⁴ / ((2.53 × 10¹⁵)(1.55125 × 10⁻¹⁰)) ≈ 0.0255 years ≈ 9.3 days.
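The arithmetic above can be reproduced in a few lines. A sketch, taking the figures used in this post (1 μg TIMS sample, 1% precision target) as given:

```python
import math

N_A = 6.022e23      # Avogadro's number, atoms/mol
k = 1.55125e-10     # U-238 decay constant, yr^-1
mass_g = 1e-6       # 1 ug of uranium metal (the post's assumed sample)
molar_mass = 238.0  # g/mol for U-238

N0 = mass_g / molar_mass * N_A      # number of U-238 atoms, ~2.53e15
n_min = math.ceil((1 / 0.01) ** 2)  # 1/sqrt(n) <= 0.01  =>  n >= 1e4

# Invert n ~ k*N0*t (valid since k*t << 1) for the required time.
t_years = n_min / (k * N0)
print(f"{N0:.3e}")                    # 2.530e+15 atoms
print(f"{t_years * 365.25:.1f} days")  # 9.3 days
```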
Note that we don't need ridiculously long time frames to calibrate, and the result can also be cross-checked against other mass spectrometers.
Get yourself an education on the subject and stop making up rubbish.