We seem to have two choices here.
(1) Early experiments with very basic equipment were less accurate and contained more sources of error than the experimenters realised. The true value of c lay outside their estimated range of error.
(2) The error bars were accurate and reliable and are evidence, not of c decay, but of c fluctuating wildly.
To answer the third question, we analyzed the data by error bar size. If the decreases in the measured value of c are a result of increased measurement precision, we would expect to see a decrease in the significance of the confidence levels as the true value of c is approached. The results are shown in Table 7.
Overall, 6 of 11 tests were significant at the 95% confidence level. There are three distinct subgroups in Table 7. The first, from ±1000 km/sec to ±100 km/sec, clearly shows decreasing confidence levels. In the ±50 km/sec to ±10 km/sec group the confidence level suddenly jumps to 92% and then starts to decrease again. In the third group, from ±5 km/sec to ±0.5 km/sec, the confidence levels are steady and significant. The results may appear at first to be ambiguous---both decreasing levels and steady significant ones. However, from what has been learned from earlier analysis we know the aberration values are systematically low and there are a significant number of them in the 60 to 200 km/sec range---just where the confidence levels are decreasing.
The dramatic drop in confidence levels from ±20 km/sec to ±10 km/sec and the equally dramatic rise between ±10 km/sec and ±5 km/sec tell us that there is a systematic problem with data which have error bars in that range. Kerr cell results provide 4 out of 6 of these data points. It must be concluded that our earlier suspicions of systematic error in the Kerr cell measurements appear valid.
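As a rough illustration of the kind of grouping described above, here is a minimal sketch, assuming synthetic placeholder data and a one-sample t-test as the significance measure; Montgomery's actual data set and test statistic may well differ.

```python
# Sketch of a per-error-bar-group significance test against the accepted
# value of c. The data here are synthetic placeholders, purely illustrative.
import numpy as np
from scipy import stats

C_ACCEPTED = 299792.458  # km/s (modern defined value)

rng = np.random.default_rng(0)
# Synthetic (value, error-bar) pairs in km/s.
values = C_ACCEPTED + rng.normal(20.0, 30.0, size=40)
errors = rng.choice([1000, 500, 200, 100, 50, 20, 10, 5, 1, 0.5], size=40)

# Group by quoted error-bar size and test each group against the accepted value.
for err in sorted(set(errors), reverse=True):
    group = values[errors == err]
    if len(group) < 2:
        continue
    t, p = stats.ttest_1samp(group, C_ACCEPTED)
    confidence = (1 - p) * 100
    print(f"±{err:>6} km/sec  n={len(group):2d}  confidence {confidence:5.1f}%")
```

The idea is simply that, bin by bin, the confidence that the group mean differs from the accepted value can be compared across error-bar sizes.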
In order to further test for suspicious sequences that might be a product of experimenter expectations, a search was made for consecutive data points that were both higher than the current value of c and at the same time decreasing with time. Taking into account that 58% of the data points are higher than the accepted value of c, we found that the occurrences of such sequences were close to the expected values. That is, there was no statistically significant deviation from what chance alone would produce.
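A minimal sketch of that sequence check, again with synthetic placeholder data; comparing against a binomial expectation for independent ordering is an assumption here, since overlapping pairs are not strictly independent.

```python
# Sketch: count consecutive pairs that are both above the accepted value of c
# AND decreasing with time, then compare with what independent ordering would
# predict. Synthetic placeholder data only.
import numpy as np
from scipy import stats

C_ACCEPTED = 299792.458  # km/s

rng = np.random.default_rng(1)
values = C_ACCEPTED + rng.normal(10.0, 50.0, size=60)  # time-ordered, synthetic

p_above = (values > C_ACCEPTED).mean()  # plays the role of the 58% figure above

# Observed: consecutive pairs both above c and decreasing.
pairs = list(zip(values[:-1], values[1:]))
observed = sum(1 for a, b in pairs if a > C_ACCEPTED and b > C_ACCEPTED and b < a)

# Expected fraction under independence: both above (p_above^2), decreasing (1/2).
p_pair = p_above ** 2 * 0.5
result = stats.binomtest(observed, n=len(pairs), p=p_pair)
print(f"observed {observed}/{len(pairs)} pairs, "
      f"expected fraction {p_pair:.3f}, p-value {result.pvalue:.3f}")
```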
Taking these various problems in the data into account, it must be concluded that the decrease in the measured values of c cannot be attributed to the increase in the precision of the measurements.
One problem I'm seeing is that Montgomery took the absolute value of the difference between each measurement and the present value of c without taking the error bars into account. The error bars give us a good picture of the relative weight each measurement deserves.
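A minimal sketch of that point, using standard inverse-variance weighting on a few illustrative numbers (not the historical data set itself):

```python
# Sketch: weight each measurement by its quoted error bar instead of treating
# every |c_measured - c_now| equally. Values below are illustrative only.
import numpy as np

C_ACCEPTED = 299792.458  # km/s

values = np.array([299798.0, 299796.0, 299793.1, 299792.6])  # km/s
errors = np.array([30.0, 10.0, 1.0, 0.2])                    # km/s

weights = 1.0 / errors**2
weighted_mean = np.sum(weights * values) / np.sum(weights)
weighted_err = np.sqrt(1.0 / np.sum(weights))

print(f"unweighted mean offset: {values.mean() - C_ACCEPTED:+.3f} km/s")
print(f"weighted   mean offset: {weighted_mean - C_ACCEPTED:+.3f} km/s "
      f"(± {weighted_err:.3f})")
```

The precise, small-error-bar measurements dominate the weighted result, which is exactly the information the unweighted absolute differences throw away.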
***
I'd agree that string theory is pretty iffy, and many scientists have indeed said so, since right now it isn't making too many predictions which we can test on the energy scales currently available to us.
Intuitively speaking, VSL (variable speed of light) theories don't make sense to me simply because they talk about variation in a dimensional quantity (i.e. c), which just isn't physically meaningful. For example, if I measure time in years and distance in light-years, then by definition the speed of light has never changed and will never change! What happens here is that dimensional quantities are really combinations of dimensionless quantities (such as the fine structure constant, the ratio of the electromagnetic force to the gravitational force, etc.), so that some methods of measuring a dimensional quantity actually measure, say, the fine structure constant, while others measure the ratio of the electron mass to the proton mass.

What I find unsettling is that Setterfield apparently has not read any of the literature on this, because he explicitly states that the fine structure constant has not changed, and yet he also states that "atomic time changes, while dynamical time has not"! The fine structure constant plays a large part (to the best of my knowledge) in subatomic transitions, and surely if it were constant, the speed of light as measured by atomic means (i.e. atomic clocks) would also be constant, and hence atomic methods of measuring the passage of time, such as radiometric dating, would be constant too, contrary to his claims. In fact, when scientists claimed that there were variations in the rate of nuclear decay measured in the Oklo natural reactor, they attributed them precisely to changes in the fine structure constant, not to anything else.

I'm afraid I can't really express this in the right formalisms (and if I could, nobody here would really be able to understand it, including the 19-year-old me!), but this theory just strikes me as physically unsound.
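For reference, the dimensionless combinations being referred to are quantities like the fine structure constant and the electron-to-proton mass ratio:

\[
\alpha \;=\; \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \;\approx\; \frac{1}{137.036},
\qquad
\frac{m_{e}}{m_{p}} \;\approx\; \frac{1}{1836.15}.
\]

Roughly speaking, a change in the dimensional quantity c is only observable insofar as some dimensionless combination like α changes; with α held fixed, an experiment comparing atomic clocks against atomic length standards has nothing to register.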
I also need to read up a bit on black hole collapse, because it seems to me that a VSL theory would make some pretty interesting predictions (i.e. that it should have been much harder to form black holes in the past than in the present, since the Schwarzschild radius would have been a lot smaller due to a larger c).
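The formula behind that black-hole remark is the Schwarzschild radius:

\[
r_{s} \;=\; \frac{2GM}{c^{2}},
\]

so, taken at face value, a larger past value of c would shrink r_s for a given mass M, which is the prediction being suggested.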
6. Regarding the behavior of the fine structure constant with time: If there was any fractional variation in the fine structure constant this should be very obvious from the observations of spectral lines in distant galaxies.
Setterfield: The behaviour of the fine structure constant has recently been re-assessed in this model. In a major paper undergoing review at the moment, it is deduced that the fine structure constant will be marginally greater in a gravitational field. This results from an approach to the ZPE and gravitation that is consistent with SED thinking and may allow a test to be made to determine which theory of gravitation is more correct.
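For the spectral-line point raised in question 6, the classic probe is the alkali-doublet method: the fine-structure splitting of a doublet scales roughly as α², so comparing the fractional splitting observed at redshift z with the laboratory value gives, to first order,

\[
\frac{\Delta\alpha}{\alpha} \;\approx\; \frac{1}{2}\left[\frac{(\delta\lambda/\lambda)_{z}}{(\delta\lambda/\lambda)_{0}} \;-\; 1\right],
\]

where δλ/λ is the doublet separation divided by the mean wavelength. This is why even a small fractional change in α would be expected to show up in quasar and galaxy spectra.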
CONCLUSION:
There is a problem which needs to be mentioned in closing, one which underlies much of the difficulty some are having with the work presented on these pages. Physics appears to have reversed a sequence which should not have been reversed, and in doing so has made several wrong choices in the latter part of the twentieth century. Those underlying the reviewer's criticisms have to do with the permeability of space, a mistaken idea about frequency in terms of the behavior of light, and the equations of Lorentz and Maxwell. As mentioned in point 1, permeability was related to the speed of light early in the twentieth century, but was later divorced from it and declared invariant. It was invariant by declaration, not by data, and this is the first backwards move which has influenced the reviewer's thinking here. Secondly, it has become accepted that the frequency of light is the basic quantity and that the wavelength is subsidiary. Until about 1960 it was the wavelength that was considered the basic quantity for measurement. However, since it had become easier to measure frequency with a greater degree of accuracy, the focus shifted from wavelength to frequency as the basic quantity, relegating wavelength to a subsidiary role. The data dictate something else, however: it is the wavelength which remains constant and the frequency which varies when the speed of light changes. This latter point was made plain by experimental data from the 1930s, and was commented on by Birge himself.
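For clarity, the relation at stake in the wavelength-versus-frequency point is simply

\[
c \;=\; f\,\lambda ,
\]

so if c changed while the wavelength λ were held constant, the entire change would have to appear in the frequency f; the disagreement is over which of the two factors, if either, is the fixed one.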
In a similar way, although both Lorentz and Maxwell formulated their equations before Einstein adopted and worked with them, it has become almost required to derive the formulas of both Lorentz and Maxwell in terms of Einstein's work. Properly done, it should be the other way around, and the work of both earlier men should be allowed to stand alone without Einstein's imposed conditions.
One final note: In the long run, it is the data which must determine the theory, and not the other way around. There are five anomalies cosmology cannot currently deal with in terms of the reigning paradigm. These are easily dealt with, however, when one lets the data go where it will. The original data are in the Report. As given in my lectures, the anomalies concern measured changes in Planck’s constant, the speed of light, changes in atomic masses, the slowing of atomic clocks, and the quantized redshift. Modern physics seems to be showing a preference for ignoring much of this in favor of current theories. That is not the way I wish to approach the subject.
The common factor for solving all five anomalies is increase through time of the zero point energy, for reasons outlined in “Exploring the Vacuum.”
http://www.setterfield.org/tworelativities.html

However, there is an inconsistency here, because the competing claims of the Lorentzian approach have not been considered. These claims of LR are equally well supported, but they have several important differences, one of them being that there is no universal speed limit such as SR claims for lightspeed. SR claims that one frame of reference, such as someone on a moving spacecraft somewhere out in the cosmos at position A, is indistinguishable from another frame of reference elsewhere in the cosmos at point B, even though they have a velocity relative to each other. The only thing the observers have in common is the fixed speed of light. The mathematical transformations that are done by the observer at A to work out what is happening at B will have their counterpart by the observer at B trying to work out what is happening at A. In other words, the relationships are reciprocal in SR as all frames of reference are equivalent, there being no preferred frame. This is one of the two very general postulates of SR, the other being the fixed speed of light.
But the Lorentz approach is different: it states that there is indeed a preferred frame of reference, so that the mathematical transformations need only be applied one way, namely to the moving body, not reciprocally to the preferred frame at rest. It is important to note that General Relativity (GR) is built on SR using only these one-way mathematical transformations relative to the local gravitational field, the center-of-mass reference frame [3]. Since this is basically the same as the Lorentzian approach, GR is just as consistent with LR as it is with SR [4].
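For reference, the transformations under discussion are the standard Lorentz transformations between a frame at rest and one moving with velocity v along the x-axis:

\[
x' \;=\; \gamma\,(x - vt), \qquad
t' \;=\; \gamma\!\left(t - \frac{vx}{c^{2}}\right), \qquad
\gamma \;=\; \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
\]

In SR these are applied reciprocally between any pair of inertial frames; in the Lorentzian reading described above they are applied one way only, from the preferred rest frame to the moving body.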
In the long run, it is the data which must determine the theory, and not the other way around.
Are there any good papers that have examined this? The ones cited by Wikipedia seem to have a lot more to do with Arp's weird QSO theories than with redshift quantization per se.

There are no quantised redshifts.
End of story.
The reason you don't see Tifft's stuff discussed much these days is because it is based on a flawed analysis (+ possible fraud) on 20 - 30 year old tiny data sets.
Check out papers by people like Salpeter, Terzian, Haynes and Newman.
This really is all much ado about absolutely nothing.