Shift on Red Shift


Assyrian

Basically pulling an Obama (Thanks Calminian!)
Mar 31, 2006
14,868
991
Wales
✟42,286.00
Faith
Christian
Marital Status
Married
We seem to have two choices here.

(1) Early experiments with very basic equipment were less accurate and contained more sources of error than the experimenters realised. The true value of c lay outside their estimated range of error.

(2) The error bars were accurate and reliable and are evidence, not of c decay, but of c fluctuating wildly.
 
Upvote 0

HSetterfield

Active Member
Dec 1, 2006
105
5
77
Oregon
Visit site
✟7,750.00
Faith
Christian
Marital Status
Married
Your comments are both bizarre and ignorant of the data. It is clear to me you have no interest in the reality of what has been measured, only in your own program/aims.

To Assyrian: you have not looked at the data yourself, have you?

I'll be very sporadic from here on, as I have a jammed-up calendar for a while. I do advise you both to pay attention to the data before making comments about what was done with it. You might also want to take a course in statistics and how data is handled. Another possibility is to read what the scientists themselves discussed regarding the data. But that would take work. You would actually have to go to a library and look up the old scientific journals. But you would find a lot. The decreasing speed of light was a frequently discussed topic in the first half of the twentieth century.
 
Upvote 0

shernren

you are not reading this.
Feb 17, 2005
8,463
515
38
Shah Alam, Selangor
Visit site
✟33,881.00
Faith
Protestant
Marital Status
In Relationship
I decided to have a little fun with spreadsheet software to simulate the effects of early, large-error measurements on "trends". It's something anybody can do at home.

1. I set up a column with an arithmetic series 0, 5, 10, ... , 250. I.e. 51 "measurements" were considered.
2. I set up another column with the "maximum error" each "year": y = 0.2e^(-.1x). Essentially, with each passing year the amount of "error" exponentially decreases.
3. I used a random number generator to simulate measurements being performed: z = (rand() - .5) * y + 0.5. I.e., the "speed of light" is 0.5, and a plus/minus error is tacked on according to the random number generated and the "maximum error" that year.
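
A rough Python equivalent of the spreadsheet recipe above, for anyone who wants to try it (numpy assumed; the constants 0.2, 0.1 and 0.5 are the ones from the steps, everything else is arbitrary):

```python
import numpy as np

rng = np.random.default_rng()

years = np.arange(0, 251, 5)            # 51 "measurement" points: 0, 5, ..., 250
max_error = 0.2 * np.exp(-0.1 * years)  # "maximum error" shrinks exponentially each "year"
true_c = 0.5                            # the underlying "speed of light"

# Each measurement is the true value plus a uniform error within +/- max_error/2,
# mirroring z = (rand() - .5) * y + 0.5 from step 3.
measurements = true_c + (rng.uniform(size=years.size) - 0.5) * max_error

# A straight line fitted to this pure noise often comes out with a non-zero
# slope, which is the kind of apparent "trend" described below.
slope, intercept = np.polyfit(years, measurements, 1)
print(f"fitted slope: {slope:+.6f} (the true slope is 0)")
```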

I got some pretty interesting graphs.
[attached image: decays.gif]


... in each case there was a visible trend (outlined in red), and it was not "fluctuations around a fixed mean" as expected. For each graph, if I removed a few "outliers" (circled red), about 10-20 but frequently less (i.e. 20-40% or less), the trend was even more "obvious". Attempt 5 was (and still is!) suspiciously tidy. (No, I did not keep generating graphs until a decay-ish curve showed up. I'd generated 5 simultaneously and then graphed them one by one; I had no way of knowing before I'd graphed no. 5 that it would look so eerily appropriate.)

And yet none of these graphs was actually generated by a non-constant trend. I would say (as a layman) that the earlier points contributed significantly to the apparent trend, because the error present in them was disproportionately large. And I have a feeling the same really applies to c-decay conclusions.
 
  • Like
Reactions: Mallon
Upvote 0

busterdog

Senior Veteran
Jun 20, 2006
3,359
183
Visit site
✟26,929.00
Faith
Christian
Marital Status
Married
We seem to have two choices here.

(1) Early experiments with very basic equipment were less accurate and contained more sources of error than the experimenters realised. The true value of c lay outside their estimated range of error.

(2) The error bars were accurate and reliable and are evidence, not of c decay, but of c fluctuating wildly.

I greatly disagree with this view of statistics.

If my stopwatch is off by 30 seconds, I can still get a very accurate average for my time in the mile. Even when I am practicing my Monty Python funny walk. The number of data points is the question, assuming that the measuring device does not have a bias.
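
As a quick illustration of the distinction being relied on here (purely random scatter averages out; a constant instrument bias does not), a minimal sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng()
true_mile_time = 300.0   # seconds; a hypothetical 5-minute mile

# Random (unbiased) error: the watch is sloppy by up to +/- 30 s on any given run.
sloppy = true_mile_time + rng.uniform(-30, 30, size=1000)

# Systematic error: the watch always reads about 30 s long.
biased = true_mile_time + 30 + rng.uniform(-1, 1, size=1000)

print(sloppy.mean())   # close to 300 - unbiased scatter averages away
print(biased.mean())   # close to 330 - averaging never removes a bias
```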

I took one stats course 20 years ago, and this st

All of this is dealt with in Montgomery's paper, noted above. Montgomery carefully went through his basis for selecting data as well. The graphic presentation of the data could be done with any data set, if the scale is adjusted accordingly. That makes this presentation possibly a bit aggressive.

Maybe this is just too much stuff all at once. It seems to require a little more detail work but that takes a lot of time on both sides.

I have to feel that Helen's reluctance to engage comes from the fact that the questions are already answered in the Montgomery paper.

What I am inclined to do at this point is finish the conversation in the creationist forum. It's a lot of stuff to pore over on an edgy topic. Maybe it just isn't the right kind of stuff for debate on a message board.
 
Upvote 0

busterdog

Senior Veteran
Jun 20, 2006
3,359
183
Visit site
✟26,929.00
Faith
Christian
Marital Status
Married
"Error bars" from the Montgomery paper:

http://www.ldolphin.org/cdkgal.html

To answer the third question, we analyzed the data by error bar size. If the decreases in the measured value of c are a result of increased measurement precision we would expect to see a decrease in the significance of the confidence levels as the true value of c is approached. The results are shown in Table 7.

[attached images: cdktab6.gif, cdktab7.gif]


Overall, 6 of 11 tests were significant at the 95% confidence level. There are three distinct subgroups in Table 7. The first, from ±1000 km/sec to ±100 km/sec, clearly shows decreasing confidence levels. In the ±50 km/sec to ±10 km/sec group the confidence level suddenly jumps to 92% and then starts to decrease again. In the third group, from ±5 km/sec to ±0.5 km/sec, the confidence levels are steady and significant. The results may appear at first to be ambiguous---both decreasing levels and steady significant ones. However, from what has been learned from earlier analysis we know the aberration values are systematically low and there are a significant number of them in the 60 to 200 km/sec range---just where the confidence levels are decreasing.
The dramatic drop in confidence levels from ±20 km/sec to ±10 km/sec and the equally dramatic rise between ±10 km/sec and ±5 km/sec tells us that there is a systematic problem with data which have error bars in that range. Kerr cell results provide 4 out of 6 of these data. It must be concluded that our earlier suspicions of systematic error in the Kerr cell measurements appear valid.
In order to further test for suspicious sequences that might be a product of experimenter expectations, a search was made for consecutive data where the points were both higher than the current value of c and at the same time decreasing with time. Taking into account that 58% of the data points are higher than the accepted value of c, we found that the occurrences of the above sequences were close to the accepted values. That is, there was no statistically significant deviation from the expected value.
Taking these various problems in the data into account it must be concluded that the decrease in the measurements of c cannot be attributed to the increase in the precision of the measurements.
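
In rough outline, that kind of error-bar-binned trend test could be sketched like this (a guess at the general shape only, not Montgomery's actual procedure; the p-value of a least-squares slope stands in for whatever significance test he used):

```python
import numpy as np
from scipy.stats import linregress

def trend_confidence(years, c_values):
    """Confidence (%) that c shows a linear trend with time, via the slope's p-value."""
    return 100 * (1 - linregress(years, c_values).pvalue)

def binned_trend_test(years, c_values, error_bars, bin_edges):
    """Group measurements by error-bar size and test each group for a time trend."""
    years, c_values, error_bars = map(np.asarray, (years, c_values, error_bars))
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (error_bars > lo) & (error_bars <= hi)
        if in_bin.sum() >= 3:
            conf = trend_confidence(years[in_bin], c_values[in_bin])
            print(f"error bars {lo}-{hi} km/s: n={in_bin.sum()}, trend confidence {conf:.0f}%")

# Hypothetical usage, with bins echoing the ranges quoted above:
# binned_trend_test(year_col, c_col, error_col, [0.5, 5, 10, 50, 100, 1000])
```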
 
Upvote 0

shernren

you are not reading this.
Feb 17, 2005
8,463
515
38
Shah Alam, Selangor
Visit site
✟33,881.00
Faith
Protestant
Marital Status
In Relationship
One problem I'm seeing is that Montgomery took the absolute value of the difference-from-present-value of c without taking into account the error bars. The error bars can give us a good picture of the relative importance to put on the measurement errors.

For example, let's say I set the present speed of light at 0 (funky, I know) and measurements give me 10, 5, 0. "Wow, a trend!" But let's say the error bars are respectively 50, 10, 5. Then, expressed in units of their own error bars, the measurements are 0.2, 0.5, and 0. The trend evaporates, and we can see that a far more likely cause is systematic error present in the individual experiments.
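
In code, the normalization I'm describing is just "deviation divided by its own error bar" (numpy assumed, numbers from the toy example above):

```python
import numpy as np

present_value = 0.0                       # toy present-day value of "c"
measured = np.array([10.0, 5.0, 0.0])     # toy measurements
error_bars = np.array([50.0, 10.0, 5.0])  # their quoted error bars

deviation = np.abs(measured - present_value)
normalized = deviation / error_bars       # deviation in units of its own error bar
print(normalized)                         # [0.2 0.5 0. ] - the apparent trend is gone
```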

So I tried the same thing with the actual 120 points that Montgomery used (taking it on good faith that the other 73 points were rejected for valid reasons), and applying the corrections used by Lambert himself. (Dataset used: http://www.ldolphin.org/cdata.html ; allcdata17.xls was cross-referenced with Table II). And guess what?

[attached image: cdiff.jpg]


(The majorly off-the-chart point in the second graph is roughly 55.) Whatever trend there was in the first graph evaporates when the relative magnitudes of the errors are taken into account.

This part of the paper also seems relatively suspect:

Norman and Setterfield also analyzed (in addition to values of c), measurements of the charge on the electron, e, the specific charge, e/mc, the Rydberg constant, R, the gyromagnetic ratio, the quantum Hall resistance, h/e2, 2e/h, and h/e, various radioactive decay constants, and Newton's gravitational constant G.
Three of these quantities Norman and Setterfield found to be constant, namely e, R, and G. These constants are either independent of time or independent of atomic processes. The other five quantities, which are related to atomic phenomena and which involve time in their units of measurement, they found to trend, with the exception of the quantum Hall resistance.

Hopefully this will answer the question I asked as to which dimensionless constants Setterfield proposes are changing.
 
  • Like
Reactions: random_guy
Upvote 0

busterdog

Senior Veteran
Jun 20, 2006
3,359
183
Visit site
✟26,929.00
Faith
Christian
Marital Status
Married
One problem I'm seeing is that Montgomery took the absolute value of the difference-from-present-value of c without taking into account the error bars. The error bars can give us a good picture of the relative importance to put on the measurement errors.

***

The major problems with error bars would be a demonstration of bias in the instrument and the number of data points. Montgomery dealt with both these issues.

Even in the more recent part of your graph, the trend is evident.

The question then becomes, what is the degree of significance? Clearly the data shows significance. But a certain degree of probability is required. 95% is a lot of significance, understandably. Since I am not in charge of publishing on science or reviewing a doctoral thesis, I would find 75% interesting.

I am not sure where your reworking of the data would go in this respect, or what Dolphin found, though he reported that the trend remained even after including all the data.

That the curve approaches a sine wave, with us on the thin end, is maybe grounds for questions and the need to retest. I have a hard time finding that it is worthy of incredulity. There is a trend. As you can probably see from the website, many responses have been simply incredulous, if not nasty, and lacked the more even tone of your posts in this thread. Since then, this has been a continuing problem.

The essence of what we are dealing with is a theory built upon a number of different bodies of evidence.

Planck's constant, light speed, and redshift are part and parcel of the theory.

Part of the problem of dealing with this theory is its breadth and the subject matter. In a prior post, you questioned Setterfield's model for the behavior of tachyon pairs, which is an elusive "substance." That's fine.

But, by the same token, physicists are quite tolerant of discussions about things like string theory. Most of what I have read about it is poetry. (Some of it Vogon poetry, I might add.) By the nature of its subject, it necessarily messes with the distinction between calculations and metaphors. What else would you expect of 4+ dimensional physics? I would submit that this is at least, no less crazy.

That the subject matter is larger (i.e., alternate explanations for the appearance of all matter) is again the philosophical discussion we visited about the Big Bang. I find enough inherent speculation in this field to seat many a theorist at the table.

My simple mathematical understanding of how an average works also just brings me back to how we view error bars. Your argument may be a valid question of weight. And the size of the error bar may raise a question about bias in the instrument.

But, as Montgomery notes, apparently lots of different instruments have a similar bias. This does not eliminate your argument going to the weight of the data, but it does respond to it.

Quite honestly, I don't understand it well enough to accept it as proof of YEC. I think it has really opened up a lot of questions about conventional science and allowed us YEC guys to put a finer point on lots of our objections to TE and atheistic naturalism. That being said, it is the scientific model I like best and that I believe most. However, I take myself less seriously all the time and am thankful for the freedom to be able be that way.

As for the literal words of Gen. 1,2,3, I just accept that as reality, even the parts I don't understand.
 
Upvote 0

shernren

you are not reading this.
Feb 17, 2005
8,463
515
38
Shah Alam, Selangor
Visit site
✟33,881.00
Faith
Protestant
Marital Status
In Relationship
I'd agree that string theory is pretty iffy, and many scientists have indeed said so, since right now it isn't making too many predictions which we can test on the energy scales currently available to us.

Intuitively speaking, VSL (variable speed of light) theories don't make sense to me simply because they talk about variation in a dimensional quantity (i.e. c) which just isn't physically meaningful. For example, if I measure time in years, and distance in light-years, then by definition the speed of light has never changed and will never change! What happens here is that dimensional quantities are really combinations of dimensionless quantities (such as the fine structure constant, the electromagnetic force / gravitational force ratio, etc.) so that some methods to measure dimensional quantities actually measure, say, the fine structure constant, while other methods to measure dimensional quantities measure the ratio of electron mass to proton mass. What I find unsettling is that Setterfield apparently has not read any of the literature on it because he explicitly states that the fine structure constant has not changed; and yet he states that "atomic time changes, while dynamical time has not"! The fine structure constant plays a large part (to the best of my knowledge) in subatomic transitions and surely if it were constant, so would the speed of light be constant as measured by atomic parameters (i.e. atomic clocks), and hence atomic methods of measuring the passing of time should also be constant i.e. radiometric dating, contrary to his claims. In fact, when scientists claimed that there were variations in the rate of nuclear decay measured in the Oklo natural reactor, they attributed it precisely to changes in the fine structure constant, not anything else. I'm afraid I can't really express this in the right formalisms (and if I could, nobody here would really be able to understand it, including the 19-year-old me!), but this theory just strikes me as being physically unsound.
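
(For reference, since the argument above leans on it: the textbook definition of the fine structure constant is α = e²/(4πε₀ħc), so with e, ε₀ and ħ held fixed, any change in c would show up directly as a change in α. That is standard physics, not anything specific to Setterfield's model.)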

I also need to read up a bit on black hole collapse, because it seems to me that a VSL theory would make some pretty interesting predictions (i.e. that it should have been much harder to form black holes in the past than in the present, since the Schwarzschild radius would have been a lot smaller due to a larger c).
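
A back-of-the-envelope sketch of that last scaling, using the standard Schwarzschild formula r_s = 2GM/c² and an arbitrary factor of 10 for the hypothetically larger past c (the numbers are illustrative only):

```python
# Illustrative only: standard Schwarzschild radius r_s = 2*G*M/c^2,
# so r_s scales as 1/c^2 if only c is allowed to change.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
C_NOW = 2.998e8    # present speed of light, m/s

def schwarzschild_radius(mass_kg, c):
    return 2 * G * mass_kg / c**2

r_now = schwarzschild_radius(M_SUN, C_NOW)
r_past = schwarzschild_radius(M_SUN, 10 * C_NOW)   # hypothetical 10x larger past c
print(r_now, r_past, r_now / r_past)               # r_past is 100x smaller
```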
 
Upvote 0

busterdog

Senior Veteran
Jun 20, 2006
3,359
183
Visit site
✟26,929.00
Faith
Christian
Marital Status
Married
I'd agree that string theory is pretty iffy, and many scientists have indeed said so, since right now it isn't making too many predictions which we can test on the energy scales currently available to us.

Intuitively speaking, VSL (variable speed of light) theories don't make sense to me simply because they talk about variation in a dimensional quantity (i.e. c) which just isn't physically meaningful. For example, if I measure time in years, and distance in light-years, then by definition the speed of light has never changed and will never change! What happens here is that dimensional quantities are really combinations of dimensionless quantities (such as the fine structure constant, the electromagnetic force / gravitational force ratio, etc.) so that some methods to measure dimensional quantities actually measure, say, the fine structure constant, while other methods to measure dimensional quantities measure the ratio of electron mass to proton mass. What I find unsettling is that Setterfield apparently has not read any of the literature on it because he explicitly states that the fine structure constant has not changed; and yet he states that "atomic time changes, while dynamical time has not"! The fine structure constant plays a large part (to the best of my knowledge) in subatomic transitions and surely if it were constant, so would the speed of light be constant as measured by atomic parameters (i.e. atomic clocks), and hence atomic methods of measuring the passing of time should also be constant i.e. radiometric dating, contrary to his claims. In fact, when scientists claimed that there were variations in the rate of nuclear decay measured in the Oklo natural reactor, they attributed it precisely to changes in the fine structure constant, not anything else. I'm afraid I can't really express this in the right formalisms (and if I could, nobody here would really be able to understand it, including the 19-year-old me!), but this theory just strikes me as being physically unsound.

I also need to read up a bit on black hole collapse, because it seems to me that a VSL theory would make some pretty interesting predictions (i.e. that it should have been much harder to form black holes in the past than in the present, since the Schwarzschild radius would have been a lot smaller due to a larger c).

One of my projects is to go back in and try to get my head around the difference between Lorentz and Einstein in terms of relativity. I think Lorentz believed there were non-relative points of reference, with respect to which the relative time of other points of reference would vary based upon the usual gravity, acceleration, etc. I believe Einstein was relatively more absolutist with respect to relativity and held that all positions were relative. Something like that. Perhaps this variation would be more meaningful in relativity as viewed by Lorentz?

I am laboring with relativity and how it would fit with this theory.

A nice fellow, a chemist who knows Dolphin, sent me an email suggesting that the difference in how time would be measured under Setterfield's way of thinking would change our view of what the six days of creation represent. Not sure exactly why or how that would be different from the classic view of relativity.

I will go in and have a look at the fine structure constant.
 
Upvote 0

busterdog

Senior Veteran
Jun 20, 2006
3,359
183
Visit site
✟26,929.00
Faith
Christian
Marital Status
Married
http://www.setterfield.org/criticalreview.htm

6. Regarding the behavior of the fine structure constant with time: If there was any fractional variation in the fine structure constant this should be very obvious from the observations of spectral lines in distant galaxies.

Setterfield: The behaviour of the fine structure constant has recently been re-assessed in this model. In a major paper undergoing review at the moment, it is deduced that the fine structure constant will be marginally greater in a gravitational field. This results from an approach to the ZPE and gravitation that is consistent with SED thinking and may allow a test to be made to determine which theory of gravitation is more correct.

CONCLUSION:
There is a problem which needs to be mentioned in closing; a problem which is underlying much of the problem some are having with the work presented on these pages. Physics has currently seemed to reverse a sequence which should not have been reversed, and in doing so has made several wrong choices in the latter part of the twentieth century. Those that are underlying the reviewer's criticisms have to do with the permeability of space, a mistaken idea about frequency in terms of the behavior of light, and the equations of Lorentz and Maxwell. As mentioned in point 1, permeability was related to the speed of light early in the twentieth century, but divorced from it later and declared invariant. It was invariant by declaration, not by data, and this is the first backwards move which has influenced the reviewer's thinking here. Secondly, it has become accepted that the frequency of light is the basic quantity and that it is the wavelength which is subsidiary. Until about 1960 it was the wavelength that was considered the basic quantity for measurement. However since it had become easier to measure frequency with a greater degree of accuracy, the focus shifted from choosing wavelength as the basic quantity to using frequency in its stead, thus relegating wavelength to a subsidiary role. The data dictates something else, however. It is wavelength which remains constant and the frequency which varies when the speed of light changes. This latter point was made plain by experimental data from the 1930’s, and was commented on by Birge himself.

In a similar way, although both Lorentz and Maxwell formulated their equations before Einstein adopted and worked with them, it has become almost required to derive the formulas of both Lorentz and Maxwell in terms of Einstein’s work. Properly done, it should be the other way around, and the work of both earlier men should be allowed to stand alone without Einstein’s imposed conditions.

One final note: In the long run, it is the data which must determine the theory, and not the other way around. There are five anomalies cosmology cannot currently deal with in terms of the reigning paradigm. These are easily dealt with, however, when one lets the data go where it will. The original data are in the Report. As given in my lectures, the anomalies concern measured changes in Planck’s constant, the speed of light, changes in atomic masses, the slowing of atomic clocks, and the quantized redshift. Modern physics seems to be showing a preference for ignoring much of this in favor of current theories. That is not the way I wish to approach the subject.

The common factor for solving all five anomalies is increase through time of the zero point energy, for reasons outlined in “Exploring the Vacuum.”

Edited to add:

However, there is an inconsistency here, because the competing claims of the Lorentzian approach have not been considered. These claims of LR are equally well supported, but they have several important differences, one of them being that there is no universal speed limit such as SR claims for lightspeed. SR claims that one point of reference, such as someone on a moving spacecraft somewhere out in the cosmos at position A, is indistinguishable from another frame of reference elsewhere in the cosmos at point B, even though they have a velocity relative to each other. The only thing the observers have in common is the fixed speed of light. The mathematical transformations that are done by the observer at A to work out what is happening at B will have their counterpart by the observer at B trying to work out what is happening at A. In other words, the relationships are reciprocal in SR as all frames of reference are equivalent, there being no preferred frame. This is one of the two very general postulates of SR, the other being the fixed speed of light.

But the Lorentz approach is different: it states that there is indeed a preferred frame of reference, so that the mathematical transformations only need apply one way, namely to the moving body, not reciprocally to the preferred frame at rest. It is important to note that General Relativity (GR) is built on SR using only these one-way mathematical transformations relative to the local gravitational field, the center-of-mass reference frame [3]. Since this is basically the same as the Lorentzian approach, then GR is just as consistent with LR as it is with SR [4].
http://www.setterfield.org/tworelativities.html
 
Upvote 0
S

Servant222

Guest
In the long run, it is the data which must determine the theory, and not the other way around.

A theory must also be consistent with all the data, and may need to be adjusted if new data becomes available. And one favoured theory doesn't mean that alternative explanations have been disproven, only that the current evidence best supports the given theory.
 
Upvote 0

shernren

you are not reading this.
Feb 17, 2005
8,463
515
38
Shah Alam, Selangor
Visit site
✟33,881.00
Faith
Protestant
Marital Status
In Relationship
There are no quantised redshifts.

End of story.
Are there any good papers that have examined this? The ones cited by Wikipedia seem to have a lot more to do with Arp's weird QSO theories than with redshift quantization per se.
 
  • Like
Reactions: busterdog
Upvote 0

HSetterfield

Active Member
Dec 1, 2006
105
5
77
Oregon
Visit site
✟7,750.00
Faith
Christian
Marital Status
Married
Tifft's results were so controversial that several groups of astronomers set out to prove that they were wrong by gathering data on red shifts more broadly and from a wider variety of galaxy types. To the surprise of the would-be disprovers, they found evidence for the same red-shift quantization that Tifft had reported. For example, a group of astronomers associated with the Royal Observatory at Edinburgh, Scotland, examined 89 spiral galaxies picked at random and found a periodic bunching of red shifts in their data that was similar to the 72 km/s intervals found by Tifft. The data they used came from many different observatories and many different telescopes, and it is therefore unlikely that some instrumental effects or systematic errors produce the observed red-shift quantization. The quantized red-shift phenomenon is not exclusively a property of the visible light spectrum of stars. Recent results from precision radio-telescope observations of spiral galaxies also appear to support Tifft's results. The quantized red-shift phenomenon won't go away. Astronomers are coming to accept it as a real phenomenon.
from http://www.npl.washington.edu/av/altvw68.html

Dr. Tifft's discussion of red-shift anomalies was published with seeming reluctance in the Astrophysical Journal in the mid 1980s, with a rare editorial note pointing out that the referees "neither could find obvious errors with the analysis nor felt that they could enthusiastically endorse publication."
After Dr. Tifft's initial publication, several astronomers devised extensive experiments in attempts to prove him wrong. Among them, two Scottish astronomers, Bruce Gutherie and William Napier from the Royal Observatory in Edinburgh, observed approximately 300 galaxies in the mid 1990s. They found, to their surprise, confirmation of quantum banding of red-shift data.
They also had difficulty publishing their data. It has been reported that the prestigious journal Astronomy and Astrophysics refused publication until an additional set of observations from 97 other spiral galaxies was included. A Fourier analysis of the 302 early data points, and the subsequent total of 399 data points, strongly confirmed the quantum shifts.
from http://evolution-facts.org/Speed of Light.htm


The analysis of dwarf irregulars was revised and improved when an extensive 21-cm redshift survey of dwarf galaxies was published by J. Richard Fisher and R. Brent Tully. Once the velocity of the solar system was accounted for, the irregulars in the Fisher-Tully Catalogue displayed an extraordinary clumping of redshifts. Instead of spreading smoothly over a range of values, the redshifts appeared to fall into discrete bins separated by intervals of 24 km per second, just 1/3 of the original 72 km per second interval. The Fisher-Tully redshifts are accurate to about 5 km per second. At this small level of uncertainty the likelihood that such clumping would randomly occur is just a few parts in 100,000.
Large-scale redshift quantization needed to be confirmed by analyzing redshifts of an entirely different class of objects. Galaxies in the Fisher-Tully catalogue that showed large amounts of rotation and interval motion (the opposite extreme from the dwarf irregulars) were studied. Remarkably, using the same solar-motion correction as before, the galaxies' redshifts again bunched around certain specific values. But this time the favored redshifts were separated by exactly 1/2 of the basic 72 km per second interval. This is clearly evident. Even allowing for this change to a 36 km per second interval, the chance of accidentally producing such a preference is less than 4 in 1000. It is therefore concluded that at least some classes of galaxy redshifts are quantized in steps that are simple fractions of 72 km per second.
from http://www.cs.unc.edu/~plaisted/ce/redshift.html

Then there is also
http://www.astr.ua.edu/keel/galaxies/arp.html

Actually, there is a LOT more. Some of it gets really weird, but some is good solid science. I have not referenced my husband in the above, although one or two of the articles do, I think. But you wanted to know if there are other articles and yes, there are actually hundreds.
 
Upvote 0

KerrMetric

Well-Known Member
Oct 2, 2005
5,171
226
64
Pasadena, CA
✟6,671.00
Faith
Non-Denom
Marital Status
Married
Politics
US-Libertarian
Quite simply the Tifft data is a case of small number statistics and researcher incompetence in setting up an experiment.

When the galaxy pairs in Tifft's initial study were analysed with increased velocity accuracy the supposed periodicity decreased.

Tifft, Napier and Guthrie cannot even agree at various times in the literature on the periodicity, and at times have claimed periodicities at a level which is impossible given the velocity accuracy of the data.

Tifft's statistical procedure (flawed) even shows periodicities in random data when tested by Newman, Haynes and Terzian.


And most importantly of all - in much larger surveys in recent years no such periodicities are present - and the one famous case of showing periodicities was shown to have been an artifact of the data processing procedure - i.e. a poor choice of window function for the FFTs.

The reason you don't see Tifft's stuff discussed much these days is because it is based on a flawed analysis (+ possible fraud) on 20 - 30 year old tiny data sets.

Check out papers by people like Salpeter, Terzian, Haynes and Newman.


This really is all much ado about absolutely nothing.
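
For what it's worth, the small-number-statistics point is easy to see for yourself. Here is a minimal sketch (not a reproduction of any published analysis): generate a modest set of purely random "velocity differences", fold them modulo a range of trial periods, and score the bunching with a Rayleigh statistic. Some trial period will almost always look impressive unless the number of trials is accounted for.

```python
import numpy as np

rng = np.random.default_rng()

# 40 purely random "velocity differences" (km/s) - no periodicity built in.
velocities = rng.uniform(0, 1000, size=40)

def rayleigh_power(values, period):
    """How strongly the values bunch in phase when folded at this period."""
    phases = 2 * np.pi * (values % period) / period
    return (np.cos(phases).sum() ** 2 + np.sin(phases).sum() ** 2) / len(values)

trial_periods = np.arange(10.0, 100.0, 0.5)
powers = np.array([rayleigh_power(velocities, p) for p in trial_periods])
best = trial_periods[powers.argmax()]
print(f"best-looking 'periodicity': {best:.1f} km/s, Rayleigh power {powers.max():.1f}")
```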
 
  • Like
Reactions: Sphere
Upvote 0

HSetterfield

Active Member
Dec 1, 2006
105
5
77
Oregon
Visit site
✟7,750.00
Faith
Christian
Marital Status
Married
From 1976 onwards, Tifft pointed out that redshift differences between galaxies, or pairs of galaxies, were not smooth, but went in jumps, or were quantized. The Coma cluster of galaxies had bands of redshift running through the whole cluster. In several instances a quantum jump in redshift actually passed through some galaxies. The quantization that Tifft originally noted was 72.46 km/sec. And up to thirteen multiples of that quantity were found. Later work established that there was a more basic quantization.

Tifft's quantizations must not be confused with another effect which has been noted with quasars and very distant galaxies. The quantizations show that redshifts increase in a series of steps with constant redshifts between the steps. By contrast, distant quasars/galaxies show large scale clustering at preferred redshifts. Quasar periodicities were first noticed by the Burbidges in 1967 and given a formula by Karlsson from his observations up until 1977. Other astronomers generally agree that "if you looked at all quasars known, that preferred values of redshift were apparent." (Halton Arp, Quasars, Redshifts, and Controversies, Interstellar Media, Berkeley, 1987, p. 14)

In these cases, the redshift periodicity was very large scale. It peaked at 16,940 Km/sec. This is much greater than Tifft's basic step of 8/3 km/sec. It is important to distinguish between these two separate phenomena. The large scale periodicities simply represent large numbers of objects at nearly the same redshift, and so reflect clustering of galaxies and quasars. By contrast, the Tifft quantizations are a small scale effect involving differences in redshifts between galaxies in a cluster. It has nothing to do with numbers of galaxies at a given redshift. Quantization has everything to do with discrete jumps in redshifts and constant redshifts between jumps. This gives the appearance of bands of redshifts going through a whole cluster of galaxies, with quantum jumps of redshifts at the edge of each band. Because the redshift differences are small, it is hard to discern the Tifft quantizations at high redshifts, where techniques are less sensitive. Thus, redshift quantizations and periodicities are two separate things.

-------------

The above was quoted to me (by Barry) while I typed, from an article Barry has not yet had published.
 
Upvote 0

busterdog

Senior Veteran
Jun 20, 2006
3,359
183
Visit site
✟26,929.00
Faith
Christian
Marital Status
Married
The reason you don't see Tifft's stuff discussed much these days is because it is based on a flawed analysis (+ possible fraud) on 20 - 30 year old tiny data sets.

Check out papers by people like Salpeter, Terzian, Haynes and Newman.


This really is all much ado about absolutely nothing.

How far are you willing to go with that?

In my experience as an outsider, I see the field of discussion is often regarded as a private fight in which admission is selective. When I see harshly dismissive rhetoric, I usually find this "field" phenomenon.

If you are a member of the club, you are free to slug it out on basic premises. However, you cannot challenge a basic premise unless you are in the club.

So, what I would like to know is whether there is any dispute within the scientific community about such matters. Because, my next step is to go into the literature and find that indeed lots of accepted scientists are questioning the same basic premise.

Am I right?

And, perhaps you could also tell me if you have a definition of quantization and periodicity.
 
Upvote 0
Status
Not open for further replies.