The essence of David’s argument can be found in Figures 2 and 3 of his paper:
Supernova type Ia light curves are well defined: the explosion rises from first light to peak brightness, usually over about 10 to 15 days, then fades rapidly from the peak, typically dropping one magnitude in 15 days before dimming more gradually over several months. Because the shape of the curve is known, even though we generally observe the event no more than once a day, a collection of a half dozen or so observations over a few weeks allows the complete light curve to be constructed from points collected near the peak and during a few weeks of fading.
Supernova events near us are used to construct ‘rest frame’ light curve templates. These templates are then compared with the data collected during a new event, and the magnitude of the new event is estimated from the best light-curve fit.
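To make that template-fitting step concrete, here is a minimal sketch, not David’s actual code: the function name, the toy parabolic template, and the stretch grid are all hypothetical. It finds the stretch factor that best matches a handful of observed points to a rest-frame template.

```python
import numpy as np

def fit_stretch(template_t, template_mag, obs_t, obs_mag):
    """Fit a rest-frame template to observed points by trying a grid
    of stretch factors and keeping the one with the smallest squared
    residuals. Illustrative sketch only; real fits also float the
    peak date and the magnitude zero point."""
    best_s, best_err = 1.0, np.inf
    for s in np.linspace(0.5, 2.0, 151):
        # Stretch the template's time axis by s, then interpolate
        # it onto the observed epochs.
        model = np.interp(obs_t, template_t * s, template_mag)
        err = np.sum((model - obs_mag) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

# Toy rest-frame template (parabolic magnitude dip around day 10)
# and a few observations drawn from the same shape stretched by 1.3:
template_t = np.linspace(0.0, 60.0, 121)
template_mag = ((template_t - 10.0) / 10.0) ** 2
obs_t = np.array([0.0, 5.0, 13.0, 20.0, 30.0, 40.0])
obs_mag = ((obs_t / 1.3 - 10.0) / 10.0) ** 2
stretch = fit_stretch(template_t, template_mag, obs_t, obs_mag)
```

The recovered stretch is the light curve “width” that Figure 2 plots for each event.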
For light curves at cosmological distances, researchers normally correct for time dilation by shortening the time between the collected data points by the relativistic time dilation factor (1+z)^-1; a light curve at a redshift of one, say, is rescaled on the x-axis by a factor of ½. There is also a magnitude correction for time dilation, but since type Ia supernova events are treated as distant standard candles, it is the distance to the event that is adjusted, not the magnitude. (Any error in the way the distance is calculated translates into an error in the inferred size of the universe.)
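The standard rescaling can be sketched in a couple of lines; the helper name here is hypothetical, but the arithmetic is exactly the (1+z)^-1 compression described above.

```python
import numpy as np

def rescale_to_rest_frame(t_observed, z):
    """Apply the conventional time-dilation correction: compress the
    observed epochs by the factor (1 + z)**-1 to recover rest-frame
    time."""
    return np.asarray(t_observed, dtype=float) / (1.0 + z)

# At z = 1 the observed time axis is compressed by a factor of 1/2:
t_obs = np.array([0.0, 10.0, 30.0])   # days since first light, observer frame
t_rest = rescale_to_rest_frame(t_obs, z=1.0)
```

David’s point is about what happens to the width distribution when this correction is, or is not, applied.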
In David’s paper, Figure 2 plots the light curve widths of type Ia supernovae as they explode over cosmic time. David has used a templating process just like researchers in the field, but without correcting for time dilation. What he has plotted are the light curve widths in multiple wavelengths versus time.
Notice that they are almost, but not quite, normally distributed about the x-axis, which is what you would expect to see if there were a small selection effect toward brighter events with increasing distance. But this is NOT the distribution one expects to see if redshifted space is also corrected for relativistic effects; that expectation is the red line in David’s plot. If supernova events are consistent over time, the light curve widths should be normally distributed about the red line. David is arguing that it is unreasonable to accept the red line as the normalizing standard in cosmology when natural events fail to follow it; they are not even close.
There is no need to examine the statistical significance of David’s plot: the observed events fail to follow the ‘red line’ so dramatically that a gross error in the way supernovae are analyzed is obvious. Cosmologists such as Ned Wright are well aware of this phenomenon, and to the best of my knowledge they are still trying to understand it.
Figure 3 is a plot of the calculated absolute magnitude of supernova events when their light curve widths are correlated with local events. Here again it is clear that without time dilation included in the magnitude calculation, the distant supernova events have very nearly the same average intensity as local events; but when time dilation is included in the calculation of the intensity, the absolute magnitude appears to decrease dramatically, if not absurdly, as we look back into the time frame of these cosmic events.
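The size of the time-dilation term in the intensity calculation is easy to state: dilation reduces the photon arrival rate by a factor of (1+z), so correcting for it brightens the inferred source by 2.5·log10(1+z) magnitudes. A minimal sketch of just that term (the function name is hypothetical, and this is only one piece of a full K-correction):

```python
import math

def dilation_magnitude_shift(z):
    """Magnitude change attributable to the (1+z) time-dilation
    factor alone: the photon arrival rate is reduced by (1+z), so
    correcting for it brightens the inferred magnitude by
    2.5 * log10(1+z)."""
    return 2.5 * math.log10(1.0 + z)

# At z = 2 this single term shifts the inferred absolute magnitude
# by roughly 1.2 magnitudes brighter.
shift = dilation_magnitude_shift(2.0)
```

Whether that shift belongs in the calculation is exactly what is at issue in Figure 3.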
Again, these are not new observations, just an extension of the ‘weirdness’ in supernova data that has persisted for two decades. A similar trend is apparent in gamma-ray burst data, but it is currently too widely scattered to support hard conclusions.