
Eric Lerner is presenting his recent paper on a static universe on PhysicsForums

Michael

FYI, Eric Lerner is presenting his most recent static universe paper over on PhysicsForums at this link:

https://www.physicsforums.com/threads/o ... as.943111/

Here's a link to his recently published and peer-reviewed paper:

https://academic.oup.com/mnras/advance- ... 28/4951333

You can find a free copy of that paper on arXiv:

[1803.08382] Observations contradict galaxy size and surface brightness predictions that are based on the expanding universe hypothesis

You can also find an earlier test, and slightly different presentation of Lerner's model here:

[1405.0275] UV surface brightness of galaxies from the local Universe to z ~ 5

A couple of people have pointed out to me here that Lerner's model was tested in this paper and failed, whereas Holushko's static universe/tired light model passes that same test with flying colors:

[1312.0003] Alcock-Paczynski cosmological test

Since they are both based on "tired light" models, I originally assumed that Lerner's paper had also passed the same test that Holushko's model passed, but apparently I was mistaken because they're evidently using slightly different mathematical models of "tired light". Lerner does, however, address in his latest paper some of the mistakes made in previous mainstream analyses.

So far it's been an interesting and highly professional discussion at PhysicsForums. It's very informative and well worth checking out if you have some time.

Lerner's paper concludes the following about the mainstream galaxy size-evolution models:

Eric Lerner said:
Predictions based on the size-evolution, expanding-universe hypothesis are incompatible with galaxy size data for both disk and elliptical galaxies. For disks, the quantitative predictions of the Mo et al theory are incompatible at a 5-sigma level with size data, as is any model predicting a power-law relationship between H(z) and galaxy radius. For ellipticals, a power law of H(z) does fit the data, but only with an exponent much higher than that justified by the Mo et al theory. All three mechanisms proposed in the literature-- “puffing up”, major and minor mergers—make predictions that are contradicted by the data, requiring either gas fractions or merger rates that are an order of magnitude greater than observations. In addition, any size evolution model for ellipticals leads to dynamical masses that, given the observed velocity dispersions, are smaller than stellar masses, a physical impossibility.
Contrary to some other analysis, we find that the r-z relationships for elliptical and disk galaxies are identical. The resolution-size effect must be taken into account for valid conclusions, and that effect is larger for disk galaxies that have smaller angular radii, either because they are observed at higher z or because they are observed at longer rest-frame wavelengths. The identical size evolution of disks and ellipticals appears as a very large and unexplained coincidence in the expanding-universe model.
In contrast, the static Euclidean universe (SEU) model with a linear distance-z relationship is in excellent agreement with both disk and spiral size data, predicting accurately no change in radius with z. The exact agreement of the SEU predictions with data could also only be viewed as an implausibly unlikely coincidence from the viewpoint of the expanding universe hypothesis. The contradictions with impossibly small dynamic masses are also eliminated with the non-expanding universe model.
 

sjastro

Michael said:
FYI, Eric Lerner is presenting his most recent static universe paper over on PhysicsForums at this link: https://www.physicsforums.com/threads/o ... as.943111/ [...]
Lerner’s assumption of a linear distance-z relationship, dₚ = cz/H₀, where dₚ is the proper distance, c the speed of light and H₀ Hubble’s constant, is only valid for very small z.

Lerner must think Einstein’s Special Relativity is wrong.
Hubble’s law is v = H₀dₚ, where v is the recession velocity.
Since Lerner’s Universe model is static, v is the recession velocity of galaxies moving in space, while in the expansion model v is the expansion velocity of space-time, which can exceed c.
Not only can objects moving in space not exceed c, but as they approach c relativistic effects need to be taken into account, and the distance-z relationship is no longer linear.

In fact the correct formula, to second order in z, is dₚ ≈ (c/H₀)[z - 0.5(1+q₀)z²].
q₀ is the deceleration parameter; explaining it properly requires a course on cosmology and is outside the scope of this post.
It is not a constant and varies according to the distribution of galaxies in space-time.
For example, in the LSC (Local Supercluster) q₀ = -1 and the equation reduces to dₚ = cz/H₀, which makes it the only place in the Universe where the linear relationship can apply.

Taking for example q₀ = -0.5, the correct formula deviates from the linear relationship as z increases, as shown:

[Lerner.jpg: plot of distance against z comparing the linear relationship with the second-order formula]

The vertical and horizontal axes are distance and z respectively.
The solid line is the linear relationship; the short-dashed line is the plot of dₚ ≈ (c/H₀)[z - 0.25z²] for the case q₀ = -0.5.
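
For anyone who wants to put numbers on the deviation, here is a minimal sketch in Python; the value of H0 is an illustrative assumption, not taken from either paper:

C = 299792.458   # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s/Mpc (assumed for illustration)

def d_linear(z):
    # linear relationship d_p = cz/H0, in Mpc
    return C * z / H0

def d_second_order(z, q0=-0.5):
    # second-order relationship d_p = (c/H0)[z - 0.5(1 + q0)z^2], in Mpc
    return (C / H0) * (z - 0.5 * (1.0 + q0) * z ** 2)

for z in (0.1, 0.5, 1.0, 2.0):
    print(z, round(d_linear(z)), round(d_second_order(z)))

The two agree at small z and increasingly diverge as z grows, which is the point of the plot above.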
 

sjastro

Newbie
May 14, 2014
5,758
4,682
✟349,680.00
Faith
Christian
Marital Status
Single
So are we saying that Lerner appears to have forgotten the constraints imposed by SR, which become relevant at higher z?
That would appear to be the case unless Lerner is proposing a new form of physics as has been queried by some posters in the PhysicsForum.

In Lerner's model z is simply a Doppler shift defined by the formula z = √((c+v)/(c-v)) - 1.
Even assuming there were no constraints imposed by SR, v = c gives z = ∞, which is nonsensical.
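
To see what that formula demands, here is a minimal sketch in Python (the helper name is mine) inverting it to get the recession velocity a given z would require:

def beta_from_z(z):
    # invert z = sqrt((c+v)/(c-v)) - 1 to get v/c = ((1+z)^2 - 1)/((1+z)^2 + 1)
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

for z in (0.1, 1.0, 5.0):
    print(z, round(beta_from_z(z), 4))

A redshift of z = 5 already requires v ≈ 0.946c, so relativistic effects cannot be ignored at high z.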

I'll need to have a careful read of his paper.
 

SelfSim

After reading some more of his paper, and his counter-arguments in the physicsforums thread, I think he needs the fractal distribution argument to hold in order for the linear relationship to apply at cosmological distances(?)
(Ie: homogeneity is thrown aside for the fractal distribution in order to make his argument 'stick').
 

sjastro

SelfSim said:
After reading some more of his paper, and his counter-arguments in the physicsforums thread, I think he needs the fractal distribution argument to hold in order for the linear relationship to apply at cosmological distances(?)
(Ie: homogeneity is thrown aside for the fractal distribution in order to make his argument 'stick').
Fractals are a red herring and are not even mentioned in either of Lerner’s papers.

I had a look at this paper and there are some serious flaws.
There is the so-called RS (Resolution Size) effect for spiral or disk galaxies.
Measuring the surface brightness of a galaxy requires a telescope large enough to resolve and measure its angular dimensions.
If the galaxy is small enough to be at or near the resolution limit of the telescope, the measured angular dimensions come out larger than they actually are.
This is the RS effect.

To remove this effect Lerner has introduced a cut-off limit: if the angular dimensions of galaxies fall below a certain value, they are discarded as being too close to the resolution limit of the scope.
After removing the RS effects, Lerner has then taken the low-z values from the Galex UV telescope and the high-z values from the HUDF (Hubble Ultra Deep Field).
According to Lerner, since the ratio of the Galex cut-off limit to the HUDF cut-off limit is 38, the HST can resolve objects 1/38 the size of those Galex can resolve.
Since the assumption is made that distance and z are linear, Lerner has taken sample pairs from the Galex and HUDF data subject to the condition that the ratio of the mean z of the HUDF sample to the mean z of the Galex sample is 38.
The difference in the corresponding angular dimensions for each pair is small and found to be constant over z.
This implies there is no need for the mainstream to consider a size-evolution model.

As impressive as it looks, Lerner has overlooked one important item.
The angular resolution θ of a telescope is defined by the equation θ = 1.220λ/D, where λ is the wavelength of light and D the diameter of the telescope.
The smaller the value of θ, the greater the resolution.
Lerner has failed to take into consideration that filters also affect the resolution.
If he had used the near-IR data instead of NUV (near ultraviolet) for the HUDF data, which uses a different filter, the wavelength is nearly doubled; since θ scales with λ, HST could then only resolve objects about 1/20 the size of those Galex can resolve instead of 1/38.
Lerner provides no evidence that the change in resolution would produce the same result.

In the case of the elliptical galaxy data, the angular sizes are well beyond the resolution limit, so the RS effect does not need to be considered.
The data, however, only extends out to z = 2.75, and Lerner would need to demonstrate that his result still applies at much larger scales, where according to the mainstream model the deviation from the linear law increases with increasing z.
 

SelfSim

Fractals are a red herring and are not even mentioned in either of Lerner’s papers.
Yes .. and his invoking this argument in the physicsforums discussion was also shown to have flaws.

sjastro said:
I had a look at this paper and there are some serious flaws. [...] Lerner has failed to take into consideration that filters also affect the resolution. [...]
Hmm .. interesting ... It would seem that Lerner may not be familiar with handling astronomical measurement data .. the filter checking issue is, after all, a pretty fundamental one, eh?

.. Sounds a bit like a different variant of Crawford's data handling issues ..
 

SelfSim

sjastro said:
I had a look at this paper and there are some serious flaws.
sjastro, was your comment based on his "UV surface brightness of galaxies from the local Universe to z {approx} 5" analysis, or the "Observations contradict galaxy size and surface brightness predictions that are based on the expanding universe hypothesis" paper?
 

sjastro

SelfSim said:
Yes .. and his invoking this argument in the physicsforums discussion was also shown to have flaws.

It did come across as an ad hoc comment which has blown up in his face.

SelfSim said:
Hmm .. interesting ... It would seem that Lerner may not be familiar with handling astronomical measurement data .. the filter checking issue is, after all, a pretty fundamental one, eh?

.. Sounds a bit like a different variant of Crawford's data handling issues ..

The Crawford issue involved the use of data filters; in this case we are dealing with optical filters, which allow light of different frequencies to pass through.
The HUDF is a colour image composed by combining monochromatic images taken through various filters.
I assume Lerner has used data from the F435W filter, as this is the closest match to the Galex far and near ultraviolet images.

The other point I should have raised is that Lerner’s method of determining the resolution ratio using the Galex and HUDF data is, to put it mildly, bizarre and wrong.
The correct method is to use the formula mentioned in my previous post:
θ = 1.220λ/D
For Galex: λ = 152 nm (centre value), D = 0.5 m, gives θ = 0.08 seconds of arc.
For Hubble: λ = 435 nm (centre value), D = 2.4 m, gives θ = 0.05 seconds of arc.

So rather than Hubble resolving objects 1/38 the size of those Galex can resolve, the ratio is a much more modest 5/8.
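
For anyone who wants to check the arithmetic, a minimal sketch in Python (the function name is mine):

RAD_TO_ARCSEC = 206265.0

def rayleigh_arcsec(wavelength_nm, aperture_m):
    # Rayleigh criterion theta = 1.220 * lambda / D, converted to arcseconds
    return 1.220 * wavelength_nm * 1e-9 / aperture_m * RAD_TO_ARCSEC

galex = rayleigh_arcsec(152.0, 0.5)    # ~0.08 arcsec (FUV centre value)
hubble = rayleigh_arcsec(435.0, 2.4)   # ~0.05 arcsec (F435W centre value)
print(galex, hubble, hubble / galex)   # the ratio is roughly 5/8, not 1/38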
 

sjastro

sjastro, was your comment based on his "UV surface brightness of galaxies from the local Universe to z {approx} 5" analysis, or the "Observations contradict galaxy size and surface brightness predictions that are based on the expanding universe hypothesis" paper?
Both.
Lerner's latest paper gives scant details on how he arrived at the conclusion that Hubble provides better resolution by a factor of 38 over Galex.
He describes this in detail in his first paper.

Despite the hyperbole going on at Tbolts, Lerner isn't going to win the Nobel Prize.:doh:
 

SelfSim

sjastro said:
Both. Lerner's latest paper gives scant details on how he arrived at the conclusion that Hubble provides better resolution by a factor of 38 over Galex. [...]
I haven't even bothered looking at the TBolts delusions yet ...

There is interest in the contents of your critique here, across the more influential science forums.
 

sjastro

Selfsim,

I've noticed Lerner has responded to your post.

Selfsim said:
Hi Eric; It appears that you may not have considered the impact on angular resolution which results from the dissimilar filters used in the HUDF and Galex datasets(?) Rather than Hubble resolving objects 1/38 smaller than Galex, our estimation is much more modest at about 5/8. If you had used the near IR data, instead of NUV (near ultra violet) for the HUDF dataset, the filter wavelength is nearly doubled and HST can now only resolve objects about 1/20 smaller than Galex (instead of the 1/38 you mention). Did you test the impact this will have and does it alter any of your conclusions? Cheers
Lerner response said:
Hi SelfSim, If you read our 2014 paper, we describe that we used the datasets themselves to determine the actual resolutions of the two scopes. In other words, we used the cutoff radius below which the images could not be distinguished from point images--had high stellarity. There was a sharp cutoff for both scopes.

These are the cutoff radius results from his 2014 paper.

For GALEX this cutoff is at a radius of 2.4 +/- 0.1 arcsec for galaxies observed in the FUV and 2.6 +/- 0.2 arcsec for galaxies observed in the NUV, while for Hubble this cutoff is at a radius of 0.066 +/- 0.002 arcsec, where the errors are the 1σ statistical uncertainty.
While the Hubble cut-off of 0.066 arcsec compares well with the theoretical resolution of 0.05 arcsec using the F435W filter, his Galex result of 2.4 arcsec is 30X higher than the theoretical value of 0.08 arcsec in the FUV.

Unless the Galex optics were of catastrophically low quality, in which case no useful science would be possible, I'd say the error is with Lerner.
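
A quick check in Python of how his empirical cutoffs compare with the theoretical Rayleigh limits worked out earlier in the thread (variable names are mine):

hubble_cutoff, hubble_theory = 0.066, 0.05   # arcsec, F435W
galex_cutoff, galex_theory = 2.4, 0.08       # arcsec, FUV
print(hubble_cutoff / hubble_theory)   # ~1.3, consistent with the optics
print(galex_cutoff / galex_theory)     # ~30, far above the theoretical limit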
 

SelfSim

sjastro said:
... While the Hubble cut-off of 0.066 arcsec compares well with the theoretical resolution of 0.05 arcsec using the F435W filter, his Galex result of 2.4 arcsec is 30X higher than the theoretical value of 0.08 arcsec in the FUV.
Ya I agree .. 30X is just too much to swallow here ..

It'll also be interesting to see if Eric can confirm the F435W filter dataset was used ..
 

sjastro

sjastro said:
Unless the Galex optics were of catastrophically low quality, in which case no useful science would be possible, I'd say the error is with Lerner.
Nothing wrong with the quality of the Galex optics after all.
There is a drop in the off-axis performance, which is a characteristic of the Ritchey-Chrétien optical design at lower f/ratios. Galex uses an f/6 telescope.
Galex said:
To verify the fundamental instrument performance from on orbit data, some bright stars were analyzed individually, outside the pipeline. These results show performance that is consistent with or better than what was measured during ground tests. We have also verified the end-to-end performance including the pipeline by stacking images of stars from the MIS survey that were observed at different locations on the detector. The results of these composites are shown in Figures 9 and 10. Performance is reasonably uniform except at the edge of the field, where it is significantly degraded.
http://iopscience.iop.org/article/10.1086/520512/pdf
 

SelfSim

Lerner has responded to our concerns. If I understand his response correctly, I still don't think he's acknowledging the difference in angular resolution which applies to each filter band(?)

PS: What I mean here is that his method for 'allowing the data to decide on the resolution' sort of throws the baby out with the bathwater and I think, if he followed the standard approach (as per sjastro's posts) then the matched pairs would look different and his conclusions would then also be different(?)

This would mean the ratio of the cutoff limit of '38' would need to change across the various filter bands in order to properly correlate the 'matching' HUDF and Galex pairs, no(?) (BTW: he's also using multiple HST filter bands ie: not only the F435W filter).

I'm not sure I see the relevance of the comment about the Galex pixel size to the issues we've raised either ..(?)
 

sjastro

SelfSim said:
Lerner has responded to our concerns. If I understand his response correctly, I still don't think he's acknowledging the difference in angular resolution which applies to each filter band(?) [...] I'm not sure I see the relevance of the comment about the Galex pixel size to the issues we've raised either ..(?)


Lerner said:
Selfsim, Not only the 435 filter. For each redshift range, we match the HST filter with either FUV or NUV filter to get the same wavelength band in the rest frames. So we actually have eight separate matched sets. All described in the 2014 paper. Also on GALEX I guess you used the Dawes formula but it is way off. Look at the GALEX descriptions on their website--the resolution is arcseconds, not a tiny fraction of an arcsecond. Their pixels are much bigger than your value. Why is this?--you have to ask the designers of GALEX. This is just a guess on my part, but GALEX is a survey instrument. If they made the pixels too small, they would end up with too small a field of view, given limits on how big the detector chip can be.

I’m afraid Lerner is grasping at straws.
The formula used was the Rayleigh criterion for resolution, not Dawes.

The data from the Galex site indicates the pixel size is 1.5 arcseconds, which is the angle “viewed” by each individual pixel; this is a measure of CCD plate scale, not resolution.
The pixel size in arcseconds depends on the focal length of the telescope used.

To find the physical size of the pixels used by the detector, the following formula is used:

Physical size of pixel (microns) = [(pixel size in arcseconds) X (focal length in mm)]/206.3

Galex uses a 500 mm telescope at f/6, giving a 3000 mm focal length.
(1.5 X 3000)/206.3 = 22 microns.

These are not large pixels.
By comparison, the ACS/WFC camera used by Hubble for the HUDF has 15 micron pixels.
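
For completeness, the plate-scale arithmetic as a minimal Python sketch (the function name is mine):

def pixel_size_microns(scale_arcsec, focal_length_mm):
    # physical pixel size = (plate scale in arcsec x focal length in mm) / 206.3
    return scale_arcsec * focal_length_mm / 206.3

# Galex: 500 mm aperture at f/6 gives a 3000 mm focal length
print(pixel_size_microns(1.5, 3000.0))   # ~22 microns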

Since Lerner has obtained the same result for each HST filter, his calculation for the HUDF data is also wrong due to the wavelength dependence of resolution.
 

SelfSim

sjastro said:
I’m afraid Lerner is grasping at straws. The formula used was the Rayleigh criterion for resolution, not Dawes. [...] Since Lerner has obtained the same result for each HST filter, his calculation for the HUDF data is also wrong due to the wavelength dependence of resolution.
Thanks for that .. we'll see what his response is. His last response looked like an attempt to push the responsibility onto the Galex scope design, which just looked dodgy to me (and from just about any other perspective I could think of). I think you were right .. he's overlooked the filter impact on resolution and then glossed over it, assuming his method would sort it all out. But the application of his assumed cutoff limit of 38 in that method changes the outcome.

I think he should be requested to correct the error and then re-run his method(?)
Whaddyareckon?
 

sjastro

SelfSim said:
Thanks for that .. we'll see what his response is. [...] I think you were right .. he's overlooked the filter impact on resolution and then glossed over it, assuming his method would sort it all out. But the application of his assumed cutoff limit of 38 in that method changes the outcome.

To resolve the issue (pardon the pun) I've sent an E-mail to NASA via the Galex HelpDesk (yes, it does exist:oldthumbsup:) asking if they can provide actual performance data on the angular resolution.

SelfSim said:
I think he should be requested to correct the error and then re-run his method(?)
Whaddyareckon?
:scratch:
I've noticed Lerner is becoming increasingly defensive with other posters and may no longer respond.
Let's wait until the info (hopefully) comes from NASA.
If he ultimately accepts that he is wrong, then he should retract the paper from the Monthly Notices of the Royal Astronomical Society ... and the reviewers be reviewed.:amen:
 