Yeah ... something's gone on in the background on that one, methinks. He didn't deserve to be suspended. Interesting that they didn't lock it?
I think the moderators at the very least should explain their motives.
SelfSim said:
I'm a bit lost on this.
How does the sampling become 'critical'?
Isn't the sampling rate a parameter you control in the AIP4WIN software?
Sampling relates the radius of a star's PSF to the size of the pixels.
If the radius is too small relative to the pixel size, the image is said to be undersampled, which shows up as blocky or square-looking star shapes.
If the radius is too large relative to the pixel size, the image is said to be oversampled. That is not a bad thing, since oversampled images have greater latitude for post-processing without introducing artefacts such as noise.
Sampling is critical when the PSF is neither oversampled nor undersampled, i.e. when the pixel scale meets the Nyquist requirement of roughly two pixels across the FWHM.
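To put a number on it, here's a minimal sketch in Python. The thresholds and the HST-like figures in the example are illustrative assumptions on my part, not exact instrument values:

def classify_sampling(fwhm_arcsec, pixel_scale_arcsec):
    # Rule of thumb (Nyquist): you want about 2 pixels across the FWHM.
    pixels_per_fwhm = fwhm_arcsec / pixel_scale_arcsec
    if pixels_per_fwhm < 2.0:
        return "undersampled"        # blocky, square-looking stars
    if pixels_per_fwhm > 3.0:        # upper threshold is a judgment call
        return "oversampled"         # fine for processing, costs field of view
    return "critically sampled"

# Rough HST-like numbers at 814 nm: FWHM ~0.08", pixel scale ~0.05"/px
print(classify_sampling(0.08, 0.05))   # -> undersampled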
A picture is worth a thousand words.
The left-hand side of the enlarged image is a single exposure of M30 taken with the F814W filter.
It is clearly undersampled: the stars are blocky and the background is noisy.
The right-hand side is a five-image stack of M30 taken by Hubble with the same filter and camera.
In this case the individual images were slightly offset (dithered) and then combined so that the PSF covers more pixels; the PSF was then reconstructed to produce a critically sampled image.
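For what it's worth, the offset-and-combine idea can be sketched in a few lines of Python with numpy. This is only a crude shift-and-add onto a finer grid, not the Drizzle algorithm the actual HST pipeline uses (which handles pixel footprints far more carefully), and the names and offsets are made up for illustration:

import numpy as np

def shift_and_add(frames, offsets_px, upsample=2):
    # Place each frame onto a grid 'upsample' times finer, shifted by its
    # known sub-pixel dither offset, then average the results.
    ny, nx = frames[0].shape
    out = np.zeros((ny * upsample, nx * upsample))
    for frame, (dy, dx) in zip(frames, offsets_px):
        # nearest-neighbour placement on the fine grid
        sy = int(round(dy * upsample))
        sx = int(round(dx * upsample))
        fine = np.repeat(np.repeat(frame, upsample, axis=0), upsample, axis=1)
        out += np.roll(np.roll(fine, sy, axis=0), sx, axis=1)
    return out / len(frames)

# e.g. five frames dithered by fractions of a pixel:
# combined = shift_and_add([f0, f1, f2, f3, f4],
#                          [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5), (0.25, 0.25)])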
The FWHM measurements I performed on the single image are clearly suspect, as the PSF should be round, not square, and the S/N ratio should be reasonably high.
SelfSim said:
How does the FWHM figure become negative? (Or is that the point you're making about 'the dangers'?)
Cheers
The FWHM can never be negative. The dash belongs to the label, not the number:
it reads "FWHM:-" "0.110 +/- 0.005 arcsec",
not "FWHM:" "-0.110 +/- 0.005 arcsec".
Sorry for any confusion.
The danger of using undersampled, noisy images, particularly the GALEX data, is that they can produce erroneous radius measurements.
I suspect this is what Lerner has done.
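As a quick illustration of that danger, here's a small Python simulation sketch using numpy and scipy (every parameter is invented for the demo): a star of known FWHM is rendered onto coarse, noisy pixels and then fitted with a Gaussian. With only about a pixel across the FWHM, the fitted width can land well away from the truth:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

TRUE_FWHM = 1.2                 # true FWHM in pixels -> badly undersampled
SIGMA = TRUE_FWHM / 2.3548      # FWHM = 2*sqrt(2*ln 2) * sigma

# Render a star at a random sub-pixel position on a 9x9 grid, add noise
y, x = np.mgrid[0:9, 0:9]
cy, cx = 4 + rng.uniform(-0.5, 0.5), 4 + rng.uniform(-0.5, 0.5)
star = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * SIGMA**2))
star += rng.normal(0, 0.05, star.shape)

def gauss2d(xy, amp, cx, cy, sigma):
    x, y = xy
    return amp * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))

popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), star.ravel(),
                    p0=(1.0, 4.0, 4.0, 1.0))
print("true FWHM  :", TRUE_FWHM)
print("fitted FWHM:", 2.3548 * abs(popt[3]))

Run it a few times with different seeds and the fitted FWHM jumps around the true value, which is exactly why sub-pixel radii quoted from such data deserve suspicion.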