One of the great things about using a remote telescope is that you get access to excellent equipment at a dark-sky location, lots of observing nights, and automation so that you don’t have to stay up late at night.

One of the benefits of using this equipment is that you can get nice-looking images in a relatively short period of time. Take M42, the Orion Nebula, as an example. With just one observation using the Slooh.com Canary One or Canary Two telescope, you can get a fantastic picture. Couple that with some post-processing and you can have an excellent image with just 100 or 250 seconds of exposure. Compare that short exposure time to backyard astronomers who capture upwards of an hour of exposure (and often much longer) for similar results, and you get an idea of the quality of the equipment you’re using.

So given that some astronomers collect hours of exposure time on an object, and we’re only collecting a few minutes, how do you know your image is good? In other words, how do you know you have enough exposure time on a target to make a good image? Is a few minutes’ exposure enough?

In this article, I show you a way to evaluate the quality of a given image by interpreting the image’s histogram. I show you an image of M42 that I captured in a single mission and compare it to an image that I captured over many missions and show you the differences in an objective way. I also show you what an under-exposed image looks like. You can use the skills you learn here to evaluate the quality of your images.

But first, some background on understanding image quality and its practical implications for us as amateur astronomers.

Understanding Signal To Noise Ratio

In my book, I discussed the Signal To Noise Ratio (SNR) in the section called “Understanding Stacking and Multi Luminance Processing”. In that section, I said the following:

“When creating astronomical images, the goal is to get the highest Signal to Noise ratio possible. What that means is our images are made up of two fundamental components: signal, which is the subject of the image itself, and noise, which is everything else we don’t want. “

So, our goal is to get as much exposure time on the target as we can. But the question is, given a target, how do you know how much exposure time you need? Is there a way to know in advance how much exposure time you’ll need on a target to get a good quality image?

The answer to this question is, yes, there is a way to know ahead of time; however, it requires a lot of information and a lot of math.

The information you need to calculate the SNR includes details about the telescope you’re using, details about the camera attached to it, seeing conditions, and information about the target like its size and magnitude. 

Here’s a formula for calculating the SNR for a given image:

Here are the values in the formula:

S = total signal 
B = total background signal 
D = dark current 
RN = read noise from bias frame 
n = number of sub-exposures
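If you’d like to experiment with these values, here’s a sketch in Python. I’m assuming one common form of the CCD signal-to-noise equation, SNR = √n × S / √(S + B + D + RN²), with S, B, and D taken per sub-exposure; the exact form in the formula above may differ, and all the numbers below are hypothetical.

```python
import math

def stack_snr(S, B, D, RN, n):
    """Estimate the SNR of a stack of n sub-exposures using one common
    form of the CCD equation. S, B, and D are per-sub-exposure electron
    counts; RN is the camera's read noise in electrons."""
    per_sub = S / math.sqrt(S + B + D + RN ** 2)
    return math.sqrt(n) * per_sub

# Hypothetical values: 10,000 e- of signal, 2,000 e- of sky background,
# 100 e- of dark current, and 10 e- of read noise per sub-exposure.
print(stack_snr(10000, 2000, 100, 10, n=1))  # a single exposure
print(stack_snr(10000, 2000, 100, 10, n=4))  # stacking 4 subs doubles the SNR
```

Notice that the SNR grows with the square root of n, which is why doubling your image quality roughly requires quadrupling the number of sub-exposures.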

Now the question is, where do you get these values from, especially if you want to calculate this ahead of time? How do you get the value of S and B? Where does n come from? 

The calculations are involved, to say the least, and there’s no quick way of plugging some values into a formula to learn in advance how long to expose a target to get a good SNR. There’s software available that can help you figure this out, but it’s pretty specialized and may not be necessary, depending on your imaging goals.

So, that leaves us in a situation where we can’t know in advance how much exposure time we need on a target to get a good image. However, we do have a way of knowing whether the data we have is enough, and that’s what the rest of this article is about.

Measuring SNR In Your Images

There is a way to measure the SNR in your images; however, it is limited to measuring stars and requires that you own MaxIm DL. Although I own a copy of MaxIm DL, it’s pretty expensive and not everyone reading this article owns it, so I wanted something more broadly available.

I looked at AstroImageJ to see if I could use it to measure SNR. The closest I could get was measuring the FWHM of stars. FWHM tends to vary quite a bit on processed images, so you need to measure it on images that have only been stacked, without any additional processing. Moreover, FWHM turns out not to be a good indicator of SNR – it’s a completely different measure. (I discuss FWHM in my book around page 146.)

Fortunately, there is a way to measure image quality using the histogram of your images.

Understanding A Histogram By Analyzing An Image of M42

Let’s take a look at an image of M42 that I captured in a single mission with a total of 80 seconds of exposure time (4 x 20 sec, LRGB on Canary Two):

This image is very good: it shows a lot of detail in the nebula, the bright center of the nebula is not over-exposed, and you can almost make out the stars in the bright region.

Using PhotoPea (https://www.photopea.com/), I opened the image and checked the histogram:

I discussed the histogram in my book, so here is a quick review of what it is and how to read it:

A histogram is a graph of the number of pixels at each brightness value. The horizontal axis ranges from black on the left to white at the far right; how many values it spans depends on the bit-depth of the image, so the upper end can be as low as 255 for an 8-bit image or in the many thousands for higher bit-depths. The vertical axis represents the number of pixels at each value, and there’s no fixed upper limit to it.
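If you’d rather compute a histogram yourself than read it off a screen, NumPy makes it a one-liner. This sketch uses a synthetic 8-bit “image” (a dark background with two bright pixels standing in for stars) rather than a real file:

```python
import numpy as np

# Hypothetical 8-bit grayscale image: mostly dark sky (values 0-29)
# with two bright "stars" at full white (255).
rng = np.random.default_rng(0)
image = rng.integers(0, 30, size=(100, 100), dtype=np.uint8)
image[10, 10] = image[50, 50] = 255

# One bin per possible 8-bit value.
counts, bin_edges = np.histogram(image, bins=256, range=(0, 256))

print(counts[:30].sum())  # -> 9998: almost every pixel sits at the dark end
print(counts[255])        # -> 2: the two "stars" land in the rightmost bin
```

On a real image you’d load the pixel data first (for example with Pillow’s Image.open and numpy.asarray) and pass that array to numpy.histogram the same way.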

Looking at the histogram of M42, you can see that there are lots of pixels near the left, which is the black end of the spectrum. This makes sense because much of the image is made up of the dark background of space. The number of pixels starts to drop off significantly as you move further to the right. Finally, the number of pixels slowly drops to what looks like almost zero as you get over to the extreme right side: these are the stars. The center portion of the graph represents the pixels that make up the nebula itself. 

Now let’s look at another image of M42 that was exposed for a total of 24 minutes:

Comparing the two images, they actually look quite similar. The difference between the two images is in the details of the nebula, and the differences are very subtle, especially in these smaller JPG images, which are compressed to save space.

The key difference is in the image’s histogram, which is here:

This histogram has more pixels in the center and towards the right side, indicating that there’s more data there. Here’s an image of the superimposed histograms to help you compare them:

The red arrow is pointing to the region of the image where there are more pixels, and there continue to be more pixels as you move to the right. The circle on the histogram shows that there are a fair number of extra pixels in the upper range of pixel values too.

This histogram indicates that the image that was exposed for 24 minutes has more detail than the image that was exposed for just 80 seconds. And this should be obvious – the more exposure time, the more data you collect. In this case, we added more pixels to the details of the nebula with the additional exposure time.
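You can make this comparison numerical instead of visual: sum each histogram’s counts in the midtone region and compare. Here’s a sketch using synthetic stand-ins for the two stacks; the threshold values 32 and 224 are arbitrary choices of mine, not anything measured from the images above:

```python
import numpy as np

def midtone_fraction(image, lo=32, hi=224):
    """Fraction of pixels whose values fall in the midtone range [lo, hi)."""
    counts, _ = np.histogram(image, bins=256, range=(0, 256))
    return counts[lo:hi].sum() / counts.sum()

# Synthetic stand-ins: the longer exposure has more midtone (nebula) data.
rng = np.random.default_rng(1)
short_exp = rng.normal(20, 10, 10_000).clip(0, 255)  # like the 80-second stack
long_exp = rng.normal(40, 30, 10_000).clip(0, 255)   # like the 24-minute stack

print(midtone_fraction(short_exp) < midtone_fraction(long_exp))  # True
```

The same function applied to two real stacks of the same target gives you an objective number for “more data in the nebula” instead of eyeballing overlaid histograms.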

Now that you have a better understanding of the histogram and the effect of additional exposure time, how do you know you have enough time on a target?

Understanding A Great Image

Let’s take a look at a practical example of M42, as imaged by the Hubble Space Telescope here: https://en.wikipedia.org/wiki/Orion_Nebula

You can download the image by clicking the image in the article to bring up the image on its own, right-click, and select Save As. Then open the image in PhotoPea and take a look at the histogram. Here it is for your reference:

This histogram is shifted to the right when you compare it to my histograms of M42. This shift to the right makes sense because the nebula makes up most of the image. Also, the number of pixels that make up the mid-range of tones is quite high (the mountain in the middle of the histogram). The trail off to white is very gradual, which indicates a lot of tonal variation in the image. 

And that’s the goal: to get a lot of tonal variation in your images, and with more exposure time you get more of that variation as you saw earlier in my examples. 

Let’s look at another example: Messier 15 which you can get here: https://en.wikipedia.org/wiki/Messier_15

Before you read any further, try to guess what the histogram of this image might look like. 

Based on all of the dark areas, I would expect there to be lots of pixels at the far left. All of the stars in the image suggest to me that the drop-off to white should be very steep and that there will be a fair number of pixels near the right side of the histogram.

Here’s the actual histogram as seen in PhotoPea:

What’s surprising about this histogram is that the top of the mountain on the left side of the histogram is not cut off and it is shifted slightly to the right. You’d think that with all that dark background there would be a lot of black pixels, but their number is relatively small. Also, the drop-off to white is very gradual, and the number of pixels along the way is quite high, indicating lots of tonal variation. This is clearly an excellent image, not only when you look at it, but the histogram indicates that it is quite balanced with lots of tonal variation.

Understanding Under-Exposed Images

Let’s get an understanding of what a really under-exposed image’s histogram looks like, so you have a better understanding of the other extreme of the range of images.

Here’s an image of the Trifid nebula as imaged using the MicroObservatory:

This image is made up of three one-minute observations, and processed using the JS9-4L image processor. I created the image using the directions in this article, so a log stretch has been applied and I colorized the image using RGB color mode.

Looking at the image, it appears reasonable considering that the total exposure is just three minutes and the telescope has a small aperture. Let’s look at the histogram:

There are a few issues with this histogram. First of all, it has a lot of dark pixels as indicated by a large number of pixels on the left side of the graph. Next, the drop-off to white is very steep, and there are few pixels at the center of the histogram, indicating minimal tonal variation.

And here’s the big problem with this histogram: it’s missing lots of data. You can see the histogram is made up of spikes instead of a steady range of pixels as we have seen before. This effect is called combing because the spikes look like the teeth of a comb. Combing is usually an indication of over-processing, specifically overdoing it with Levels and Curves adjustments. However, this image is just plain missing data, since I didn’t adjust curves or levels.
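Combing is easy to detect programmatically: count the empty bins between the darkest and brightest occupied values. Here’s a sketch using synthetic data; the “combed” image takes values only at multiples of 8, which is the kind of gap pattern you get when a stretch spreads too few distinct levels across the full range:

```python
import numpy as np

def combing_score(image):
    """Fraction of empty bins between the darkest and brightest occupied
    bins. A smooth, data-rich histogram scores near 0; a combed one scores
    much higher."""
    counts, _ = np.histogram(image, bins=256, range=(0, 256))
    occupied = np.nonzero(counts)[0]
    span = counts[occupied.min(): occupied.max() + 1]
    return (span == 0).mean()

rng = np.random.default_rng(2)
smooth = rng.integers(0, 256, 50_000)       # every level represented
combed = rng.integers(0, 32, 50_000) * 8    # only multiples of 8 occur

print(combing_score(smooth))  # near 0 – every bin occupied
print(combing_score(combed))  # about 0.87 – most bins between peaks empty
```

Run on real stacks, a score well above zero tells you the histogram has gaps, that is, missing data, even when the image itself looks acceptable.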

So how do we fix this? The ideal solution is to add more data through more exposure time. But how do you know you have enough exposure time on this target using this telescope?

I gathered 15 minutes of exposure time on the nebula and processed the resulting image. This is the image I captured:

Clearly, there’s more detail and tonal variation in this image, and we could stop since the image now looks pretty good. However, the histogram tells a different story.

Here is the histogram of the preceding image:

This histogram still has some issues, but it is better than the three-minute exposure. The arrow is pointing at the region in the histogram where there are fewer black pixels, indicating more tonal variation. In addition, the drop-off to white isn’t as steep as before, and there are a few more pixels at the center of the histogram, which indicates more tones there too.

While there are more pixels in this image, the histogram indicates that there are still missing pixels as shown by the combing effect. 

Based on the histograms, I can see that I need more exposure time and that there’s still data missing. I wouldn’t have known this from the images alone: looking at the images, I could tell the result could be better, but not what was missing. By analyzing the histograms, we know exactly what’s missing and what to do to correct it.

Conclusion

In this article, you gained an understanding of what SNR is and learned how to assess image quality by interpreting the histograms of the images you capture. You saw what makes an image very good by examining the histograms of Hubble images, and you saw what the histograms of under-exposed images look like.

In the next article, I explain how to get even better results by optimizing when you capture your images.