Note: This article discusses manually processing a Slooh.com image using PixInsight. Refer to this article for a detailed workflow that uses scripts to simplify processing while still taking advantage of PixInsight’s processing capabilities.

I recently posted an article about processing the Trifid Nebula, or Messier 20, using PixInsight. That workflow was fairly involved and used many different processes. I wanted to revisit this image to find out whether I could improve on the result while possibly reducing the number of steps involved.

This article goes through one possible way of processing M-20. If you want to follow along, download the FITS files of my observation of M-20. The FITS files are similar to what you would get from the Slooh.com Canary 1 telescope (I took the images using iTelescope.net because Slooh.com does not allow members to post their FITS files).

Result

Here is the final image I created:

The image has an even background and it has good color and detail.

Overview of Processing Steps

Here are the names of the processes I used to create the image:

  • StarAlignment
  • ImageIntegration
  • DynamicBackgroundExtraction
  • StarMask
  • RangeSelection
  • DynamicPSF
  • Deconvolution
  • MultiscaleLinearTransform
  • PhotometricColorCalibration
  • ChannelCombination
  • LRGBCombination
  • TGVDenoise
  • ColorSaturation
  • ArcsinhStretch
  • SCNR
  • FastRotation

The rest of this article discusses the processes at a high level.

Initial Processing

The images are binned at 1×1 for luminance and 2×2 for the R, G, and B images. This means that the resolution of the R, G, and B files will be lower than that of the luminance image. You can make the resolutions the same, while also aligning the images’ stars, by using StarAlignment.

Choose a reference image (I recommend one of the luminance images) and use the StarAlignment process to create the aligned and up-sampled images.
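StarAlignment handles both the registration and the up-sampling for you, but it helps to see why the 2×2-binned channels need a 2× scale-up to match the luminance grid. Here’s a minimal numpy sketch of that resampling idea, using pixel replication only; StarAlignment itself uses proper sub-pixel interpolation:

```python
import numpy as np

def upsample_binned(channel: np.ndarray, bin_factor: int = 2) -> np.ndarray:
    """Replicate pixels so a binned frame covers the unbinned 1x1 grid."""
    return np.repeat(np.repeat(channel, bin_factor, axis=0), bin_factor, axis=1)

binned = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
full = upsample_binned(binned)  # a 2x2 frame becomes 4x4
```

Each 2×2-binned pixel simply becomes a 2×2 block at the luminance scale, which is why the aligned channels all end up with the same dimensions.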

Next, since there are three luminance images, you need to stack them to produce one primary (master) luminance image. Use the ImageIntegration process on the three luminance files with the default settings.
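If you’re curious what ImageIntegration is doing at its core, a plain average combine already captures the main benefit: random noise drops roughly as the square root of the number of frames. A minimal numpy sketch (the real process adds normalization, weighting, and pixel rejection):

```python
import numpy as np

def integrate_average(frames):
    """Average-combine registered frames; random noise drops roughly
    as the square root of the number of frames."""
    return np.mean(np.stack(frames), axis=0)

# three simulated 'luminance' frames: same signal, different noise
rng = np.random.default_rng(0)
signal = np.ones((16, 16))
frames = [signal + 0.1 * rng.standard_normal(signal.shape) for _ in range(3)]
master = integrate_average(frames)
```

With three frames, the master’s noise is roughly 1/√3 of a single frame’s, which is why even a small stack noticeably cleans up the luminance.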

Background Modelization

The linear images have significant background issues: there’s heavy vignetting in the corners and the background sky is uneven. In addition, the background varies quite a bit when you compare the luminance image to the R, G, and B images.

I used the DynamicBackgroundExtraction process, or DBE, to model the background of the luminance image, then the red and blue images, and finally the green image. Each group of images had its own DBE settings. I used a tolerance value of five and a sample size between 20 and 30 pixels.

RGB Combination

With the backgrounds of the R, G, and B images corrected, I combined them using ChannelCombination to create an RGB image.

Deconvolution

Deconvolution attempts to restore an image that’s been blurred by atmospheric disturbances during the observation. The results of the process are very subtle.

Create a star mask that selects only the very brightest stars – this mask will be used as the local support image in the deringing section of the Deconvolution process.

Next, create a Range Mask to mask the rest of your image. The idea with this mask is to protect the background sky from the sharpening effects of Deconvolution.
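Conceptually, a range mask is just a smooth-edged threshold on the luminance: pixels between the lower and upper limits are selected, with a fuzzy transition at the edges. A rough numpy stand-in for RangeSelection (the parameter names and defaults here are illustrative, not PixInsight’s exact controls):

```python
import numpy as np

def range_mask(lum, lower=0.05, upper=1.0, fuzziness=0.02):
    """Smooth-edged selection of pixels whose luminance lies between
    lower and upper; a rough stand-in for RangeSelection."""
    mask = np.clip((lum - lower) / max(fuzziness, 1e-6), 0.0, 1.0)
    mask[lum > upper] = 0.0
    return mask
```

Applied inverted, a mask like this protects the dark background sky while letting Deconvolution sharpen the nebula and stars.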

Create a PSF using the DynamicPSF process to select Moffat stars having low MAD values.
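For reference, the Moffat model that DynamicPSF fits is simple to write down: I(r) = (1 + (r/α)²)^(−β), where α is set by the star’s FWHM. A small numpy sketch that samples a circular Moffat profile (the β default below is just an assumption for illustration):

```python
import numpy as np

def moffat_psf(size, fwhm, beta=4.0):
    """Sample a circular Moffat profile I(r) = (1 + (r/alpha)^2)^(-beta),
    normalized to unit sum, using FWHM = 2*alpha*sqrt(2^(1/beta) - 1)."""
    alpha = fwhm / (2.0 * np.sqrt(2.0 ** (1.0 / beta) - 1.0))
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    psf = (1.0 + (x**2 + y**2) / alpha**2) ** (-beta)
    return psf / psf.sum()
```

Picking stars with low MAD values means the fitted model deviates little from the real star profiles, which gives Deconvolution a trustworthy PSF to work with.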

With the three key components in place, use the Deconvolution process to find settings that work for the image. In this case, I used 10 iterations with a Global Dark setting of 0.0210 and three wavelet layers.
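PixInsight’s Deconvolution process offers several algorithms; the classic Richardson-Lucy iteration conveys the core idea. A self-contained numpy sketch (no noise regularization, wavelet support, or deringing, so treat it as illustration only, not a substitute for the real process):

```python
import numpy as np

def fft_convolve(img, kernel):
    """Circular convolution via FFT with the kernel centred (sketch-quality)."""
    pad = np.zeros_like(img)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(observed, psf, iterations=10):
    """Richardson-Lucy: repeatedly scale the estimate by the ratio of the
    observed image to the estimate re-blurred with the PSF."""
    estimate = np.full(observed.shape, observed.mean())
    flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fft_convolve(estimate, psf)
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * fft_convolve(ratio, flipped)
    return estimate
```

Without regularization, each iteration sharpens detail but also amplifies noise, which is why the masks and deringing support matter so much in the real process.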

Background Smoothing

The background needs to be smoothed, and we do this on the luminance image. Using MultiscaleLinearTransform, experiment and find values that work well for the image. Your goal is to smooth out the background without over-smoothing.

I ended up using very conservative settings: just one iteration for each layer, with amounts ranging from 0.12 for layer 1 to 0.15 for layer 4 (I used five layers for this image).

Color Calibration

While the ColorCalibration process does a great job of balancing the colors in an image, you can get more accurate results using the PhotometricColorCalibration process (PCC).

PCC works by comparing your image to images taken by sky surveys – you get more accurate star colors and a better overall result. To use the process, click the Search Coordinates button, enter M 20 in the search field, and click Get to populate the PCC fields.

Next populate the Focal Length and Pixel size boxes by clicking Acquire From Image if the FITS header contains that information. You can easily look up the details if the information is missing.

Dragging the new instance to your RGB image performs a plate solve and adjusts the color of your image. You may have to reapply the STF after you run PCC.
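Under the hood, the photometric part of PCC amounts to finding per-channel scale factors that map your measured star fluxes onto catalog values. A hedged sketch of that core least-squares idea (the flux numbers below are hypothetical, and the real PCC also plate-solves and works with catalog magnitudes rather than raw fluxes):

```python
import numpy as np

def channel_scale(measured_flux, catalog_flux):
    """Least-squares slope through the origin mapping measured star
    fluxes in one channel onto catalog fluxes."""
    f = np.asarray(measured_flux, dtype=float)
    c = np.asarray(catalog_flux, dtype=float)
    return float(np.dot(f, c) / np.dot(f, f))

# hypothetical fluxes for a handful of matched stars in the red channel
red_scale = channel_scale([120.0, 300.0, 55.0], [150.0, 372.0, 70.0])
```

Scaling each channel this way, relative to a chosen white reference, is what makes the calibrated star colors match the survey photometry.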

More Background Modelization

I found the background of the RGB image to have a few remaining artifacts like minor vignetting and some uneven sky. Use DBE to smooth out the background of the RGB image.

LRGB Combination

The LRGBCombination process at its default settings results in a washed-out image. I used a Lightness value of 0.65 and left the Saturation at the default of 0.5.
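The essence of LRGBCombination is replacing the color image’s lightness with your processed luminance. This numpy sketch blends the two using a plain channel mean as the lightness proxy; PixInsight works in CIE L*, and the Lightness parameter mapping below is only a rough approximation of the real transfer function:

```python
import numpy as np

def lrgb_combine(rgb, lum, lightness=0.65):
    """Blend a processed luminance into an RGB image by rescaling each
    pixel's channels toward the blended luminance (channel-mean proxy
    for lightness; PixInsight's actual math uses CIE L*)."""
    old_l = rgb.mean(axis=2)
    new_l = (1.0 - lightness) * old_l + lightness * lum
    ratio = new_l / np.maximum(old_l, 1e-12)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```

Because each pixel’s channels are scaled by the same ratio, the hue is preserved while the detail comes from the deconvolved, smoothed luminance.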

TGVDenoise

The background could be smoothed some more. I used the TGVDenoise process using only the Lightness tab along with CIE Lab mode. This mode targets only the lightness component of your image for smoothing and does a very good job at it.

Final Adjustments

I used a very slight ColorSaturation process to bring out the reds and blues some more.

I then used ArcsinhStretch to highlight the colors in the image while also darkening the background sky.
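The appeal of the arcsinh stretch is that it lifts faint signal strongly while compressing highlights, and it preserves each pixel’s R:G:B ratios so star colors survive the stretch. A minimal color-preserving numpy sketch (the stretch factor is illustrative, and the channel-mean luminance is a simplifying assumption):

```python
import numpy as np

def arcsinh_stretch(rgb, stretch=50.0):
    """Color-preserving arcsinh stretch: one scale factor per pixel,
    computed from luminance and applied to all three channels."""
    lum = rgb.mean(axis=2, keepdims=True)
    scale = np.arcsinh(stretch * lum) / (np.arcsinh(stretch)
                                         * np.maximum(lum, 1e-12))
    return np.clip(rgb * scale, 0.0, 1.0)
```

A pixel at full brightness maps to itself, while faint nebulosity is boosted many times over, which is exactly the “highlight the colors, darken the sky” behavior described above.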

The SCNR process removes any residual green from the image, and finally I used the FastRotation process to rotate the image.
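SCNR’s average-neutral option has a particularly simple core: cap the green channel at the mean of red and blue. A numpy sketch of that variant as commonly described (the amount blending is illustrative; PixInsight’s exact internals may differ):

```python
import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    """Average-neutral SCNR: cap green at the mean of red and blue,
    blended with the original green by 'amount'."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_capped = np.minimum(g, 0.5 * (r + b))
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_capped
    return out
```

Pixels that aren’t green-dominant are untouched, so the nebula’s reds and blues pass through while the residual green cast is removed.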

Conclusion

This article discussed one possible means of processing an observation of Messier 20 using PixInsight.