Update: I have published a PixInsight workflow for processing Slooh.com images. The article outlines the workflow and links to articles that cover each step in detail.

I recently took an image of Messier 20 – the Trifid Nebula – and processed it using Jarmo Ruuth’s AutoIntegrate.js script for PixInsight, which produced a good result. Here’s the image:

From it you can see good detail in the nebula, and the image is generally free of artifacts.

I wanted to try my hand at processing the image manually to find out if I could improve on AutoIntegrate.js’s automatic result. Here’s the image I created:

At first glance there are a couple of key differences: the nebula has more definition and the stars are not as bright as in the AutoIntegrate version. On the left is a detailed view of the AutoIntegrate-created version of the image and on the right is the one I created:

The transitions between regions have more definition because the regions are sharper, and the stars are sharper as well. There’s also much more detail in the nebula itself:

The unmodified version of the image has significant vignetting in the corners:

Interestingly, the AutoIntegrate script removed the vignetting in the corners quite effectively. AutoIntegrate uses AutomaticBackgroundExtractor (ABE) at its default settings to even out the background. I thought I could improve on this using DynamicBackgroundExtraction, which allows you to select the background samples yourself; however, I found ABE to be sufficient for removing the vignetting.

The rest of this article walks you through processing this image. Along the way, I describe each process we apply and explain some of its main settings. From this, you’ll gain a better understanding of PixInsight and will be able to make processing decisions on your own when working with your own images.

FITS Files and Download

I have made the FITS files for this exercise available; however, they are not from Slooh.com. Slooh.com does not allow you to post its FITS files to a publicly accessible location, although Slooh.com members can share FITS files with each other directly.

Instead of distributing Slooh.com FITS files, I captured M20 through itelescope.net with a telescope similar to Canary One, using the same number of FITS files and the same exposure times that Slooh.com produces from its Canary One telescope. So, while the image you get won’t be exactly the same as a Slooh.com mission, it is close enough for this purpose.

In terms of binning, the Luminance images are binned at 1×1 and the R, G, and B images are binned at 2×2. Slooh.com uses a slightly different approach: Luminance images are binned at 2×2 and R, G, and B images are binned at 3×3. The results will still be similar enough to Slooh.com’s for our purposes.

The following ZIP file contains three Luminance files and one each of R, G, and B, all exposed for 50 seconds – the same set you would get from Canary One.

Click here to download the ZIP file containing the FITS files (116 MB):

Required PixInsight Knowledge

I assume you have very little knowledge of PixInsight beyond starting it and using the AutoIntegrate script.

Using AutoIntegrate.js

If you wish to try AutoIntegrate yourself before starting, review my article “PixInsight AutoIntegrate.js Processing Script” for details on how to use AutoIntegrate.js.

The settings I used for this script are pretty much the defaults, except that I selected “ABE before ChannelCombination” and deselected “Skip ABE”.

Run AutoIntegrate and save the resulting file somewhere convenient so that you can come back to it. Remember to use the “Close all” button to close all of the windows AutoIntegrate.js creates for you as it processes your image.

Processing Workflow

Processing using PixInsight is generally divided between linear and non-linear processes.

In the linear stage, pixel values still represent what was collected by the camera attached to the telescope, and you use PixInsight’s ScreenTransferFunction (STF) only to make the image visible on the screen.

Non-linear processing occurs after you use a stretch method like HistogramTransformation (HT) on your image. Once you use HT, the pixel values are no longer directly related to the values collected from the camera attached to the telescope. Instead, pixel values are ‘stretched’ based on the HT values and so you no longer need to use the STF to be able to view your image.
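
If you’re curious what a stretch actually does to the pixel values: broadly speaking, the heart of both STF and HT is a midtones transfer function – a curve that leaves the black and white points alone but lifts a chosen midtones value up to 0.5. Here is a minimal Python/NumPy sketch of that idea; it leaves out the per-channel controls and other refinements of the real tools.

    import numpy as np

    def mtf(x, m):
        # Midtones transfer function: fixes 0 and 1, maps the midtones
        # value m to 0.5. Smaller m means a stronger stretch.
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def simple_stretch(img, shadows=0.0, highlights=1.0, midtones=0.25):
        # Clip to the shadows/highlights range, rescale to [0, 1],
        # then apply the midtones curve. After this, pixel values are
        # no longer proportional to the light the camera collected.
        x = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
        return mtf(x, midtones)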

Some PixInsight processes work best on linear data and others on non-linear data, and I point out those details throughout this article. For each process, I try to explain what it does, why we’re applying it, and which settings to use.

Getting Started: StarAlignment and ImageIntegration

The very first thing we’ll do is align our images to a reference image. Aligning puts the stars in the same place in every frame, so that when you stack the images you don’t get double stars or other unwanted artifacts.

Earlier, I said that the Luminance images are binned at 1×1 and the R, G, and B images are binned at 2×2. The difference in binning results in a difference in resolution: the R, G, and B images have half the resolution of the Luminance image in each dimension. StarAlignment up-samples the binned images so that they are all at the same resolution.
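
To make the resolution relationship concrete: a 2×2-binned frame has half the width and half the height of a 1×1-binned frame, so it must be interpolated back up to the reference frame’s dimensions before the channels can be registered and combined pixel-for-pixel. A rough Python sketch of that relationship follows – the image sizes are hypothetical, and StarAlignment uses its own resampling, not this exact call.

    import numpy as np
    from scipy.ndimage import zoom

    lum = np.zeros((2048, 2048))   # hypothetical 1x1-binned luminance frame
    red = np.zeros((1024, 1024))   # hypothetical 2x2-binned colour frame

    # Interpolate the binned frame up to the reference resolution so that
    # every channel covers the same pixel grid.
    red_upsampled = zoom(red, 2, order=3)   # cubic spline interpolation
    print(red_upsampled.shape)              # (2048, 2048)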

The process we’ll use is called StarAlignment; it is found under the Process – Preprocessing menu or under the Process – All Processes menu.

It’s always good practice to reset the tool you’re about to use so that all parameters are at their default values. Click the Reset button at the bottom right of the process window.

Click the arrow next to ‘View’ and select ‘File’. Now click the down arrow to the right and select a Luminance file. The file I selected is IT-T31-erikwest-m 20-20200712-014232-Luminance-BIN1-W-050-001. StarAlignment will use this image as the reference image, so you need to choose it carefully. The AutoIntegrate.js script measures each of your files to determine which to use as the reference image; here, we can simply review the files ourselves to pick the best one. The Luminance images are roughly equivalent to one another – there were no problems with tracking and the telescope did not move due to wind between exposures – so feel free to choose another Luminance image.

From there, click Add Files and select the remaining images from the mission. Do not include the reference Luminance image in this list of files.

Review the Output Images section of the process and note the settings. The Output directory is blank, so PixInsight will write the aligned files to the same folder as the source files. PixInsight appends “_r” to the filenames to indicate the aligned files.

Leave the other settings at their defaults and click the Apply Global button (the closed circle). PixInsight will perform some work, and in a few moments you will have several new files in your folder. The new files have an ‘.xisf’ extension. You can now close the StarAlignment process.

Now we have our aligned files; however, we still have three Luminance files and one each of the R, G, and B files. We need to stack the Luminance files so that we have one primary Luminance file.

We’ll use the ImageIntegration process to integrate all three of the Luminance files into one primary file. We don’t have to do this for the R, G, and B files because we already have just one of each and we’ll consider these our primary R, G and B files.

Open the ImageIntegration process and click the Reset button to reset the process to its default settings.

Now we need to add the Luminance FITS file we used as the reference for the StarAlignment process, and also add the new Luminance .xisf files that are suffixed with “_r”. Leave the other settings at their defaults and click the Apply Global button (the closed circle button at the bottom).

The process creates two new windows: one labeled “rejection” and one labeled “integration”. You can safely close the “rejection” window – it contains the pixels that were rejected during integration, and there are none here. You can also close the ImageIntegration process.

The “integration” window is probably just a black square because a stretch has not been applied to it to make the pixels visible. We can apply a temporary stretch by pressing CTRL+A on your keyboard.

This is our primary Luminance file – you can save it now in case you want to come back to it later.

Evening-out The Background

If you study this image, you’ll notice that there’s vignetting at the corners and the background is not evenly illuminated – the nebula is bright and there are darker regions around it. We can correct this with a process called AutomaticBackgroundExtractor, or ABE, which is best applied while the image is still linear and preferably before combining channels.

The vignetting and the uneven background are unwanted signal added on top of the image data, and the gradient is not predictable or evenly distributed. To remove an additive effect, we apply the opposite operation: subtraction. Open the AutomaticBackgroundExtractor process, reset it, and select Subtraction for the Correction option near the bottom of the process window, under Target Image Correction.
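
Conceptually, ABE builds a smooth model of the background from samples spread across the frame and, with the Subtraction correction, removes that model from the image. The sketch below only illustrates the model-then-subtract idea: ABE actually fits a polynomial surface, whereas this crude stand-in interpolates a coarse grid of block medians, and the cell size is made up for illustration.

    import numpy as np
    from scipy.ndimage import zoom

    def crude_background_model(img, cell=64):
        # Median of each cell x cell block (medians largely ignore stars),
        # then interpolate the coarse grid back up to a smooth, full-size
        # background model. Assumes the frame is much larger than 'cell'.
        h, w = img.shape
        blocks = img[:h - h % cell, :w - w % cell].reshape(
            h // cell, cell, w // cell, cell)
        grid = np.median(blocks, axis=(1, 3))
        return zoom(grid, (h / grid.shape[0], w / grid.shape[1]), order=3)

    def subtract_background(img, model):
        # Subtraction correction: remove the additive gradient, then add the
        # model's median back as a pedestal so pixel values stay positive.
        return img - model + np.median(model)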

To apply the process to the image, drag the New Instance icon (the triangle icon) over the Luminance image.

After a few moments, two new windows appear: one labeled “integration_ABE_background” and the other “integration_ABE”. Do not close the ABE process window – we’ll reuse it shortly.

Press CTRL+A in each of the two new windows and note the background model in the image labeled “integration_ABE_background”. From it, you can see that ABE accurately captured the vignetting in the corners and the uneven illumination throughout.

You can close the window labeled “integration_ABE_background”.

The “integration_ABE” window will now be our Luminance image, so you can close the one labeled “integration”.

Open each of the R, G, and B files that end in .xisf (remember, these are the star-aligned files). Press CTRL+A in each of the files to view them.

Correcting the Background on R, G, and B Files

The background of the R, G, and B files is also uneven. We need to address this before we combine them, because the uneven illumination will become worse once we combine them.

We’ll use the same ABE process on each of the R, G, and B files.

Start with the Red file and drag the ABE process New Instance icon to the Red file. Close the original file and background model.

Next, drag the ABE process New Instance icon to the Green file. Close the original Green file and background model.

Finally, drag the ABE process New Instance icon to the Blue file. Close the original Blue file and background model.

You should have four windows open now: one Luminance and one of each R, G, and B window.

Reducing Noise Using MultiscaleLinearTransform

This process can be used in both the linear and non-linear stages. Its purpose is to reduce noise by breaking an image down into layers of different structure sizes; I explain this in more detail below.

If you zoom into the individual images using your mouse wheel, you’ll notice that the image has a fair bit of noise in it:

The noise shows up as the grainy pixel-to-pixel variation throughout the image. It’s best to address it before we combine everything into an LRGB primary image: tackling noise early in the workflow is effective and produces good results. Here’s the same region of the image with the noise smoothed out:

To smooth out the noise, we use the MultiscaleLinearTransform (MLT) process. It works by dividing your image into layers, each containing structures of a different size. The layers with the smallest structures are usually where the noise resides. These layers are called wavelet layers.
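
If you’d like to see roughly how this decomposition works, below is a sketch of the classic à trous (“with holes”) wavelet decomposition that this family of tools is based on: the image is repeatedly smoothed with an increasingly dilated B3-spline kernel, and each wavelet layer is the difference between successive smoothings, so the first layers hold the finest structures and most of the noise. This is a generic textbook version in Python, not PixInsight’s exact implementation.

    import numpy as np
    from scipy.ndimage import convolve

    B3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # 1-D B3-spline kernel

    def atrous_layers(img, n_layers=4):
        # Each pass smooths with a B3-spline kernel whose taps are spaced
        # 2**j pixels apart; a wavelet layer is the difference between
        # successive smoothed versions, so the first layer holds the finest
        # structures (and most of the noise).
        layers = []
        current = img.astype(float)
        for j in range(n_layers):
            step = 2 ** j
            k1 = np.zeros(4 * step + 1)
            k1[::step] = B3                      # dilate the kernel
            kernel = np.outer(k1, k1)
            smoothed = convolve(current, kernel, mode='nearest')
            layers.append(current - smoothed)    # structures at this scale
            current = smoothed
        layers.append(current)                   # residual (large scales)
        return layers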

You can visualize the Wavelet Layers this image is made up of by selecting Script – Image Analysis – Extract Wavelet Layers.

For the Target Image, select the Luminance image labeled “integration_ABE” and click OK. The script creates several new windows, labeled from Layer00 through “residual”. You can view each by pressing CTRL+A. Note that Layer00 and Layer01 are where most of the noise resides. When you’re done, you can safely close all of these windows.

We’ll use our knowledge of what’s on each Wavelet Layer to smooth out the noise in the image.

Before we do that, we need to create a mask. A mask protects the high-signal parts of the image and focuses the noise reduction on the background. We’ll create a Range Mask for this purpose.

Make the Luminance image active by clicking it.

From the menu, select Process – RangeSelection.

Click the Real-Time Preview button – the open circle button. PixInsight will present you with an all-white window. The RangeSelection process selects pixels between a lower and an upper limit; at its defaults, it selects all pixels, which is why you see a white image.

For now, type the value 0.002 into the Lower limit field. We need to type it because the slider is too coarse to select such a small value. The Real-Time Preview changes to show the nebula and stars – these will be the areas protected from noise smoothing.
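
Under the hood, a range selection is about as simple as masks get: keep the pixels whose values fall between the lower and upper limits. A minimal sketch of the idea – it ignores the fuzziness and smoothness options that RangeSelection also offers:

    import numpy as np

    def range_mask(img, lower=0.002, upper=1.0):
        # 1 where lower <= pixel <= upper, 0 elsewhere. With the lower limit
        # at 0.002, only the nebula and stars make it into the mask.
        return ((img >= lower) & (img <= upper)).astype(float)

    # Applied inverted, the bright regions become 0 (protected) and the
    # background becomes 1 (open to noise reduction):
    # protection = 1.0 - range_mask(luminance)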

Close the Real-Time Preview window and drag the New Instance (triangle) icon to the Luminance image to create the mask.

Make your Luminance window active again by clicking it, then select Mask – Select Mask from the menu. Select “range_mask” and put a checkmark next to Invert Mask.

Your image now shows red for areas that will be protected from noise smoothing.

Next, open the MultiscaleLinearTransform process.

We need to fill in values for each of the wavelet layers as follows:

  1. Click the first layer and put a checkmark next to Noise Reduction
  2. Enter or select 3 for Threshold
  3. Enter or select 0.5 for Amount
  4. Enter or select 3 for Iterations

Select the second layer, put a checkmark next to Noise Reduction, and fill in the three boxes with the following values: 2, 0.5, 2

Select the third layer, put a checkmark next to Noise Reduction, and fill in the three boxes with the following values: 1, 0.5, 2

Select the fourth layer, put a checkmark next to Noise Reduction, and fill in the three boxes with the following values: 0.5, 0.5, 1

Your screen should look like this now:

Drag the New Instance icon (the triangle icon) to the masked Luminance image. PixInsight will work for some time and finally return control to you. From the menu, select Mask – Remove Mask.

Make the Red file active and select Mask – Select Mask. Select the “range_mask” and put a checkmark next to Invert Mask.

Drag the New Instance icon (the triangle icon) to the masked Red image. PixInsight will work for some time and finally return control to you. From the menu, select Mask – Remove Mask.

Repeat the preceding two steps for the Blue and Green images.

Save each image by adding “_MLT” to the filename.

Now that we have smoothed the noise, we can combine the R, G, and B images into a primary RGB image.

Combining the RGB Files into one RGB Image

The next step is to combine the R, G, and B images into a single RGB image.

Open the ChannelCombination process.

For each of the R, G, and B channels, select the window that corresponds to that color. Click the Apply Global button to execute.

You now have an RGB primary file. You can close the other R, G, and B files and save this one with an “_RGB” suffix.
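
ChannelCombination itself is conceptually simple: the three registered, equally sized mono frames become the three channels of one color image, something along these lines (a schematic sketch, not PixInsight’s code):

    import numpy as np

    def combine_rgb(r, g, b):
        # The frames must already be registered and the same size, which is
        # why the binned colour frames were up-sampled during StarAlignment.
        return np.stack([r, g, b], axis=-1)   # height x width x 3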

Background Neutralization

The background color needs to be neutralized, and for this we use the BackgroundNeutralization process. This process makes the global color adjustments required to neutralize the background and is ideally applied while the image is still linear.

We can use the default settings, so drag the New Instance icon over the RGB image to execute it. The image’s color may shift, so press CTRL+A to refresh the screen stretch.

Color Calibration

Next, we need to color calibrate our RGB image. From the menu, select Process – ColorCalibration.

There are two key fields that can optionally be filled in: the White Reference image and the Background Reference image. If you leave these at their defaults, PixInsight uses the entire image for both.

You can press ALT+N on your keyboard and drag to create a preview over an area that contains a suitable white reference. Select that preview for the White Reference image.

For the background reference, press ALT+N again and create a preview over a patch of empty sky that represents your background. Select that preview for the Background Reference image.

Drag the New Instance icon to the RGB image to execute the process.

ATrousWaveletTransform

Next up is more noise smoothing, this time using ATrousWaveletTransform. It works on the wavelet layers you read about earlier, and again we attack the noise at the smallest scales.

If there are fine details in your image and you do not want them smoothed away, you would protect these areas using a mask, as we did earlier. In this case, the structures are large enough so that they don’t get smoothed away by ATrousWaveletTransform.

Fill in the process box as follows:

  • Layer 1: Threshold=3, Amount=0.25, Iterations=10
  • Layer 2: Threshold=2, Amount=0.25, Iterations=10
  • Layer 3: Threshold=0, Amount=0.25, Iterations=5
  • Layer 4: Threshold=0.5, Amount=0.12, Iterations=1

Drag the New Instance icon (the triangle) to the image to execute.

What we’re doing here is attacking the noise at layers one and two with more iterations of noise reduction.

We can now move into the non-linear state: we’ll apply a histogram stretch to both the Luminance and RGB images and then combine them into one LRGB image.

Apply A Histogram Stretch

Now it’s time to go non-linear so that we can do some final processing on our image. We’ll borrow the settings from the ScreenTransferFunction (STF) to get us started.

Open the ScreenTransferFunction process. Click the Track View icon (the checkmark at the bottom right) and make your Luminance image the active image.

Open the HistogramTransformation (HT) process. We’ll borrow the settings from the STF to guide our settings for HT.

  1. Reset the HT process.
  2. Enable the Real-Time Preview by clicking the open circle button at the bottom of the window.
  3. Next, drag the New Instance icon from the STF to the bottom of the HT window as shown:

Your Real-Time Preview will become all white because you have both the STF and HT settings in play. Disable the STF by pressing CTRL+F12.

Close the Real-Time Preview and drag the New Instance icon from the HT process to the Luminance image.

Repeat the same procedure for the RGB image.

Both of your images are now non-linear. Now we can combine the images to create a single LRGB image.

LRGBCombination

The LRGBCombination process is used to combine the Luminance with the RGB image to create a primary LRGB image.

  1. Open the LRGBCombination process
  2. Reset the process
  3. Uncheck the R, G, and B channels
  4. Select the Luminance image for the L channel
  5. Put a checkmark next to Chrominance Noise Reduction
  6. Drag the New Instance icon to the RGB image

You can now close the Luminance image. Save your RGB image with the suffix “_LRGB”.
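
Conceptually, LRGBCombination keeps the color information from the RGB image but replaces its lightness with the cleaner, higher-resolution Luminance. PixInsight does this in a CIE color space; purely to illustrate the idea, here is a crude approximation that rescales the RGB channels so their average matches the new luminance – the function and approach are illustrative only, not PixInsight’s actual method.

    import numpy as np

    def crude_lrgb(rgb, lum, eps=1e-6):
        # Rough stand-in for LRGBCombination: keep each pixel's colour ratios
        # but scale the channels so their mean equals the luminance frame.
        luma = rgb.mean(axis=-1, keepdims=True)
        return np.clip(rgb * (lum[..., None] / (luma + eps)), 0.0, 1.0)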

Remove the Green Color Cast

The image has a slight green color cast that makes it look a little washed out. There’s a process called SCNR that removes a green cast, and it is very easy to use:

  1. Open the SCNR process
  2. Ensure Green is selected in the Color To Remove field
  3. Drag the New Instance icon to the LRGB image
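
For the curious: with the default average-neutral protection, SCNR’s effect on the green channel roughly amounts to clamping green so it never exceeds the average of red and blue, leaving genuinely neutral pixels alone. A sketch of that idea (a simplified reading of the method, not PixInsight’s code):

    import numpy as np

    def scnr_average_neutral(rgb, amount=1.0):
        # Clamp green to the mean of red and blue, blended by 'amount'.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        g_new = np.minimum(g, (r + b) / 2.0)
        out = rgb.copy()
        out[..., 1] = g + amount * (g_new - g)
        return out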

Next, we’ll boost the color saturation and then further reduce noise.

Boost Color Saturation

We need to give the colors in the image a little boost.

  1. Open the ColorSaturation process
  2. Click and drag a point over the green area and pull it down toward the bottom
  3. Click and drag a point over the red area and drag it upwards
  4. Drag the New Instance icon to the LRGB image

Your screen should look something like this:

TGVDenoise

Total Generalized Variation denoising, or TGVDenoise, is a powerful and very effective smoothing algorithm. We’ll use it to further smooth away noise in this image.

  1. Start the TGVDenoise process
  2. Select CIE Lab mode at the top (this is recommended for non-linear images)
  3. Select the Lightness Tab
  4. Enter 0.5 in the Strength field
  5. Enter 250 in the Iterations field
  6. Select Automatic Convergence
  7. Select the Chrominance tab
  8. Enter 3.0 in the strength field
  9. Enter 250 in the Iterations field
  10. Select Automatic Convergence

Now we need a value for the Edge Protection fields on the Lightness and Chrominance tabs. We get it by selecting an area of background sky and measuring its standard deviation. Here’s how:

  1. From the menu, select Process – Statistics
  2. Zoom into your LRGB image and find a region of background that is devoid of stars – it doesn’t have to be a very big region, which is why you zoom in
  3. Press ALT+N on your keyboard
  4. Click and drag your mouse to draw a box around your selected region of background sky
  5. Back in the Statistics process, select the Preview you created in the preceding step
  6. Click the wrench icon on the right side
  7. Select Standard deviation from the Available Statistics
  8. Click Ok
  9. Note the value of stdDev in the R column

    In my case, the value is 2.11569e-02

  10. Enter the decimal value (2.11569 in my case) in the second field for Edge Protection and select -2 over on the right side
  11. Do the same for the Chrominance tab
  12. Drag the New Instance icon to the LRGB image

This process can take a long time to complete, so be patient.
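
For reference, the number gathered in steps 7–9 is just the ordinary standard deviation of the pixels in your background preview. The equivalent calculation outside PixInsight would look like this – the crop coordinates are hypothetical, and the Statistics process reports the same quantity per channel:

    import numpy as np

    def edge_protection_value(image, rows, cols):
        # Standard deviation of a small, star-free background patch; this is
        # the same quantity the Statistics process reports as stdDev.
        patch = image[rows, cols]
        return float(np.std(patch))

    # Hypothetical preview location on the red channel:
    # value = edge_protection_value(lrgb[..., 0], slice(1200, 1260), slice(340, 400))
    # -> roughly 2.1e-02 for this image, i.e. 2.11569 with an exponent of -2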

Sharpening Only The Nebula

The next step is to restore some of the detail in our nebula. We’ll use the MultiscaleMedianTransform (MMT) process to sharpen the image; however, we want to apply the sharpening only to the nebula, not the whole image. MMT can cause ringing around stars, so we want to protect them, along with the background, from sharpening.

To do this we’ll need a special mask that takes into consideration the nebula as well as the stars. We’ll reuse our range_mask that we created earlier in this process.

  1. Select Process – StarMask
  2. Enter 4 in the Large Scale Field
  3. Drag the New Instance icon to your LRGB image
  4. Clone the new star_mask by dragging the image’s tab elsewhere on the screen
  5. Open the PixelMath process
  6. Enter the following in the RGB/K expression:

    range_mask - star_mask

  7. Drag the New Instance icon to the cloned star mask (labeled “star_mask_clone”)

We now have a mask that focuses mostly on the nebula (some stars are along for the ride but we’ll take care of them shortly).

Make your LRGB image active, select Mask – Select Mask, and choose “star_mask_clone”.

Open the MultiscaleMedianTransform process and reset it.

What we’ll do now is use the Bias setting to sharpen the image.

Do the following:

  1. Select Layer 2
  2. Put a checkmark next to Detail Layer 1/4
  3. Enter 0.25 in the Bias field
  4. Select Layer 3
  5. Put a checkmark next to Detail Layer 1/4
  6. Enter 0.25 in the Bias field
  7. Select Layer 4
  8. Put a checkmark next to Detail Layer 1/4
  9. Enter 0.12 in the Bias field
  10. Drag the New Instance icon over to the LRGB image

Once the process completes, select Mask – Remove Mask.

Reducing The Field of Stars

The stars make up a lot of the image, and we want to highlight the nebula. We can reduce their impact by making them less prominent, using the MorphologicalTransformation process.

Do the following:

  1. Open the MorphologicalTransformation process
  2. Click Manage
  3. Select 5 x 5 Circular Structure under the Available Structures option
  4. Click Pick
  5. Enter 0.5 in the Amount field
  6. Drag the New Instance icon over to the LRGB image
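
Under the hood, the operator at its default settings is an erosion: each pixel is replaced by the minimum over the structuring element, which shrinks small bright features such as stars, and the Amount setting effectively blends that result back with the original so the stars are dimmed rather than erased. A single-channel sketch – using a square element rather than the circular one selected above, and scipy rather than PixInsight:

    import numpy as np
    from scipy.ndimage import grey_erosion

    def reduce_stars(channel, amount=0.5, size=5):
        # Erode (local minimum over a size x size neighbourhood), then blend
        # with the original: amount=0.5 means a 50/50 mix. Apply per channel
        # for a colour image.
        eroded = grey_erosion(channel, size=(size, size))
        return (1.0 - amount) * channel + amount * eroded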

Rotate Your Image

The image has north to the left, so we’ll rotate it 90 degrees clockwise to put north at the top:

  1. Open the Rotation process
  2. Enter 90 in the angle field
  3. Select Clockwise
  4. Drag the New Instance icon to the LRGB image

Final Save and Export

Save your file by selecting File – Save As and give it an appropriate name.

Export your image to TIFF format so that you can then use something like Photoshop or Photopea to add the Slooh.com logo, or do whatever processing you wish.

  1. Select File – Save As
  2. Select TIFF in the Save As Type field
  3. Click Save
  4. Click Ok
  5. Select 16-bit and click Ok to save

Conclusion

That was a long process, but we ended up with a nice image. You learned how to use PixInsight to process an image of Messier 20, and along the way you learned about processes – how to invoke them, how to set their parameters – and a range of other skills. I hope this boosts your confidence in using PixInsight to process your own images. While AutoIntegrate.js certainly does a good job, you can get better results in some cases by processing images yourself.

Update

I have published a series of articles that describe a modern, complete PixInsight workflow for processing Slooh.com images. The main article outlines the steps, and linked articles cover each step in detail.