The Steps

Before We Begin


This page is designed to walk you through reducing the data provided by the SALT pipeline (the "product" data) into wavelength-calibrated spectra. Due to the nature of the SALT telescope, absolute flux calibration is not possible. This guide assumes you have science images, flat fields, and an arc lamp image.

 

We're going to run the reduction sequence on some sample data from November 22nd, 2011, which I have in a folder called "2011-3-DC-005.20111122". Your data may be found in a similarly named folder. The object we'll be looking at is the obscured quasar SDSS J024940.2-082804.6, which is at a redshift of z = 0.27. Since we'll be reducing an optical spectrum, at this redshift we'll be focusing on the spectral range that includes the redshifted [OIII]5007 emission line.

 

First, it's always important to look at the Astronomer's Log (found in the /doc folder). Here are the lines relevant to the data we will be reducing:

00:45 Pointing to SDSS J024940 from 2011-3-DC-005

S433 - field view

S435 - slit view

S436-676 - 3s, for Petri and Alexei to check stability of guidance

P28-30 - 988s, GR1300, 1.25"

P31 - 30s, Ne

P32-36 - 3s, flat, QTH2.

 

You can see that they took an image of the field, then an image of the slit, then some guiding stability images (these will be prefixed with "S" if they are provided as raw data), and then took three spectra with the GR1300 grating and a 1.25" slit. Finally, they took a 30-second neon arc lamp exposure, and then five flat fields.

Now, let's check whether this matches the Observation Sequence:

The science exposures were:

file object ra2000 dec2000 ins md filter bin gn sp grat gr-ang ar-ang exp

P201111220028 SDSS J024940 0 -08:28:04 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 988.2

P201111220029 SDSS J024940 0 -08:28:04 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 988.2

P201111220030 SDSS J024940 0 -08:28:04 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 988.2

And the arcs/flats are:

file object ra2000 dec2000 ins md filter bin gn sp grat gr-ang ar-ang exp

P201111220031 ARC 0 +00:00:00 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 30.2

 

P201111220032 FLAT 0 +00:00:00 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 3.2

P201111220033 FLAT 0 +00:00:00 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 3.2

P201111220034 FLAT 0 +00:00:00 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 3.2

P201111220035 FLAT 0 +00:00:00 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 3.2

P201111220036 FLAT 0 +00:00:00 RSS IM PC04600 2x2 FA SL PG1300 26.38 52.76 3.2

 

Correct! If we want to look at these images, we could do this...

> displ mbxgpP201111220028.fits[sci,1] 1 fill+

...because these images keep their data in a FITS extension ([sci,1]). There are no other extensions of interest in these files, so having to specify the extension every time is just annoying. Step one will be to put things into a format that's easier to view and analyze. Also, take a look at all of the images in the set, and you'll notice that flat 0034 wasn't mosaicked correctly; since we have four other perfectly good flats, we can simply ignore it.

 

I've copied my raw data over to a folder that I tend to call "raw_data", and I generally run the reduction in a folder just above raw_data.

 

Step 1: Getting the Data Into a Readable Format


Since the images are in an extension format, our first step is to use imcopy to put the images into a format that makes the reduction easier.

 

So, let's first create a file that I'll call "imcopy.cl":

imcopy raw_data/mbxgpP201111220028.fits[sci,1] sdss.sci.28.fits

imcopy raw_data/mbxgpP201111220029.fits[sci,1] sdss.sci.29.fits

imcopy raw_data/mbxgpP201111220030.fits[sci,1] sdss.sci.30.fits

imcopy raw_data/mbxgpP201111220031.fits[sci,1] sdss.arc.31.fits

imcopy raw_data/mbxgpP201111220032.fits[sci,1] sdss.flat.32.fits

imcopy raw_data/mbxgpP201111220033.fits[sci,1] sdss.flat.33.fits

imcopy raw_data/mbxgpP201111220035.fits[sci,1] sdss.flat.35.fits

imcopy raw_data/mbxgpP201111220036.fits[sci,1] sdss.flat.36.fits

Again, notice that I've skipped 34 because of the improper mosaicing. So, now, we can run imcopy:

> cl < imcopy.cl

Now, we've got all of the data we care about in one place, in a format where we don't have to specify the fits extension.
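As a quick check (this is just the display call from before, minus the extension), you can now view one of the new files directly:

> displ sdss.sci.28.fits 1 fill+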

Step 2: Imcombine the Flats


We are going to want to combine our flat images. Let's start by making lists:

> ls *.flat.*.fits > flat.list

And now, let's run imcombine with these parameters:

> imcombine @flat.list flat.med.fits combine=median reject=none scale=mean

Notice that we're using a median combine in order to remove the cosmic rays, and we're scaling by the mean so that the median combination makes sense even if the individual flats have slightly different count levels. Take a look at the resulting image. It should look pretty good, and should have no cosmic rays.
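If you also want a quick numerical check on the combined flat, the standard IRAF imstatistics task will print its mean and spread (this is optional):

> imstat flat.med.fits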

Step 3: Make an Illumination Flat


We are going to take our newly created median flat and make an illumination flat, which is what we'll actually divide the science frames by. We'll need the IRAF task mkillumflat, which lives in the imred and then ccdred packages (load imred, then ccdred).

> mkillumflat input=flat.med.fits output=flat.med.i.fits ccdtype="" xboxmin=3 yboxmin=3 xboxmax=5 yboxmax=5

Take a look at the result. If it looks all right, we'll move on. You can see in this example that there's a feature near the bottom right, but as it's nowhere near the central trace that we care about, it's not a huge deal.

Step 4: Flat Reduce Each Science Image


So, we'll want to first normalize our flat, so that the counts for the flat-reduced science image aren't crazy. This step is not super critical, but I like to do it nonetheless. First, let's find the average count level for the newly created illumination flat:

> iterstat flat.med.i.fits

flat.med.i.fits: mean=12898.45 rms=3270.105 npix=6485262 median=13218. mode=13232.82

flat.med.i.fits: mean=12788.9 rms=2442.818 npix=6464897 median=13082.32 mode=13078.77

flat.med.i.fits: mean=13185.01 rms=753.947 npix=6257798 median=13155.63 mode=13151.74

flat.med.i.fits: mean=13196.89 rms=223.2817 npix=6213399 median=13186.12 mode=13185.72

flat.med.i.fits: mean=13193.95 rms=110.1243 npix=6168310 median=13189.3 mode=13189.1

flat.med.i.fits: mean=13193.4 rms=95.42803 npix=6137521 median=13191.44 mode=13191.47

flat.med.i.fits: mean=13193.39 rms=92.49007 npix=6124122 median=13192.52 mode=13192.82

flat.med.i.fits: mean=13193.39 rms=91.84988 npix=6120715 median=13190.12 mode=13189.36

flat.med.i.fits: mean=13193.39 rms=91.69975 npix=6119888 median=13192.58 mode=13191.85

flat.med.i.fits: mean=13193.39 rms=91.66516 npix=6119696 median=13193.15 mode=13192.35

This is just running an iterative statistics program on the image; the most important number is the converged mean on the last line (13193.39). We can use imarith to divide by this value:

> imarith flat.med.i.fits / 13193.39 flat.med.i.norm.fits

And now, we can use imarith to divide our science target by the flat:

> imarith science/sdss.sci.28.fits / flats/flat.med.i.norm.fits science/sdss.sci.28.f.fits

> imarith science/sdss.sci.29.fits / flats/flat.med.i.norm.fits science/sdss.sci.29.f.fits

> imarith science/sdss.sci.30.fits / flats/flat.med.i.norm.fits science/sdss.sci.30.f.fits

As always, take a look at the resulting flat reduced science image. It should look pretty good.
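If you want to convince yourself that the normalization worked, rerun iterstat on the normalized flat; the converged mean should come out very close to 1:

> iterstat flat.med.i.norm.fits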

Step 5: Trimming


For our purposes here, we don't really care about the spectrum outside of a certain wavelength range, and we don't care about the parts of the image well above or below the spectrum. So, we'll use imcopy to trim out only the portion we care about. We have to do this to both the science images and the arc lamp, in exactly the same way, or else things won't line up correctly later.

> imcopy sdss.arc.31.fits[50:1300,500:1500] sdss.arc.31.trim.fits

> imcopy science/sdss.sci.28.f.fits[50:1300,500:1500] science/sdss.sci.28.f.trim.fits

> imcopy science/sdss.sci.29.f.fits[50:1300,500:1500] science/sdss.sci.29.f.trim.fits

> imcopy science/sdss.sci.30.f.fits[50:1300,500:1500] science/sdss.sci.30.f.trim.fits

Take a look at the resulting images.

Step 6: Identify


So, now we're going to do the only difficult portion of this reduction procedure: identifying the emission lines in the arc lamp spectrum for the purposes of wavelength calibration. This works via the IRAF task identify, which plots a cut through the center of the arc image, showing the line peaks, and then lets you supply the wavelengths of those peaks. From there, a fit is performed between pixel position and wavelength for the peaks near the center of the image, and you then move on to the task reidentify.

For this redshift and grating configuration, the SALT astronomers have used the Neon arc lamp, so we'll need to download the file Ne.salt to the location where we're reducing this data. A whole bunch of line lists are found here. With this file downloaded, we'll also probably want some sort of image of the optical Neon spectrum to compare to. You can find a bunch in this pdf. I happen to think that these spectra are really horrible, so for Neon, I have been using this gif here.

Anyway, let's run identify on the arc image:

> identify images=sdss.arc.31.trim.fits coordlist=Ne.salt function=chebyshev order=5 fwidth=6. cradius=6

There are a few important things to notice:

First, we need to make sure that coordlist is set to the file you downloaded for this particular arc lamp. It's a pain to input the exact-to-the-decimal wavelength value for each and every line you want to fit, so a coordinate list is used instead: when you input an approximate wavelength, identify will go through the list and grab the nearest wavelength in it, which makes everyone's lives easier. Also, once you've input a few correct wavelength values and created a rough fit, identify can be told to go through the coordlist and automatically mark the peaks closest in wavelength to the lines in the list, which speeds things up considerably.

Secondly, the fit order specified here is 5, but you want to go with the lowest order that produces results you think look good. For some of the ThAr arc spectra, I have to use order=3 or else reidentify breaks, since such a high-order fit ends up not being well constrained. It's not a huge deal in the grand scheme of things, but do what you want.

Now, when you run that identify line, you'll be presented with the spectrum of the emission features from the center of the image.

Your job is to match the lines in the reference arc spectrum to the ones you've measured here, and mark them: line the cursor up on a line whose wavelength you think you know, press "m", and then type in the wavelength of the feature as well as you know it.

If the coordlist exists in the folder of interest, identify will go through the list and find which line you were referring to. Do this for a few lines across the image. Now, press "f" to fit your polynomial to these individual points.

Ignore how this fit looks, because it's trying to fit an order N polynomial to just those few points. Press "q" to go back to the spectrum, and press "l" to have the program automatically find lines from the coordlist, using the first pass fitting you just ran.

Now, press "f" to fit all of these new lines. The fit should look ok, but you might have to go through and trim out poorly fitted points. Line the cursor up over these points and press "d" to delete these points. Then press "f" to re-fit.

Watch that rms drop!

Eventually, you'll be happy with the fit, and you can press "q" from the spectrum, and the IRAF window will ask you if you want to write the features to the database. Choose "yes."

Step 7: Reidentify


This step is fairly easy. The program reidentify steps through the image in the spatial direction and re-runs identify using the initial fit you just created, effectively tracing the curvature of the arc lines along the slit. It's run in this way:

> reidentify reference=sdss.arc.31.trim.fits images=sdss.arc.31.trim.fits interactive=no newaps=yes override=no refit=yes nlost=20 coordlist=Ne.salt verbose=yes

Note that this is run non-interactively, which is fine, and that the reference image is the same image we are working on. This will spit out the results of the individual identify runs:

Image Data              Found   Fit   Pix Shift User Shift Z Shift RMS

sdss.arc.31.trim[*,491] 15/15 15/15    0.0992    0.0653    1.06E-5 0.0278

sdss.arc.31.trim[*,481] 15/15 15/15     0.17     0.112     1.81E-5 0.0219

sdss.arc.31.trim[*,471] 15/15 15/15    0.245     0.161     2.61E-5 0.0188

sdss.arc.31.trim[*,461] 15/15 15/15    0.331     0.218     3.52E-5 0.0118

...

...and you can see the rms as you move down the image. In some cases, some of the lines will not return a good fit, which is an indication that your run of identify was not great. Also, you may not recover every line that you selected with identify, which you can see in the "Found" column.

Step 8: Fitcoords


This program creates the transformation function that will be applied to the image to establish the wavelength solution and rectify the frame; the actual transformation is applied with transform in the next step. It's run using this line:

> fitcoords images=sdss.arc.31.trim interactive=yes combine=no function=legendre xorder=5 yorder=3

This step is run interactively, and you'll be presented with a window showing the fit.

Each one of the bundles of points that you see corresponds to the fit of a single arc emission line. The fit will try to go through all of the points, and you can go through and remove outliers in much the same way as in identify. However, instead of just deleting individual points, you are also given the option of deleting all of the points with the same x, y, or z value. I mostly try to lower the rms by changing the xorder or yorder (type :xor X, where X is the new order; the yorder can be changed in a similar way) and by deleting points or whole features.

Remember, after each point deletion, press "f" to re-fit the function to the remaining points. When you're happy, press "q", and then say yes when the program asks whether to write the coordinate map to the database.

Step 9: Cosmic Ray Zapping and Bad Pixel Mask Creation


So, as you have noticed, the science data have cosmic rays. There are a few schools of thought on this. If your data has enough individual images, such as in the example here, you can use median combining to get rid of the cosmic rays. In this way, you will be using the actual science data to create your combined image.

However, you can instead run a cosmic ray zapping algorithm, which will create both cosmic-ray-cleaned images and cosmic ray masks that can be used when combining the data. Ideally, you should run the cosmic ray zapping before you transform the data, but then you will have to transform your masks as well; the IRAF task transform can do this (see Step 10).

There are many ways that you can create bad pixel masks for an image. I am using an IRAF program called xzap for this optical data. For data with a lot of sky counts, you may want to run background first, and then do the cosmic ray zapping to make the mask; for the SALT data, I get pretty good results by just running xzap directly. The xzap parameters that have worked best for me are listed below (an example call follows the parameter list):

PACKAGE = user

TASK = xzap

inlist =          Image(s) for cosmic ray cleaning

outlist =         Output image(s)

(zboxsz =         8) Box size for zapping

(nsigma =         9.) Number of sky sigma for zapping threshold

(nnegsig=         0.) Number of sky sigma for negative zapping

(nrings =         2) Number of pixels to flag as buffer around CRs

(nobjsig=         2.) Number of sky sigma for object identification

(skyfilt=         15) Median filter size for local sky evaluation

(skysubs=         1) Block averaging factor before median filtering

(ngrowob=         0) Number of pixels to flag as buffer around objects

(statsec= [850:950,400:500]) Image section to use for computing sky sigma

(deletem=         no) Delete CR mask after execution?

(cleanpl=         yes) Delete other working .pl masks after execution?

(cleanim=         yes) Delete working .imh images after execution?

(verbose=         yes) Verbose output?

(checkli=         yes) Check min and max pix values before filtering?

(zmin =         -32768.) Minimum data value for fmedian

(zmax =          32767.) Maximum data value for fmedian

(inimgli=         )

(outimgl=         )

(statlis=         )

(mode =         ql)
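With the parameters set, the call itself is short. As a rough sketch (the output name is just my choice here, and I'm assuming the usual positional inlist/outlist usage), you would run it on each flat-reduced, trimmed science image, something like:

> xzap science/sdss.sci.28.f.trim.fits science/sdss.sci.28.f.trim.z.fits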

The cosmic ray zapping will take a little while, but when it's done, take a look at the resulting files. First, look at the cosmic ray mask, which will be named something like: "crmask_*.pl". Compare the cosmic ray mask to the actual science image:

Notice that the cosmic ray mask should get the cosmic rays, but not the actual spectrum in the middle of the image. This is important! If you're not happy with the way the mask looks, you can go back and re-run the program with different xzap parameters.

Step 10: Transform


Ok! Now, let's transform our science image using what we just made:

> transform input=science/sdss.sci.28.f.trim.fits output=science/sdss.sci.28.ft.trim.fits fitnames=sdss.arc.31.trim interptype=linear flux=yes blank=INDEF x1=INDEF x2=INDEF dx=INDEF y1=INDEF y2=INDEF dy=INDEF

> transform input=science/sdss.sci.29.f.trim.fits output=science/sdss.sci.29.ft.trim.fits fitnames=sdss.arc.31.trim interptype=linear flux=yes blank=INDEF x1=INDEF x2=INDEF dx=INDEF y1=INDEF y2=INDEF dy=INDEF

> transform input=science/sdss.sci.30.f.trim.fits output=science/sdss.sci.30.ft.trim.fits fitnames=sdss.arc.31.trim interptype=linear flux=yes blank=INDEF x1=INDEF x2=INDEF dx=INDEF y1=INDEF y2=INDEF dy=INDEF

Notice that we're specifying our arc lamp solution in the "fitnames" parameter. This program takes a bit of time depending on the size of the image, and near the end it will ask for the dispersion axis, which in this case is 1 (along the lines). When it's done, take a look at the result. The sky lines should be straight up and down, or else something has gone wrong.

If you run cosmic ray zapping, and have masks, you can transform the masks along with the data by using the transform "minput" and "moutput" keywords:

> transform input=science/sdss.sci.28.f.trim.fits output=science/sdss.sci.28.ft.trim.fits minput=crmask_sdss_sci_28_f_trim.pl moutput=crmask_sdss_sci_28_ft_trim.pl fitnames=sdss.arc.31.trim interptype=linear flux=yes blank=INDEF x1=INDEF x2=INDEF dx=INDEF y1=INDEF y2=INDEF dy=INDEF

The name of this transformed mask can then be placed in the image header of your final reduced images (the BPM keyword, which we set with hedit in Step 11), so that imcombine can use it.

Step 11: Background Subtraction


Now, we'll run the actual background subtraction on the images. We're going to use the IRAF task background to do this.

> background input=sdss.sci.28.ft.trim output=sdss.sci.28.ftb.trim axis=2 interactive=yes naverage=1 function=chebyshev order=2 low_rej=2 high_rej=1.5 niterate=5 grow=0

> background input=sdss.sci.29.ft.trim output=sdss.sci.29.ftb.trim axis=2 interactive=yes naverage=1 function=chebyshev order=2 low_rej=2 high_rej=1.5 niterate=5 grow=0

> background input=sdss.sci.30.ft.trim output=sdss.sci.30.ftb.trim axis=2 interactive=yes naverage=1 function=chebyshev order=2 low_rej=2 high_rej=1.5 niterate=5 grow=0

We want to do this interactively. You'll start by typing in the range of columns over which to collapse the 2D image; I tend to use a range that has a strong science signal, to make it easy to differentiate the science trace from the background:

600 950

Now, you'll see the spectrum as the peak in the very center, and the background to the left and right. You'll also see a variety of spikes that are the cosmic rays. The fit is set at order=2, which means it tries to fit a straight line to what is likely a non-linear background. You can change the order by typing :or 3 (or 4, or 5, or whatever you think fits best). After typing that, re-fit by pressing "f", and see if the dashed line goes through the background points. As with most of these plots, you can zoom in on a region of interest by pressing "w" and then "e" at the bottom left and top right corners of the region you want.

If the fit is to your liking, press "q" and then press the enter key.

The background subtraction will take a little while, but when it's done, take a look at the result. The sky lines should be gone, and the background level should be noisy, but zero-ish. You'll still see the cosmic rays, but you have a transformed mask which covers them:

You can use implot to look and see that the background is actually zero, by typing:

> implot sci.med.ftb.trim.fits

You can plot just the columns of interest by typing something like :c 780 960, which will plot columns 780 to 960 collapsed; it should look very similar to what you saw while background subtracting, except that the background level has now been subtracted away.

You can check the image header (with the IRAF imheader task) to see whether the "BPM" keyword points to the correct mask file. If you want to update the image headers to reflect the correct path:

hedit sdss.sci.28.ftb.trim.fits BPM "crmask_sdss.sci.28.ft.trim.pl"

hedit sdss.sci.29.ftb.trim.fits BPM "crmask_sdss.sci.29.ft.trim.pl"

hedit sdss.sci.30.ftb.trim.fits BPM "crmask_sdss.sci.30.ft.trim.pl"

You have to make sure that the hedit parameters hedit.addonly and hedit.update are set to "yes", or else this process won't work.
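If you need to set those two parameters, you can do it right in the CL before running the hedit lines above (or set them with epar hedit):

> hedit.addonly = yes

> hedit.update = yes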

Now, let's imcombine the data.

Step 12: Imcombine Science Images


Here, we are going to use imcombine to create a combined science image, as well as an error image. Before we combine the images, you should decide whether or not you want to use the bad pixel masks. I'll assume you do, but I'll also indicate what you should do if you didn't do any cosmic ray zapping.

This may be overkill, but I like to make sure that if there are any slight up/down offsets in the data (maybe the object moved along the slit slightly), we can take care of that. I load each *.ftb* image into ds9, and then use the IRAF task imexamine to find the centroid for some bright source, by mousing over the source and typing "a":

> imexamine science/sdss.sci.28.ftb.trim.fits

> imexamine science/sdss.sci.29.ftb.trim.fits

> imexamine science/sdss.sci.30.ftb.trim.fits

The most important numbers are the first two on each output line, which are the X and Y positions of the centroid for, in this case, the [OIII]5007 feature:

# COL LINE COORDINATES R MAG FLUX SKY PEAK E PA BETA ENCLOSED MOFFAT DIRECT

818.44 496.00 818.44 496.00 30.63 12.07 149154. 4.099 511.2 0.93 0 3.00 15.23 10.29 10.29

818.51 496.93 818.51 496.93 32.88 11.90 173516. 3.944 525.4 0.92 0 1.89 15.82 11.48 11.09

818.95 495.14 818.95 495.14 36.99 12.19 133031. 3.169 329.1 0.53 5 2.63 18.15 13.27 13.40

Let's just shift everything to match the first image, so we'll compute the offsets in X and Y with respect to 818.44 and 496.00, rounded to the nearest integer (the arithmetic is spelled out just after the list):

0 0

0 -1

0 1
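To be explicit about where those numbers come from, each y offset is just the first image's y centroid minus that image's y centroid, rounded to the nearest integer (the x offsets are simply left at 0):

round(496.00 - 496.00) = 0

round(496.00 - 496.93) = -1

round(496.00 - 495.14) = 1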

Save those offsets to a file (I call it "offsets.list"). Now, let's put the names of the science images into a list file:

> ls science/*.ftb*fits > sci.ftb.list

And finally, let's run imcombine:

> imcombine @sci.ftb.list sci.ftb.med.fits sigma=sci.ftb.med.sig.fits offsets=science/offsets.list combine=median scale=mean masktype=goodvalue

The imcombine.masktype parameter should be set to "goodvalue", which tells imcombine to use the bad pixel mask listed under the header keyword BPM. If you didn't do any cosmic ray zapping, just leave the masktype entry out of the command (see the example below). We also want to create a sigma image, which will be the basis for our error estimation.
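For reference, if you skipped the cosmic ray zapping and have no masks, the call is identical except that masktype is dropped:

> imcombine @sci.ftb.list sci.ftb.med.fits sigma=sci.ftb.med.sig.fits offsets=science/offsets.list combine=median scale=mean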

Take a look at the resulting images!

(The ringing comes from both the transformation and the fact that the sigma image was estimated using a standard deviation of only a few input values per pixel.)

Step 13: Make The Variance Image


This is a straightforward step. We're going to take the recently created sigma image, and turn it into a variance image. We'll use this to extract error variance spectra, which we can turn into sigma spectra:

> imcalc sci.ftb.med.sig.fits sci.ftb.med.sig.sqr.fits "im1**2"

> imcalc sci.ftb.med.sig.sqr.fits sci.ftb.med.var.fits "im1/3"

> imdel sci.ftb.med.sig.sqr.fits

So, here, we are taking the square of the sigma image, and then dividing by the number of images that went into making it, and finally deleting the squared image. Simple enough! When we extract a spectrum from the variance image, we're going to want to take the square root of it to get the sigma spectrum.
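If you prefer, the squaring and the division by the number of images can be rolled into a single call; assuming imcalc accepts the combined expression (it uses the same operators as above), this one-liner is equivalent:

> imcalc sci.ftb.med.sig.fits sci.ftb.med.var.fits "im1**2/3"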

Step 14: Primary Aperture Extraction


For aperture extraction, we'll be using the iraf program apall, which is part of the apextract package. I have put a .cl script here, which you should download for extracting our first spectrum.

What we're going to do first is extract the central aperture by modifying this script, making sure to trace the aperture along the wavelength direction. This will be important if we later decide to extract apertures at other positions along the spatial direction, since we can reuse the trace we define in this step. So, open up apall.sci.cl in your favorite editor. First, change apall.input to the name of the reduced fits file we just created, and change the output to whatever you want.

apall.input="sci.ftb.med.fits"

apall.nfind=1

apall.output="sci.ftb.med.ms"

apall.apertures=""

apall.format="multispec"

apall.references=""

apall.profiles=""

Next we'll want to make sure that a series of flags are set correctly for the first pass. We want to do this interactively, and we want the program to find the central aperture. We don't need to recenter or resize, but we would like to edit the apertures (so we can look at them). We want to trace the aperture along the wavelength direction and fit that trace, and finally, we want to extract the aperture.

apall.interactive=yes

apall.find=yes

apall.recenter=no

apall.resize=no

apall.edit=yes

apall.trace=yes

apall.fittrace=yes

apall.extract=yes

apall.extras=no

apall.review=no

Now, these next lines are important. apall.line should be set to the pixel value, along the dispersion direction, of the feature you care about, since you want to define your trace using your strongest feature (although if you have a strong continuum, it's not as important). apall.nsum is how many lines/columns to sum around that central pixel value. apall.lower and apall.upper set the size of the aperture on either side of the center; generally they're set to the same size.

So, if you want a roughly ten-pixel-wide aperture, and you want to look for the signal between pixels 650 and 900 along the dispersion direction, you'd set things this way:

apall.line=775

apall.nsum=250

apall.lower=-5

apall.upper=5

Leave the next bit set as it is (it relates to the trace and the extraction weights), but change the final line:

apall sci.ftb.med.fits

(You're going to have to change this file again for each aperture in the spatial direction that you extract, but only the output file name needs to change.)

So, now, we can run the script:

> cl < apall.sci.cl

The program is going to ask you whether you want to find apertures, how many, and whether you want to edit them; since you've already set everything in the script, you can probably just press "enter" through all of this. You'll eventually get to a screen showing the aperture.

Now we should zoom in, so press "w" (for window), move the cursor to the bottom left corner of the zoom box you want, and press "e", and then move to the top right corner of the box you want and press "e".

Take note of the aperture center here (496.51), as this will be important if we decide to move off of line center and extract in different regions along the spatial direction.

This aperture looks pretty good, and it's lined up, and it's wide enough. If you want to widen it at all, you can move the cursor to the position along the curve that represents the width you want and press "y", but this shouldn't really be that important. If you want to delete the aperture, you can press "d" while over the aperture, and then press "n" to select a new aperture at the cursor position, but for now, unless you've trimmed poorly, the automatic aperture should be just fine. Press "q".

The program will ask if you want to trace the apertures, so type "yes" or press "enter" if yes is already selected. You will be asked if you want to fit it interactively, and we want to, so type "yes". You'll be asked if you want to fit a curve to aperture 1, and again, type "yes".

This fit is pretty bad, but that's only because we're using too low an order to fit it. You know the drill: type :or 2, then "f" to fit a second-order function to the aperture trace.

This is a little better, but you may want to go through and use "d" to delete outlier points, or even use a higher-order fit, depending on how you want things to look. When you are happy, type "q"; when asked if you want to write the apertures to the database, type "yes"; and when asked if you want to extract the aperture, type "yes".

Now, take a look at the result. You can use splot to look at it.
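For example, using the output name set in the script (IRAF appends the .fits extension):

> splot sci.ftb.med.ms.fits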

This is a pretty handsome spectrum. Again, you can go back and redo this to your heart's content if you're not happy with how it worked. The program has created a file in the folder "database", which in this example is called "apsci.ftb.med". We're going to be using this file a lot, so if you do want to redo the full extraction and tracing, delete this file first, so that the version we edit later only has a record of a single extraction, the last one you did.

Now, we're going to want to go and extract an error spectrum. It's pretty straightforward. What you want to do first is copy over your apall.sci.cl script to another version that we'll call apall.sci.err.cl, for lack of a better name. We'll need to change a few things here:

apall.input="sci.ftb.med.var"

apall.nfind=1

apall.output="sci.ftb.var.ms"

apall.apertures=""

apall.format="multispec"

apall.references="sci.ftb.med"

apall.profiles=""

 

apall.interactive=no

apall.find=no

apall.recenter=no

apall.resize=no

apall.edit=no

apall.trace=no

apall.fittrace=no

apall.extract=yes

apall.extras=no

apall.review=no

...

apall sci.ftb.med.var.fits

 

Notice a few things. First, you are using a reference trace, which in this case is the trace you just defined on the actual data. Also, you want to run this in noninteractive mode; you don't want to find, recenter, resize, edit, or trace/fit apertures. You do want to extract the spectrum, though! This will run the exact same aperture extraction on the variance image.

If you do extract a variance spectrum, and you want to compare it to the science spectrum, you can run:

> sarith sci.ftb.med.var.ms.fits sqrt "" sci.ftb.med.sig.ms.fits

This takes the square root of the variance spectrum and produces the sigma spectrum, which is the error spectrum.

Step 15: Extracting in the Spatial Direction


As a bonus, for long-slit spectra where we see evidence for extended emission away from line center, I will demonstrate how I extract off-center spectra using the same trace that we just defined. You'll first want to decide how to split up the spectrum into spatial bins for extraction: how wide, in pixels, do you want your bins, and how far outward from line center do you want to go?

What you want to do is copy over your apall.sci.cl script to another version that we'll call apall.sci.spatial.cl, for lack of a better name. We'll need to change a few things here:

apall.interactive=no

apall.find=no

apall.recenter=no

apall.resize=no

apall.edit=no

apall.trace=no

apall.fittrace=no

apall.extract=yes

apall.extras=no

apall.review=no

Now that we've found an aperture and traced it, we don't need to do that again; with these flags set this way, the script will just extract the aperture based on the database file we're going to edit. You'll also want to change the name of the output file so we don't overwrite the handsome spectrum you just made. I tend to name the output files after the central spatial pixel of the extraction.

We also want to change these lines:

apall.lower=-2

apall.upper=2

Change them depending on how big you want the aperture to be. Here, it's four pixels wide. Ok, now, let's open up the database file I discussed at the end of the last section. Here's the beginning of it:

# Sun 21:56:41 19-Aug-2012

begin aperture sci.med.ftb.trim 1 775. 496.51

image sci.med.ftb.trim

aperture 1

beam 1

center 775. 496.51

low -774. -5.0

high 476. 5.0

background

...

...

The most important thing here is to change the aperture center and limits (the numbers at the end of the "center", "low", and "high" lines above). "496.51" should be changed to the central spatial pixel value that you want to extract along, and "-5.0" and "5.0" are the half-widths of the aperture that was used, so if you want your new aperture to be four pixels wide, change them to -2 and 2. Don't change the rest of the file; it contains the results of tracing the aperture, plus some information about a "background" that wasn't used at all, so don't worry about it.
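For instance, to pull out a four-pixel-wide aperture centered on spatial pixel 488.51 (the example used below), those lines would become:

center 775. 488.51

low -774. -2.0

high 476. 2.0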

Now that the database file has been changed, use your new apall script:

> cl < apall.sci.spatial.cl

And it should run really quickly and create your new spectrum at whichever pixel value you gave it. So, if I wanted to extract along 488.51, with aperture limits of -2 and 2, the spectrum (488.51.ms.fits) would look like:

It's noisy, but that's because we're extracting off line center.