External SPECROAD: A User's Guide to Reducing Hectospec Data using E-SPECROAD


Available Downloads:                 Size       Last Updated
hectoshell_MacOSX_20130415.tgz       253 K      Thu., Nov. 19, 2015
hectoshell_Linux_20130415.tgz        252 K      Thu., Nov. 19, 2015
idhenear.600gpm_4800.dat             523 Bytes  Thu., Nov. 19, 2015
idhenear.600gpm_4800_best.dat        532 Bytes  Thu., Nov. 19, 2015
Hectospec_HeNeAr_600gpm_4800.pdf     499 K      Thu., Nov. 19, 2015
Hectospec_HeNeAr_Spectrum.pdf        2430 K     Thu., Nov. 19, 2015

This is a quickly composed user's guide for using External SPECROAD (E-SPECROAD) to reduce Hectospec data. A guide to actually installing all the packages necessary to run E-SPECROAD is available here.

While E-SPECROAD is a derivative of the SPECROAD package used at SAO for reducing Hectospec data, there have been enough modifications that I felt it appropriate to document them. Despite these differences, E-SPECROAD aims to provide the same data pipeline as SPECROAD and goes through most of the same steps. As such, you may want to review the documentation for the original SPECROAD, available at

http://tdc-www.harvard.edu/instruments/hectospec/specroad.html

TECHNICAL SUPPORT NOTE:

While E-SPECROAD is designed to behave similarly to SPECROAD, the Smithsonian Astrophysical Observatory is not responsible for E-SPECROAD and has no obligation to support it. If you have any questions, please address your complaints/suggestions to me at . I will attempt to provide support for E-SPECROAD as time allows.

Preparing your Data Reduction

  1. Backup your Original Data: E-SPECROAD is reasonably safe, but it does change the FITS files it is working on. These changes are not necessarily reversible. Always make a backup of your original data directories before processing them with E-SPECROAD.

  2. Data Organization: If you have downloaded the Hectospec data the way Bill Wyatt suggests, you should have each night's data in a separate directory.
    1. If the night's run included changes to the grating, you should review the observing logs and make sure to separate all the different grating configurations into different directories (see the sketch after this list). e.g. – If the night's data includes data shot with the 270gpm grating and data shot with the 600gpm grating at two different central wavelengths (say 4800Å and 5400Å), you must create three directories: one with data from the 270gpm grating, one with data from the 600gpm grating centered at 4800Å, and one with data from the 600gpm grating centered at 5400Å. When you run the pipeline, the HPRESCREEN script will check for accidental inclusion of data from multiple grating setups, but I would not rely on HPRESCREEN to be smarter than you are.
    2. Remove any *skycam*.fits files you may have; they are images from the skycam, not spectra to reduce. You can be lazy and let HPRESCREEN tackle this if you prefer... but I prefer to be proactive.
    3. Remove any quick focus (qfocus.fits or focus.fits) files you may have. Again, you can let HPRESCREEN tackle this if you prefer.
    4. Remove any multispec spectrum files (*.ms.fits) that you may have created while at the mountain when doing quick looks at your spectrum. Yet again, you can let HPRESCREEN tackle this if you prefer.

  3. Go to the directory you will process on the command line: E-SPECROAD is designed to process an entire directory's data in one shot. As such, you will need to choose the directory you are going to work with and change to that directory on the command line.

  4. If you want to use a custom comparison spectrum line list, copy it into the data directory now! When working with the high dispersion spectra, I found many of the lines in the default idhenear.dat file were not suitable for fitting (especially with the 600gpm grating). As such, I created my own line list; that custom idhenear.dat file must be copied into the data directory before processing (technically, before running specroadcal).
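To make these preparation steps concrete, here is a minimal shell sketch. The directory names are only illustrative, the DISPERSE header check assumes you have the WCSTools gethead command installed, and the line list shown is the 600gpm list from the downloads above; adapt all of it to your own data and setup.

      # Hypothetical night directory; adjust the paths to your own data.
      cd ~/hectodata/2008.0603
      # 1. Back up the original data before E-SPECROAD modifies anything.
      cp -rp ../2008.0603 ../2008.0603_ORIGINAL
      # 2.1 Check for mixed grating setups; every non-bias, non-dark image
      #     should report the same DISPERSE value (the keyword HSCREEN lists).
      gethead *.fits DISPERSE
      # 2.2-2.4 Remove skycam and focus images, then review any leftover
      #         quick-look multispec files before deleting them by hand.
      rm -f *skycam*.fits qfocus.fits focus.fits
      ls *.ms.fits
      # 4. Copy in a custom comparison line list, renamed to idhenear.dat.
      cp ~/Downloads/idhenear.600gpm_4800_best.dat ./idhenear.dat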

Running the E-SPECROAD pipeline

Running the E-SPECROAD data pipeline just involves running three scripts which automate most of the processing. These scripts are called specroad, specroadcal, and specroadobj. In fact, if things go well, the only command you will have to run is specroad, since as each script finishes, the next one begins unless you intervene.

  1. Running specroad: In the directory containing the data you want to process, type specroad on the command line to launch the E-SPECROAD data reduction script. The specroad script is tasked with performing the processing on the dome flats, sky flats, and comparison spectra images before the calibration is to be performed.

    POTENTIAL EXECUTABLE FILENAME CONFLICT WARNING: The specroad script will run into trouble almost immediately if the X-window column command is called instead of the column command from starbase. You can recognize this if hscreen immediately fails with some error about the column command being called incorrectly. You can confirm it by checking which version of column is being picked up, using the command line
          which column
    
    If it is not the column in the starbase bin/ directory, you know you have a problem. I would suggest modifying your PATH so the starbase binaries come first. See the installation notes section on how to "Tweak your command line environmental settings" for notes on modifying your ~/.tcshrc to have the proper PATH environment variable setting.
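    For example, adding a line like the following to your ~/.tcshrc should do it (the /usr/local/starbase/bin path below is only a guess at where your starbase binaries live; substitute the real location on your system):

          setenv PATH /usr/local/starbase/bin:${PATH}
          rehash
          which column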

    1. Initially specroad attempts to determine where you store your IRAF parameters (this is why you should declare an environment variable called UPARM that is the full path of your IRAF "uparm" directory). It then tries to determine if you are running MacOS X, Linux, or SunOS/Solaris and tries to determine the number of CPUs you have installed. This process is non-interactive.
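      For example, a line like this in your ~/.tcshrc declares UPARM (the path is only an illustration; point it at your actual IRAF uparm directory):

          setenv UPARM /Users/juan/iraf/uparm/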

    2. HPRESCREEN runs. This is a routine unique to E-SPECROAD (and was not present in SPECROAD). It attempts to check for a couple of common problems in running the specroad scripts before proceeding much further. Specifically it looks for
      • any *skycam*.fits files you may have forgotten to delete.
      • any quick focus (qfocus.fits or focus.fits) files you may have forgotten to delete.
      • any multispec spectrum files (*.ms.fits) that you may have forgotten to delete (except the comp.ms.fits file).
      If any of these files are found, you are offered a chance to delete them. So if you see a

      Delete all these files? (yes or no, or ^C to break):

      prompt during HPRESCREEN, I would review the list and, assuming it consists only of files that should NOT be processed, answer 'yes'. If only some of the files need to be removed, answer 'no' and you will be prompted to

      Iterate on the list? (yes or no, or ^C to break):

      You can answer 'yes' at this stage if you want to remove only a few of the files and you will go through each file on the list one by one.

      Once this check for troublesome image files is done, HPRESCREEN will look to see if more than one dispersion grating setup exists for the non-bias, non-dark FITS images. If it detects more than one grating setup, HPRESCREEN will pause to display the warning that

      WARNING: More than one grating setup detected in non-bias, non-dark images!!!
      (Hit <enter> to continue)

      This is a strong hint that you should pay special attention during the HSCREEN stage for possible trouble files.

    3. HSCREEN runs. It attempts to screen FITS files to find the "target" images (by which I mean FITS files containing the targets you want spectra for) for further analysis. It will basically find all the files that are going to be processed and will then present the list for you to prune (if you desire).
      1. It will first present a list of all the "target" images remaining in the directory. If you did not receive a warning from HPRESCREEN that more than one grating setup was used, then you can very quickly review the list just to confirm that all your "target" images appear and that no extra junk images are listed. If that is true, I generally answer "no" to

        Delete all these files? (yes or no, or ^C to break):

        because typically what it proposes to delete are files I actually want to process. In the worst case, I don't want to remove all the "target" images it identified, only a few, so I still answer 'no'.

      2. Assuming no issues with the list and that I want to keep all the files, I then answer "no" to

        Iterate on the list? (yes or no, or ^C to break):

        However, if HPRESCREEN did give you a warning about more than one grating setup being detected, review the 'DISPERSE' column listed for these files; they should all be the same. If not, you can iterate through the list and prune those target images that were not shot with the grating you are processing at this time.

      3. I generally answer "yes" to

        Delete all *.cfg and *.cat files? (yes or no, or ^C to break):

      4. I then answer "yes" to

        OK to proceed with these files? (yes or no, or ^C to break):

        However, if HPRESCREEN did give you a warning about more than one grating setup being detected, review the 'DISPERSE' column listed for these files; they should all be the same. If not, say 'no' and manually delete the offending files before proceeding.

    4. MAPCHECK is run to check for the presence and integrity of the *_map files. This portion is non-interactive.

    5. HCALIBPROC is called to combine all the biases into a single Zero.fits file and all the darks into a single Dark.fits file using the hproc command in IRAF. hproc runs preampfix to clean up the overscan region of the image, then calls ccdproc to bias-subtract, trim, and remove bad pixels from all amplifiers, and then calls gaincorr to correct for gain variations between the amplifiers. The original images are saved in the Raw/ subdirectory, and images which have only had preampfix run on them are saved in the Unproc/ subdirectory.

      Initially HCALIBPROC checks the counts on your bias files to see if they are ok. If not, it will inform you some of the bias files should be rejected and it will list the suspect bias files and then ask you:

      Remove these files (yes or no):

      I typically answered "yes" to removing the suspect biases unless I had a reason not to. At this stage, you should see it combining all the bias.*.fits files into one Zero.fits file.

      However, if it turns out hcalibproc rejected all the biases (I have seen this happen on rare occasions, when it seems the biases for a particular night are behaving very strangely), then you will see a prompt that says:

      File Zero.fits was not created (most likely all the biases were rejected)!
      Continue without zero corrections?
      (yes or no, or ^C to break):

      If you want to stop the script here, enter 'no'. If you do so, the script will exit and you can choose to move all the rejected biases from the Uncombined/ subdirectory back into the main directory and rerun specroad if you want to chance using them (by not rejecting them this time). Otherwise, you may choose to proceed without bias corrections, especially if flux calibration is not critical to your work (the flux calibration with oddly behaving biases is probably suspect anyway).

      If a Zero.fits file was created, it then asks you:

      ccdproc.zerocor=? (yes or no, or ^C to break):

      Answering "yes" will make the script use the (newly created) Zero.fits file to apply the bias correction to all subsequent images processed. Answering "no" means no bias corrections will be applied to any of the images processed in this directory. Deciding to apply the zero correction from the bias frames is a personal choice.
      Susan Tokarz at the CfA notes that she has never subtracted the combined bias, instead using the overscan region (done automatically in the HPROC program). She has found, in the past with other instruments, that the bias level can vary over time whereas the overscan should reflect the electronic pedestal level at the time the observation was made. The combined biases, Zero.fits, might be subtracted after removing the overscan if there was obvious pattern noise, but that has not been the case with Hectospec. The decision to use the biases is ultimately yours.
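      If you want to look at the combined bias yourself before deciding, a quick check from a separate IRAF session is enough (this is an optional sanity check of my own, not a pipeline step); the statistics and a displayed image will show any obvious pattern noise:

        imstat Zero.fits
        display Zero.fits 1    # needs ds9 or ximtool running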

      Then HCALIBPROC should process all the darks, combining them into a single Dark.fits file, at which point it prompts:

      ccdproc.darkcor=? (yes or no, or ^C to break):

      Here, you want to be more careful. The dark current can be pretty low on the Hectospec, so low in fact that the mean value of the Dark.fits file is on the order of the standard deviation. In essence, the values of the dark current are ill-defined. If you answer "yes," hproc will apply a dark current correction based on the Dark.fits file to all the remaining images we process. The script will dump out the statistics of the Dark.fits file to help you decide if you want to use it or not. If no Dark.fits file could be created (e.g. – if all the darks got rejected), the script should switch to not applying the dark corrections automatically, but I have never run into this situation.
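      If you want to check those dark statistics yourself from a separate IRAF session, imstat will show whether the mean really is comparable to the standard deviation (again, just an optional check, not part of the pipeline):

        imstat Dark.fits fields="image,mean,stddev,min,max"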

      Where input files go? This script calls the IRAF process hproc to do the heavy lifting of processing each input bias and dark image. As such, the original images are saved in the Raw/ subdirectory, and images which have only had preampfix run on them are saved in the Unproc/ subdirectory. Once the Zero.fits and Dark.fits files are created, the uncombined original bias and dark images are placed in the Uncombined/ subdirectory.


    6. MULTIHPROC is called to process all the remaining comparison spectra (comp.*.fits), sky flats (sflat.*.fits), and dome flats (domeflat.*.fits) through hproc in IRAF. As before, the original images are saved in the Raw/ subdirectory, and images which have only had preampfix run on them are saved in the Unproc/ subdirectory. This process is non-interactive.

      Note: This is also the first of the "multi-" scripts you will see. All the "multi-" scripts try to increase the efficiency of the IRAF tasks by splitting up the job across multiple processors on systems with more than one CPU.

      Where input files go? When hproc is run, the original images are saved in the Raw/ subdirectory, and images which have only had preampfix run on them are saved in the Unproc/ directory.


    7. MULTIHMERGE is called to merge the 4 amps into 1 image for all non-object, non-bias, non-dark FITS files using the hmerge script in IRAF. Copies of the unmerged files are saved in the Unmerged/ subdirectory before processing. This process is non-interactive.

      Where input files go? When hmerge is run, the original images are saved in the Unmerged/ subdirectory.


    8. MULTICOMBINE is called to combine all domeflats into a single domeflat.fits file, all skyflats into a single sflat.fits file, and all comparison spectra into a single comp.fits file. This process is non-interactive.

      Where input files go? Copies of the uncombined files are saved in the Uncombined/ subdirectory before processing.


    9. At this point, specroad gives you a chance to make a backup of the current data directory. It shows you the command it will use to make the backup directory (which has the same name as the current data directory, except with the suffix _POST_SPECROAD attached) and the copy command. It will then show you the disk space used by the current data directory and the free disk space on the drive. You are then asked:

      Should SPECROAD perform this backup now?
      (yes or no, or ^C to break):

      If there is enough disk space, you should say "yes", since this gives you the option of backing up to this point if you need to.

      THERE IS NO CHECK TO SEE IF IT IS ACTUALLY POSSIBLE TO PERFORM THE BACKUP, so please check these numbers to make sure enough drive space exists to allow the backup to occur.
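      You can verify those numbers yourself from another terminal before answering; the directory below is just an example borrowed from later in this guide:

        du -sh /Users/juan/iraf/hectodata/2007.0514
        df -h  /Users/juan/iraf/hectodata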


    10. Once the backup is completed (or skipped), you are then asked if you want to run the next reduction script, specroadcal. You are asked:

      OK to do specroadcal? (yes or no, or ^C to break):

      Assuming everything has gone OK, answer "yes". You will be shown the command that is called to launch specroadcal. Something like:

      specroadcal -j 8 -u /Users/juan/iraf/uparm/ -s 11:16:03_2008-06-03

      If you answer "no," you will be shown the same specroadcal command necessary to continue the data reduction, but it will not be executed. This might be necessary if you have to backup the data to another drive (something specroad doesn't do).

      In either case, you should make note of the command that should be used, in case you need to relaunch specroadcal manually (although if you have followed my instructions about setting up your user environment, just typing specroadcal should work).


  2. Running specroadcal: Whether launched by specroad or manually, specroadcal is the script that takes you from raw calibration spectra images to the creation of calibrated spectra, the dispersion function, and the throughput correction of the fibers from the flats.
    1. APFLATTEN:
      1. specroadcal checks to see if a "database/aplast" file exists. If it does, it assumes you have already defined the aperture mask for this field and skips the remaining APFLATTEN steps. Otherwise, you will be asked

        OK to do apflatten?

        Answer "yes" to create "a pixel to pixel normalization file which also removes fringing" from the domeflat.fits file.

      2. specroadcal quickly sets the values of some parameters for IRAF procedures that will be called during its calls on apflatten.

      3. specroadcal forks an xgterm window running an IRAF session to interactively construct an aperture mask from domeflat.fits.

        In the IRAF session, you should be asked the following questions. Answer with the defaults (shown below):

        Find apertures for domeflat? (yes):
        Number of apertures to be found automatically (300):
        Resize apertures for domeflat? (yes):
        Edit apertures for domeflat? (yes):

        At this point you are dropped into the IRAF apedit script. If you haven't used apedit, I strongly suggest you read the helpfile on this IRAF command (within IRAF, type "help apedit") to learn how to properly flag the apertures. The graphics window that appears shows you a cross-section of all the spectra on the domeflat.fits image. You should also see all 300 apertures labeled. You will likely have to use "windowing" commands to zoom in to sections of the graphics window to make sure all the apertures are correctly identified. I have generally found apedit was correct in its aperture identifications off the bat, but you need to review the spectra. NOTE: The gap between apertures 150 and 151 is blank; I believe this is correct (it's located at roughly 2050 pixels).

        X-WINDOWS BUG WARNING: I don't know if it is a bug in X-windows or IRAF, but on both MacOS X and Linux, attempts to resize the graphical window showing the apertures (and later on, the spectra) result in what can best be described as the window suddenly shrinking to a very tiny size and repeatedly resizing itself smaller as you attempt to expand the window. I strongly suggest not resizing the graphical Xwindow, but if you must, use the "maximize" button in the corner of the window instead of trying to drag the corners.

        Once you have inspected the apertures, hit 'q' to quit while in the graphics window and accept the identified apertures.

      4. You now have to fit the individual aperture shapes. Essentially you have to tell IRAF the shape of each of the 300 apertures on the image. When you quit apedit, you will be asked (in the graphics window):

        Trace apertures for domeflat? (yes):

        Followed by the two questions below:

        Fit traced positions for domeflat interactively? (yes):
        Fit curve to aperture 1 of domeflat interactively (yes):


        You should answer "yes" to all three of these questions. At this point you will be kicked into a graphics window showing the shape of aperture 1 in the domeflat and a dashed line showing the functional fit of this aperture. If you are lucky, the IRAF procedure aptrace (which has called apfit to perform the fit) has defaulted to using a 3rd order Legendre function to fit the profiles (to be consistent with the SAO procedure). If the function isn't set up that way, change it by typing the following commands in the graphics window (the leading colons must be typed):

        :func legendre
        :order 3


        Read the instructions on apfit in IRAF in order to understand how to set this function, delete points, refit the function, etc. When you are done with the fit, type "q" to quit and accept the fit to this aperture.

        You will then be asked:

        Fit curve to aperture 2 of domeflat interactively (yes):

        You should probably say "yes" and inspect this fit. If you are in a hurry, after inspecting a few apertures, you could probably say "NO" to interactive fitting to let the software progress on its own, but that's your risk to take.

      5. Once you have finished fitting the shape of all 300 apertures, you need to fit the shape of the domeflat profile in all 300 apertures. You will be asked the following:

        Write apertures for domeflat to database? (yes):
        Flatten apertures in domeflat? (yes):
        Fit spectra from domeflat interactively? (yes):
        Fit spectrum for aperture 1 for domeflat.fits interactively? (yes):


        Answer "yes" to all these questions. At this point you are presented with the spectrum in each aperture of the dome flat to fit with a 70 point cubic spline. If the first aperture looks good,you can accept the fit by hitting 'q'. If not, refit with a different function (you are in the IRAF procedure apfit so that help page will contain what you need to know).

        WARNING: Be careful here. You might need to delete some points in the spline fit near any strong lines/artifacts to avoid them distorting the assumed flat field function.

        Once done, it will process for a bit, then the xgterm session will quit and you are done with apflatten. A file called flat.fits has been created containing the normalization image.
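        As an optional sanity check of my own (not a pipeline step), you can confirm the normalization image looks reasonable; since it is a pixel-to-pixel normalization, the values within the apertures should sit close to unity:

          imstat flat.fits
          implot flat.fits    # plot a cut across the apertures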


    2. MULTIHFLAT is called to execute the IRAF process hflat on multiple CPUs in parallel. hflat is run on all the dome and sky flats as well as the spectral calibration images. It divides the original image by the normalization image you created in apflatten in order to remove pixel to pixel variations and fringing. The pre-hflat-processed images are moved to the Nonorm/ subdirectory.

      When this part of specroadcal starts, unless the extracted spectra files already exist (their names are domeflat.ms.fits, sflat.ms.fits, and comp.ms.fits), you are asked:

      Proceed to multihflat? (yes or no, or ^C to break):

      Answer 'yes' and there should be no more interaction from this portion of the script.

      Where input files go? Copies of the original, unflattened input images are saved in the Unflat/ subdirectory before processing.


    3. HEXTRACT: At this point, the specroadcal script opens an xgterm window running the IRAF procedure hextract to extract the spectra from the domeflat.fits file, storing the extracted spectra in a smaller FITS file named domeflat.ms.fits. This file is smaller because each spectrum is extracted from the multifiber spectral image and stored as a single row of data in the FITS file.

      The xgterm window pops up asking you two questions:

      Resize apertures for domeflat? (yes):
      Edit apertures for domeflat? (yes):


      Say 'yes' to both. For reasons I am not clear on, you will be asked to fit the apertures again as you did during the apflatten stage a bit earlier.

      Fit traced positions for domeflat interactively? (yes):

      I answered 'yes'. Then it asks:

      Fit curve to aperture 1 of domeflat interactively (yes):

      Interestingly, this time around it may default to another function. I reset the function to a 3rd order Chebyshev polynomial fit and fit the shape of the apertures. Again, when you are done with the fit, type "q" to quit and accept the fit to this aperture. At this point you are asked...

      Fit curve to aperture 2 of domeflat interactively (yes):

      and you will continue to be asked about each aperture. You can answer 'NO' after the first few apertures in which case it will automatically fit the rest (again, you do this at the risk of missing some problem aperture).


    4. MULTIHEXTRACT should run without user interaction to extract spectra from skyflats and comparison spectra into their corresponding *.ms.fits files (unless they already exist).


    5. HCAL must now be run in order to determine the dispersion function for all the apertures from the calibration image. The IRAF procedure hcal calls identify to allow the user to identify calibration lines in both the odd and even apertures. Then hcal continues by running freidentify, combining the data into a single database/idcomp.ms file. dispcor is run after the spectra are calibrated to apply the dispersion correction. The pre-dispersion-corrected files are moved to the Nodisp/ subdirectory.

      specroadcal looks for database/idcomp.ms file. If it finds it, it skips hcal altogether. If not, it checks to see if the coordinate list file, idhenear.dat, exists in this directory. If not, it will prompt you:

      No user coordinate list (./idhenear.dat) found - Retrieve default list? (yes or no, or ^C to break):

      You should answer 'yes' if you want to retrieve the default idhenear.dat file from the hectospec IRAF script library. If you have a custom coordinate list file and you forgot to copy it into this directory, you can answer 'no', open up another terminal, and copy that idhenear.dat file into this directory before answering the next question.

      At this point, the script asks

      OK to do hcal? (yes or no, or ^C to break):

      Assuming you answer "yes", you will now be presented with instructions for running hcal in a separate IRAF session which will look like

      >> You now need to go to a separate IRAF window (in an xgterm):
           1) Within IRAF, please switch to this directory:
               cd /Users/juan/iraf/hectodata/2007.0514
           2) Load the hectospec package (if you don't do so in login.cl):
               hectospec
           3) Launch hcal and identify all the calibration lines:
               hcal comp.ms
           When you have completed that task, return to this window.
      
       Did hcal run successfully? (yes or no, or ^C to break):
      
      Now open IRAF in a separate xgterm window (make sure you are in an xgterm and not an xterm or you may get unpleasant results) and do as the instructions indicate. (NOTE: This approach of requiring a separate IRAF session is necessary to get around a problem with the IRAF identify procedure: if it is launched directly by the specroadcal script, identify essentially stops accepting keyboard input.) Once hcal starts, you will almost immediately start using the IRAF procedure identify to identify all the lines in the odd aperture spectrum.

      HCAL HINTS: Here are some hints based on my experience in identifying the lines.

      • HCAL crashes: If HCAL crashes with an "ERROR: Out of space in image header." it means you do not have enough memory allocated in your IRAF setup. Please open your login.cl configuration file in a text editor and change the line reading

        set min_lenuserarea = 64000

        to read

        set min_lenuserarea = 200000

        and that should make this crash go away. Thanks to Megan Kiminki for reporting this problem.

      • If you have never used the IRAF procedure identify, I would suggest reading the IRAF documentation on the procedure to learn of its capabilities.

      • Since the hectospec images have the bluer portion of the spectra at the top of the images, the spectrum as presented is initially shown with wavelength increasing to the left. As such, it can be very helpful to start by flipping the horizontal axis using the windowing command 'w' followed by 'f' to flip the x-axis. Use the windowing 'r'ight and 'l'eft commands to scroll the spectrum right and left. Once you have an initial dispersion solution, you can flip it back.

      • Look at the background as well as the lines! This is a good opportunity to check the spectrum to make sure no disaster occurred in the initial application of the darks and biases. I have found you can see a bad dark correction (due to ill-defined dark current values) sometimes because the background level of the comparison spectrum will be negative! That should not happen and if it does, I would restart the data reduction from the beginning (with a new copy of your raw data).

      • Make sure you are fitting a high-order polynomial function (say a 5th order Legendre, which is what SAO uses) and not a cubic spline since you expect the dispersion correction to be a smooth function. identify uses the IRAF procedure icfit, so you can only change the fitting function after hitting 'f' to fit the spectral lines.

      • Use the appropriate line lists!: Keep in mind that the higher dispersion of the 600gpm grating relative to the 270gpm grating means many lines that are blended (and usable for dispersion corrections) at 270gpm are resolved at 600gpm. I believe the default line list provided with the hectospec software is for the 270gpm grating.

      • The desired RMS of the dispersion correction is, of course, a matter for the observer to decide. That said, with the 600gpm grating centered at 4800Å (and using the line list I mention below), I was able to get the RMS of the dispersion correction down to 0.05Å with some careful pruning of lines and 0.1Å with very little pruning. Clearly your mileage may vary, especially if you use the lower-resolution 270gpm spectra. I'll note that according to the Hectospec software manual the 270gpm grating has a resolution of 5.2-4.5Å FWHM and the 600gpm grating has a resolution of 2.2-1.9Å FWHM.

      • Helium leak: If you are using the HeNeAr lamp as your calibration spectra, you need to be aware that the helium leaked out quite some time ago (as of 2007) and the spectra look different than you might expect based on a HeNeAr spectral atlas (for example, the strong blue helium lines at 3888Å, 4471Å, or 5015Å are not present). In August 2008, I was informed new HeNeAr lamps were added, but that the helium lines were still fairly weak.

      • Some Useful Spectral Atlases: As noted by Megan Kiminki (U Arizona)
        The comparison spectra taken on Hectospec can be confusing. For one, the helium-neon-argon (HeNeAr) lamps produce spectra that do not look like the typical HeNeAr atlas spectra, primarily because of various issues with the helium lamps being too weak or too strong. In addition, medium-resolution spectra often use the PenRay mercury-neon-argon (HgNeAr) lamps, on their own or with the helium lamps also turned on, and HgNeAr spectral atlases are practically non-existent.
        In addition to noting the helium leak I mentioned, she summarizes the problem: the comparison spectra are not well documented. To help alleviate that, I have tried to collect some information here about the comparison spectra.

      Once done with running identify on that one (odd) aperture (aperture 89), you will be asked:

      Write feature data to the database (yes)?

      Say "yes". Now you have to repeat the identifications for the even aperture using freidentify, which will use the information from the first odd aperture fit to do a fit on the next odd aperture (aperture 87). The

      compodd.ms - Ap 87 21/21 31/31 0.197 -0.11 -2.4E-5 0.331
      Fit dispersion function interactively? (no|yes|NO|YES) (yes):


      The last number in the first line above is the RMS. If it is low enough, you can skip the interactive fit (answering "no"); otherwise you can perform an interactive fit ("yes"). The capitalized version of those answers applies to all the remaining apertures. Once you are done processing all the odd apertures, hcal will have you repeat, for the even apertures, everything you just did for the odd apertures. HINT: If you have one consistently bad point, setting the number of iterations to 2 or above during a fit will often eliminate the bad points automatically.

      Once you have finished fitting all the apertures, the dispersion correction will be applied with the IRAF procedure dispcor, which will burp out the minimum and maximum wavelength in each aperture (called w1 and w2, probably in reverse order, since blue wavelengths are at the top of the image) and the stepsize (dw, which is negative). If you are using the 600gpm grating, you should make a note of these numbers; you will need them in the next part of the script. Once you are done identifying all the spectral lines in the calibration spectrum, return to your original specroadcal window, which is waiting at the prompt:

      Did hcal run successfully? (yes or no, or ^C to break):

      Assuming it did, enter "yes" to continue. Users of the 600 gpm grating need to make note of the final dispersion solutions since you will need to know the minimum and maximum wavelengths of these solutions as well as the wavelength step per pixel in order to properly run the next step in the pipeline.


    6. HLIN is run after hcal in order to dispersion correct and rebin the sky flat spectra (it automatically kicks over to dome flats if those are the only flats available). The pre-dispersion corrected files are moved to the Nodisp/ subdirectory. The IRAF process hlin runs and then calls sumspec to rebin all of the spectra to the same linear dispersion. This procedure starts by determining the wavelength range and average step size for the calibration spectrum you just processed through hcal, so you will see an output to the screen that looks something like

      Configuring HLIN ...
      
        Automatically determined wavelength ranges and step size settings are
         wl1: 3500   wl2: 6050   dwl: 0.545
        If you don't accept these defaults, you will be allowed to change them.
      
        Do you accept these settings? (yes or no, or ^C to break):
      
      If you decide not to accept these default wavelength ranges and step sizes for hlin, you will be presented with prompts similar to the following requesting the minimum wavelength, maximum wavelength, and step size to use.

      Enter minimum wavelength (observed: 3530.564 Angstroms):
      Enter maximum wavelength (observed: 6042.803 Angstroms):
      Enter wavelength step per pixel (observed: 0.545 Angstroms/pixel):


      You will need to correctly answer the minimum and maximum wavelengths covered in your spectrum as well as the wavelength step per pixel.

      HLIN CHANGES: The part of the specroadcal script that calls hlin in the original SPECROAD package is hardcoded with values of
      wl1=3700.0
      wl2=9150.0
      dwl=1.21

      which were made assuming a 270gpm grating. This script makes no such assumptions.


      Once you have answered the questions, the IRAF procedure hlin is called unless the script detects a previously generated tpc_domeflat.ms.fits file in the directory. It requires no further user interaction.

      Where input files go? Copies of the original, unlinearized or dispersion corrected input images are saved in the Nodisp/ subdirectory before processing.


    7. MAKETRAN is the final IRAF procedure called by specroadcal. maketran creates a fiber transmission multispec file named tpc_*.ms.fits. It takes the mean of the apertures, divides every aperture by that mean, and then uses the IRAF procedure sumspec to fit a low-order function to each aperture (currently set to be an 8 point cubic spline fit). This throughput measurement will be used later by htran to correct for throughput variations in the apertures. At this stage, specroadcal will make the following two queries:

      Do you want to apply the redleak correction? (yes or no, or ^C to break)
      Do you want to correct for sky absorption? (yes or no, or ^C to break)


      The "redleak" here is the brightening on the red end of the Hectospec spectra due to "a leak of light from a positioning LED" (the correction is applied by the IRAF task hredleak1) and is only significant redward of 7000Å.

      The "sky absorption" correction divides out the atmospheric water absorption bands 6870-6955Å and 7680-7730Å using the IRAF process habsky. It does this using the skyab.ms.fits "skyfile" in the hectospec IRAF scripts library directory, hectospec$lib/skyabs.ms.fits in IRAF's notation).

      Since the redleak and sky absorption line corrections are not an issue at the blue end of the spectrum where I was working, I made them optional. You should answer these questions based on your project's needs.


    8. Rechecking MAKETRAN results to avoid pain: Before running specroadobj (or before backing up the results of this directory in the next step), I would examine the tpc_*.ms.fits file that was created by MAKETRAN using a procedure like splot in IRAF (see the sketch at the end of this step). I found that sometimes the fitted transmission solution went a bit wild and actually indicated negative transmissions! If this is the case, you might need to manually re-run maketran on the original file stored in Notran/. As a reminder, specroadcal will prompt you with:

      Have you reviewed the results from MAKETRAN? (yes or no):

      Just answer truthfully (you will be allowed to continue if you answer 'no' with only a mild reprimand).
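      A minimal sketch of that review, using the tpc_domeflat.ms.fits name mentioned earlier (use whichever tpc_*.ms.fits file maketran actually produced for you):

        splot tpc_domeflat.ms.fits
        # (you may need to load the noao and onedspec packages first)
        # step through the apertures with ')' and '(' and watch for
        # throughput curves that dip below zero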


    9. Like specroad, specroadcal gives you a chance to make a backup of the current data directory when it is finished. It will show you the commands it is about to execute (copying the entire directory to one with the suffix _POST_SPECROADCAL) and the disk space needed and available. You are then asked:

      Should SPECROADCAL perform this backup now?
      (yes or no):

      If there is enough disk space, you should say "yes", since this gives you the option of backing up to this point if you need to.


    10. Once the backup is completed (or skipped), you are then asked if you want to run specroadobj. You are asked:

      OK to do specroadobj? (yes or no, or ^C to break):

      Assuming all has gone well, answer "yes". You'll see the command used to launch specroadobj. Something like:

      specroadobj -j 8 -u /Users/juan/iraf/uparm/ -s 11:16:03_2008-06-03

      If you answer "no," you'll see the same command printed out, but it will not be executed. You should make note of the command that should be used, in case you need to relaunch specroadobj manually (although, again, if you have followed my instructions about setting up your user environment, just typing specroadobj should work).


  3. Running specroadobj: The last of the E-SPECROAD scripts run, this script reduces all the object data files to completion. There is almost no user interaction during a typical specroadobj run except near the very end.

    1. MULTIHPROC is run on all the object files. It applies the preampfix, flat field, and zeropoint corrections.

      Where input files go? As with previous hproc calls, the original images are saved in the Raw/ subdirectory, and images which have only had preampfix run on them are saved in the Unproc/ subdirectory.


    2. MULTIHMERGE is run on all object images to merge the 4 amps into 1 image.

      Where input files go? When hmerge is run, the original images are saved in the Unmerged/ subdirectory.


    3. MULTIHFLAT runs on all the object images dividing those images by the normalization image you created in apflatten in order to remove pixel to pixel variations and fringing.

      Where input files go? The pre-hflat-processed images are moved to the Nonorm/ subdirectory.


    4. MULTIHCOSMIC runs on all the object images. It calls the IRAF hcosmic procedure, which in turn calls the hcosmed procedure to make a median file from the input images. hcoslim is used to create an image of the pixel-by-pixel range limits for cosmic ray detection, and hcosbad is used to create bad pixel images where comparison of the two images shows exceptionally high flux. The areas of bad pixels are grown by 1.5 pixels and fixpix is run to replace those bad pixels with linear interpolation along lines or columns using the nearest good pixels.

      Where input files go? Pre-hcosmic-processed images are copied to the Nocosmic/ subdirectory.



    5. MULTIHEXTRACT is run on all the object images, extracting the spectra into smaller *.ms.fits files.

    6. MULTIHLIN is run on all the object image files that have been processed through hextract to dispersion correct, shift, and linearize the spectra. Before MULTIHLIN is run, we must make sure the proper sky line is chosen.

      The original SPECROAD package is hardcoded to use the OH8399 sky line. E-SPECROAD has been modified to automatically select an appropriate sky line given the wavelength range of the data (determined by checking the wavelength range of the sflat.ms.fits or domeflat.ms.fits file). So before running MULTIHLIN you will see a notice such as

        NOTE: Automatically chose sky line O5577 using a sky standard
              deviation of 0.40.

      to indicate which sky line and standard deviation are being used. You will need to modify this script if these values are not acceptable. You are then asked:

      OK to proceed with object data reduction? (yes or no):

      If your data are so blue that the O5577 sky line can't be used, the script will quit before calling hlin, since it can't continue to process the data with no sky lines to fit. If you believe the sky line has been chosen properly, answer "yes" to continue with the object data reduction.

      Where input files go? Copies of the original, unlinearized or dispersion corrected input images are saved in the Nodisp/ subdirectory before processing.



    7. MULTIHSCOMBINE is run on all the object image files that have been processed through hextract to combine all wavelength-linearized spectra of the same field into one. If you shoot spectra of the same field on different nights, the hscombine routine is the one you would use to combine the linearized spectra, assuming you used the same wavelength limits on all the input spectra. During this stage in the script, you will be asked two questions about how you want the images combined:

      How do you want to combine images (average|median|sum)? :

      Which method should be used to reject pixels? (none|minmax|ccdclip|crreject|sigclip|avsigclip|pclip):

      It is your choice as to which method of combining images works best for you. The defaults used at SAO are "average" and "ccdclip" respectively.

      Where input files go? The pre-hscombine-processed images are copied to the Notcombined/ subdirectory (the IRAF help for the hscombine command claims it's the Uncombined/ subdirectory, but the code for the command shows the files are placed in Notcombined/).


    8. MULTIHTRAN calls the IRAF process htran to use the fiber transmission multispec file you created at the end of the MAKETRAN call near the end of specroadcal to correct the spectra in all the fibers for
      1. variation in projected sky area each fiber sees (corrected with hgeom1).
      2. variations in fiber throughput (corrected using the file made by maketran, via the IRAF procedure htran1)
      3. removal of two sky absorption features (using the IRAF procedure habsky1) [OPTIONAL: Based on earlier inputs]
      4. application of the red leak correction (using hredcorr1) [OPTIONAL: Based on earlier inputs]

        Where input files go? When htran is run, copies of the input image files are saved before each of the various IRAF procedures it uses is called:

        1. Copies of the input images before the geometrical corrections of hgeom1 are applied are saved in Nogeom/
        2. Copies of the input images before the fiber transmission corrections of htran1 are applied are saved in Notran/
        3. Copies of the input images before the sky absorption corrections of habsky1 are applied are saved in Noskyab/
        4. Copies of the input images before the red leak corrections of hredcorr1 are applied are saved in Noredcorr/



    9. MULTIHSPLIT is used to split all the object image files into individual 1-dimensional spectra FITS files, all placed in 1d.field_name/ subdirectories, where "field_name" is the name of the field in the original multispec image file.


    10. MULTIHSKYPROC is now run on all the individual 1-dimensional spectra files to remove the background from the fiber spectra. It calls the IRAF routine hskyproc, which is a re-write of the original skyproc used by SAO into an IRAF-native form that is much more portable. This is one point where the E-SPECROAD pipeline differs from the original SPECROAD code used at SAO in the code called to process the images. hskyproc was written by Jessica Mink based on the original ksh code in skyproc, and she has verified that it produces results identical to the original.

      When HSKYPROC finishes, you are asked:

      Proceed to doxcsao? (yes or no, or ^C to break):

      This prompt is here to give you an opportunity to review the results of HSKYPROC. Occasionally HSKYPROC will reject all the sky fibers for a given night, leaving you with no way to do sky subtraction and causing DOXCSAO to output complete nonsense. After reviewing the results from HSKYPROC to make sure skies exist, you can answer 'yes' to proceed to DOSKYXCSAO or 'no' if you feel there is a need to address poor HSKYPROC results first.


    11. Finally, DOSKYXCSAO is called to perform an xcsao call to cross-correlate various spectral templates with the 1-D spectra in order to obtain radial velocity estimates (which are stored in the FITS headers). The spectral file templates used are listed in the templatelist file in the /usr/local/hectospec directory. templatelist is copied to your data directory during each run of specroadobj. You could edit /usr/local/hectospec/templatelist to select different templates by default if you wish. You could certainly edit the specroadobj script to not run this step if you prefer.

At this point, you should have completely reduced your night's data using E-SPECROAD.

FINAL HINTS: Here are some final hints I can provide regarding hectospec data reduction.

  • The FITS header keywords specific to hectospec are documented at
    http://tdc-www.harvard.edu/instruments/hectospec/keywords.html

  • Remember any "multi-" script corresponds to an IRAF procedure that could be run manually if necessary.

  • If your final spectra look distorted near the blue or red end of the spectra, look to see if the tpc_*.ms.fits file used for the fiber transmission corrections has bad values in it using the IRAF procedure splot to view the file.

  • Jessica Mink suggests that to combine observations of the same field done with different pointings (say on different nights or at two widely-spaced times on the same night), look at the IRAF Hectospec package tool called hsmix.

Funding: The creation of E-SPECROAD was supported by National Science Foundation Grant AST-0729989 ("RUI: Collaborative Research to Map the Asymmetric Thick Disk").