SHARC II Data Analysis

Overview

Like most (if not all!) sub-mm cameras, data from SHARC-II is not trivial to analyze. Each 10 minute file is roughly 30 MB in size, and many targets (particularly faint point sources) require 3 hours of integration time (that's about 0.5 GB of data). In addition, estimating the sky background is difficult and requires care on the part of the user. The purpose of this page is to provide a ROUGH outline of how to go from raw data to calibrated map. The rest of the documentation provides more detailed notes.

Data reduction takes place in three steps:

  1. Preliminary
    • check pointing
    • check tau
    • check for gaps
  2. Mapmaking
    • Using CRUSH to convert raw data to maps
    • Co-adding separate maps
  3. Calibration
    • Reduce calibrators
    • Perform aperture or PSF photometry on calibrator
    • Use known flux to calculate a scaling factor
    • Scale science maps.

Preliminary steps

At this stage, you want to use your logs to identify which scan numbers correspond to which sources and calibrators. If you don't have them handy, you can generate logs using one of the Sharc utilities. This program takes as input a range of scan numbers, then reads the headers to produce a useful log.

The logs are useful because they keep track of the optical depth (τ) and pointing offsets (FAZO, FZAO). If significant pointing changes were made throughout the night, you will have to account for that in the data reduction (by supplying shifts). The optical depth is useful, but we generally use polynomial fits to the optical depth measurements from a whole night.
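
If you need to (re)generate a log yourself, a few lines of Python with astropy will do; here is a minimal sketch. The filename pattern follows the example in the "Data location" section below, but the header keyword names (other than the standard OBJECT) are guesses — check one of your raw files and substitute the keywords actually present.

  # Minimal log generator: read raw SHARC-II FITS headers over a scan range.
  # The keyword names below (other than OBJECT) are hypothetical; adjust them
  # to match your files.
  from astropy.io import fits

  KEYS = ["OBJECT", "FAZO", "FZAO", "TAU225"]   # hypothetical keywords

  def make_log(first, last, template="sharc2-{0:06d}.fits"):
      for scan in range(first, last + 1):
          try:
              hdr = fits.getheader(template.format(scan))
          except OSError:
              continue                           # skip missing scan numbers
          print(scan, *[str(hdr.get(k, "---")) for k in KEYS], sep="\t")

  make_log(14280, 14300)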

Why are the polynomial fits useful? Keep in mind that the scaling factor between τ(225 GHz) and τ(350 µm) is about 25, so any uncertainty in the optical depth is amplified by that large factor. Measurement error on the 225 GHz tipper is not insignificant, and hence a polynomial fit to the optical depth (τ) readings over the course of a night gives a better estimate of the atmospheric opacity. This is also standard procedure with SCUBA at the JCMT.
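
As an illustration of the idea (not an official recipe), the sketch below fits a low-order polynomial to a night's tipper readings with numpy and converts the smoothed value to 350 µm using the factor of ~25 quoted above. Verify the exact 225 GHz → 350 µm conversion (including any offset term) against the current calibration pages before relying on it.

  # Sketch: smooth the 225 GHz tipper readings with a low-order polynomial.
  import numpy as np

  ut_hours = np.array([4.1, 5.0, 6.2, 7.5, 8.8, 9.9])              # example times (UT)
  tau225   = np.array([0.055, 0.052, 0.050, 0.049, 0.051, 0.054])  # example readings

  coeffs = np.polyfit(ut_hours, tau225, deg=2)   # quadratic fit to the night
  tau225_fit = np.polyval(coeffs, 7.0)           # tau(225 GHz) at a scan's UT
  tau350 = 25.0 * tau225_fit                     # approximate scaling quoted above
  print(f"tau225 = {tau225_fit:.3f}  ->  tau350 ~ {tau350:.2f}")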

Finally, you want to run “sharcgap” on your files to see if any suffer from timing gaps. These occurred when buffers were overwritten with data, losing a small chunk (on the order of a few seconds) of data. Some files with gaps showed no ill effects, though others suffered from a de-synchronization of the antenna and science data. This resulted in signals being associated with the wrong place on the sky, rendering the data difficult (if not impossible) to analyze. Gaps used to show up in data taken before 2004, but haven't since. Still, it is worth checking. CRUSH also checks for gaps in the files.

Mapmaking

For almost all applications, you will be using CRUSH, the primary SHARC-II data reduction package. There is also a package called SHARCSOLVE, but its use is reserved almost exclusively for observations done in CHOPPING mode; CRUSH also supports such data. CRUSH is public, but you should contact one of us if you want to use SHARCSOLVE for chopping observations.

In both cases, the software takes a list of scan numbers (as well as configuration options) and produces a FITS file containing four images: signal, noise, weight, and signal-to-noise ratio.
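
A quick way to see what you got is to open the output with astropy and list its contents. The sketch below assumes nothing about which plane sits in which HDU, since the exact layout can vary between CRUSH versions; the output file name is a placeholder.

  # Sketch: inspect a CRUSH output map and pull out an image plane.
  from astropy.io import fits

  with fits.open("mysource.14282.fits") as hdul:    # hypothetical output name
      hdul.info()                                   # lists the image planes
      # Grab the first plane that actually contains data (often the signal map);
      # use the hdul.info() listing to identify the other planes.
      signal = next(h.data for h in hdul if h.data is not None)
      print("map shape:", signal.shape)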

Calibration

This is the step that is probably the most important, and is the source of many of the questions that the SHARC-II group receives. In general you want to reduce your calibration scans, and then apply an appropriate scaling factor to correct the science maps.

How you do this depends on whether you will use PSF photometry or aperture photometry. Whatever method you use on your science frames must also be used on your calibration frames. For simplicity, I will assume you use a fixed aperture for the rest of this discussion. Take your calibration frame and measure the flux within your chosen aperture. Call this the “INSTRUMENTAL FLUX”. Now, look up the true 350 µm flux of the calibrator. Many calibrators have a flux that is constant in time (such as Arp220), but objects in the solar system do not. We have provided a recipe for calculating the true 350 µm flux of such objects on the calibration part of the web page.
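
As a concrete (and entirely optional) example, the snippet below measures the instrumental flux in a fixed circular aperture using the photutils package; the file name, aperture centre, and radius are placeholders. Any photometry tool is fine, as long as the same aperture is used on the science and calibration frames.

  # Sketch: fixed-aperture photometry on a reduced calibrator map.
  from astropy.io import fits
  from photutils.aperture import CircularAperture, aperture_photometry

  data = fits.getdata("calibrator_map.fits")             # hypothetical file name
  aperture = CircularAperture([(128.0, 128.0)], r=12.0)  # centre and radius in pixels
  table = aperture_photometry(data, aperture)
  flux_instrumental = float(table["aperture_sum"][0])
  print("instrumental flux:", flux_instrumental)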

You can use the instrumental flux and true flux to derive a scaling factor. This can then be applied to the science frame using software from our utilities page (a simple program that reads a map and multiplies it by the scaling factor).
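
The utility itself lives on the utilities page; if you prefer to do it by hand, a minimal stand-in looks like the following (the file names and the HDU index holding the signal plane are assumptions).

  # Sketch: multiply a science map by a flux scaling factor and save a copy.
  from astropy.io import fits

  def scale_map(infile, outfile, factor):
      with fits.open(infile) as hdul:
          # Adjust the index if your signal plane lives in another HDU.
          hdul[0].data = hdul[0].data * factor
          hdul[0].header["FLUXSCAL"] = (factor, "calibration scale factor")
          hdul.writeto(outfile, overwrite=True)

  scale_map("science_map.fits", "science_map_cal.fits", factor=1.15)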

Then you can apply the aperture to the science frame to derive calibrated fluxes of your targets.

Calibration with CRUSH

  1. CRUSH creates maps that are roughly calibrated, typically to 20–30% rms, provided the 'correct' optical depth (τ) values are supplied. But it is possible to improve calibration to 10% rms or better by determining appropriate calibration corrections.
  2. It is important to understand the difference between peak flux and aperture fluxes. Integrating fluxes inside apertures is the more general way of measuring fluxes, and you should use aperture fluxes for anything resolved or extended. (Apertures always need to be larger than a circle with r = FWHM, where FWHM is the image resolution including any smoothing. If unsure, use the show tool to see the effective image resolution). Peak fluxes offer higher precision photometry for point sources, especially on beam-smoothed maps.
  3. A nice feature of CRUSH is that calibration is independent of reduction options, at least for point sources. No matter how you reduce a point source (default, -faint, -extended, -deep, apply spatial filtering or not), the peak flux will be the same (within noise). This has two practical consequences:
    1. You can reduce a calibrator with one set of options (e.g., Mars with -bright) and the science target with other options (e.g., with -deep), and you can still compare the two without worry, as long as your source is compact or point-like.
    2. For extended sources, there may be some dependence of the calibration on reduction options, as different options (e.g., -faint, -deep, default) filter structures on different scales. You should never use -deep for reducing extended structures, and you will not need to worry when calibrating apertures up to about 1/2 FoV in size. For larger apertures, the process gets a bit complicated and I defer any recommendations to a future document on the matter…
  4. Provide the best τ value at the time of reduction. By default, CRUSH will contact the 'MaiTau' server for daily polynomial τ fits. However, these fits are usually provided a few weeks to a few months after the run (and may not be available in the future at all). Check the console output when the scan is read to see if a MaiTau value was obtained for your data. If yes, you are probably OK. If MaiTau lookup failed, you should provide your best guess of the τ value at reduction. To specify a 225 GHz zenith τ, your command line would look like:
    > crush sharc2 […] -tau.225GHz=0.053 -tau=225GHz <scans> …
    The first option defines a 225 GHz τ value, and the second instructs crush to use the 225 GHz value as its reference. Similarly, you may provide a 350 µm tipper value as:
    > crush sharc2 […] -tau.350um=1.116 -tau=350um <scans> …
    The values you provide will apply to all scans listed afterwards, until new τ values (or a new reference) are specified.
  5. Try to calibrate against a known source at similar elevation (± 5°), if possible. This way you do not have to worry as much about the elevation-dependence of the calibration.
  6. Compare apples with apples and oranges with oranges. If you will use apertures of some size to measure fluxes on your science target, you should determine a calibration correction with the same aperture size on your calibrator source. If you use peak fluxes on your science target, use only point-like or compact calibrators, and determine calibration corrections based on peak fluxes.
  7. Suppose you observe a calibrator with a true flux Ftrue (for your appropriate aperture, or smoothed peak value!). You made sure CRUSH reduced this calibrator with the correct tau value, and you measure a flux Fobs. Then your calibration correction is
    C = Ftrue / Fobs
    You can apply this correction to crush via the 'scale' option (a short worked example follows this list). E.g.,
    > crush […] -scale=1.15 <scans> -scale=0.83 <scans> …
  8. If your calibrator is not completely point-like (i.e., ≥ 4″ FWHM), make sure you understand what the 'true' flux of your calibrator is, under the given smoothing and/or aperture size. The SHARC-2 calibrator table generally assumes beam-smoothed peak fluxes.
  9. Always prefer primary calibrators (Mars, Uranus, Neptune), then well-calibrated stable secondary calibrators (e.g., Arp220, compact HII regions), then asteroids with temperature models. You can also use variable secondary calibrators (e.g., evolved stars) as long as you have calibrated these against one of the stable calibrator sources during the run.
  10. Finally, focus is an important issue that should not be neglected. Calibration is strongly dependent on z-focus. Therefore, you should always make sure you observe with optimum focus, and always calibrate your data against a calibrator obtained with the same or similar focus value or quality.
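
To make item 7 concrete, here is the short worked example referred to above: compute the correction from your calibrator measurement and pass it to CRUSH via the -scale option shown there. The flux values are made up for illustration.

  # Sketch: turn a calibrator measurement into a CRUSH '-scale' value.
  F_true = 9.8   # Jy, true flux of the calibrator (same aperture / smoothed peak)
  F_obs  = 8.5   # Jy, flux measured on the CRUSH-reduced calibrator map

  C = F_true / F_obs
  print(f"calibration correction C = {C:.2f}")
  print(f"crush sharc2 -scale={C:.2f} <scans> ...")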

Data location

Use sftp to transfer data to your own computer. On a good night, SHARC II will generate 2–3 GB of raw data.

The data are stored at
kilauea:/halfT/sharcii/data2_YYYYmmm,
where YYYYmmm is the observing date, for example 2014mar. The most recent data are linked to
kilauea:/halfT/sharcii/current. The data file names are
sharc2-NNNNN.fits,
where NNNNN is the exposure number. For example, sharc2-014282.fits is exposure 14282.

Image (fits) files produced by CRUSH are stored at
kilauea:~sharc/src/data.
The filenames are composed of the source name and the exposure numbers.

For security, two local copies of the data are kept at all times. A third copy at Caltech is generated within a few days. If for some reason the data seems to be missing, please contact the CSO staff to remedy the problem.
