204.1. Calibration frames¶
Data Release: Data Preview 1
Container Size: large
LSST Science Pipelines version: r29.1.1
Last verified to run: 2025-06-23
Repository: github.com/lsst/tutorial-notebooks
Learning objective: To understand and access the calibration images (bias, dark, and flat frames).
LSST data products: bias, dark, and flat
Packages: lsst.daf.butler, lsst.afw.display
Credit: Originally developed by the Rubin Community Science team. Please consider acknowledging them if this notebook is used for the preparation of journal articles, software releases, or other notebooks.
Get Support: Everyone is encouraged to ask questions or raise issues in the Support Category of the Rubin Community Forum. Rubin staff will respond to all questions posted there.
1. Introduction¶
The process of Instrument Signature Removal (ISR; also called "image reduction") uses bias, dark, and flat field calibration frames to transform raw images into visit images, as sketched conceptually after the definitions below.
Bias images: An exposure obtained with zero exposure time to measure the pedestal level of counts applied during readout.
Dark frames: An exposure obtained with a nonzero exposure time but with no illumination (shutter closed) to measure the detector's response to the thermal energy in the camera.
Flat fields: An exposure taken with even illumination across the field to measure pixel response variations.
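Conceptually, ISR combines these frames roughly as follows. This is a minimal, illustrative NumPy sketch with hypothetical arrays; the real ISR task in the Science Pipelines applies many additional corrections (overscan, linearity, defect masking, and more).
import numpy as np

# Hypothetical stand-in arrays (illustration only; not real calibration data).
rng = np.random.default_rng(42)
raw_adu = rng.normal(1000.0, 10.0, size=(100, 100))    # raw science pixels (ADU)
bias_adu = rng.normal(500.0, 1.0, size=(100, 100))     # bias pedestal (ADU)
dark_rate = rng.normal(0.01, 0.001, size=(100, 100))   # dark current (ADU per second)
flat_norm = rng.normal(1.0, 0.02, size=(100, 100))     # normalized flat-field response
exptime = 30.0                                          # exposure time (seconds)

# Simplified correction: remove the bias pedestal and the scaled dark current,
# then divide by the flat field to correct pixel response variations.
corrected = (raw_adu - bias_adu - dark_rate * exptime) / flat_norm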
Related tutorials: See the 200-level tutorial on visit images, and the 100-level tutorials on how to use the Butler.
1.1. Import packages¶
Import the Rubin data Butler and the afw.display package to retrieve and display calibration images.
from lsst.daf.butler import Butler
import lsst.afw.display as afwDisplay
import numpy as np
1.2. Define parameters and functions¶
Instantiate the Butler.
butler = Butler("dp1", collections="LSSTComCam/DP1")
assert butler is not None
Set the display backend to Firefly.
afwDisplay.setDefaultBackend('firefly')
Define the visit and detector identifiers to obtain calibration frames for.
my_visit = 2024120700527
my_detector = 0
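Optionally, other exposures can be found by querying the Butler registry for exposure dimension records. The sketch below is illustrative only: the day_obs value and the record attributes printed are assumptions and may need to be adjusted.
# Optional sketch: list a few LSSTComCam exposures from the same observing night.
records = butler.registry.queryDimensionRecords(
    "exposure", where="instrument = 'LSSTComCam' AND exposure.day_obs = 20241207")
for i, rec in enumerate(records):
    print(rec.id, rec.physical_filter, rec.exposure_time)
    if i >= 4:
        break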
2. Data access¶
The calibration frames are only accessible via the Butler.
Show the Butler dimensions for each calibration frame type.
Notice that only the flat frames are organized by band (by filter).
for frame_type in ['raw', 'bias', 'dark', 'flat']:
    print(' ')
    print(frame_type)
    print(butler.get_dataset_type(frame_type))
    print('Required dimensions: ', butler.get_dataset_type(frame_type).dimensions.required)
raw
DatasetType('raw', {band, instrument, day_obs, detector, group, physical_filter, exposure}, Exposure)
Required dimensions:  {instrument, detector, exposure}

bias
DatasetType('bias', {instrument, detector}, ExposureF, isCalibration=True)
Required dimensions:  {instrument, detector}

dark
DatasetType('dark', {instrument, detector}, ExposureF, isCalibration=True)
Required dimensions:  {instrument, detector}

flat
DatasetType('flat', {band, instrument, detector, physical_filter}, ExposureF, isCalibration=True)
Required dimensions:  {instrument, detector, physical_filter}
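To see why the flats behave differently, compare the full and required dimensions of the flat dataset type: band appears in the full set but is implied by physical_filter, so only physical_filter is required. A minimal check of this distinction, using only the calls already shown above:
flat_type = butler.get_dataset_type("flat")
print("Full dimensions:     ", flat_type.dimensions)
print("Required dimensions: ", flat_type.dimensions.required)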
2.1. Retrieve and display frames¶
Retrieve the raw exposure and the bias, dark, and flat frames, all for the same visit, and display each in a separate Firefly frame.
raw = butler.get("raw", exposure=my_visit, detector=my_detector)
afw_display = afwDisplay.Display(frame=1)
afw_display.image(raw.image)
bias = butler.get("bias", visit=my_visit, detector=my_detector)
afw_display = afwDisplay.Display(frame=2)
afw_display.image(bias)
dark = butler.get("dark", visit=my_visit, detector=my_detector)
afw_display = afwDisplay.Display(frame=3)
afw_display.image(dark)
flat = butler.get("flat", visit=my_visit, detector=my_detector)
afw_display = afwDisplay.Display(frame=4)
afw_display.image(flat)
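Optionally, the display stretch can be adjusted before (or after) an image is sent to Firefly. The sketch below assumes the scale method of afw.display, as used in other Rubin tutorials; the chosen algorithm and limits are arbitrary.
# Optional sketch: redisplay the flat frame with a zscale stretch.
afw_display = afwDisplay.Display(frame=4)
afw_display.scale("linear", "zscale")
afw_display.image(flat)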
3. Pixel data¶
The bias, dark, and flat frames each have an image plane and a variance plane. They also have a mask plane, but it is unpopulated; it is the mask plane of the raw image that is used in processing.
3.1. Image plane¶
Pixel data in units of ADU (analog-to-digital units).
Print the minimum, maximum, mean, and standard deviation of the pixel values from the image planes of the bias, dark, and flat frames.
print('image min max mean std ')
print('----------------------------------------------------')
print(f"bias {np.min(bias.image.array):7.2f} {np.max(bias.image.array):9.2f} \
{np.mean(bias.image.array):7.2f} {np.std(bias.image.array):7.2f}")
print(f"dark {np.min(dark.image.array):7.2f} {np.max(dark.image.array):9.2f} \
{np.mean(dark.image.array):7.2f} {np.std(dark.image.array):7.2f}")
print(f"flat {np.min(flat.image.array):7.2f} {np.max(flat.image.array):9.2f} \
{np.mean(flat.image.array):7.2f} {np.std(flat.image.array):7.2f}")
image      min        max     mean     std
----------------------------------------------------
bias    -46.09  120480.11     0.41   51.03
dark    -17.15    8557.23     0.02    9.15
flat     -0.33       8.82     1.01    0.02
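The minimum and maximum values above are likely dominated by a small number of outlier pixels. For a more robust summary, sigma-clipped statistics can be computed with lsst.afw.math; this is an optional sketch not used elsewhere in this notebook.
import lsst.afw.math as afwMath

# Sigma-clipped mean and standard deviation of the bias image plane.
stats = afwMath.makeStatistics(bias.image, afwMath.MEANCLIP | afwMath.STDEVCLIP)
print(stats.getValue(afwMath.MEANCLIP), stats.getValue(afwMath.STDEVCLIP))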
3.2. Variance plane¶
Pixel data in units of ADU^2 (analog-to-digital units, squared).
Print the minimum, mean, and maximum pixel value from the variance planes of the bias, dark, and flat frames.
print('image min mean max ')
print('-----------------------------------------')
print(f"bias {np.min(bias.variance.array):7.2f} \
{np.mean(bias.variance.array):7.2f} {np.max(bias.variance.array):11.2f}")
print(f"dark {np.min(dark.variance.array):7.2f} \
{np.mean(dark.variance.array):7.2f} {np.max(dark.variance.array):11.2f}")
print(f"flat {np.min(flat.variance.array):7.2f} \
{np.mean(flat.variance.array):7.2f} {np.max(flat.variance.array):11.2f}")
image      min     mean         max
-----------------------------------------
bias      0.20     4.46  2585303.25
dark      0.00     0.25   185063.16
flat      0.00     0.00        0.24
3.3. Unpopulated mask plane¶
Show that the mask plane is unpopulated (all zero values).
print('bias ', np.min(bias.mask.array),
np.mean(bias.mask.array), np.max(bias.mask.array))
print('dark ', np.min(dark.mask.array),
np.mean(dark.mask.array), np.max(dark.mask.array))
print('flat ', np.min(flat.mask.array),
np.mean(flat.mask.array), np.max(flat.mask.array))
bias  0 0.0 0
dark  0 0.0 0
flat  0 0.0 0
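Although every mask pixel is zero, the mask object still carries its dictionary of defined mask planes. As an optional, minimal sketch, the plane names and their bit indices can be listed:
# List the mask plane names and bit indices defined for the bias mask.
print(bias.mask.getMaskPlaneDict())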
4.1. Metadata¶
Get the metadata; use the bias frame for this example (options for the dark and flat frames are commented out).
metadata = bias.getMetadata()
# metadata = dark.getMetadata()
# metadata = flat.getMetadata()
Option to display the long list of metadata.
# metadata
Convert the metadata to a Python dictionary.
md_dict = metadata.toDict()
Show any metadata key that contains the string 'UNIT' or 'MJD'.
temp = 'UNIT'
# temp = 'MJD'
for key in md_dict.keys():
    if key.find(temp) >= 0:
        print(key)
BUNIT
LSST ISR UNITS
LSST ISR OVERSCAN SERIAL UNITS
LSST ISR READNOISE UNITS
LSST ISR OVERSCAN PARALLEL UNITS
Print the 'BUNIT'.
md_dict['BUNIT']
'adu'
Clean up.
del md_dict, metadata
4.2. Bounding box¶
The bounding box defines the extent (corners) of the image.
Get the bounding box; use the bias frame for this example.
bbox = bias.getBBox()
bbox
Box2I(corner=Point2I(0, 0), dimensions=Extent2I(4072, 4000))
Print the start and end pixels in the X and Y dimensions.
print(bbox.beginX, bbox.beginY)
print(bbox.endX, bbox.endY)
0 0
4072 4000
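The bounding box machinery can also be used to extract a cutout. The following is a minimal sketch using lsst.geom; the corner and size of the small box are arbitrary choices for illustration.
import lsst.geom as geom

# Define a small box near the center of the detector and take a cutout view.
small_box = geom.Box2I(geom.Point2I(2000, 2000), geom.Extent2I(200, 200))
cutout = bias.image[small_box]
print(cutout.getBBox())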
Clean up.
del bbox