Cubes¶
Pyspeckit can do a few things with spectral cubes. The most interesting is the spectral line fitting. Cube objects have a fiteach() method that will fit a model to each spectrum in the cube. It can be made to do this in parallel with the multicore option.
As of version 0.16, pyspeckit cubes can be read from SpectralCube objects:
>>> pcube = pyspeckit.Cube(cube=mySpectralCube)
Otherwise, they can be created from FITS cubes on disk:
>>> pcube = pyspeckit.Cube(filename="mycube.fits")
or from arrays:
>>> mycube = np.random.randn(250,50,50)
>>> myxaxis = np.linspace(-100,100,250)
>>> pcube = pyspeckit.Cube(cube=mycube, xarr=myxaxis, xunit='km/s')
The most interesting features of the Cube object are the fiteach() method, which fits a model spectrum to each element of the cube, and mapplot, which plots up various projections of the cube.
Cube.mapplot will create an interactive plot window. You can click on any pixel shown in that window and pull up a second window showing the spectrum at that pixel. If you’ve fitted the cube, the associated best-fit model will also be shown. This interactive setup can be a bit fragile, though, so please report bugs aggressively so we can weed them out!
The interactive viewer has a few button interactions, described in the mapplot documentation below.
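Putting these pieces together, here is a minimal end-to-end sketch (an illustration, not from the original docs: the filename, fittype, and Gaussian guesses are placeholders you would adapt to your data):
>>> import numpy as np
>>> import pyspeckit
>>> pcube = pyspeckit.Cube(filename="mycube.fits")
>>> # fit a single Gaussian (amplitude, centroid, width) to every pixel, in parallel
>>> pcube.fiteach(fittype='gaussian', guesses=[1.0, 0.0, 10.0], signal_cut=3, multicore=4)
>>> # interactive map: click a pixel to see its spectrum and best-fit model
>>> pcube.mapplot()
>>> # the fitted-parameter maps are stored in parcube, shape [npars, ny, nx]
>>> amplitude_map, centroid_map, width_map = pcube.parcube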
Cubes¶
Tools to deal with spectroscopic data cubes.
Some features in Cubes require additional packages.
The ‘grunt work’ is performed by the cubes module.
- class pyspeckit.cubes.SpectralCube.Cube(filename=None, cube=None, xarr=None, xunit=None, errorcube=None, header=None, x0=0, y0=0, maskmap=None, **kwargs)[source] [github] [bitbucket]¶
Bases: Spectrum
A pyspeckit Cube object. Can be created from a FITS file on disk or from an array or a spectral_cube.SpectralCube object. If an array is used to instantiate the cube, the xarr keyword must be given, specifying the X-axis units.
- Parameters
filename : str, optional
The name of a FITS file to open and read from. Must be 3D.
cube : np.ndarray, spectral_cube.SpectralCube, or astropy.units.Quantity
The data from which to instantiate a Cube object. If it is an array or an astropy Quantity (which is an array with attached units), the X-axis must be specified. If this is given as a SpectralCube object, the X-axis and units should be handled automatically.
xarr : np.ndarray or astropy.units.Quantity, optional
The X-axis of the spectra from each cube. This actually corresponds to axis 0, or what we normally refer to as the Z-axis of the cube, but it indicates the X-axis in a plot of intensity vs wavelength. The units for this array are specified in the xunit keyword unless a Quantity is given.
xunit : str, optional
The unit of the xarr array if xarr is given as a numpy array.
errorcube : np.ndarray, spectral_cube.SpectralCube, or Quantity, optional
A cube with the same shape as the input cube providing the 1-sigma error for each voxel. This can be specified more efficiently as an error map for most use cases, but that approach has not yet been implemented. However, you can pass a 2D error map to fiteach.
header : fits.Header or dict, optional
The header associated with the data. Only needed if the cube is given as an array or a quantity.
x0, y0 : int
The initial spectrum to use. The Cube object can be treated as a pyspeckit.Spectrum object, with all the associated tools (plotter, fitter) using the set_spectrum method to select a pixel from the cube to plot and fit. However, it is generally more sensible to extract individual spectra and treat them separately using the get_spectrum method, so these keywords MAY BE DEPRECATED in the future.
maskmap : np.ndarray, optional
A boolean mask map, where True implies that the data are good. This will be used for both plotting using mapplot and fitting using fiteach.
kwargs : dict
Passed to load_fits, which passes them to SpectralCube.read.
- copy(deep=True)[source] [github] [bitbucket]¶
Create a copy of the spectral cube with its own plotter, fitter, etc. Useful for, e.g., comparing smoothed to unsmoothed data
- crop(x1, x2, unit=None, **kwargs) [github] [bitbucket]¶
Replace the current spectrum with a subset from x1 to x2 in current units
Fixes CRPIX1 and baseline and model spectra to match cropped data spectrum
- property data_quantity¶
- downsample(dsfactor) [github] [bitbucket]¶
Downsample the spectrum (and all of its subsidiaries) without smoothing
- Parameters
dsfactor : int
Downsampling Factor
- property error_quantity¶
- fiteach(errspec=None, errmap=None, guesses=(), verbose=True, verbose_level=1, quiet=True, signal_cut=3, usemomentcube=None, blank_value=0, integral=False, direct_integral=False, absorption=False, use_nearest_as_guess=False, use_neighbor_as_guess=False, start_from_point=(0, 0), multicore=1, position_order=None, continuum_map=None, prevalidate_guesses=False, maskmap=None, skip_failed_fits=False, **fitkwargs)[source] [github] [bitbucket]¶
Fit a spectrum to each valid pixel in the cube.
For guesses, priority is use_nearest_as_guess, usemomentcube, guesses, None.
Once you have successfully run this function, the results will be stored in the .parcube and .errcube attributes, which are each cubes of shape [npars, ny, nx], where npars is the number of fitted parameters and nx, ny are the shape of the map. errcube contains the errors on the fitted parameters (1-sigma, as returned from the Levenberg-Marquardt fit’s covariance matrix). You can use the attribute has_fit, which is a map of shape [ny, nx], to find which pixels have been successfully fit. A usage sketch follows the parameter list below.
- Parameters
use_nearest_as_guess: bool :
Unless the fitted point is the first, it will find the nearest other point with a successful fit and use its best-fit parameters as the guess
use_neighbor_as_guess: bool :
Set this keyword to use the average best-fit parameters from neighboring positions with successful fits as the guess
start_from_point: tuple(int,int) :
Either start from the center or from a point defined by a tuple. Work outward from that starting point.
position_order: ndarray[naxis=2] :
2D map of region with pixel values indicating the order in which to carry out the fitting. Any type with increasing pixel values.
guesses: tuple or ndarray[naxis=3] :
Either a tuple/list of guesses with len(guesses) = npars or a cube of guesses with shape [npars, ny, nx]. NOT TRUE, but a good idea in principle: You can also use a dictionary of the form {(y,x): [list of length npars]}, where (y,x) specifies a pixel location. If the dictionary method is used, npars must be specified and it sets the length of the first parameter axis
signal_cut: float :
Minimum signal-to-noise ratio to “cut” on (i.e., if peak in a given spectrum has s/n less than this value, ignore it)
blank_value: float :
Value to replace non-fitted locations with.
errmap: ndarray[naxis=2] or ndarray[naxis=3] :
A map of errors used for the individual pixels of the spectral cube. 2D errmap results in an equal weighting of each given spectrum, while a 3D array sets individual weights of each channel
verbose: bool :
verbose_level: int :
Controls how much is output: 0,1 - only changes frequency of updates in loop; 2 - print out messages when skipping pixels; 3 - print out messages when fitting pixels; 4 - specfit will be verbose.
multicore: int :
if >1, try to use multiprocessing via parallel_map to run on multiple cores
continuum_map: np.ndarray :
Same shape as error map. Subtract this from data before estimating noise.
prevalidate_guesses: bool :
An extra check before fitting is run to make sure the guesses are all within the specified limits. May be slow, so it is off by default. It also should not be necessary, since careful checking is performed before each fit.
maskmap : np.ndarray, optional
A boolean mask map, where True implies that the data are good. This will be used for both plotting using mapplot and fitting using fiteach. If None, will use self.maskmap.
integral : bool
If set, the integral of each spectral fit will be computed and stored in the attribute .integralmap.
direct_integral : bool
Return the integral of the spectrum (as opposed to the fitted model) over a range defined by the integration_limits if specified or threshold otherwise.
skip_failed_fits : bool
Flag to forcibly skip failed fits that fail with “unknown error”. Generally, you do not want this on, but this is the ‘finger-over-the-engine-light’ approach that will allow these incomprehensible failures to go by and just ignore them. Keep an eye on how many of these you get: if it’s just one or two out of hundreds, then maybe those are just pathological cases that can be ignored. If it’s a significant fraction, you probably want to take a different approach.
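As noted above, here is a hedged sketch of a more fully specified fiteach call; pcube, the Gaussian guesses, and the noise map are placeholders, and fittype is passed through fitkwargs to the underlying fitter:
>>> import numpy as np
>>> # hypothetical per-pixel 1-sigma noise map with shape (ny, nx)
>>> noise_map = 0.1 * np.ones(pcube.cube.shape[1:])
>>> pcube.fiteach(fittype='gaussian', guesses=[1.0, 0.0, 10.0], errmap=noise_map,
...               signal_cut=3, blank_value=np.nan, start_from_point=(25, 25), multicore=4)
>>> # parcube/errcube have shape [npars, ny, nx]; has_fit is a boolean [ny, nx] map
>>> centroid_map = np.where(pcube.has_fit, pcube.parcube[1], np.nan)
>>> centroid_errors = pcube.errcube[1]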
- property flux¶
The data in the spectrum (flux = data, for compatibility with astropy’s Spectrum1D object).
- classmethod from_hdu(hdu) [github] [bitbucket]¶
Create a pyspeckit Spectrum object from an HDU
- classmethod from_spectrum1d(spec1d) [github] [bitbucket]¶
Tool to load a pyspeckit Spectrum from a specutils object
Examples
>>> # grab many spectra from a multiextension FITS file
>>> spectra = specutils.io.fits.read_fits_spectrum1d('AAO.fits')
>>> sp = pyspeckit.Spectrum.from_spectrum1d(spectra[0])
>>> # open a single spectrum that could have been opened directly with pyspeckit
>>> spectrum = specutils.io.fits.read_fits_spectrum1d('gbt_1d.fits')
>>> sp = pyspeckit.Spectrum.from_spectrum1d(spectrum)
- get_apspec(aperture, coordsys=None, method='mean', **kwargs)[source] [github] [bitbucket]¶
Extract an aperture using cubes.extract_aperture (defaults to Cube pixel coordinates)
- aperture [tuple or list] (x, y, radius)
The aperture to use when extracting the data
- coordsys [ ‘celestial’ | ‘galactic’ | None]
the coordinate system the aperture is specified in; None indicates pixel coordinates (default)
- wunit [str]
arcsec, arcmin, or degree
- get_modelcube(update=False, multicore=1)[source] [github] [bitbucket]¶
Return or generate a “model cube”, which will have the same shape as the .cube but will have spectra generated from the fitted model.
If the model cube does not yet exist, one will be generated. A short usage sketch follows the parameter list below.
- Parameters
update : bool
If the cube has already been computed, set this to True to recompute the model.
multicore: int :
if >1, try to use multiprocessing via parallel_map to run on multiple cores
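For illustration, a short sketch of how the model cube might be used once fiteach has run (the residual computation is just an example, not part of the API):
>>> modelcube = pcube.get_modelcube(multicore=4)
>>> # same [z, y, x] shape as the data, so residuals are a simple subtraction
>>> residual = pcube.cube - modelcube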
- get_spectrum(x, y)[source] [github] [bitbucket]¶
Very simple: get the spectrum at coordinates x,y
(inherits fitter from self)
Returns a SpectroscopicAxis instance
- getlines(linetype='radio', **kwargs) [github] [bitbucket]¶
Access a registered database of spectral lines. Will add an attribute with the name linetype, which then has properties defined by the speclines module (most likely, a table and a “show” function to display the lines)
- interpnans() [github] [bitbucket]¶
Interpolate over NAN values, replacing them with values interpolated from their neighbors using linear interpolation.
- load_fits(fitsfile, **kwargs)[source] [github] [bitbucket]¶
- load_model_fit(fitsfilename, npars, npeaks=1, fittype=None, _temp_fit_loc=(0, 0))[source] [github] [bitbucket]¶
Load a parameter + error cube into the .parcube and .errcube attributes. The models can then be examined and plotted using .mapplot as if you had run .fiteach. An example follows the parameter list below.
- Parameters
fitsfilename : str
The filename containing the parameter cube written with write_fit
npars : int
The number of parameters in the model fit for a single spectrum
npeaks : int
The number of independent peaks fit toward each spectrum
fittype : str, optional
The name of the fittype, e.g. ‘gaussian’ or ‘voigt’, from the pyspeckit fitter registry. This is optional; it should have been written to the FITS header and will be read from there if it is not specified
_temp_fit_loc : tuple (int,int)
The initial spectrum to use to generate components of the class. This should not need to be changed.
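As an example (the filename is hypothetical; npars=3 assumes a single-Gaussian fit), a saved fit cube could be reloaded and browsed like this:
>>> pcube.load_model_fit('my_fit_cube.fits', npars=3, fittype='gaussian')
>>> # show the centroid plane (parameter index 1); clicking pixels works as usual
>>> pcube.mapplot(estimator=1)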
- load_spectral_cube(cube)[source] [github] [bitbucket]¶
Load the cube from a spectral_cube.SpectralCube object
- measure(z=None, d=None, fluxnorm=None, miscline=None, misctol=10.0, ignore=None, derive=True, **kwargs) [github] [bitbucket]¶
Initialize the measurements class - only do this after you have run a fitter, otherwise pyspeckit will be angry!
- momenteach(verbose=True, verbose_level=1, multicore=1, **kwargs)[source] [github] [bitbucket]¶
Return a cube of the moments of each pixel
- Parameters
multicore: int :
if >1, try to use multiprocessing via parallel_map to run on multiple cores
- moments(unit='km/s', **kwargs) [github] [bitbucket]¶
Return the moments of the spectrum. In order to assure that the 1st and 2nd moments are meaningful, a ‘default’ unit is set. If unit is not set, will use current unit.
Documentation imported from the moments module:
Returns the gaussian parameters of a 1D distribution by calculating its moments. Depending on the input parameters, will only output a subset of the above.
Theory, from first principles (in the absence of noise):
integral(gaussian) = sqrt(2*pi*sigma^2) * amp
sigma = integral / amp / sqrt(2*pi)
In the presence of noise, this gets much more complicated. The noisy approach is inspired by mpfit
- Parameters
Xax : np.ndarray
The x-axis for computing the 1st and 2nd moments
data : np.ndarray
The data from which to compute the various moments
estimator : function
A function to estimate the “height” or “background level” of the data, e.g. mean or median. If masked arrays are being used, use the np.ma versions of the numpy functions
negamp: bool or None :
Force the peak negative (True), positive (False), or the sign of the peak will be “autodetected” (negamp=None)
nsigcut: float or None :
If specified, the code will attempt to estimate the noise and only use data above/below n-sigma above the noise. The noise will be estimated from the data unless the noise is specified with noise_estimate
noise_estimate: float or None :
Guess for the noise value. Only matters if nsigcut is specified.
vheight : bool
Include an estimate of the background level?
- Returns
(height, amplitude, x, width_x) :
height : float
is the background level
amplitude : float
is the maximum (or minimum) of the data after background subtraction
x : float
is the first moment
width_x : float
is the second moment
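To make the definitions above concrete, here is a rough NumPy sketch of the noise-free moment estimates; it ignores the noise handling, nsigcut, and negamp logic of the real implementation and assumes a positive peak:
>>> import numpy as np
>>> def simple_moments(xax, data, estimator=np.median):
...     height = estimator(data)                  # background level
...     resid = data - height
...     amplitude = resid.max()                   # positive peak assumed
...     x = (resid * xax).sum() / resid.sum()     # first moment (centroid)
...     width_x = np.sqrt((resid * (xax - x)**2).sum() / resid.sum())  # second moment
...     return height, amplitude, x, width_x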
- parse_hdf5_header(hdr) [github] [bitbucket]¶
HDF5 reader will create a hdr dictionary from HDF5 dataset attributes if they exist. This routine will convert that dict to a pyfits header instance.
- parse_header(hdr, specname=None) [github] [bitbucket]¶
Parse parameters from a .fits header into required spectrum structure parameters
- parse_text_header(Table) [github] [bitbucket]¶
Grab relevant parameters from a table header (xaxis type, etc)
This function should only exist for Spectrum objects created from .txt or other atpy table type objects
- plot_apspec(aperture, coordsys=None, reset_ylimits=True, wunit='arcsec', method='mean', **kwargs)[source] [github] [bitbucket]¶
Extract an aperture using cubes.extract_aperture (defaults to Cube coordinates)
- Parameters
aperture : list
- A list of aperture parameters, e.g.:
For a circular aperture, len(ap)=3: ap = [xcen,ycen,radius]
For an elliptical aperture, len(ap)=5: ap = [xcen,ycen,height,width,PA]
coordsys : None or str
The coordinate system of the aperture (e.g., galactic, fk5, None for pixel)
method : ‘mean’ or ‘sum’
Either average over parallel spectra or sum them.
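For instance (aperture values are placeholders), a circular aperture in pixel coordinates could be plotted like this:
>>> # average the spectra within a 5-pixel-radius circle centered at pixel (25, 25)
>>> pcube.plot_apspec([25, 25, 5], coordsys=None, method='mean')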
- plot_fit(x, y, silent=False, **kwargs)[source] [github] [bitbucket]¶
If fiteach has been run, plot the best fit at the specified location
- Parameters
x : int
y : int
The x, y coordinates of the pixel (indices 2 and 1 respectively in numpy notation)
- plot_spectrum(x, y, plot_fit=False, **kwargs)[source] [github] [bitbucket]¶
Fill the .data array with a real spectrum and plot it
- set_apspec(aperture, coordsys=None, method='mean')[source] [github] [bitbucket]¶
Extract an aperture using cubes.extract_aperture (defaults to Cube coordinates)
- set_spectrum(x, y)[source] [github] [bitbucket]¶
- property shape¶
Return the data shape (a property of the Spectrum)
- show_fit_param(parnumber, **kwargs)[source] [github] [bitbucket]¶
If pars have been computed, display them in the mapplot window
- Parameters
parnumber : int
The index of the parameter in the parameter cube
- show_moment(momentnumber, **kwargs)[source] [github] [bitbucket]¶
If moments have been computed, display them in the mapplot window
- slice(start=None, stop=None, unit='pixel', preserve_fits=False, copy=True, update_header=False)[source] [github] [bitbucket]¶
Slice a cube along the spectral axis (equivalent to “spectral_slab” from the spectral_cube package)
- Parameters
start : numpy.float or int
start of slice
stop : numpy.float or int
stop of slice
unit : str
allowed values are any supported physical unit, ‘pixel’
update_header : bool
modifies the header of the spectral cube according to the slice
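A hedged example, assuming a cube whose spectral axis is in km/s (the limits are placeholders):
>>> # keep only the -50..+50 km/s portion of the spectral axis
>>> sliced = pcube.slice(-50, 50, unit='km/s', preserve_fits=True, update_header=True)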
- smooth(factor, **kwargs)[source] [github] [bitbucket]¶
Smooth the spectrum by factor factor.
The underlying smoothing is documented in the cubes.spectral_smooth module (see below).
- stats(statrange=(), interactive=False) [github] [bitbucket]¶
Return some statistical measures in a dictionary (somewhat self-explanatory)
- Parameters
statrange : 2-element tuple
X-range over which to perform measures
interactive : bool
specify range interactively in plotter
- property unit¶
- property units¶
- write(filename, type=None, **kwargs) [github] [bitbucket]¶
Write the spectrum to a file. The available file types are listed in spectrum.writers.writers
type - what type of file to write to? If not specified, will attempt to determine type from suffix
- write_cube()[source] [github] [bitbucket]¶
- write_fit(fitcubefilename, overwrite=False)[source] [github] [bitbucket]¶
Write out a fit cube containing the .parcube and .errcube using the information in the fit’s parinfo to set the header keywords. The PLANE# keywords will be used to indicate the content of each plane in the data cube written to the FITS file. All of the fitted parameters will be written first, followed by all of the errors on those parameters. So, for example, if you have fitted a single gaussian to each pixel, the dimensions of the saved cube will be [6, ny, nx], and they will be the amplitude, centroid, width, error on amplitude, error on centroid, and error on width, respectively.
To load such a file back in for plotting purposes, see SpectralCube.load_model_fit. A short round-trip sketch follows the parameter list below.
- Parameters
fitcubefilename: string :
Filename to write to
overwrite: bool :
Overwrite file if it exists?
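As mentioned above, a minimal round-trip sketch (the filename is hypothetical; npars=3 assumes a single-Gaussian fit):
>>> pcube.write_fit('my_fit_cube.fits', overwrite=True)
>>> # ...later, reload the parameters for plotting (see load_model_fit above)
>>> pcube.load_model_fit('my_fit_cube.fits', npars=3, fittype='gaussian')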
- class pyspeckit.cubes.SpectralCube.CubeStack(cubelist, xunit='GHz', x0=0, y0=0, maskmap=None, **kwargs)[source] [github] [bitbucket]¶
Bases: Cube
The Cube equivalent of Spectra: for stitching multiple cubes with the same spatial grid but different frequencies together
Initialize the Cube. Accepts FITS files.
x0,y0 - initial spectrum to use (defaults to lower-left corner)
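As an illustrative sketch (the filenames are placeholders), two cubes sharing a spatial grid but covering different frequency ranges might be stitched together like this:
>>> from pyspeckit.cubes.SpectralCube import Cube, CubeStack
>>> cube_low = Cube(filename='band_low.fits')
>>> cube_high = Cube(filename='band_high.fits')
>>> stack = CubeStack([cube_low, cube_high], xunit='GHz')
>>> stack.mapplot()   # behaves like a regular Cube from here on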
MapPlot¶
Make plots of the cube and interactively connect them to spectrum plotting. This is really an interactive component of the package; nothing in here is meant for publication-quality plots, but more for user interactive analysis.
That said, the plotter makes use of APLpy, so it is possible to make publication-quality plots.
- author
Adam Ginsburg
- date
03/17/2011
- class pyspeckit.cubes.mapplot.MapPlotter(Cube=None, figure=None, doplot=False, **kwargs)[source] [github] [bitbucket]¶
Bases: object
Class to plot a spectrum
See mapplot for use documentation; this docstring is only for initialization.
Create a map figure for future plotting
- circle(x1, y1, x2, y2, **kwargs)[source] [github] [bitbucket]¶
Plot the spectrum of a circular aperture
- click(event)[source] [github] [bitbucket]¶
Record location of downclick
- copy(parent=None)[source] [github] [bitbucket]¶
Create a copy of the map plotter with blank (uninitialized) axis & figure
- [ parent ]
A spectroscopic axis instance that is the parent of the specfit instance. This needs to be specified at some point, but defaults to None to prevent overwriting a previous plot.
- makeplane(estimator=<function nanmean>)[source] [github] [bitbucket]¶
Create a “plane” view of the cube, either by slicing or projecting it or by showing a slice from the best-fit model parameter cube.
- Parameters
estimator : [ function | ‘max’ | ‘int’ | FITS filename | integer | slice ]
A non-pythonic, non-duck-typed variable. If it’s a function, apply that function along the cube’s spectral axis to obtain an estimate (e.g., mean, min, max, etc.). ‘max’ will do the same thing as passing np.max. ‘int’ will attempt to integrate the image (which is why I didn’t duck-type); integrate means sum and multiply by dx. A .fits filename will be read using pyfits (so you can make your own cover figure). An integer will get the n’th slice in the parcube if it exists. If it’s a slice, slice the input data cube along the Z-axis with this slice.
- mapplot(convention='calabretta', colorbar=True, useaplpy=True, vmin=None, vmax=None, cmap=None, plotkwargs={}, **kwargs)[source] [github] [bitbucket]¶
Plot up a map based on an input data cube.
The map to be plotted is selected using makeplane. The estimator keyword argument is passed to that function.
The plotted map, once shown, is interactive. You can click on it with any of the three mouse buttons. (A usage sketch follows the parameter list below.)
- Button 1 or keyboard ‘1’:
Plot the selected pixel’s spectrum in another window. Mark the clicked pixel with an ‘x’
- Button 2 or keyboard ‘o’:
Overplot a second (or third, fourth, fifth…) spectrum in the external plot window
- Button 3:
Disconnect the interactive viewer
You can also click-and-drag with button 1 to average over a circular region. This same effect can be achieved by using the ‘c’ key to set the /c/enter of a circle and the ‘r’ key to set its /r/adius (i.e., hover over the center and press ‘c’, then hover some distance away and press ‘r’).
- Parameters
convention : ‘calabretta’ or ‘griesen’
The default projection to assume for Galactic data when plotting with aplpy.
colorbar : bool
Whether to show a colorbar
plotkwargs : dict, optional
A dictionary of keyword arguments to pass to aplpy.show_colorscale or matplotlib.pyplot.imshow
useaplpy : bool
Use aplpy if a FITS header is available
vmin, vmax: float or None :
Override values for the vmin/vmax values. Will be automatically determined if left as None
Todo: Allow mapplot in subfigure
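A hedged example of driving the interactive map (the vmin/vmax values and colormap are arbitrary choices):
>>> import numpy as np
>>> # show the peak intensity along the spectral axis, with a colorbar
>>> pcube.mapplot(estimator=np.max, vmin=0, vmax=5, colorbar=True, cmap='viridis')
>>> # then click (or press '1') on a pixel to plot its spectrum; press 'o' over
>>> # another pixel to overplot; button 3 disconnects the viewer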
- plot_spectrum(event, plot_fit=True)[source] [github] [bitbucket]¶
Connects map cube to Spectrum…
- refresh()[source] [github] [bitbucket]¶
cubes.py¶
From agpy, contains functions to perform various transformations on data cubes and their headers.
- pyspeckit.cubes.cubes.aper_world2pix(ap, wcs, coordsys='galactic', wunit='arcsec')[source] [github] [bitbucket]¶
Converts an elliptical aperture (x,y,width,height,PA) from WCS to pixel coordinates given an input wcs (an instance of the pywcs.WCS class). Must be a 2D WCS header.
- pyspeckit.cubes.cubes.baseline_cube(cube, polyorder=None, cubemask=None, splineorder=None, numcores=None, sampling=1)[source] [github] [bitbucket]¶
Given a cube, fit a polynomial to each spectrum
- Parameters
cube: np.ndarray :
An ndarray with ndim = 3, and the first dimension is the spectral axis
polyorder: int :
Order of the polynomial to fit and subtract
cubemask: boolean ndarray :
Mask to apply to cube. Values that are True will be ignored when fitting.
numcores : None or int
Number of cores to use for parallelization. If None, will be set to the number of available cores.
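For example, a second-order polynomial baseline could be removed like this (the cube and line mask are placeholders; the mask is assumed to have the same shape as the cube, with True marking channels to exclude from the fit):
>>> import numpy as np
>>> from pyspeckit.cubes.cubes import baseline_cube
>>> linemask = np.zeros(mycube.shape, dtype=bool)
>>> linemask[100:150, :, :] = True   # hypothetical channels containing the line
>>> baselined = baseline_cube(mycube, polyorder=2, cubemask=linemask, numcores=4)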
- pyspeckit.cubes.cubes.blfunc_generator(x=None, polyorder=None, splineorder=None, sampling=1)[source] [github] [bitbucket]¶
Generate a function that will fit a baseline (polynomial or spline) to a data set. Either splineorder or polyorder must be set.
- Parameters
x : np.ndarray or None
The X-axis of the fitted array. Will be set to np.arange(len(data)) if not specified
polyorder : None or int
The polynomial order.
splineorder : None or int
sampling : int
The sampling rate to use for the data. Can set to higher numbers to effectively downsample the data before fitting
- pyspeckit.cubes.cubes.coords_in_image(fitsfile, lon, lat, system='galactic')[source] [github] [bitbucket]¶
Determine whether the coordinates are inside the image
- pyspeckit.cubes.cubes.extract_aperture(cube, ap, r_mask=False, wcs=None, coordsys='galactic', wunit='arcsec', debug=False, method='mean')[source] [github] [bitbucket]¶
Extract an aperture from a data cube. E.g. to acquire a spectrum of an outflow that is extended.
Cube should have shape [z,y,x], e.g. cube = fits.getdata(‘datacube.fits’)
Apertures are specified in PIXEL units with an origin of 0,0 (NOT the 1,1 fits standard!) unless wcs and coordsys are specified
- Parameters
ap : list
- For a circular aperture, len(ap)=3:
ap = [xcen,ycen,radius]
- For an elliptical aperture, len(ap)=5:
ap = [xcen,ycen,height,width,PA]
wcs : wcs
a pywcs.WCS instance associated with the data cube
coordsys : str
the coordinate system the aperture is specified in. Options are ‘celestial’ and ‘galactic’. Default is ‘galactic’
wunit : str
units of width/height. default ‘arcsec’, options ‘arcmin’ and ‘degree’
method : str
‘mean’ or ‘sum’ (average over spectra, or sum them) or ‘error’ for sqrt(sum-of-squares / n)
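An illustrative call in pixel coordinates (the array and aperture values are placeholders), assuming datacube is a [z, y, x] ndarray:
>>> from pyspeckit.cubes.cubes import extract_aperture
>>> # mean spectrum within a circular aperture of radius 5 pixels centered at pixel (25, 25)
>>> spec = extract_aperture(datacube, [25, 25, 5], method='mean')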
- pyspeckit.cubes.cubes.flatten_header(header, delete=False)[source] [github] [bitbucket]¶
Attempt to turn an N-dimensional fits header into a 2-dimensional header. Turns all CRPIX[>2] etc. into new keywords with suffix ‘A’.
header must be a fits.Header instance
- pyspeckit.cubes.cubes.getspec(lon, lat, rad, cube, header, r_fits=True, inherit=True, wunit='arcsec')[source] [github] [bitbucket]¶
Given a longitude, latitude, aperture radius (arcsec), and a cube file, return a .fits file or a spectrum.
- Parameters
lon: float :
lat: float :
longitude and latitude center of a circular aperture in WCS coordinates; must be in the coordinate system of the file
rad: float :
radius (default degrees) of aperture
- pyspeckit.cubes.cubes.getspec_reg(cubefilename, region, **kwargs)[source] [github] [bitbucket]¶
Aperture extraction from a cube using a pyregion circle region
The region must be in the same coordinate system as the cube header
Warning
The second argument of getspec_reg requires a pyregion region list, and therefore this code depends on pyregion.
- pyspeckit.cubes.cubes.integ(file, vrange, xcen=None, xwidth=None, ycen=None, ywidth=None, **kwargs)[source] [github] [bitbucket]¶
wrapper of subimage_integ that defaults to using the full image
- pyspeckit.cubes.cubes.plane_smooth(cube, cubedim=0, parallel=True, numcores=None, **kwargs)[source] [github] [bitbucket]¶
parallel-map the smooth function
- Parameters
parallel: bool :
defaults True. Set to false if you want serial (for debug purposes?)
numcores: int :
pass to parallel_map (None = use all available)
- pyspeckit.cubes.cubes.speccen_header(header, lon=None, lat=None, proj='TAN', system='celestial', spectral_axis=3, celestial_axes=[1, 2])[source] [github] [bitbucket]¶
Turn a cube header into a spectrum header, retaining RA/Dec vals where possible (speccen is like flatten; spec-ify would be better but, specify? nah)
Assumes 3rd axis is velocity
- pyspeckit.cubes.cubes.spectral_smooth(cube, smooth_factor, downsample=True, parallel=True, numcores=None, **kwargs)[source] [github] [bitbucket]¶
Smooth the cube along the spectral direction
- pyspeckit.cubes.cubes.subcube(cube, xcen, xwidth, ycen, ywidth, header=None, dvmult=False, return_HDU=False, units='pixels', widthunits='pixels')[source] [github] [bitbucket]¶
Crops a data cube
All units assumed to be pixel units
cube has dimensions (velocity, y, x)
xwidth and ywidth are “radius” values, i.e. half the length that will be extracted
if dvmult is set, multiply the average by DV (this is useful if you set average=sum and dvmult=True to get an integrated value)
- pyspeckit.cubes.cubes.subimage_integ(cube, xcen, xwidth, ycen, ywidth, vrange, header=None, average=<function mean>, dvmult=False, return_HDU=False, units='pixels', zunits=None)[source] [github] [bitbucket]¶
Returns a sub-image from a data cube integrated over the specified velocity range
NOTE: With spectral_cube, subcube features can be easily applied with the subcube method, and integration is handled separately.
- Parameters
cube : np.ndarray
A 3-dimensional numpy array with dimensions (velocity, y, x)
xcen,ycen : float
The center in the X,Y-dimension. See units below for unit information.
xwidth,ywidth : float
The width in the X,Y-dimension. See units below for unit information. xwidth and ywidth are “radius” values, i.e. half the length that will be extracted.
vrange : (float,float)
The velocity range to integrate over. See zunits below for unit information.
header : astropy.io.fits.Header or None
If specified, will allow the use of WCS units.
average : function
The function to apply when ‘integrating’ over the subcube.
dvmult : bool
If dvmult is set, multiply the average by DV (this is useful if you set average=sum and dvmult=True to get an integrated value, e.g. K km/s or Jy km/s).
return_hdu : bool
If specified, will return an HDU object, otherwise will return the array and header.
units : ‘pixels’ or ‘wcs’
If ‘pixels’, all units (xcen, ycen, xwidth, ywidth) will be in pixels. If ‘wcs’, the values will be converted from WCS units to pixel units using the WCS specified by the header.
zunits : ‘pixels’ or ‘wcs’ or None
If None, will be set to be the same as units.
- Returns
subim, hdu : tuple
A tuple (integrated array, header) if return_hdu is False, or an HDU if it is True.
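A hedged example in pixel units (all numbers are placeholders), assuming datacube has shape (velocity, y, x) and cube_header is its FITS header:
>>> from pyspeckit.cubes.cubes import subimage_integ
>>> # integrate channels 50..100 over a box of half-width 10 pixels centered at pixel (25, 25)
>>> subim, hdr = subimage_integ(datacube, xcen=25, xwidth=10, ycen=25, ywidth=10,
...                             vrange=(50, 100), units='pixels', header=cube_header)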