Synopsis
Load a FITS binary file as a data set.
Syntax
load_table(id, filename=None, ncols=2, colkeys=None, dstype=Data1D)

id - int or str, optional
ncols - int, optional
colkeys - array of str, optional
dstype - optional
Examples
Example 1
Read in the first two columns of the file, as the independent (X) and dependent (Y) columns of the default data set:
>>> load_table('sources.fits')
Example 2
Read in the first three columns (the third column is taken to be the error on the dependent variable):
>>> load_table('sources.fits', ncols=3)
Example 3
Read in from columns 'RMID' and 'SUR_BRI' into data set 'prof':
>>> load_table('prof', 'rprof.fits',
...            colkeys=['RMID', 'SUR_BRI'])
Example 4
The first three columns are taken to be the two independent axes of a two-dimensional data set ( x0 and x1 ) and the dependent value ( y ):
>>> load_table('fields.fits', ncols=3,
...            dstype=Data2D)
Example 5
When using the Crates I/O library, the file name can include CIAO Data Model syntax, such as column selection. This can also be done using the colkeys parameter, as shown above:
>>> load_table('prof',
...            'rprof.fits[cols rmid,sur_bri,sur_bri_err]',
...            ncols=3)
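An equivalent call using the colkeys parameter (a sketch; the column names are those used in the example above, and their case may need to match the file) is:

>>> load_table('prof', 'rprof.fits',
...            colkeys=['rmid', 'sur_bri', 'sur_bri_err'])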
Example 6
Read in a data set using Crates:
>>> cr = pycrates.read_file('table.fits')
>>> load_table(cr)
Example 7
Read in a data set using AstroPy:
>>> hdus = astropy.io.fits.open('table.fits')
>>> load_table(hdus)
PARAMETERS
The parameters for this function are:
Parameter | Definition |
---|---|
id | The identifier for the data set to use. If not given then the default identifier is used, as returned by `get_default_id`. |
filename | Identify the file to read: a file name, or a data structure representing the data to use, as understood by the I/O backend in use by Sherpa (a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects). |
ncols | The number of columns to read in (the first ncols columns in the file). The meaning of the columns is determined by the dstype parameter. |
colkeys | An array of the column names to read in. The default is None. |
dstype | The data class to use. The default is `Data1D` . |
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the `filename` parameter. If given two un-named arguments, then they are interpreted as the `id` and `filename` parameters, respectively. The remaining parameters must be given as named arguments.
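To illustrate this convention, a minimal sketch (the file name 'src.fits' and the identifier 2 are hypothetical):

>>> load_table('src.fits')                # one argument: the filename
>>> load_table(2, 'src.fits')             # two arguments: id then filename
>>> load_table(2, 'src.fits', ncols=3)    # other parameters given by name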
The column order for the different data types is as follows, where x indicates an independent axis and y the dependent axis:
Identifier | Required Fields | Optional Fields |
---|---|---|
Data1D | x, y | statistical error, systematic error |
Data1DInt | xlo, xhi, y | statistical error, systematic error |
Data2D | x0, x1, y | shape, statistical error, systematic error |
Data2DInt | x0lo, x1lo, x0hi, x1hi, y | shape, statistical error, systematic error |
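As a minimal sketch of this mapping (the file name 'hist.fits' and the column names are hypothetical), a Data1DInt set could be read in either by column position or by name:

>>> load_table('hist.fits', ncols=3, dstype=Data1DInt)
>>> load_table('hist.fits', colkeys=['XLO', 'XHI', 'COUNTS'],
...            dstype=Data1DInt)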
Bugs
See the bugs pages on the Sherpa website for an up-to-date listing of known bugs.
See Also
- data
- dataspace1d, dataspace2d, datastack, fake, load_arf, load_arrays, load_ascii, load_bkg, load_bkg_arf, load_bkg_rmf, load_data, load_grouping, load_image, load_multi_arfs, load_multi_rmfs, load_pha, load_quality, load_rmf, load_staterror, load_syserror, pack_image, pack_pha, pack_table, unpack_arf, unpack_arrays, unpack_ascii, unpack_bkg, unpack_data, unpack_image, unpack_pha, unpack_rmf, unpack_table
- filtering
- load_filter
- info
- get_default_id, list_bkg_ids, list_data_ids
- modeling
- add_model, add_user_pars, load_table_model, load_template_interpolator, load_template_model, load_user_model, save_model, save_source
- saving
- save_arrays, save_data, save_delchi, save_error, save_filter, save_grouping, save_image, save_pha, save_quality, save_resid, save_staterror, save_syserror, save_table
- statistics
- load_user_stat