Image Definition and Terminology
In G'MIC, each image is modeled as a 1D, 2D, 3D or 4D array of scalar values, uniformly discretized on a rectangular/parallelepipedic domain.
The four dimensions of this array are respectively denoted by:
width, the number of image columns (size along the x-axis).
height, the number of image rows (size along the y-axis).
depth, the number of image slices (size along the z-axis). The depth is equal to 1 for usual color or grayscale 2D images.
spectrum, the number of image channels (size along the c-axis). The spectrum is respectively equal to 3 and 4 for usual RGB and RGBA color images.
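The four-dimensional model above can be sketched with NumPy (an illustrative stand-in, not G'MIC's own API; the variable names are assumptions):

```python
import numpy as np

# Model an image as a 4D array of float values, following the
# width / height / depth / spectrum convention described above.
width, height, depth, spectrum = 640, 480, 1, 3  # a usual RGB 2D image

# One float value per pixel per channel; depth == 1 for a plain 2D image.
image = np.zeros((width, height, depth, spectrum), dtype=np.float32)

print(image.shape)  # (640, 480, 1, 3)
print(image.dtype)  # float32
```

A grayscale 2D image would simply use `spectrum = 1`, and a volumetric dataset would use `depth > 1`.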
There are no hard limitations on the image size along each dimension. For instance, the number of image slices or channels can be arbitrarily large, within the limits of the available memory.
The first three dimensions of an image (width, height and depth) are considered as spatial dimensions, while the spectrum dimension has a multi-spectral meaning. Thus, a 4D image in G'MIC should most often be regarded as a 3D dataset of multi-spectral voxels. Most of the G'MIC commands stick with this idea (e.g. command blur blurs images only along the spatial xyz axes).
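To make the spatial-versus-spectral distinction concrete, here is a small NumPy sketch (not G'MIC code) of a box blur applied only along the spatial x, y and z axes; values are never mixed across the c-axis, which is the behavior described for spatial filters like blur. The function name and the wrap-around boundary handling are illustrative assumptions:

```python
import numpy as np

def spatial_box_blur(img):
    """Average each value with its two neighbors along the x, y and z axes
    (axes 0, 1, 2), but never mix values across the spectral c-axis (axis 3).
    np.roll gives wrap-around boundaries, which is fine for this sketch."""
    out = img.astype(np.float32)
    for axis in range(3):          # spatial axes only; axis 3 (c) is skipped
        if out.shape[axis] > 1:
            out = (np.roll(out, 1, axis) + out + np.roll(out, -1, axis)) / 3.0
    return out

# A tiny 4x4x1 image with 2 channels; each channel is filtered independently.
img = np.zeros((4, 4, 1, 2), dtype=np.float32)
img[2, 2, 0, 0] = 9.0              # impulse in channel 0 only
blurred = spatial_box_blur(img)
print(blurred[:, :, 0, 1].max())   # channel 1 is untouched -> 0.0
```

Because channels are processed independently, the impulse spreads over a 3x3 spatial neighborhood in channel 0 while channel 1 stays exactly zero.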
G'MIC stores all the image data as buffers of float values (32 bits, value range [-3.4E38,+3.4E38]) and performs all its image processing operations with floating-point numbers. Each image pixel thus takes 32 bits per channel (except if double-precision buffers have been enabled when compiling the software, in which case 64 bits per channel is the default).
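Since every value occupies 4 bytes (or 8 bytes with double-precision buffers), the memory footprint of an image buffer follows directly from the four dimensions. A quick back-of-the-envelope sketch in plain Python (the helper name is illustrative):

```python
# Memory used by a float-valued image buffer: one 32-bit (4-byte) float
# per pixel per channel, or 8 bytes if double-precision buffers are enabled.
def buffer_size_bytes(width, height, depth, spectrum, bytes_per_value=4):
    return width * height * depth * spectrum * bytes_per_value

# A 1024x1024 RGB 2D image (depth == 1):
print(buffer_size_bytes(1024, 1024, 1, 3))     # 12582912 bytes = 12 MiB
print(buffer_size_bytes(1024, 1024, 1, 3, 8))  # 25165824 bytes with doubles
```

This is why large volumetric or hyperspectral images can consume memory quickly: the cost scales with the product of all four dimensions.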
Float-valued pixels ensure that numerical precision is kept when executing image processing pipelines. For image input/output operations, you may want to prescribe the image datatype to be different from float. This is possible by specifying it as a file option when using I/O commands (see section Input/Output Properties to learn more about file options).
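What requesting an 8-bit integer datatype at output time amounts to can be sketched in NumPy; note that the clip-then-round policy below is an assumption made for illustration, not necessarily the exact conversion G'MIC's writers apply:

```python
import numpy as np

def to_uint8(img_float):
    """Convert a float-valued buffer to 8-bit unsigned integers,
    rounding and clipping to [0, 255] -- an illustrative stand-in for
    requesting an integer datatype when writing an image file."""
    return np.clip(np.rint(img_float), 0, 255).astype(np.uint8)

img = np.array([[-3.2, 0.4, 127.6, 300.0]], dtype=np.float32)
print(to_uint8(img))  # [[  0   0 128 255]]
```

Converting to a narrower integer type like this discards the extra precision of the float buffer, which is why the conversion is deferred to I/O time rather than used during processing.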