#include <archon/util/image_data.H>
Public Member Functions

ImageData (void *buffer, int numberOfStrips, int pixelsPerStrip, const PixelFormat &pixelFormat=PixelFormat(), const BufferFormat &bufferFormat=BufferFormat(), int left=0, int bottom=0, int width=0, int height=0, const vector< bool > &endianness=vector< bool >()) throw (PixelFormat::UnsupportedWordTypeException, PixelFormat::InconsistencyException, invalid_argument)

void queryPixelMap (double x, double y, long double *pixel, int horizontalRepeat=0, int verticalRepeat=0) const
Fetch the color at the specified floating point position.

void getPixel (long x, long y, long double *pixel, int horizontalRepeat=0, int verticalRepeat=0) const
Fetch the pixel at the specified coordinates.

Classes

struct BufferFormat
struct MemoryField
At the top-most level, image data is organized (stored in memory) as a sequence of pixel strips. Strips are meant to be displayed either horizontally (horizontal strips) or vertically (vertical strips); this is controlled by a single flag.
The pixels in a strip are stored consecutively in memory. Every strip contains the same number of pixels and occupies the same amount of memory (number of bytes or bits, depending on format). The distance (in number of bytes or bits, depending on format) between two consecutive strips is the 'stride' and is constant throughout the image.
When strips are meant to be displayed horizontally we say that the pixel layout is y-major (or row-major), because an increase in the y-coordinate signifies a major advance in memory address compared to an increase in the x-coordinate. Similarly, if strips are meant to be displayed vertically, we say that the pixel layout is x-major (or column-major).
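As a minimal sketch of the layout rules above (the names here are illustrative and not part of the actual ImageData API), the memory offset of a pixel follows directly from the strip orientation flag, the stride, and the pixel size:

```cpp
#include <cstddef>

// Byte offset of pixel (x, y) in a strip-organized buffer. 'yMajor'
// mirrors the single orientation flag; 'strideBytes' is the distance
// between consecutive strips, 'bytesPerPixel' the size of one pixel.
std::size_t pixelOffset(long x, long y, std::size_t strideBytes,
                        std::size_t bytesPerPixel, bool yMajor)
{
    // y-major (row-major): each strip is a row, so y selects the strip.
    // x-major (column-major): each strip is a column, so x selects it.
    long strip = yMajor ? y : x;
    long withinStrip = yMajor ? x : y;
    return std::size_t(strip) * strideBytes +
           std::size_t(withinStrip) * bytesPerPixel;
}
```

This sketch assumes a byte-aligned format; for bit-packed formats the same arithmetic would be carried out in bits.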
Coming soon...
Endianness is all about the way bytes are ordered when they are combined into wider elements. Every hardware architecture has a specific endianness. The most common are big-endian and little-endian architectures.
Unfortunately, there are other types of endianness than these. Some architectures order bytes one way when combining them into double-bytes, and order double-bytes in the opposite way when combining them into quadruple-bytes.
For the sake of maximum flexibility this class supports all types of endianness that can be described by a sequence of ordering flags, one for each level of combination.
Native endianness is the default and the best choice when you want good performance, since it avoids cumbersome byte reordering.
The reason one would specify an explicit endianness might be to access data that was generated on a different architecture, or to interface with systems/libraries that, for one reason or another, deal with image data of an alien endianness.
The endianness specification affects only the formation of words of the specified type from bytes in the image buffer. Thus, it is ignored if the selected word type is byte/char.
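The idea of one ordering flag per level of combination can be sketched as follows. This is an illustrative model, not the library's actual code, and it assumes a 'true' flag means the more significant part comes first in memory at that level:

```cpp
#include <cstdint>

// Form a 32-bit word from 4 buffer bytes. levels[0] orders bytes
// within each 16-bit double-byte; levels[1] orders the two
// double-bytes within the word.
std::uint32_t assembleWord(const unsigned char b[4], const bool levels[2])
{
    auto pair = [&](int i) -> std::uint16_t {
        // Level 0: combine two adjacent bytes into a double-byte.
        return levels[0]
            ? std::uint16_t((b[i] << 8) | b[i + 1])
            : std::uint16_t((b[i + 1] << 8) | b[i]);
    };
    std::uint16_t first = pair(0), second = pair(2);
    // Level 1: combine the two double-bytes into a quadruple-byte.
    return levels[1]
        ? (std::uint32_t(first) << 16) | second
        : (std::uint32_t(second) << 16) | first;
}
```

With both flags true this yields pure big-endian, with both false pure little-endian, and the mixed combinations describe the middle-endian orderings mentioned above.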
In some situations it is desirable to limit access to a particular region of the image. This is possible by means of the "frame of interest" feature of this class. The frame is specified by its width and height in pixels, and by the displacement of its lower left corner from the lower left corner of the full image.
A good example is texture mapping, where we sample colors from an image and generally expect a horizontal coordinate of 1 to correspond to the right edge of the image. However, several textures are often combined into a single image, requiring coordinate transformations on our side to access a specific sub-picture. A much better idea is to utilize the frame of interest feature of this class, especially since it also handles the problem of color bleeding near the edges.
If you leave the frame of interest parameters at their defaults, the frame will always coincide with the full picture.
The frame of interest is what defines the contents of the principal image. See below.
It is never possible to address pixels outside the frame of interest, not even partially (no color bleeding).
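The frame-of-interest rules can be sketched like this (the struct and function names are hypothetical, chosen only for illustration, and do not belong to the real API):

```cpp
// A frame of interest: displacement of its lower left corner from the
// image's lower left corner, plus its size in pixels.
struct Frame { int left, bottom, width, height; };

// Translate a frame-relative coordinate (x, y) to a full-image
// coordinate. Returns false for coordinates outside the frame, which
// must never be addressed.
bool frameToImage(const Frame& f, int x, int y, int& imageX, int& imageY)
{
    if (x < 0 || x >= f.width || y < 0 || y >= f.height)
        return false;
    imageX = f.left + x;
    imageY = f.bottom + y;
    return true;
}
```

The offsets correspond to the left and bottom constructor arguments listed above; leaving them at zero together with zero width and height makes the frame coincide with the full image.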
The principal image is an important concept when reading pixels from the buffer. It is defined by its position in the infinite 2-D coordinate space. The principal image is always located in coordinate space such that the lower left corner is at (0, 0). With integer coordinates the upper right corner is always at (width-1, height-1). When using continuous coordinates the upper right corner is still at (width-1, height-1) by default but is in general a function of the currently chosen coordinate transformation.
Many pixel access methods accept two optional arguments: horizontalRepeat and verticalRepeat. Setting horizontalRepeat to 0 has the effect of repeating the principal image infinitely both to the right and to the left. Likewise, setting verticalRepeat to 0 will repeat it infinitely in the vertical direction. If both are set to zero, the entire infinite 2-D plane will be tiled with the principal image.
If you choose a non-zero value n for the horizontal repeat, then the principal image and its vertical replicas will be repeated n-1 times to the right. Reading pixels from coordinates beyond the right edge of the rightmost replica has the same effect as reading the last pixel directly to the left.
In general, the right-most column of pixels in a finite array of replicas is repeated infinitely to the right, the top-most row is repeated infinitely upwards, and so on. Each corner pixel is used to fill the corresponding one of the four remaining corner areas.
When sampling with continuous coordinates near the right edge of a finite array of replicas, no color bleeding from the opposite edge will occur, which is the way it should be.
When sampling with continuous coordinates near the transition from one replica to the next, color bleeding from the opposite side will occur, which indeed is also the way it should be.
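The repeat and edge-clamping rules can be sketched for one axis as follows (an illustrative model with made-up names, not the library's actual code):

```cpp
// Map an arbitrary integer coordinate c onto [0, size-1] of the
// principal image along one axis. repeat == 0 tiles the image
// infinitely; repeat == n lays out n copies and clamps coordinates
// beyond them to the nearest edge pixel.
long mapCoord(long c, long size, int repeat)
{
    if (repeat == 0) {
        long m = c % size;          // infinite tiling
        return m < 0 ? m + size : m;
    }
    long span = size * repeat;      // total extent of the replica array
    if (c < 0) c = 0;               // clamp below: repeat the first pixel
    if (c >= span) c = span - 1;    // clamp above: repeat the last pixel
    return c % size;
}
```

Applying this per axis reproduces the behavior described above: within the replica array coordinates wrap, and beyond it the edge rows, columns, and corner pixels extend outward forever.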
Definition at line 204 of file image_data.H.
Definition at line 36 of file image_data.C.
References Archon::Utilities::PixelFormat::bitsPerWord, Archon::Utilities::PixelFormat::channelLayout, Archon::Utilities::compareEndianness(), Archon::Utilities::computeBytePermutation(), Archon::Utilities::PixelFormat::direct, Archon::Utilities::findMostSignificantBit(), Archon::Utilities::PixelFormat::formatType, Archon::Utilities::PixelFormat::mostSignificantBitsFirst, Archon::Utilities::PixelFormat::pixelSize, std::swap(), Archon::Utilities::PixelFormat::tight, and Archon::Utilities::PixelFormat::wordType.
Fetch the pixel at the specified coordinates.
Definition at line 375 of file image_data.H.
References std::swap(). Referenced by queryPixelMap().
Fetch the color at the specified floating point position. This will in general be an interpolation over a set of nearby pixels. This method is well suited for situations where images are used as texture maps in such places as raytracers.
Definition at line 326 of file image_data.H.
References Archon::Utilities::Array< T >::get(), getPixel(), and n.