
Is the pixel aspect ratio of a raster dataset always 1:1?



When we talk about rasters, is the spatial resolution of a pixel in a raster dataset always fixed to a 1:1 ratio, e.g. 1 m x 1 m?


Rasters are basically stored as matrices, so pixels are located in a rectangular grid with row and column coordinates. At this stage, they have no size.

For geographic data, this grid is located according to a reference system, using a transform. Most of the time, the data is resampled in order to align the grid with the coordinate system. Then you only need a simple affine transform (scale + shift) to switch from row/column indices to the actual coordinates. Usually, square pixels are preferred for the sake of simplicity (faster display and analysis). However, resampling results in a loss of information, especially if the "structures" of the input and output grids are very different. Therefore, it is sometimes not recommended to resample your data onto square pixels (e.g. a satellite image with a low elevation angle, or a regular lat/long grid at high latitude that you want to convert into XY coordinates), and it can also be advisable to avoid multiple resamplings (e.g. use on-the-fly projection to visualize an image from another coordinate system). In those cases, you will not have a 1:1 ratio (and the ratio could change over the scene).
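
To make the "scale + shift" idea concrete, here is a minimal sketch of a GDAL-style six-parameter affine geotransform (the array layout and the names are just the common convention, not tied to any particular dataset); it maps row/column indices to world coordinates and works equally well for square and rectangular pixels:

    // Minimal sketch of a GDAL-style affine geotransform (illustrative names only).
    public final class GeoTransform {
        // gt[0] = origin X, gt[1] = pixel width, gt[2] = row rotation,
        // gt[3] = origin Y, gt[4] = column rotation, gt[5] = pixel height (usually negative for north-up rasters)
        private final double[] gt;

        public GeoTransform(double[] gt) { this.gt = gt.clone(); }

        // Map a (column, row) grid position to world coordinates.
        public double[] toWorld(double col, double row) {
            double x = gt[0] + col * gt[1] + row * gt[2];
            double y = gt[3] + col * gt[4] + row * gt[5];
            return new double[] { x, y };
        }

        public static void main(String[] args) {
            // Square 1 m pixels (1:1) ...
            GeoTransform square = new GeoTransform(new double[] { 500000, 1.0, 0, 4500000, 0, -1.0 });
            // ... versus rectangular 1 m x 2 m pixels (1:2), which the transform handles just as easily.
            GeoTransform rect = new GeoTransform(new double[] { 500000, 1.0, 0, 4500000, 0, -2.0 });
            System.out.println(java.util.Arrays.toString(square.toWorld(10, 10)));
            System.out.println(java.util.Arrays.toString(rect.toWorld(10, 10)));
        }
    }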


Detect changes in display aspect ratio [closed]


I would like to know if it is possible to find time stamps or frame numbers where a change of the display aspect ratio occurs in a video?

In more detail, I recorded a TV-show from an SD-channel which resulted in a .ts file with a pixel resolution of 720x576. Since it is an old show, the show itself has a display aspect ratio (DAR) of 4:3. The commercials however are broadcast with a DAR of 16:9.

I was wondering if it is possible to extract the positions of the video where the DAR changes since I'd like to use this information to detect the commercial breaks?

I tried using comskip, but it seems that it always assumes square pixels and is therefore not able to detect the change of the aspect ratio. Other software is, however, able to respect the varying DAR. For example, VLC always rescales the video accordingly, and ProjectX also displays the right DAR for the respective positions in the video.


6 Answers

Did every programmer of every game implement all the various APIs that old graphics cards supported?

Yes - but it went even deeper than that. Early graphics cards had virtually no callable code associated with them at all; the concept of "drivers" had not quite become a reality yet. There was the concept of a Video BIOS, which provided extensions to the INT 10h BIOS video services that were effectively limited to initialization and switching video modes.

Instead, graphics cards, at least in DOS land, all had memory-mapped display RAM, and extensive documentation was available about exactly how setting various bits in display RAM would affect the pixels that appeared on the screen. There were no drawing APIs to call: if you wanted something to appear on the screen (whether it be a pixel, a character, a line, a circle, a sprite, etc.) you would write the code to move the bytes into the right places in display RAM. Entire books were written about how to write efficient code to draw graphics.

There were some systems like the Borland Graphics Interface that abstracted graphics drawing primitives into an API with different drivers that one could call to draw things on different graphics cards. However, these were typically slower than what would be required for building action type games.

An action game would typically be optimized for a particular graphics display mode on a particular card. For example, a popular display mode was VGA 640x480 with 16 colors. This would be listed in the software requirements, and you needed to have the right hardware to support the game. If you bought a VGA game but you only had an EGA card, then the game would not work at all. As you said, some games would ask what you had and you had to know what the right answer was, otherwise the game would not work.

Early on, you had to explicitly code your game for each graphics card you wanted to support: Hercules, CGA, Tandy, EGA, VGA. You had to know how to put the card into graphics mode and you had to know the memory layout, palette, and so on. You had to figure out how to avoid flicker and how to prevent tearing. You had to write your own line drawing and fill routines, and if you wanted 3-D, you had to know how to project it onto a 2-D screen, how to remove hidden lines and so on.

Later when graphics cards started gaining accelerated functionality, SGI created the IrisGL API, which later became OpenGL, in order to simplify CAD (computer aided drafting/design) software development for those graphics cards by providing a standard API that video hardware manufacturers could support and developers could design their software against. OpenGL provided access to underlying hardware features, and if a feature did not exist on the hardware, OpenGL provided a software implementation.

The same problem existed in games development. Originally, graphics card manufacturers would work with game studios to create a version (release) of their game that would be accelerated by the specific graphics card. Early 3D games like MechWarrior 2 had different releases for 3dfx Voodoo, S3 Virge, STB Velocity, and so on. It was a bit of a mess. At around the same time, Microsoft created the DirectX library for Windows which was similar to OpenGL. Manufacturers would support OpenGL and/or DirectX, and game studios who were brave enough to abandon DOS for Windows as a platform could then program for one or both libraries instead of creating another release for each individual graphics card they wanted to support. Any relatively minor differences between graphics cards could be handled at runtime in the same release.

In DOS you had direct access to the hardware, so you grabbed some good source of information about the card you wanted to support and got down to coding your routines.

A book which was often cited as a good source was "Programmer's Guide to the Ega, Vga, and Super Vga Cards" by Richard F. Ferraro. I didn't have the luck to own or read it, but it was fondly remembered by those who did.

Another invaluable source of information was Ralf Brown's Interrupt List; you can find an HTML conversion of the list here: http://www.delorie.com/djgpp/doc/rbinter/

The original was just made of (long) text files and, if memory serves me correctly, there were some programs to navigate it more easily, at least in the later versions.

Another nice collection of information was contained in the "PC Game Programmer's Encyclopedia", or PC-GPE; an HTML conversion can be found here: http://qzx.com/pc-gpe/

You had at least three different ways to interact with a given piece of hardware: I/O ports, interrupts, and memory-mapped registers. Graphics cards used all three of them.

The situation with audio cards was very similar.

Another thing to consider is that attached to the video card was an analog CRT monitor. The older/cheaper ones were only able to sync to a given set of vertical and horizontal rates, but the newer/better ones were basically able to sync to any signal in a given range. That means that with the right parameters written to the video card registers, you could create some custom (or weird) resolutions.

Games aimed for broad compatibility, so they rarely used weird ones, while in the demoscene it was quite common (and custom resolutions were the norm in arcade games too.)

But, for example, Mode X was very popular with games!

It was popularized by Michael Abrash in the pages of Dr. Dobb's Journal. You got a 320x240 resolution which, viewed on a 4:3 monitor, meant the pixels were square. So, for example, you could naively draw circles and they would look like circles; in 320x200 they were stretched, as the pixel aspect ratio was not 1:1, and you had to calculate and compensate for that while drawing.
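
As a rough illustration (modern Java, obviously not period code), here is what that compensation amounts to, assuming the commonly quoted 1:1.2 pixel aspect ratio of 320x200 on a 4:3 display:

    // Rough illustration: compensating for a non-square pixel aspect ratio when plotting a circle.
    // In 320x200 on a 4:3 display, pixels are nominally 1.2 times taller than they are wide,
    // so the vertical radius (in pixels) must be shrunk by that factor to look round on screen.
    public class CircleParDemo {
        static final double PIXEL_ASPECT = 1.2; // height/width of one 320x200 pixel on a 4:3 monitor

        // Returns {x, y} pixel coordinates for a given angle on a circle of radius r (in horizontal pixels).
        static int[] circlePoint(int cx, int cy, double r, double angleRadians) {
            int x = (int) Math.round(cx + r * Math.cos(angleRadians));
            int y = (int) Math.round(cy + (r / PIXEL_ASPECT) * Math.sin(angleRadians));
            return new int[] { x, y };
        }

        public static void main(String[] args) {
            for (int deg = 0; deg < 360; deg += 45) {
                int[] p = circlePoint(160, 100, 50, Math.toRadians(deg));
                System.out.println(p[0] + "," + p[1]);
            }
        }
    }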

It was a planar mode, so by setting a register you could decide which planes would receive a write in the memory mapped area. For example, for a quick fill operation you would set all planes, and a single byte write would affect four pixels (one for each plane). That also helped to address all the 256 KB of the VGA memory using only a 64 KB segment.
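
For the curious, the addressing arithmetic of such an unchained, 320-pixel-wide planar mode looks roughly like the sketch below; the actual plane selection is an OUT of 1 << plane to the VGA Sequencer's Map Mask register (index 02h, via ports 3C4h/3C5h), which plain Java can of course only hint at in comments:

    // Sketch of Mode X style pixel addressing (the arithmetic only; real code also has to
    // program the VGA Sequencer Map Mask register, index 0x02 at ports 0x3C4/0x3C5).
    public class ModeXAddressing {
        // 4 planes, 320/4 = 80 bytes per scanline per plane, display RAM at segment A000h.
        static int planeOf(int x)          { return x & 3; }             // which plane holds this pixel
        static int offsetOf(int x, int y)  { return y * 80 + (x >> 2); } // byte offset within the plane
        static int mapMaskValue(int plane) { return 1 << plane; }        // value for the Map Mask register

        public static void main(String[] args) {
            int x = 161, y = 10;
            System.out.printf("pixel (%d,%d): plane %d, offset %d, map mask 0x%02X%n",
                    x, y, planeOf(x), offsetOf(x, y), mapMaskValue(planeOf(x)));
        }
    }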

I am positive there was a little utility which let you explore the VGA registers, where you could put whatever values you fancied, and, when you applied your settings, you could finally see if your monitor supported the resulting output. But my memory is too weak right now to remember the name or the author of this program.

Another common trick was to change a part of the color palette during the horizontal retrace; done correctly, you could have more than 256 colours on screen. There was not enough time to change the whole palette on each line, so you had to be creative.

(During vertical retraces instead there was enough time to change every colour, and it was done for example for fade in/fade out effects).

(The most popular palette trick was probably changing the background color during tape loading on 8 bit machines (C64 for example).)

One thing that is often overlooked is that the VGA card was effectively a small three-channel DAC; creative people found ways to use and abuse that as well.

To a similar effect, Tempest for Eliza used the radio waves emitted by the monitor to transmit a radio signal which could be listened to with a common AM radio.



Background

The original JPEG specification published in 1992 implements processes from various earlier research papers and patents cited by the CCITT (now ITU-T, via ITU-T Study Group 16) and Joint Photographic Experts Group. [1] The main basis for JPEG's lossy compression algorithm is the discrete cosine transform (DCT), [1] [14] which was first proposed by Nasir Ahmed as an image compression technique in 1972. [8] [14] Ahmed developed a practical DCT algorithm with T. Natarajan of Kansas State University and K. R. Rao of the University of Texas at Arlington in 1973. [8] Their seminal 1974 paper [15] is cited in the JPEG specification, along with several later research papers that did further work on DCT, including a 1977 paper by Wen-Hsiung Chen, C.H. Smith and S.C. Fralick that described a fast DCT algorithm, [1] [16] as well as a 1978 paper by N.J. Narasinha and S.C. Fralick, and a 1984 paper by B.G. Lee. [1] The specification also cites a 1984 paper by Wen-Hsiung Chen and W.K. Pratt as an influence on its quantization algorithm, [1] [17] and David A. Huffman's 1952 paper for its Huffman coding algorithm. [1]

The JPEG specification cites patents from several companies. The following patents provided the basis for its arithmetic coding algorithm. [1]

    • U.S. Patent 4,652,856 – February 4, 1986 – Kottappuram M. A. Mohiuddin and Jorma J. Rissanen – Multiplication-free multi-alphabet arithmetic code
    • U.S. Patent 4,905,297 – February 27, 1990 – G. Langdon, J.L. Mitchell, W.B. Pennebaker, and Jorma J. Rissanen – Arithmetic coding encoder and decoder system
    • U.S. Patent 4,935,882 – June 19, 1990 – W.B. Pennebaker and J.L. Mitchell – Probability adaptation for arithmetic coders
    • JP H02202267 (1021672) – January 21, 1989 – Toshihiro Kimura, Shigenori Kino, Fumitaka Ono, Masayuki Yoshida – Coding system
    • JP H03247123 (2-46275) – February 26, 1990 – Fumitaka Ono, Tomohiro Kimura, Masayuki Yoshida, and Shigenori Kino – Coding apparatus and coding method

    The JPEG specification also cites three other patents from IBM. Other companies cited as patent holders include AT&T (two patents) and Canon Inc. [1] Absent from the list is U.S. Patent 4,698,672 , filed by Compression Labs' Wen-Hsiung Chen and Daniel J. Klenke in October 1986. The patent describes a DCT-based image compression algorithm, and would later be a cause of controversy in 2002 (see Patent controversy below). [18] However, the JPEG specification did cite two earlier research papers by Wen-Hsiung Chen, published in 1977 and 1984. [1]

    JPEG standard

    "JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard and also other still picture coding standards. The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII. Founded in 1986, the group developed the JPEG standard during the late 1980s. Among several transform coding techniques they examined, they selected the discrete cosine transform (DCT), as it was by far the most efficient practical compression technique. The group published the JPEG standard in 1992. [5]

    In 1987, ISO TC 97 became ISO/IEC JTC1 and, in 1992, CCITT became ITU-T. Currently on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 (ISO/IEC JTC 1/SC 29/WG 1) – titled as Coding of still pictures. [19] [20] [21] On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG Group was organized in 1986, [22] issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81 [23] and, in 1994, as ISO/IEC 10918-1.

    The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream. [24] The Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images.

    JPEG standards are formally named as Information technology – Digital compression and coding of continuous-tone still images. ISO/IEC 10918 consists of the following parts:

    Digital compression and coding of continuous-tone still images – Parts [20] [22] [25]
    Each part is listed with its ISO/IEC standard number, ITU-T Recommendation, first public release date, latest amendment (where applicable), title, and description:
    • Part 1 – ISO/IEC 10918-1:1994, ITU-T Rec. T.81 (09/92), released Sep 18, 1992 – Requirements and guidelines.
    • Part 2 – ISO/IEC 10918-2:1995, ITU-T Rec. T.83 (11/94), released Nov 11, 1994 – Compliance testing: rules and checks for software conformance (to Part 1).
    • Part 3 – ISO/IEC 10918-3:1997, ITU-T Rec. T.84 (07/96), released Jul 3, 1996, amended Apr 1, 1999 – Extensions: a set of extensions to improve Part 1, including the Still Picture Interchange File Format (SPIFF). [26]
    • Part 4 – ISO/IEC 10918-4:1999, ITU-T Rec. T.86 (06/98), released Jun 18, 1998, amended Jun 29, 2012 – Registration of JPEG profiles, SPIFF profiles, SPIFF tags, SPIFF colour spaces, APPn markers, SPIFF compression types and Registration Authorities (REGAUT): methods for registering some of the parameters used to extend JPEG.
    • Part 5 – ISO/IEC 10918-5:2013, ITU-T Rec. T.871 (05/11), released May 14, 2011 – JPEG File Interchange Format (JFIF): a popular format which has been the de facto file format for images encoded by the JPEG standard. In 2009, the JPEG Committee formally established an Ad Hoc Group to standardize JFIF as JPEG Part 5. [27]
    • Part 6 – ISO/IEC 10918-6:2013, ITU-T Rec. T.872 (06/12), released Jun 2012 – Application to printing systems: specifies a subset of features and application tools for the interchange of images encoded according to ISO/IEC 10918-1 for printing.
    • Part 7 – ISO/IEC 10918-7:2019, ITU-T Rec. T.873 (05/19), released May 2019 – Digital compression and coding of continuous-tone still images: provides reference software for the coding technology specified in Recommendation ITU-T T.81 – ISO/IEC 10918-1. While the reference implementations also provide an encoder, conformance testing of their encoding process is beyond the scope of this Specification.

    Patent controversy

    In 2002, Forgent Networks asserted that it owned and would enforce patent rights on the JPEG technology, arising from a patent that had been filed on October 27, 1986, and granted on October 6, 1987: U.S. Patent 4,698,672 by Compression Labs' Wen-Hsiung Chen and Daniel J. Klenke. [18] [29] While Forgent did not own Compression Labs at the time, Chen later sold Compression Labs to Forgent, before Chen went on to work for Cisco. This led to Forgent acquiring ownership over the patent. [18] Forgent's 2002 announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.

    The JPEG committee investigated the patent claims in 2002 and were of the opinion that they were invalidated by prior art, [30] a view shared by various experts. [18] [31] The patent describes an image compression algorithm based on the discrete cosine transform (DCT), [18] a lossy image compression technique that originated from a 1974 paper by Nasir Ahmed, T. Natarajan and K. R. Rao. [1] [14] [15] Wen-Hsiung Chen further developed their DCT technique, describing a fast DCT algorithm in a 1977 paper with C.H. Smith and S.C. Fralick. [16] [18] The 1992 JPEG specification cites both the 1974 Ahmed paper and the 1977 Chen paper for its DCT algorithm, as well as a 1984 paper by Chen and W.K. Pratt for its quantization algorithm. [1] [17] Compression Labs was founded by Chen, and was the first company to commercialize DCT technology. [32] By the time Chen had filed his patent for a DCT-based image compression algorithm with Klenke in 1986, most of what would later become the JPEG standard had already been formulated in prior literature. [18] JPEG representative Richard Clark also claimed that Chen himself sat in one of the JPEG committees, but Forgent denied this claim. [18]

    Between 2002 and 2004, Forgent was able to obtain about US$105 million by licensing their patent to some 30 companies. In April 2004, Forgent sued 31 other companies to enforce further license payments. In July of the same year, a consortium of 21 large computer companies filed a countersuit, with the goal of invalidating the patent. In addition, Microsoft launched a separate lawsuit against Forgent in April 2005. [33] In February 2006, the United States Patent and Trademark Office agreed to re-examine Forgent's JPEG patent at the request of the Public Patent Foundation. [34] On May 26, 2006 the USPTO found the patent invalid based on prior art. The USPTO also found that Forgent knew about the prior art, yet it intentionally avoided telling the Patent Office. This makes any appeal to reinstate the patent highly unlikely to succeed. [35]

    Forgent also possesses a similar patent granted by the European Patent Office in 1994, though it is unclear how enforceable it is. [36]

    As of October 27, 2006, the U.S. patent's 20-year term appears to have expired, and in November 2006, Forgent agreed to abandon enforcement of patent claims against use of the JPEG standard. [37]

    The JPEG committee has as one of its explicit goals that their standards (in particular their baseline methods) be implementable without payment of license fees, and they have secured appropriate license rights for their JPEG 2000 standard from over 20 large organizations.

    Beginning in August 2007, another company, Global Patent Holdings, LLC claimed that its patent ( U.S. Patent 5,253,341 ), issued in 1993, is infringed by the downloading of JPEG images on either a website or through e-mail. If not invalidated, this patent could apply to any website that displays JPEG images. The patent was under reexamination by the U.S. Patent and Trademark Office from 2000 to 2007; in July 2007, the Patent Office revoked all of the original claims of the patent but found that an additional claim proposed by Global Patent Holdings (claim 17) was valid. [38] Global Patent Holdings then filed a number of lawsuits based on claim 17 of its patent.

    In its first two lawsuits following the reexamination, both filed in Chicago, Illinois, Global Patent Holdings sued the Green Bay Packers, CDW, Motorola, Apple, Orbitz, Officemax, Caterpillar, Kraft and Peapod as defendants. A third lawsuit was filed on December 5, 2007 in South Florida against ADT Security Services, AutoNation, Florida Crystals Corp., HearUSA, MovieTickets.com, Ocwen Financial Corp. and Tire Kingdom, and a fourth lawsuit on January 8, 2008 in South Florida against the Boca Raton Resort & Club. A fifth lawsuit was filed against Global Patent Holdings in Nevada. That lawsuit was filed by Zappos.com, Inc., which was allegedly threatened by Global Patent Holdings, and sought a judicial declaration that the '341 patent is invalid and not infringed.

    Global Patent Holdings had also used the '341 patent to sue or threaten outspoken critics of broad software patents, including Gregory Aharonian [39] and the anonymous operator of a website blog known as the "Patent Troll Tracker." [40] On December 21, 2007, patent lawyer Vernon Francissen of Chicago asked the U.S. Patent and Trademark Office to reexamine the sole remaining claim of the '341 patent on the basis of new prior art. [41]

    On March 5, 2008, the U.S. Patent and Trademark Office agreed to reexamine the '341 patent, finding that the new prior art raised substantial new questions regarding the patent's validity. [42] In light of the reexamination, the accused infringers in four of the five pending lawsuits have filed motions to suspend (stay) their cases until completion of the U.S. Patent and Trademark Office's review of the '341 patent. On April 23, 2008, a judge presiding over the two lawsuits in Chicago, Illinois granted the motions in those cases. [43] On July 22, 2008, the Patent Office issued the first "Office Action" of the second reexamination, finding the claim invalid based on nineteen separate grounds. [44] On Nov. 24, 2009, a Reexamination Certificate was issued cancelling all claims.

    Beginning in 2011 and continuing as of early 2013, an entity known as Princeton Digital Image Corporation, [45] based in Eastern Texas, began suing large numbers of companies for alleged infringement of U.S. Patent 4,813,056 . Princeton claims that the JPEG image compression standard infringes the '056 patent and has sued large numbers of websites, retailers, camera and device manufacturers and resellers. The patent was originally owned and assigned to General Electric. The patent expired in December 2007, but Princeton has sued large numbers of companies for "past infringement" of this patent. (Under U.S. patent laws, a patent owner can sue for "past infringement" up to six years before the filing of a lawsuit, so Princeton could theoretically have continued suing companies until December 2013.) As of March 2013, Princeton had suits pending in New York and Delaware against more than 55 companies. General Electric's involvement in the suit is unknown, although court records indicate that it assigned the patent to Princeton in 2009 and retains certain rights in the patent. [46]

    The JPEG compression algorithm operates at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where reducing the amount of data used for an image is important for responsive presentation, JPEG's compression benefits make JPEG popular. JPEG/Exif is also the most common format saved by digital cameras.

    However, JPEG is not well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images are better saved in a lossless graphics format such as TIFF, GIF, or PNG. [47] The JPEG standard includes a lossless coding mode, but that mode is not supported in most products.

    As the typical use of JPEG is a lossy compression method, which reduces the image fidelity, it is inappropriate for exact reproduction of imaging data (such as some scientific and medical imaging applications and certain technical image processing work).

    JPEG is also not well suited to files that will undergo multiple edits, as some image quality is lost each time the image is recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed – see digital generation loss for details. To prevent image information loss during sequential and repetitive editing, the first edit can be saved in a lossless format, subsequently edited in that format, then finally published as JPEG for distribution.

    JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each frame/field of the video source from the spatial (2D) domain into the frequency domain (a.k.a. transform domain). A perceptual model based loosely on the human psychovisual system discards high-frequency information, i.e. sharp transitions in intensity and color hue. In the transform domain, the process of reducing information is called quantization. In simpler terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) into a smaller one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bitstream. Nearly all software implementations of JPEG permit user control over the compression ratio (as well as other optional parameters), allowing the user to trade off picture quality for smaller file size. In embedded applications (such as miniDV, which uses a similar DCT-compression scheme), the parameters are pre-selected and fixed for the application.

    The compression method is usually lossy, meaning that some original image information is lost and cannot be restored, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard. However, this mode is not widely supported in products.

    There is also an interlaced progressive JPEG format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, support for progressive JPEGs is not universal. When progressive JPEGs are received by programs that do not support them (such as versions of Internet Explorer before Windows 7) [48] the software displays the image only after it has been completely downloaded.

    Lossless editing

    A number of alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of 1 MCU block (Minimum Coded Unit) (usually 16 pixels in both directions, for 4:2:0 chroma subsampling). Utilities that implement this include:

      • jpegtran and its GUI, Jpegcrop
      • IrfanView, using "JPG Lossless Crop (PlugIn)" and "JPG Lossless Rotation (PlugIn)", which require installing the JPG_TRANSFORM plugin
      • viewers and editors offering commands such as "Lossless Crop to File", "JPEG Lossless Rotate" and "JPEG lossless transformations"
      • software that supports lossless rotation (but not lossless cropping) through a "Force lossless JPEG operations" option

    Blocks can be rotated in 90-degree increments, flipped in the horizontal, vertical and diagonal axes and moved about in the image. Not all blocks from the original image need to be used in the modified one.

    The top and left edge of a JPEG image must lie on an 8 × 8 pixel block boundary, but the bottom and right edge need not do so. This limits the possible lossless crop operations, and also prevents flips and rotations of an image whose bottom or right edge does not lie on a block boundary for all channels (because the edge would end up on top or left, where – as aforementioned – a block boundary is obligatory).

    Rotations where the image width and height are not a multiple of 8 or 16 (depending upon the chroma subsampling) are not lossless. Rotating such an image causes the blocks to be recomputed, which results in loss of quality. [49]

    When using lossless cropping, if the bottom or right side of the crop region is not on a block boundary, then the rest of the data from the partially used blocks will still be present in the cropped file and can be recovered. It is also possible to transform between baseline and progressive formats without any loss of quality, since the only difference is the order in which the coefficients are placed in the file.

    Furthermore, several JPEG images can be losslessly joined together, as long as they were saved with the same quality and the edges coincide with block boundaries.

    The file format known as "JPEG Interchange Format" (JIF) is specified in Annex B of the standard. However, this "pure" file format is rarely used, primarily because of the difficulty of programming encoders and decoders that fully implement all aspects of the standard and because of certain shortcomings of the standard:

    • Color space definition
    • Component sub-sampling registration
    • Pixel aspect ratio definition.

    Several additional standards have evolved to address these issues. The first of these, released in 1992, was the JPEG File Interchange Format (or JFIF), followed in recent years by Exchangeable image file format (Exif) and ICC color profiles. Both of these formats use the actual JIF byte layout, consisting of different markers, but in addition, employ one of the JIF standard's extension points, namely the application markers: JFIF uses APP0, while Exif uses APP1. Within these segments of the file that were left for future use in the JIF standard and are not read by it, these standards add specific metadata.

    Thus, in some ways, JFIF is a cut-down version of the JIF standard in that it specifies certain constraints (such as not allowing all the different encoding modes), while in other ways, it is an extension of JIF due to the added metadata. The documentation for the original JFIF standard states: [50]

    JPEG File Interchange Format is a minimal file format which enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. This minimal format does not include any of the advanced features found in the TIFF JPEG specification or any application specific file format. Nor should it, for the only purpose of this simplified format is to allow the exchange of JPEG compressed images.

    Image files that employ JPEG compression are commonly called "JPEG files", and are stored in variants of the JIF image format. Most image capture devices (such as digital cameras) that output JPEG are actually creating files in the Exif format, the format that the camera industry has standardized on for metadata interchange. On the other hand, since the Exif standard does not allow color profiles, most image editing software stores JPEG in JFIF format, and also includes the APP1 segment from the Exif file to include the metadata in an almost-compliant way; the JFIF standard is interpreted somewhat flexibly. [51]

    Strictly speaking, the JFIF and Exif standards are incompatible, because each specifies that its marker segment (APP0 or APP1, respectively) appear first. In practice, most JPEG files contain a JFIF marker segment that precedes the Exif header. This allows older readers to correctly handle the older format JFIF segment, while newer readers also decode the following Exif segment, being less strict about requiring it to appear first.

    JPEG filename extensions

    The most common filename extensions for files employing JPEG compression are .jpg and .jpeg , though .jpe , .jfif and .jif are also used. It is also possible for JPEG data to be embedded in other file types – TIFF encoded files often embed a JPEG image as a thumbnail of the main image and MP3 files can contain a JPEG of cover art in the ID3v2 tag.

    Color profile

    Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops; see gamma curve.

    A JPEG image consists of a sequence of segments, each beginning with a marker, each of which begins with a 0xFF byte, followed by a byte indicating what kind of marker it is. Some markers consist of just those two bytes; others are followed by two bytes (high then low) indicating the length of marker-specific payload data that follows. (The length includes the two bytes for the length, but not the two bytes for the marker.) Some markers are followed by entropy-coded data; the length of such a marker does not include the entropy-coded data. Note that consecutive 0xFF bytes are used as fill bytes for padding purposes, although this fill byte padding should only ever take place for markers immediately following entropy-coded scan data (see JPEG specification sections B.1.1.2 and E.1.2 for details; specifically, "In all cases where markers are appended after the compressed data, optional 0xFF fill bytes may precede the marker").

    Within the entropy-coded data, after any 0xFF byte, a 0x00 byte is inserted by the encoder before the next byte, so that there does not appear to be a marker where none is intended, preventing framing errors. Decoders must skip this 0x00 byte. This technique, called byte stuffing (see JPEG specification section F.1.2.3), is only applied to the entropy-coded data, not to marker payload data. Note, however, that entropy-coded data has a few markers of its own; specifically the Reset markers (0xD0 through 0xD7), which are used to isolate independent chunks of entropy-coded data to allow parallel decoding, and encoders are free to insert these Reset markers at regular intervals (although not all encoders do this).
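
    As a hedged sketch of what this structure implies for a parser (not a complete or validating implementation), the following walks the marker segments of a file up to the first SOS marker, printing each marker code and its payload length:

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        // Minimal sketch: walk the marker segments of a JPEG file up to the first SOS marker.
        // Entropy-coded data (which follows SOS and uses byte stuffing) is not parsed here.
        public class JpegMarkerWalk {
            public static void main(String[] args) throws IOException {
                try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                    if (in.readUnsignedByte() != 0xFF || in.readUnsignedByte() != 0xD8)
                        throw new IOException("Not a JPEG (missing SOI)");
                    while (true) {
                        if (in.readUnsignedByte() != 0xFF) throw new IOException("Expected 0xFF marker prefix");
                        int marker;
                        do { marker = in.readUnsignedByte(); } while (marker == 0xFF); // skip fill bytes
                        if (marker == 0xD9) { System.out.println("EOI"); break; }      // End Of Image
                        if (marker == 0x01 || (marker >= 0xD0 && marker <= 0xD7)) {     // TEM / RSTn: no length
                            System.out.printf("marker 0x%02X (no payload)%n", marker);
                            continue;
                        }
                        int length = in.readUnsignedShort();            // includes these two length bytes
                        System.out.printf("marker 0x%02X, length %d%n", marker, length);
                        if (marker == 0xDA) { System.out.println("SOS reached; entropy-coded data follows"); break; }
                        in.readFully(new byte[length - 2]);             // skip the segment payload
                    }
                }
            }
        }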

    Common JPEG markers [52]
    • SOI (0xFF, 0xD8), no payload – Start Of Image.
    • SOF0 (0xFF, 0xC0), variable size – Start Of Frame (baseline DCT): indicates that this is a baseline DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0).
    • SOF2 (0xFF, 0xC2), variable size – Start Of Frame (progressive DCT): indicates that this is a progressive DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0).
    • DHT (0xFF, 0xC4), variable size – Define Huffman Table(s): specifies one or more Huffman tables.
    • DQT (0xFF, 0xDB), variable size – Define Quantization Table(s): specifies one or more quantization tables.
    • DRI (0xFF, 0xDD), 4 bytes – Define Restart Interval: specifies the interval between RSTn markers, in Minimum Coded Units (MCUs). This marker is followed by two bytes indicating the fixed size, so it can be treated like any other variable size segment.
    • SOS (0xFF, 0xDA), variable size – Start Of Scan: begins a top-to-bottom scan of the image. In baseline DCT JPEG images, there is generally a single scan. Progressive DCT JPEG images usually contain multiple scans. This marker specifies which slice of data it will contain, and is immediately followed by entropy-coded data.
    • RSTn (0xFF, 0xDn, n = 0..7), no payload – Restart: inserted every r macroblocks, where r is the restart interval set by a DRI marker; not used if there was no DRI marker. The low three bits of the marker code cycle in value from 0 to 7.
    • APPn (0xFF, 0xEn), variable size – Application-specific: for example, an Exif JPEG file uses an APP1 marker to store metadata, laid out in a structure based closely on TIFF.
    • COM (0xFF, 0xFE), variable size – Comment: contains a text comment.
    • EOI (0xFF, 0xD9), no payload – End Of Image.

    There are other Start Of Frame markers that introduce other kinds of JPEG encodings.

    Since several vendors might use the same APPn marker type, application-specific markers often begin with a standard or vendor name (e.g., "Exif" or "Adobe") or some other identifying string.

    At a restart marker, block-to-block predictor variables are reset, and the bitstream is synchronized to a byte boundary. Restart markers provide means for recovery after bitstream error, such as transmission over an unreliable network or file corruption. Since the runs of macroblocks between restart markers may be independently decoded, these runs may be decoded in parallel.

    Although a JPEG file can be encoded in various ways, most commonly it is done with JFIF encoding. The encoding process consists of several steps:

    1. The representation of the colors in the image is converted to Y′CBCR, consisting of one luma component (Y'), representing brightness, and two chroma components, (CB and CR), representing color. This step is sometimes skipped.
    2. The resolution of the chroma data is reduced, usually by a factor of 2 or 3. This reflects the fact that the eye is less sensitive to fine color details than to fine brightness details.
    3. The image is split into blocks of 8×8 pixels, and for each block, each of the Y, CB, and CR data undergoes the discrete cosine transform (DCT). A DCT is similar to a Fourier transform in the sense that it produces a kind of spatial frequency spectrum.
    4. The amplitudes of the frequency components are quantized. Human vision is much more sensitive to small variations in color or brightness over large areas than to the strength of high-frequency brightness variations. Therefore, the magnitudes of the high-frequency components are stored with a lower accuracy than the low-frequency components. The quality setting of the encoder (for example 50 or 95 on a scale of 0–100 in the Independent JPEG Group's library [53] ) affects to what extent the resolution of each frequency component is reduced. If an excessively low quality setting is used, the high-frequency components are discarded altogether.
    5. The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding.

    The decoding process reverses these steps, except the quantization because it is irreversible. In the remainder of this section, the encoding and decoding processes are described in more detail.

    Encoding

    Many of the options in the JPEG standard are not commonly used, and as mentioned above, most image software uses the simpler JFIF format when creating a JPEG file, which among other things specifies the encoding method. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight bits each for red, green, and blue). This particular option is a lossy data compression method.

    Color space transformation

    First, the image should be converted from RGB into a different color space called Y′CBCR (or, informally, YCbCr). It has three components Y', CB and CR: the Y' component represents the brightness of a pixel, and the CB and CR components represent the chrominance (split into blue and red components). This is basically the same color space as used by digital color television as well as digital video including video DVDs, and is similar to the way color is represented in analog PAL video and MAC (but not by analog NTSC, which uses the YIQ color space). The Y′CBCR color space conversion allows greater compression without a significant effect on perceptual image quality (or greater perceptual image quality for the same compression). The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of color in the human visual system. The color transformation also improves compression by statistical decorrelation.

    A particular conversion to Y′CBCR is specified in the JFIF standard, and should be performed for the resulting JPEG file to have maximum compatibility. However, some JPEG implementations in "highest quality" mode do not apply this step and instead keep the color information in the RGB color model, [54] where the image is stored in separate channels for red, green and blue brightness components. This results in less efficient compression, and would not likely be used when file size is especially important.
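
    As a concrete, hedged sketch of this step, the full-range conversion specified by JFIF can be written as follows (the coefficients are the ones given in the JFIF specification; the class and method names are just for illustration):

        // Sketch of the RGB -> Y'CbCr conversion as specified in JFIF (full range, Cb/Cr offset by 128).
        // Inputs and outputs are 0..255.
        public class RgbToYCbCr {
            static int clamp(double v) { return (int) Math.max(0, Math.min(255, Math.round(v))); }

            static int[] convert(int r, int g, int b) {
                double y  =  0.299    * r + 0.587    * g + 0.114    * b;
                double cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128;
                double cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128;
                return new int[] { clamp(y), clamp(cb), clamp(cr) };
            }

            public static void main(String[] args) {
                System.out.println(java.util.Arrays.toString(convert(255, 0, 0))); // a pure red pixel
            }
        }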

    Downsampling

    Due to the densities of color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y' component) than in the hue and color saturation of an image (the Cb and Cr components). Using this knowledge, encoders can be designed to compress images more efficiently.

    The transformation into the Y′CBCR color model enables the next usual step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling is ordinarily done for JPEG images are 4:4:4 (no downsampling), 4:2:2 (reduction by a factor of 2 in the horizontal direction), or (most commonly) 4:2:0 (reduction by a factor of 2 in both the horizontal and vertical directions). For the rest of the compression process, Y', Cb and Cr are processed separately and in a very similar manner.
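
    A minimal sketch of the most common case, 4:2:0, is shown below (it assumes even image dimensions; a real encoder also has to handle odd sizes and edge padding):

        // Sketch of 4:2:0 chroma subsampling: each chroma component is reduced by a factor of 2
        // horizontally and vertically by averaging 2x2 neighbourhoods. Assumes even dimensions.
        public class ChromaSubsample420 {
            static int[][] downsample(int[][] chroma) {
                int h = chroma.length, w = chroma[0].length;
                int[][] out = new int[h / 2][w / 2];
                for (int y = 0; y < h; y += 2) {
                    for (int x = 0; x < w; x += 2) {
                        int sum = chroma[y][x] + chroma[y][x + 1] + chroma[y + 1][x] + chroma[y + 1][x + 1];
                        out[y / 2][x / 2] = (sum + 2) / 4; // rounded average of the 2x2 block
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                int[][] cb = { { 100, 102, 90, 92 }, { 101, 103, 91, 93 } };
                System.out.println(java.util.Arrays.deepToString(downsample(cb)));
            }
        }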

    Block splitting

    After subsampling, each channel must be split into 8×8 blocks. Depending on chroma subsampling, this yields Minimum Coded Unit (MCU) blocks of size 8×8 (4:4:4 – no subsampling), 16×8 (4:2:2), or most commonly 16×16 (4:2:0). In video compression MCUs are called macroblocks.

    If the data for a channel does not represent an integer number of blocks, then the encoder must fill the remaining area of the incomplete blocks with some form of dummy data. Filling the edges with a fixed color (for example, black) can create ringing artifacts along the visible part of the border; repeating the edge pixels is a common technique that reduces (but does not necessarily completely eliminate) such artifacts, and more sophisticated border filling techniques can also be applied.

    Discrete cosine transform

    Next, each 8×8 block of each component (Y, Cb, Cr) is converted to a frequency-domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT), see Citation 1 in discrete cosine transform. The DCT is sometimes referred to as "type-II DCT" in the context of a family of transforms as in discrete cosine transform, and the corresponding inverse (IDCT) is denoted as "type-III DCT".

    As an example, consider one such 8×8 block of 8-bit samples.

    Before computing the DCT of the block, its values are shifted from a positive range to one centered on zero: for an 8-bit image each entry falls in [0, 255], so 128 is subtracted from each entry to bring the values into [−128, 127]. This step results in the block of shifted values that the DCT operates on.

    The next step is to take the two-dimensional DCT, which is given by:

    G(u,v) = (1/4) α(u) α(v) Σ_{x=0..7} Σ_{y=0..7} g(x,y) cos[(2x+1)uπ/16] cos[(2y+1)vπ/16]

    where u and v are the horizontal and vertical spatial frequencies (0 ≤ u, v < 8), α(k) = 1/√2 for k = 0 and α(k) = 1 otherwise, g(x,y) is the shifted sample value at coordinates (x, y), and G(u,v) is the resulting DCT coefficient.

    If we perform this transformation on such a block, we get an 8×8 matrix of DCT coefficients (conventionally rounded to two digits beyond the decimal point).

    Note the top-left corner entry with the rather large magnitude. This is the DC coefficient (also called the constant component), which defines the basic hue for the entire block. The remaining 63 coefficients are the AC coefficients (also called the alternating components). [55] The advantage of the DCT is its tendency to aggregate most of the signal in one corner of the result, as may be seen above. The quantization step to follow accentuates this effect while simultaneously reducing the overall size of the DCT coefficients, resulting in a signal that is easy to compress efficiently in the entropy stage.

    The DCT temporarily increases the bit-depth of the data, since the DCT coefficients of an 8-bit/component image take up to 11 or more bits (depending on fidelity of the DCT calculation) to store. This may force the codec to temporarily use 16-bit numbers to hold these coefficients, doubling the size of the image representation at this point these values are typically reduced back to 8-bit values by the quantization step. The temporary increase in size at this stage is not a performance concern for most JPEG implementations, since typically only a very small part of the image is stored in full DCT form at any given time during the image encoding or decoding process.
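
    For reference, a naive (unoptimized) implementation of this 8×8 forward transform, written as a hedged sketch rather than production code, looks like this:

        // Naive reference sketch of the 8x8 forward DCT-II with the 1/4 * alpha(u) * alpha(v)
        // normalisation from the formula above. Input samples are expected to be level-shifted to -128..127.
        public class ForwardDct8x8 {
            static double[][] dct(double[][] g) {
                double[][] G = new double[8][8];
                for (int u = 0; u < 8; u++) {
                    for (int v = 0; v < 8; v++) {
                        double sum = 0;
                        for (int x = 0; x < 8; x++)
                            for (int y = 0; y < 8; y++)
                                sum += g[x][y]
                                        * Math.cos((2 * x + 1) * u * Math.PI / 16)
                                        * Math.cos((2 * y + 1) * v * Math.PI / 16);
                        double au = (u == 0) ? 1 / Math.sqrt(2) : 1;
                        double av = (v == 0) ? 1 / Math.sqrt(2) : 1;
                        G[u][v] = 0.25 * au * av * sum;
                    }
                }
                return G;
            }

            public static void main(String[] args) {
                double[][] flat = new double[8][8];
                for (double[] row : flat) java.util.Arrays.fill(row, -66); // a uniform, level-shifted block
                System.out.println(dct(flat)[0][0]); // DC coefficient = 8 * (-66) = -528; all ACs are 0
            }
        }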

    Quantization

    The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high frequency brightness variation. This allows one to greatly reduce the amount of information in the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This rounding operation is the only lossy operation in the whole process (other than chroma subsampling) if the DCT computation is performed with sufficiently high precision. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent.

    The elements in the quantization matrix control the compression ratio, with larger values producing greater compression. A typical quantization matrix (for a quality of 50% as specified in the original JPEG standard) is as follows:

    The quantized DCT coefficients are computed with

    B(u,v) = round( G(u,v) / Q(u,v) )   for u, v = 0, 1, ..., 7

    where G is the matrix of DCT coefficients, Q is the quantization matrix, and round() denotes rounding to the nearest integer.

    Using this quantization matrix with the DCT coefficient matrix from above yields the quantized coefficient block.

    For example, using −415 (the DC coefficient), dividing by the corresponding quantization entry 16 and rounding to the nearest integer gives round(−415 / 16) = round(−25.94) = −26.

    Notice that most of the higher-frequency elements of the sub-block (i.e., those with an x or y spatial frequency greater than 4) are quantized into zero values.
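
    A hedged sketch of this step, using the example luminance quantization table from Annex K of the JPEG standard (the "quality 50" matrix mentioned above, widely reproduced, e.g. in libjpeg):

        // Sketch of the quantization step: divide each DCT coefficient by the corresponding entry
        // of the quantization matrix and round to the nearest integer.
        public class Quantize {
            // Example luminance quantization table from Annex K of the JPEG standard.
            static final int[][] Q50_LUMA = {
                { 16, 11, 10, 16,  24,  40,  51,  61 },
                { 12, 12, 14, 19,  26,  58,  60,  55 },
                { 14, 13, 16, 24,  40,  57,  69,  56 },
                { 14, 17, 22, 29,  51,  87,  80,  62 },
                { 18, 22, 37, 56,  68, 109, 103,  77 },
                { 24, 35, 55, 64,  81, 104, 113,  92 },
                { 49, 64, 78, 87, 103, 121, 120, 101 },
                { 72, 92, 95, 98, 112, 100, 103,  99 }
            };

            static int[][] quantize(double[][] G) {
                int[][] B = new int[8][8];
                for (int u = 0; u < 8; u++)
                    for (int v = 0; v < 8; v++)
                        B[u][v] = (int) Math.round(G[u][v] / Q50_LUMA[u][v]);
                return B;
            }

            public static void main(String[] args) {
                System.out.println(Math.round(-415.0 / 16)); // the DC example from the text: -26
            }
        }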

    Entropy coding

    Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together, inserting length-coding zeros, and then using Huffman coding on what is left.

    The JPEG standard also allows, but does not require, decoders to support the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature has rarely been used, as it was historically covered by patents requiring royalty-bearing licenses, and because it is slower to encode and decode compared to Huffman coding. Arithmetic coding typically makes files about 5–7% smaller.

    The previous quantized DC coefficient is used to predict the current quantized DC coefficient. The difference between the two is encoded rather than the actual value. The encoding of the 63 quantized AC coefficients does not use such prediction differencing.

    The zigzag sequence for the above quantized coefficients is shown below. (The format shown is just for ease of understanding/viewing.)

    −26
    −3 0
    −3 −2 −6
    2 −4 1 −3
    1 1 5 1 2
    −1 1 −1 2 0 0
    0 0 0 −1 −1 0 0
    0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0
    0 0 0 0 0 0
    0 0 0 0 0
    0 0 0 0
    0 0 0
    0 0
    0

    Once all similar-positioned coefficients have been encoded, the next position to be encoded is the one occurring next in the zigzag traversal as indicated in the figure above. It has been found that baseline progressive JPEG encoding usually gives better compression as compared to baseline sequential JPEG due to the ability to use different Huffman tables (see below) tailored for different frequencies on each "scan" or "pass" (which includes similar-positioned coefficients), though the difference is not too large.

    In the rest of the article, it is assumed that the coefficient pattern generated is due to sequential mode.

    In order to encode the above generated coefficient pattern, JPEG uses Huffman encoding. The JPEG standard provides general-purpose Huffman tables, though encoders may also choose to dynamically generate Huffman tables optimized for the actual frequency distributions in images being encoded.

    The process of encoding the zigzag quantized data begins with a run-length encoding, where:

    • x is the non-zero, quantized AC coefficient.
    • RUNLENGTH is the number of zeroes that came before this non-zero AC coefficient.
    • SIZE is the number of bits required to represent x .
    • AMPLITUDE is the bit-representation of x .

    The run-length encoding works by examining each non-zero AC coefficient x and determining how many zeroes precede it (counting from the previous non-zero AC coefficient). With this information, two symbols are created:

    Symbol 1: (RUNLENGTH, SIZE)
    Symbol 2: (AMPLITUDE)

    Both RUNLENGTH and SIZE are packed into the same byte, meaning that each only contains four bits of information. The higher bits deal with the number of zeroes, while the lower bits denote the number of bits necessary to encode the value of x .

    This has the immediate implication of Symbol 1 only being able to store information regarding the first 15 zeroes preceding the non-zero AC coefficient. However, JPEG defines two special Huffman code words. One is for ending the sequence prematurely when the remaining coefficients are zero (called "End-of-Block" or "EOB"), and another for when the run of zeroes goes beyond 15 before reaching a non-zero AC coefficient. In such a case where 16 zeroes are encountered before a given non-zero AC coefficient, Symbol 1 is encoded as (15, 0)(0).

    The overall process continues until "EOB" – denoted by (0, 0) – is reached.

    With this in mind, the sequence from earlier becomes:

    (The first value in the matrix, −26, is the DC coefficient; it is not encoded the same way. See above.)

    From here, frequency calculations are made based on occurrences of the coefficients. In our example block, most of the quantized coefficients are small numbers that are not preceded immediately by a zero coefficient. These more-frequent cases will be represented by shorter code words.
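
    A hedged sketch of the zigzag traversal and the (RUNLENGTH, SIZE)(AMPLITUDE) symbol generation described above (Huffman coding of the symbols, and the special bit pattern used for negative amplitudes, are deliberately left out):

        import java.util.ArrayList;
        import java.util.List;

        // Sketch of the zigzag scan and AC run-length symbol generation for one quantized 8x8 block.
        // The DC coefficient (index 0) is differenced against the previous block's DC and is not handled here.
        public class AcRunLength {
            // Linear indices (row * 8 + col) of an 8x8 block in zigzag order.
            static int[] zigzagIndices() {
                int[] order = new int[64];
                int k = 0;
                for (int s = 0; s <= 14; s++) {             // s = row + col, the diagonal number
                    for (int i = 0; i <= s; i++) {
                        int row = (s % 2 == 1) ? i : s - i; // alternate the traversal direction per diagonal
                        int col = s - row;
                        if (row < 8 && col < 8) order[k++] = row * 8 + col;
                    }
                }
                return order;
            }

            // Each symbol is {RUNLENGTH, SIZE, amplitude}; (15,0) is ZRL, (0,0) is EOB.
            static List<int[]> acSymbols(int[][] block) {
                int[] zz = zigzagIndices();
                List<int[]> symbols = new ArrayList<>();
                int run = 0;
                for (int k = 1; k < 64; k++) {              // k = 0 is the DC coefficient, skipped here
                    int coeff = block[zz[k] / 8][zz[k] % 8];
                    if (coeff == 0) { run++; continue; }
                    while (run > 15) { symbols.add(new int[] { 15, 0, 0 }); run -= 16; } // ZRL
                    int size = 32 - Integer.numberOfLeadingZeros(Math.abs(coeff));       // bits needed for |coeff|
                    symbols.add(new int[] { run, size, coeff });
                    run = 0;
                }
                if (run > 0) symbols.add(new int[] { 0, 0, 0 });                          // trailing zeros -> EOB
                return symbols;
            }

            public static void main(String[] args) {
                int[][] block = new int[8][8];
                block[0][0] = -26; block[0][1] = -3; block[2][0] = -3;  // a few entries from the example above
                for (int[] s : acSymbols(block)) System.out.printf("(%d,%d)(%d)%n", s[0], s[1], s[2]);
            }
        }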

    Compression ratio and artifacts

    The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Ten to one compression usually results in an image that cannot be distinguished by eye from the original. A compression ratio of 100:1 is usually possible, but will look distinctly artifacted compared to the original. The appropriate level of compression depends on the use to which the image will be put.

    Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images, which may take the form of noise around contrasting edges (especially curves and corners), or "blocky" images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around sharp corners between contrasting colors (text is a good example, as it contains many such corners). The analogous artifacts in MPEG video are referred to as mosquito noise, as the resulting "edge busyness" and spurious dots, which change over time, resemble mosquitoes swarming around the object. [56] [57]

    These artifacts can be reduced by choosing a lower level of compression; they may be completely avoided by saving an image using a lossless file format, though this will result in a larger file size. The images created with ray-tracing programs have noticeable blocky shapes on the terrain. Certain low-intensity compression artifacts might be acceptable when simply viewing the images, but can be emphasized if the image is subsequently processed, usually resulting in unacceptable quality. Consider the example below, demonstrating the effect of lossy compression on an edge detection processing step.

    [Figure: an original image and a version processed by a Canny edge detector, compared under lossless and lossy compression.]

    Some programs allow the user to vary the amount by which individual blocks are compressed. Stronger compression is applied to areas of the image that show fewer artifacts. This way it is possible to manually reduce JPEG file size with less loss of quality.

    Since the quantization stage always results in a loss of information, the JPEG standard is always a lossy compression codec. (Information is lost both in quantizing and in rounding of the floating-point numbers.) Even if the quantization matrix is a matrix of ones, information will still be lost in the rounding step.

    Decoding

    Decoding to display the image consists of doing all the above in reverse.

    Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in)

    and taking the entry-for-entry product with the quantization matrix from above results in

    which closely resembles the original DCT coefficient matrix for the top-left portion.

    The next step is to take the two-dimensional inverse DCT (a 2D type-III DCT), which is given by:

    f(x,y) = (1/4) Σ_{u=0..7} Σ_{v=0..7} α(u) α(v) F(u,v) cos[(2x+1)uπ/16] cos[(2y+1)vπ/16]

    where α(k) = 1/√2 for k = 0 and α(k) = 1 otherwise, F(u,v) is the reconstructed (dequantized) DCT coefficient, and f(x,y) is the reconstructed sample value at pixel (x, y).

    Rounding the output to integer values (since the original had integer values) results in an image with values (still shifted down by 128)


    This paper is about the estimation of calibration parameters of images to be used in Structure from Motion (SfM) pipelines and 3D reconstruction from image feature correspondences. It addresses the estimation of calibration parameters when they are not available, so that additional images may be included in the 3D reconstruction and so that the initial model may be closer to the true geometry of the scene. The approach is to take advantage of the known calibration information of some of the images in order to estimate the calibration of uncalibrated views; calibration information is therefore extended to images where visual features of the same objects are detected. The approach is based on the standard fundamental matrix and on extended versions of the fundamental matrix that embed the radial distortion model, named radial fundamental matrices. It is shown that the distortion model may be extracted from radial fundamental matrices, along with the standard fundamental matrix, and that the focal length may subsequently be estimated from it. By integrating a few of these methods, the number of images that can be used in a large-scale 3D reconstruction may be augmented and a better geometric model may be reconstructed. With this approach, the initial values of the parameters and the reconstructed geometry are close to the true solution, so that an optimization step may converge without getting stuck in local minima.

    José Henrique Brito is Assistant Professor at the Technology School of the Polytechnic Institute of Cávado and Ave. He graduated in Electrical Engineering and Computers at Instituto Superior Técnico in Lisbon, Portugal in 1999, earned a Masters degree in Computer Graphics and Virtual Environments from the University of Minho, Portugal in 2009, and a Ph.D., also from the University of Minho, in 2014. The research focus of his Ph.D. thesis was computer vision, namely multiple view geometry and camera calibration. His more recent work and research is in machine learning and object recognition.


    Camera orientation, calibration and inverse perspective with uncertainties: A Bayesian method applied to area estimation from diverse photographs

    Large collections of images have become readily available through modern digital catalogs, from sources as diverse as historical photographs, aerial surveys, or user-contributed pictures. Exploiting the quantitative information present in such wide-ranging collections can greatly benefit studies that follow the evolution of landscape features over decades, such as measuring areas of glaciers to study their shrinking under climate change. However, many available images were taken with low-quality lenses and unknown camera parameters. Useful quantitative data may still be extracted, but it becomes important to both account for imperfect optics, and estimate the uncertainty of the derived quantities. In this paper, we present a method to address both these goals, and apply it to the estimation of the area of a landscape feature traced as a polygon on the image of interest. The technique is based on a Bayesian formulation of the camera calibration problem. First, the probability density function (PDF) of the unknown camera parameters is determined for the image, based on matches between 2D (image) and 3D (world) points together with any available prior information. In a second step, the posterior distribution of the feature area of interest is derived from the PDF of camera parameters. In this step, we also model systematic errors arising in the polygon tracing process, as well as uncertainties in the digital elevation model. The resulting area PDF therefore accounts for most sources of uncertainty. We present validation experiments, and show that the model produces accurate and consistent results. We also demonstrate that in some cases, accounting for optical lens distortions is crucial for accurate area determination with consumer-grade lenses. The technique can be applied to many other types of quantitative features to be extracted from photographs when careful error estimation is important.


    Structure of a Photoshop-style APP13 segment

    Adobe's Photoshop program, a de facto standard for image manipulation, uses the APP13 segment for storing non-graphic information, such as layers, paths, IPTC data and more. The unit for this kind of information is called a "resource data block" (because these blocks hold data that was stored in the Macintosh's resource fork in early versions of Photoshop). The content of an APP13 segment is formed by an identifier string (usually "Photoshop 3.0" followed by a null byte, but 'Adobe_Photoshop2.5:', used by earlier versions, is also accepted; in that case some additional undocumented bytes are read (resolution info?) and saved in a root 'Resolution' record), followed by a sequence of resource data blocks. A resource block has the following structure:

    type (4-byte signature) | ID (2 bytes) | name (Pascal string, padded to an even length) | size (4 bytes) | data (padded to an even length)

    (A Pascal string is made up of a single byte giving the string length, followed by the string itself, padded to make the size even including the length byte; since the string length is explicit, there is no need of a terminating null character.) The signature (type) is usually '8BIM', but Photoshop used '8BPS' up to version 3.0, and some rogue program (Adobe PhotoDeluxe?) is using 'PHUT' ("PHotoshop User Tags"?) for path information (ID=7d0-bb7). Valid Image Resource IDs are listed in the Photoshop-style tags' list section. In general a resource block contains only a few bytes, but there is an important block, the IPTC block, which can be quite large; the structure of this block is analysed in more detail in the IPTC data block section.
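
    A hedged sketch of a reader for this layout (class and method names are mine; the input is assumed to be the APP13 payload that follows the identifier string, and a real parser needs proper error handling):

        import java.nio.ByteBuffer;
        import java.nio.charset.StandardCharsets;

        // Iterate over the resource data blocks of an APP13 payload, per the structure described above:
        // 4-byte signature, 2-byte resource ID, Pascal-string name padded to even length,
        // 4-byte data size, data padded to even length.
        public class App13ResourceBlocks {
            public static void list(byte[] payload) {
                ByteBuffer buf = ByteBuffer.wrap(payload);   // big-endian by default, as required here
                while (buf.remaining() >= 12) {
                    byte[] sig = new byte[4];
                    buf.get(sig);                            // usually "8BIM"
                    int id = buf.getShort() & 0xFFFF;        // resource ID, e.g. 0x0404 for the IPTC block
                    int nameLen = buf.get() & 0xFF;          // Pascal string: length byte ...
                    byte[] name = new byte[nameLen];
                    buf.get(name);                           // ... then the characters
                    if ((nameLen + 1) % 2 != 0) buf.get();   // pad the name to an even total length
                    long size = buf.getInt() & 0xFFFFFFFFL;  // data size
                    System.out.printf("%s id=0x%04X name='%s' size=%d%n",
                            new String(sig, StandardCharsets.US_ASCII), id,
                            new String(name, StandardCharsets.US_ASCII), size);
                    int skip = (int) (size + (size & 1));    // data is padded to an even length
                    buf.position(buf.position() + skip);
                }
            }
        }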

    The reference document for the Photoshop file format is:

    Another interesting source of information is:

    Structure of an IPTC data block

    An IPTC/NAA resource data block of a Photoshop-style APP13 segment embeds an IPTC stream conforming to the standard defined by the International Press and Telecommunications Council (IPTC) and the Newspaper Association of America (NAA) for exchanging interoperability information related to various news objects. The data part of the resource block, the IPTC stream, is simply a sequence of units called datasets; no preamble nor count is present. Each dataset consists of a unique tag header and a data field (the list of valid tags [dataset numbers] can be found in the section about IPTC data). A standard tag header is used when the data field size is less than 32768 bytes; otherwise, an extended tag header is used. The datasets do not need to show up in numerical order according to their tag. The structure of a dataset is:

    marker (1 byte, always 0x1C) | record number (1 byte) | dataset number (1 byte) | data field length (2 bytes for a standard dataset, or an extended length field)

    So, standard datasets have a 5-byte tag header; the last two bytes in the header contain the data field length, the most significant bit being always 0. For extended datasets instead, these two bytes contain the length of the (following) data field length, the most significant bit being always 1. The value of the most significant bit thus distinguishes "standard" from "extended". In digital photographs, I assume that the datasets which are actually used (a subset of the standard) are always standard; therefore, we likely do not have the IPTC block spanning more than one APP13 segment. The record types defined by the IPTC-NAA standard are the following (but the "pseudo"-standard by Adobe for APP13 IPTC data is restricted to the first application record, 2:xx, and sometimes to the envelope record, 1:xx, I believe, because everything else can be accommodated more simply by other JPEG segments):
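
    A matching hedged sketch for walking the standard datasets of an embedded IPTC stream (names are mine; extended datasets are simply not handled):

        import java.nio.ByteBuffer;

        // Walk the standard (non-extended) datasets of an IPTC stream: 0x1C marker, record number,
        // dataset number, 2-byte big-endian length, data.
        public class IptcDatasets {
            public static void list(byte[] iptc) {
                ByteBuffer buf = ByteBuffer.wrap(iptc);
                while (buf.remaining() >= 5) {
                    int marker = buf.get() & 0xFF;
                    if (marker != 0x1C) break;                    // not a dataset tag marker
                    int record = buf.get() & 0xFF;                // e.g. 2 = application record
                    int dataset = buf.get() & 0xFF;               // e.g. 2:120 = Caption/Abstract
                    int length = buf.getShort() & 0xFFFF;
                    if ((length & 0x8000) != 0) break;            // extended dataset: not handled in this sketch
                    byte[] data = new byte[length];
                    buf.get(data);
                    System.out.printf("%d:%d (%d bytes)%n", record, dataset, length);
                }
            }
        }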

    The reference document for the IPTC standard is:


    JFIF spec

    The JFIF spec describes 3 fields in the JPEG header:

    1. At byte offset 0x0d the "Units" field. This field has a 1 byte length. It can hold 3 values. The values describe "Units of the X and Y densities".
      • 0 , means: "no units, X and Y describe the pixel aspect ratios".
      • 1 , means: "X and Y are dots per inch"
      • 2 , means: "X and Y are dots per cm".
    2. At byte offset 0x0e the "XDensity" field. This field has 2 bytes length. It describes the "Horizontal pixel density".

    3. At byte offset 0x10 the "YDensity" field. This field has 2 bytes length. It describes the "Vertical pixel density".
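
    A hedged sketch of reading those three fields, assuming (as the offsets above do) that the JFIF APP0 segment immediately follows the SOI marker at the very start of the file:

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;

        // Read the Units/XDensity/YDensity fields at file offsets 0x0D, 0x0E and 0x10,
        // which assumes SOI + APP0/JFIF at the start of the file (a real parser should walk the markers).
        public class JfifDensity {
            public static void main(String[] args) throws IOException {
                try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                    byte[] header = new byte[0x12];
                    in.readFully(header);
                    int units = header[0x0D] & 0xFF;  // 0 = aspect ratio only, 1 = dots per inch, 2 = dots per cm
                    int xDensity = ((header[0x0E] & 0xFF) << 8) | (header[0x0F] & 0xFF);
                    int yDensity = ((header[0x10] & 0xFF) << 8) | (header[0x11] & 0xFF);
                    System.out.printf("units=%d, Xdensity=%d, Ydensity=%d%n", units, xDensity, yDensity);
                    if (units == 0) System.out.println("pixel aspect ratio (x:y) = " + xDensity + ":" + yDensity);
                }
            }
        }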


    1 Answer

    This was slightly too long for a comment, sorry. :-)

    Here's some sample code; it should be properly commented to help you understand what it does. Had fun writing it.

    What it does is create and display 6 images (of random noise). There will be 3 different images, each created in two different ways: 4 will use a banded model and 2 an interleaved model. For the banded models, it shows how to create them using both single and multidimensional arrays.
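
    (The original listing isn't reproduced here; the following is a much-reduced, hedged sketch of the same idea, building one banded and one interleaved raster filled with random noise and wrapping each in a BufferedImage.)

        import java.awt.Point;
        import java.awt.Transparency;
        import java.awt.color.ColorSpace;
        import java.awt.image.BufferedImage;
        import java.awt.image.ComponentColorModel;
        import java.awt.image.DataBuffer;
        import java.awt.image.Raster;
        import java.awt.image.WritableRaster;
        import java.util.Random;

        public class RasterModels {
            // Wrap a raster in a BufferedImage using a component color model compatible with both layouts.
            static BufferedImage wrap(WritableRaster raster) {
                ComponentColorModel cm = new ComponentColorModel(
                        ColorSpace.getInstance(ColorSpace.CS_sRGB),
                        false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
                return new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
            }

            public static void main(String[] args) {
                int w = 256, h = 256, bands = 3;
                Random rng = new Random(42);

                WritableRaster banded = Raster.createBandedRaster(DataBuffer.TYPE_BYTE, w, h, bands, new Point(0, 0));
                WritableRaster interleaved = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, w, h, bands, new Point(0, 0));

                // Fill both rasters with the same random noise, one sample at a time.
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                        for (int b = 0; b < bands; b++) {
                            int v = rng.nextInt(256);
                            banded.setSample(x, y, b, v);
                            interleaved.setSample(x, y, b, v);
                        }

                System.out.println("banded raster:      " + wrap(banded).getRaster());
                System.out.println("interleaved raster: " + wrap(interleaved).getRaster());
            }
        }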

    Here's what the test output looks like:

    PS: Bonus points to anyone who can explain why banded rasters 1 & 2 and 3 & 4 respectively are not equal or of the same type. I think they should be, as interleaved rasters 1 & 2 are equal.


    3 Answers

    You are probably doing it wrong. :-)

    That code should create an image with an IntegerInterleavedRaster and a DataBufferInt (it does for me on OS X, using all my installed Java 6 and 7 JREs). Try printing the raster and the data buffer right after creation. I'm sure it will print what you originally expected.

    Speculative answer below:

    You haven't posted the relevant parts of your code, but I would guess based on the comments that you are a) creating an empty image and assigning it to image , then b) reading another image using ImageIO and assigning the result to image , replacing the previous value of image . The image now has a raster and data buffer of type byte. You then try to cast to int type, and boom.

    Now, it is possible to make ImageIO read into a pre-created BufferedImage , or just specify the type. But this is plugin- and source-specific (i.e. there's no guarantee it will support TYPE_INT_RGB ). To query the reader for the image layouts it supports, do something like:
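
    (A hedged sketch of that query, and of using the result as the destination type; adjust for your own input source and error handling:)

        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import java.util.Iterator;
        import javax.imageio.ImageIO;
        import javax.imageio.ImageReadParam;
        import javax.imageio.ImageReader;
        import javax.imageio.ImageTypeSpecifier;
        import javax.imageio.stream.ImageInputStream;

        public class ReaderTypesDemo {
            public static void main(String[] args) throws IOException {
                try (ImageInputStream input = ImageIO.createImageInputStream(new File(args[0]))) {
                    ImageReader reader = ImageIO.getImageReaders(input).next();
                    reader.setInput(input);

                    // List the layouts the reader can produce for the first image, and pick one.
                    Iterator<ImageTypeSpecifier> types = reader.getImageTypes(0);
                    ImageTypeSpecifier chosen = null;
                    while (types.hasNext()) {
                        ImageTypeSpecifier type = types.next();
                        System.out.println("supported layout: " + type.getColorModel() + " / " + type.getSampleModel());
                        if (chosen == null) chosen = type;    // e.g. the reader's preferred type
                    }

                    ImageReadParam param = reader.getDefaultReadParam();
                    param.setDestinationType(chosen);         // or param.setDestination(preCreatedImage)
                    BufferedImage image = reader.read(0, param);
                    System.out.println("decoded into: " + image.getRaster());
                }
            }
        }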

    Casting does what it has always done, but I don't think that is your problem here. :-)


    1 Answer

    Ok, so you're really close. I can't test this right now, but basically what needs to be changed is:

    1) You need to apply the same scale to both X and Y, so you need to pick the smaller one (try the bigger one if that doesn't work).

    2) The result will be a bitmap where at least one side is 512px large; the other one will be smaller. So you need to add padding to fit that side to 512px (equally left and right / top and bottom for centering). In order to do so, you need to create a new bitmap of the desired size:

    3) And lastly, depending on which side of the resizedBitmap is 512px, you need to draw resizedBitmap at the correct position in outputImage. A sketch covering all three steps follows below.
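
    Something along these lines (a hedged sketch; resizedBitmap and outputImage follow the names used above, and 512 is the target size from the question):

        import android.graphics.Bitmap;
        import android.graphics.Canvas;

        public class BitmapPad {
            static Bitmap scaleAndPad(Bitmap source) {
                final int target = 512;

                // 1) same scale for both axes: pick the smaller factor so the whole image fits
                float scale = Math.min((float) target / source.getWidth(), (float) target / source.getHeight());
                int scaledW = Math.round(source.getWidth() * scale);
                int scaledH = Math.round(source.getHeight() * scale);
                Bitmap resizedBitmap = Bitmap.createScaledBitmap(source, scaledW, scaledH, true);

                // 2) a new bitmap of the desired final size (the padding area stays transparent)
                Bitmap outputImage = Bitmap.createBitmap(target, target, Bitmap.Config.ARGB_8888);

                // 3) draw the resized bitmap centered, so the padding is split equally on both sides
                Canvas canvas = new Canvas(outputImage);
                canvas.drawBitmap(resizedBitmap, (target - scaledW) / 2f, (target - scaledH) / 2f, null);
                return outputImage;
            }
        }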

