JPEG

A photo of a flower compressed with successively more lossy compression ratios from left to right.
Filename extensions: .jpg, .jpeg, .jpe; .jif, .jfif, .jfi (containers; the .jfi extension is not directly supported in Windows XP)
Internet media type: image/jpeg
Type code: JPEG
Uniform Type Identifier: public.jpeg
Magic number: ff d8
Developed by: Joint Photographic Experts Group
Type of format: lossy image format

In computing, JPEG (pronounced /ˈdʒeɪpɛɡ/, JAY-peg) is a commonly used method of compression for photographic images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.

JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished, and are simply called JPEG.

The MIME media type for JPEG is image/jpeg (defined in RFC 1341).

The JPEG standard

The name "JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the standard. The group was organized in 1986, issuing a standard in 1992, which was approved in 1994 as ISO 10918-1.

The JPEG standard specifies both the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, and the file format used to contain that stream.

Typical usage

The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where the bandwidth used by an image is important, JPEG is very popular. JPEG/Exif is also the most common format saved by digital cameras.

On the other hand, JPEG is not as well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels cause noticeable artifacts. Such images are better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format.

JPEG is also not well suited to files that will undergo multiple edits, as some image quality will usually be lost each time the image is decompressed and recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed – see digital generation loss for details. To avoid this, an image that is being modified or may be modified in the future can be saved in a lossless format such as PNG, and a copy exported as JPEG for distribution.

As JPEG is a lossy compression method, which removes information from the image, it should not be used in astronomical or medical imaging or for other purposes where exact reproduction of the data is required. Lossless formats such as PNG should be used instead.

JPEG compression

A chart showing the relative quality of various JPEG encoding settings, also comparing a normal JPEG save with Photoshop's "save for web" option.

The compression method is usually lossy, meaning that some original image information is lost and cannot be restored (possibly affecting image quality). There are variations on the standard baseline JPEG that are lossless; however, these are not widely supported.

There is also an interlaced "Progressive JPEG" format, in which data is compressed in multiple passes of progressively higher detail. This is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, progressive JPEGs are not as widely supported, and even some software which does support them (such as some versions of Internet Explorer) only displays the image once it has been completely downloaded.

There are also many medical imaging systems that create and process 12-bit JPEG images. The 12-bit JPEG format has been part of the JPEG specification for some time, but again, this format is not as widely supported.

Lossless editing

A number of alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image size is a multiple of one MCU (Minimum Coded Unit) block (usually 16 pixels in both directions, for 4:2:0 chroma subsampling). Utilities that implement this include jpegtran, with its user interface Jpegcrop, and the JPG_TRANSFORM plugin to IrfanView.

Blocks can be rotated in 90 degree increments, flipped in the horizontal, vertical and diagonal axes and moved about in the image. Not all blocks from the original image need to be used in the modified one.

The top and left of a JPEG image must lie on a block boundary, but the bottom and right need not do so. This limits the possible lossless crop operations, and also what flips and rotates can be performed on an image whose edges do not lie on a block boundary for all channels.

When using lossless cropping, if the bottom or right side of the crop region is not on a block boundary then the rest of the data from the partially used blocks will still be present in the cropped file and can be recovered relatively easily by anyone with a hex editor and an understanding of the format.

It is also possible to transform between baseline and progressive formats without any loss of quality, since the only difference is the order in which the coefficients are placed in the file.

JPEG files

The file format is known as 'JPEG Interchange Format' (JIF), as specified in Annex B of the standard. However, this "pure" file format is rarely used, primarily because of the difficulty of programming encoders and decoders that fully implement all aspects of the standard and because of certain shortcomings of the standard:

  • Color space definition
  • Component sub-sampling registration
  • Pixel aspect ratio definition

Several additional standards have evolved to address these issues. The first of these, released in 1992, was JPEG File Interchange Format (or JFIF), followed in recent years by Exchangeable image file format (Exif) and ICC color profiles.

There is some confusion between the original 'JPEG Interchange Format' (JIF) and the similarly titled 'JPEG File Interchange Format' (JFIF). In some ways JFIF is a cut-down version of the JIF standard in that it specifies certain constraints (such as a standard color space), while in other ways it is an extension of JIF due to its added Application Segment header. The documentation for the original JFIF standard states:[1]

JPEG File Interchange Format is a minimal file format which enables JPEG bitstreams to be exchanged between a wide variety of platforms and applications. This minimal format does not include any of the advanced features found in the TIFF JPEG specification or any application specific file format. Nor should it, for the only purpose of this simplified format is to allow the exchange of JPEG compressed images.

Image files that employ JPEG compression are commonly called "JPEG files". Most image capture devices (such as digital cameras) and most image editing software programs that write to a "JPEG file" are actually creating a file in the JFIF and/or Exif format.[2]

Strictly speaking, the JFIF and Exif standards are incompatible because they each specify that their header appears first. In practice, most JPEG files in Exif format contain a small JFIF header that precedes the Exif header. This allows older readers to correctly handle the older format JFIF header, while newer readers also decode the following Exif header.

JPEG file extensions

The most common filename extensions for files employing JPEG compression are .jpg and .jpeg, though .jpe, .jfif and .jif are also used. It is also possible for JPEG data to be embedded in other file types; TIFF-encoded files, for example, often embed a JPEG image as a thumbnail of the main image.

Color profile

Many JPEG files embed an ICC color profile (color space). Commonly used color profiles include sRGB and Adobe RGB. Because these color spaces use a non-linear transformation, the dynamic range of an 8-bit JPEG file is about 11 stops. However, many applications are not able to deal with JPEG color profiles and simply ignore them.

Syntax and structure

A JPEG image consists of a sequence of segments, each beginning with a marker. Each marker begins with a 0xFF byte followed by a byte indicating what kind of marker it is. Some markers consist of just those two bytes; others are followed by two bytes indicating the length of the marker-specific payload data that follows. (The length includes the two bytes for the length, but not the two bytes for the marker.) Some markers are followed by entropy-coded data; the length of such a marker does not include the entropy-coded data. Note that consecutive 0xFF bytes are used as fill bytes for padding purposes (see JPEG specification section B.1.2.3 for details).

Within the entropy-coded data, after any 0xFF byte, a 0x00 byte is inserted by the encoder before the next byte, so that there does not appear to be a marker where none is intended, preventing framing errors. Decoders must skip this 0x00 byte. This technique, called byte stuffing, is only applied to the entropy-coded data, not to marker payload data.
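As a concrete illustration, here is a minimal Python sketch (an illustrative example, not code from any particular codec) of the decoder-side removal of stuffed bytes:

    def unstuff(entropy_data: bytes) -> bytes:
        """Remove byte stuffing: in entropy-coded data, every 0xFF byte is
        followed by a stuffed 0x00, which the decoder must drop."""
        out = bytearray()
        i = 0
        while i < len(entropy_data):
            out.append(entropy_data[i])
            if (entropy_data[i] == 0xFF and i + 1 < len(entropy_data)
                    and entropy_data[i + 1] == 0x00):
                i += 1  # skip the stuffed 0x00
            i += 1
        return bytes(out)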

Common JPEG markers[3]

  • SOI (0xFFD8, no payload) – Start Of Image.
  • SOF0 (0xFFC0, variable size) – Start Of Frame (Baseline DCT): indicates that this is a baseline DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0).
  • SOF2 (0xFFC2, variable size) – Start Of Frame (Progressive DCT): indicates that this is a progressive DCT-based JPEG, and specifies the width, height, number of components, and component subsampling (e.g., 4:2:0).
  • DHT (0xFFC4, variable size) – Define Huffman Table(s): specifies one or more Huffman tables.
  • DQT (0xFFDB, variable size) – Define Quantization Table(s): specifies one or more quantization tables.
  • DRI (0xFFDD, 2-byte payload) – Define Restart Interval: specifies the interval between RSTn markers, in macroblocks. This marker carries a two-byte length field like any other variable-size segment, so it can be parsed in the same way.
  • SOS (0xFFDA, variable size) – Start Of Scan: begins a top-to-bottom scan of the image. Baseline DCT JPEG images generally have a single scan; progressive DCT JPEG images usually contain multiple scans. This marker specifies which slice of data the scan will contain, and is immediately followed by entropy-coded data.
  • RSTn (0xFFD0 through 0xFFD7, no payload) – Restart: inserted every r macroblocks, where r is the restart interval set by a DRI marker; not used if there was no DRI marker. The low three bits of the marker code cycle from 0 to 7.
  • APPn (0xFFEn, variable size) – Application-specific: for example, an Exif JPEG file uses an APP1 marker to store metadata, laid out in a structure based closely on TIFF.
  • COM (0xFFFE, variable size) – Comment: contains a text comment.
  • EOI (0xFFD9, no payload) – End Of Image.

There are other Start Of Frame markers that introduce other kinds of JPEG.

Since several vendors might use the same APPn marker type, application-specific markers often begin with a standard or vendor name (e.g., "Exif" or "Adobe") or some other identifying string.

At a restart marker, block-to-block predictor variables are reset, and the bitstream is synchronized to a byte boundary. Restart markers provide means for recovery after bitstream error, such as transmission over an unreliable network or file corruption. Since the runs of macroblocks between restart markers may be independently decoded, these runs may be decoded in parallel.
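To make the segment structure concrete, here is a minimal Python sketch of a marker walker (illustrative only: it assumes a well-formed file and stops at SOS rather than parsing the entropy-coded data that follows):

    def list_segments(data: bytes):
        """Walk JPEG segments up to Start Of Scan, yielding (marker, payload).
        A sketch: assumes a well-formed file; entropy-coded data after SOS
        is not parsed here."""
        i = 0
        while i + 1 < len(data):
            assert data[i] == 0xFF
            while data[i + 1] == 0xFF:   # consecutive 0xFF bytes are fill bytes
                i += 1
            marker = data[i + 1]
            i += 2
            # SOI, EOI and RSTn markers stand alone, with no length or payload.
            if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
                yield marker, b""
                if marker == 0xD9:       # EOI: end of image
                    return
                continue
            length = int.from_bytes(data[i:i + 2], "big")  # counts its own 2 bytes
            yield marker, data[i + 2:i + length]
            i += length
            if marker == 0xDA:           # SOS: entropy-coded data follows
                return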

JPEG codec example

Although a JPEG file can be encoded in various ways, most commonly it is done with JFIF encoding. The encoding process consists of several steps:

  1. The representation of the colors in the image is converted from RGB to YCbCr, consisting of one luma component (Y), representing brightness, and two chroma components, (Cb and Cr), representing color. This step is sometimes skipped.
  2. The resolution of the chroma data is reduced, usually by a factor of 2. This reflects the fact that the eye is less sensitive to fine color details than to fine brightness details.
  3. The image is split into blocks of 8×8 pixels, and for each block, each of the Y, Cb, and Cr data undergoes a discrete cosine transform (DCT). A DCT is similar to a Fourier transform in the sense that it produces a kind of spatial frequency spectrum.
  4. The amplitudes of the frequency components are quantized. Human vision is much more sensitive to small variations in color or brightness over large areas than to the strength of high-frequency brightness variations. Therefore, the magnitudes of the high-frequency components are stored with a lower accuracy than the low-frequency components. The quality setting of the encoder (for example 50 or 95 on a scale of 0–100 in the Independent JPEG Group's library[4]) affects to what extent the resolution of each frequency component is reduced. If an excessively low quality setting is used, the high-frequency components are discarded altogether.
  5. The resulting data for all 8×8 blocks is further compressed with a lossless algorithm, a variant of Huffman encoding.

The decoding process reverses these steps. In the remainder of this section, the encoding and decoding processes are described in more detail.

Encoding

Many of the options in the JPEG standard are not commonly used, and as mentioned above, most image software uses the simpler JFIF format when creating a JPEG file, which among other things specifies the encoding method. Here is a brief description of one of the more common methods of encoding when applied to an input that has 24 bits per pixel (eight each of red, green, and blue). This particular option is a lossy data compression method.

Color space transformation

First, the image should be converted from RGB into a different color space called YCbCr. It has three components, Y, Cb, and Cr: the Y component represents the brightness of a pixel, while the Cb and Cr components represent the chrominance (split into blue and red components). This is the same color space used by digital color television and digital video, including video DVDs; it is similar to the way color is represented in analog PAL video and MAC, but not in analog NTSC, which uses the YIQ color space. The YCbCr color space conversion allows greater compression without a significant effect on perceptual image quality (or greater perceptual image quality for the same compression). The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel, more closely matching the human visual system.

This conversion to YCbCr is specified in the JFIF standard, and should be performed for the resulting JPEG file to have maximum compatibility. However, some JPEG implementations in "highest quality" mode do not apply this step and instead keep the color information in the RGB color model, where the image is stored in separate channels for red, green and blue luminance. This results in less efficient compression, and would not likely be used if file size were an issue.
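For illustration, the conversion can be sketched in Python with NumPy using the coefficients given in the JFIF specification (the function name and array layout are our own):

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Full-range RGB -> YCbCr conversion as specified by JFIF.
        `rgb` is an (..., 3) float array with values in [0, 255]."""
        m = np.array([[ 0.299,     0.587,     0.114   ],
                      [-0.168736, -0.331264,  0.5     ],
                      [ 0.5,      -0.418688, -0.081312]])
        ycc = rgb @ m.T
        ycc[..., 1:] += 128.0   # center the chroma components on 128
        return ycc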

Downsampling

Due to the densities of color- and brightness-sensitive receptors in the human eye, humans can see considerably more fine detail in the brightness of an image (the Y component) than in the color of an image (the Cb and Cr components). Using this knowledge, encoders can be designed to compress images more efficiently.

The transformation into the YCbCr color model enables the next step, which is to reduce the spatial resolution of the Cb and Cr components (called "downsampling" or "chroma subsampling"). The ratios at which the downsampling can be done on JPEG are 4:4:4 (no downsampling), 4:2:2 (reduce by factor of 2 in horizontal direction), and most commonly 4:2:0 (reduce by factor of 2 in horizontal and vertical directions). For the rest of the compression process, Y, Cb and Cr are processed separately and in a very similar manner.
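A minimal sketch of 4:2:0 downsampling, assuming a NumPy chroma channel with even dimensions (the helper name is our own):

    import numpy as np

    def subsample_420(chroma):
        """4:2:0 subsampling: average each 2x2 block of a chroma channel.
        Assumes the channel's height and width are even."""
        h, w = chroma.shape
        return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))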

Block splitting

After subsampling, each channel must be split into 8×8 blocks of pixels. Depending on chroma subsampling, this yields Minimum Coded Unit (MCU) blocks of size 8×8 (4:4:4 – no subsampling), 16×8 (4:2:2), or most commonly 16×16 (4:2:0).

If the data for a channel does not represent an integer number of blocks then the encoder must fill the remaining area of the incomplete blocks with some form of dummy data. Filling the edge pixels with a fixed color (typically black) creates ringing artifacts along the visible part of the border; repeating the edge pixels is a common technique that reduces the visible border, but it can still create artifacts.
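A sketch of the edge-replication approach using NumPy's padding support (the helper is illustrative, not from any codec):

    import numpy as np

    def pad_to_blocks(channel, block=8):
        """Pad a channel to a multiple of the block size by repeating
        edge pixels, the common technique mentioned above."""
        h, w = channel.shape
        return np.pad(channel,
                      ((0, -h % block), (0, -w % block)),
                      mode="edge")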

Discrete cosine transform

The 8×8 sub-image shown in 8-bit grayscale

Next, each component (Y, Cb, Cr) of each 8×8 block is converted to a frequency-domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT).

As an example, one such 8×8 8-bit subimage might be:


\begin{bmatrix}
 52 & 55 & 61 & 66 & 70 & 61 & 64 & 73 \\
 63 & 59 & 55 & 90 & 109 & 85 & 69 & 72 \\
 62 & 59 & 68 & 113 & 144 & 104 & 66 & 73 \\
 63 & 58 & 71 & 122 & 154 & 106 & 70 & 69 \\
 67 & 61 & 68 & 104 & 126 & 88 & 68 & 70 \\
 79 & 65 & 60 & 70 & 77 & 68 & 58 & 75 \\
 85 & 71 & 64 & 59 & 55 & 61 & 65 & 83 \\
 87 & 79 & 69 & 68 & 65 & 76 & 78 & 94
\end{bmatrix}

Before computing the DCT of the subimage, its gray values are shifted from a positive range to one centered on zero. For an 8-bit image, each pixel has 256 possible values: [0, 255]. To center on zero, half the number of possible values, or 128, is subtracted from each pixel:

\frac{2^{\text{bits}}}{2} = \frac{2^8}{2} = 2^7 = 128

Subtracting 128 from each pixel value yields pixel values in [−128, 127]:


\begin{array}{c}
x \\
\longrightarrow \\
\begin{bmatrix}
 -76 & -73 & -67 & -62 & -58 & -67 & -64 & -55 \\
 -65 & -69 & -73 & -38 & -19 & -43 & -59 & -56 \\
 -66 & -69 & -60 & -15 & 16 & -24 & -62 & -55 \\
 -65 & -70 & -57 & -6 & 26 & -22 & -58 & -59 \\
 -61 & -67 & -60 & -24 & -2 & -40 & -60 & -58 \\
 -49 & -63 & -68 & -58 & -51 & -60 & -70 & -53 \\
 -43 & -57 & -64 & -69 & -73 & -67 & -63 & -45 \\
 -41 & -49 & -59 & -60 & -63 & -52 & -50 & -34
\end{bmatrix}
\end{array}
\Bigg\downarrow y

The next step is to take the two-dimensional DCT, which is given by:

The DCT transforms the 64 pixel values into a linear combination of 64 basis patterns, indexed horizontally by u and vertically by v:

G_{u,v} = \alpha(u)\alpha(v) \sum_{x=0}^{7} \sum_{y=0}^{7} g_{x,y} \cos\left[\frac{\pi}{8}\left(x+\frac{1}{2}\right)u\right] \cos\left[\frac{\pi}{8}\left(y+\frac{1}{2}\right)v\right]


where

  • \ u is the horizontal spatial frequency, for the integers \ 0 \leq u < 8.
  • \ v is the vertical spatial frequency, for the integers \ 0 \leq v < 8.
  • \alpha(n) = \begin{cases} \sqrt{\frac{1}{8}}, & \mbox{if } n=0 \\ \sqrt{\frac{2}{8}}, & \mbox{otherwise} \end{cases} is a normalizing function
  • \ g_{x,y} is the pixel value at coordinates \ (x,y)
  • \ G_{u,v} is the DCT coefficient at coordinates \ (u,v)

If we perform this transformation on our matrix above, and then round to the nearest integer, we get


\begin{array}{c}
u \\
\longrightarrow \\
\begin{bmatrix}
 -415 & -30 & -61 & 27 & 56 & -20 & -2 & 0 \\
 4 & -22 & -61 & 10 & 13 & -7 & -9 & 5 \\
 -47 & 7 & 77 & -25 & -29 & 10 & 5 & -6 \\
 -49 & 12 & 34 & -15 & -10 & 6 & 2 & 2 \\
 12 & -7 & -13 & -4 & -2 & 2 & -3 & 3 \\
 -8 & 3 & 2 & -6 & -2 & 1 & 4 & 2 \\
 -1 & 0 & 0 & -2 & -1 & -3 & 4 & -1 \\
 0 & 0 & -1 & -4 & -1 & 0 & 1 & 2
\end{bmatrix}
\end{array}
\Bigg\downarrow v

Note the rather large value of the top-left corner. This is the DC coefficient. The remaining 63 coefficients are called the AC coefficients. The advantage of the DCT is its tendency to aggregate most of the signal in one corner of the result, as may be seen above. The quantization step to follow accentuates this effect while simultaneously reducing the overall size of the DCT coefficients, resulting in a signal that is easy to compress efficiently in the entropy stage.
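The transform above can be sketched directly in Python with NumPy (a deliberately naive implementation of the formula; production codecs use fast factored DCTs instead):

    import numpy as np

    def dct2(g):
        """Direct 2-D type-II DCT of an 8x8 block, following the formula
        above; for illustration only."""
        N = 8
        alpha = lambda n: np.sqrt(1.0 / N) if n == 0 else np.sqrt(2.0 / N)
        G = np.zeros((N, N))
        for v in range(N):
            for u in range(N):
                s = sum(g[y, x]
                        * np.cos(np.pi / N * (x + 0.5) * u)
                        * np.cos(np.pi / N * (y + 0.5) * v)
                        for x in range(N) for y in range(N))
                G[v, u] = alpha(u) * alpha(v) * s
        return G

    # Applying this to the level-shifted sub-image above and rounding
    # reproduces the coefficient matrix shown, e.g. G[0, 0] (the DC
    # coefficient) comes out at about -415.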

The DCT temporarily increases the bit depth of the image, since the DCT coefficients of an 8-bit/component image can take 11 or more bits (depending on the fidelity of the DCT calculation) to store. This may force the codec to temporarily use 16-bit bins to hold these coefficients, doubling the size of the image representation at this point; these values are typically reduced back to 8-bit values by the quantization step. The temporary increase in size at this stage is not a performance concern for most JPEG implementations, because typically only a very small part of the image is stored in full DCT form at any given time during the encoding or decoding process.

Quantization

The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high frequency brightness variation. This allows one to greatly reduce the amount of information in the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This is the main lossy operation in the whole process. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to store.

A typical quantization matrix, as specified in the original JPEG Standard[5], is as follows:


\begin{bmatrix}
 16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\
 12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\
 14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\
 14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\
 18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\
 24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\
 49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\
 72 & 92 & 95 & 98 & 112 & 100 & 103 & 99
\end{bmatrix}

The quantized DCT coefficients are computed with

B_{j,k} = \mathrm{round} \left( \frac{G_{j,k}}{Q_{j,k}} \right) \mbox{ for } j=0,1,2,\cdots,N_1-1; k=0,1,2,\cdots,N_2-1

where G is the matrix of unquantized DCT coefficients, Q is the quantization matrix above, and B is the matrix of quantized DCT coefficients. (Note that this is element-wise division, in no way matrix multiplication.)

Using this quantization matrix with the DCT coefficient matrix from above results in:


\begin{bmatrix}
 -26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\
 0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\
 -3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\
 -4 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\
 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}

For example, using −415 (the DC coefficient) and rounding to the nearest integer


\mathrm{round}
\left(
 \frac{-415}{16}
\right)
=
\mathrm{round}
\left(
 -25.9375
\right)
=
-26
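The whole quantization step can be sketched in a few lines of Python with NumPy, reusing the table above (the helper names are our own):

    import numpy as np

    # The luminance quantization matrix shown above, from the JPEG standard.
    Q = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def quantize(G, Q):
        """Element-wise division by the quantization table, then rounding."""
        return np.rint(G / Q).astype(int)

    # quantize(G, Q)[0, 0] == round(-415 / 16) == -26, as in the worked example.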

Entropy coding

Zigzag ordering of JPEG image components

Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together, inserting length-coding zeros, and then using Huffman coding on what is left.

The JPEG standard also allows, but does not require, the use of arithmetic coding, which is mathematically superior to Huffman coding. However, this feature is rarely used as it is covered by patents and because it is much slower to encode and decode compared to Huffman coding. Arithmetic coding typically makes files about 5% smaller.

The zigzag sequence for the above quantized coefficients are shown below. (The format shown is just for ease of understanding/viewing.)

−26
−3 0
−3 −2 −6
2 −4 1 −4
1 1 5 1 2
−1 1 −1 2 0 0
0 0 0 −1 −1 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0
0 0 0 0
0 0 0
0 0
0

If the i-th block is represented by Bi and positions within each block are represented by (p,q) where p = 0, 1, ..., 7 and q = 0, 1, ..., 7, then any coefficient in the DCT image can be represented as Bi(p,q). Thus, in the above scheme, the order of encoding pixels (for the i-th block) is Bi(0,0), Bi(0,1), Bi(1,0), Bi(2,0), Bi(1,1), Bi(0,2), Bi(0,3), Bi(1,2) and so on.
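A short Python sketch that generates this traversal order (the helper is illustrative):

    def zigzag_order(n=8):
        """(row, col) index pairs of an n x n block in zigzag order."""
        order = []
        for s in range(2 * n - 1):            # s indexes the anti-diagonals
            diag = [(p, s - p) for p in range(n) if 0 <= s - p < n]
            order.extend(diag if s % 2 else reversed(diag))
        return order

    # Linearizing the quantized block B above with
    #     [B[p][q] for p, q in zigzag_order()]
    # produces the sequence shown, beginning -26, -3, 0, -3, -2, -6, ...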

Baseline sequential JPEG encoding and decoding processes

This encoding mode is called baseline sequential encoding. Baseline JPEG also supports progressive encoding. While sequential encoding encodes the coefficients of a single block at a time (in a zigzag manner), progressive encoding encodes similarly-positioned coefficients of all blocks in one pass, followed by the next-positioned coefficients of all blocks, and so on. So, if the image is divided into N 8×8 blocks {B0, B1, B2, ..., BN−1}, then progressive encoding first encodes Bi(0,0) for all blocks, i.e., for all i = 0, 1, 2, ..., N−1. This is followed by the Bi(0,1) coefficient of all blocks, then the Bi(1,0) coefficient of all blocks, then the Bi(2,0) coefficient, and so on. Once all similarly-positioned coefficients have been encoded, the next position to be encoded is the one occurring next in the zigzag traversal as indicated in the figure above. Progressive JPEG encoding usually gives better compression than baseline sequential JPEG, because each "scan" or "pass" (which covers similarly-positioned coefficients) can use a different Huffman table (see below) tailored to its frequencies, though the difference is not large.

In the rest of the article, it is assumed that the coefficient pattern generated is due to sequential mode.

In order to encode the above generated coefficient pattern, JPEG uses Huffman encoding. JPEG has a special Huffman code word for ending the sequence prematurely when the remaining coefficients are zero.

Using this special code word: "EOB", the sequence becomes:

−26
−3 0
−3 −2 −6
2 −4 1 −4
1 1 5 1 2
−1 1 −1 2 0 0
0 0 0 −1 −1 EOB

JPEG's other code words represent combinations of (a) the number of significant bits of a coefficient, including sign, and (b) the number of consecutive zero coefficients that precede it. (Once you know how many bits to expect, it takes 1 bit to represent the choices {-1, +1}, 2 bits to represent the choices {-3, -2, +2, +3}, and so forth.) In our example block, most of the quantized coefficients are small numbers that are not preceded immediately by a zero coefficient. These more-frequent cases will be represented by shorter code words.
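A sketch of this symbol layer in Python (illustrative only; the actual Huffman code assignment, and the encoding of the coefficient value bits themselves, are omitted):

    def ac_symbols(ac):
        """Group the 63 zigzag AC coefficients of one block into the
        (zero run, size) symbols that Huffman coding then operates on."""
        last = max((i for i, c in enumerate(ac) if c != 0), default=-1)
        symbols, run = [], 0
        for c in ac[:last + 1]:
            if c == 0:
                run += 1
                if run == 16:
                    symbols.append((15, 0))   # ZRL: a run of 16 zeros
                    run = 0
            else:
                symbols.append((run, int(abs(c)).bit_length()))
                run = 0
        # EOB (appended unconditionally here; a real encoder omits it
        # when the final coefficient of the block is nonzero).
        symbols.append((0, 0))
        return symbols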

The JPEG standard provides general-purpose Huffman tables; encoders may also choose to generate Huffman tables optimized for the actual frequency distributions in images being encoded.

Compression ratio and artifacts

This image shows the pixels that are different between a non-compressed image and the same image JPEG compressed with a quality setting of 50. Darker means a larger difference. Note especially the changes occurring near sharp edges and having a block-like shape.
The compressed 8×8-squares are visible in the scaled up picture, together with other visual artifacts of the lossy compression.

The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase. Ten to one compression usually results in an image that cannot be distinguished by eye from the original. 100 to one compression is usually possible, but will look distinctly artifacted compared to the original. The appropriate level of compression depends on the use to which the image will be put.

Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images. These are due to the quantization step of the JPEG algorithm. They are especially noticeable around sharp corners between contrasting colours (text is a good example, as it contains many such corners). They can be reduced by choosing a lower level of compression; they may be eliminated by saving an image using a lossless file format, though for photographic images this will usually result in a larger file size. Images created with ray-tracing programs, for example, show noticeable blocky artifacts on smooth terrain gradients. Compression artifacts may be acceptable when the images are used for visualization purposes, but subsequent processing of such images usually results in unacceptable artifacts.

An original image and the same image processed by a Canny edge detector, each shown under lossless and lossy compression.

Some programs allow the user to vary the amount by which individual blocks are compressed. Stronger compression is applied to areas of the image that show fewer artifacts. This way it is possible to manually reduce JPEG file size with less loss of quality.

Since the quantization stage always results in a loss of information, the JPEG standard is always a lossy compression codec. (Information is lost both in quantizing and in rounding the floating-point numbers.) Even if the quantization matrix is a matrix of ones, information will still be lost in the rounding step.

Decoding

Decoding to display the image consists of doing all the above in reverse.

Taking the DCT coefficient matrix (after adding the difference of the DC coefficient back in)


\begin{bmatrix}
 -26 & -3 & -6 & 2 & 2 & -1 & 0 & 0 \\
 0 & -2 & -4 & 1 & 1 & 0 & 0 & 0 \\
 -3 & 1 & 5 & -1 & -1 & 0 & 0 & 0 \\
 -4 & 1 & 2 & -1 & 0 & 0 & 0 & 0 \\
 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}

and taking the entry-for-entry product with the quantization matrix from above results in


\begin{bmatrix}
 -416 & -33 & -60 & 32 & 48 & -40 & 0 & 0 \\
 0 & -24 & -56 & 19 & 26 & 0 & 0 & 0 \\
 -42 & 13 & 80 & -24 & -40 & 0 & 0 & 0 \\
 -56 & 17 & 44 & -29 & 0 & 0 & 0 & 0 \\
 18 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}

which closely resembles the original DCT coefficient matrix for the top-left portion. Taking the inverse DCT (type-III DCT) results in an image with values (still shifted down by 128)



\begin{bmatrix}
 -68 & -65 & -73 & -70 & -58 & -67 & -70 & -48 \\
 -70 & -72 & -72 & -45 & -20 & -40 & -65 & -57 \\
 -68 & -76 & -66 & -15 & 22 & -12 & -58 & -61 \\
 -62 & -72 & -60 & -6 & 28 & -12 & -59 & -56 \\
 -59 & -66 & -63 & -28 & -8 & -42 & -69 & -52 \\
 -60 & -60 & -67 & -60 & -50 & -68 & -75 & -50 \\
 -54 & -46 & -61 & -74 & -65 & -64 & -63 & -45 \\
 -45 & -32 & -51 & -72 & -58 & -45 & -45 & -39
\end{bmatrix}

and adding 128 to each entry


\begin{bmatrix}
 60 & 63 & 55 & 58 & 70 & 61 & 58 & 80 \\
 58 & 56 & 56 & 83 & 108 & 88 & 63 & 71 \\
 60 & 52 & 62 & 113 & 150 & 116 & 70 & 67 \\
 66 & 56 & 68 & 122 & 156 & 116 & 69 & 72 \\
 69 & 62 & 65 & 100 & 120 & 86 & 59 & 76 \\
 68 & 68 & 61 & 68 & 78 & 60 & 53 & 78 \\
 74 & 82 & 67 & 54 & 63 & 64 & 65 & 83 \\
 83 & 96 & 77 & 56 & 70 & 83 & 83 & 89
\end{bmatrix}

This is the decompressed subimage. It can be compared to the original subimage (see also the images to the right) by taking the difference (original − decompressed), which results in the following error values:


\begin{bmatrix}
 -8 & -8 & 6 & 8 & 0 & 0 & 6 & -7 \\
 5 & 3 & -1 & 7 & 1 & -3 & 6 & 1 \\
 2 & 7 & 6 & 0 & -6 & -12 & -4 & 6 \\
 -3 & 2 & 3 & 0 & -2 & -10 & 1 & -3 \\
 -2 & -1 & 3 & 4 & 6 & 2 & 9 & -6 \\
 11 & -3 & -1 & 2 & -1 & 8 & 5 & -3 \\
 11 & -11 & -3 & 5 & -8 & -3 & 0 & 0 \\
 4 & -17 & -8 & 12 & -5 & -7 & -5 & 5
\end{bmatrix}

with an average absolute error of about 5 per pixel (i.e., \frac{1}{64} \sum_{x=1}^8 \sum_{y=1}^8 |e(x,y)| = 4.8125).

The error is most noticeable in the bottom-left corner where the bottom-left pixel becomes darker than the pixel to its immediate right.
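The decode path for a single block can be sketched in Python with NumPy, mirroring the encoder-side example above (a naive inverse DCT for illustration; B and Q are the quantized coefficients and quantization table from earlier):

    import numpy as np

    def idct2(G):
        """Direct 2-D type-III (inverse) DCT of an 8x8 coefficient block."""
        N = 8
        alpha = lambda n: np.sqrt(1.0 / N) if n == 0 else np.sqrt(2.0 / N)
        out = np.zeros((N, N))
        for y in range(N):
            for x in range(N):
                out[y, x] = sum(alpha(u) * alpha(v) * G[v, u]
                                * np.cos(np.pi / N * (x + 0.5) * u)
                                * np.cos(np.pi / N * (y + 0.5) * v)
                                for u in range(N) for v in range(N))
        return out

    # Decode one block: dequantize (element-wise), inverse DCT, then undo
    # the level shift and clamp to the 8-bit range.
    # pixels = np.clip(np.rint(idct2(B * Q)) + 128, 0, 255)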

Required precision

The JPEG encoding does not fix the precision needed for the output compressed image. In contrast, the JPEG standard (as well as the derived MPEG standards) has very strict precision requirements for decoding, covering all parts of the decoding process (variable-length decoding, inverse DCT, dequantization, renormalization of outputs); the output from the reference algorithm must not exceed:

  • a maximum 1 bit of difference for each pixel component
  • low mean square error over each 8×8-pixel block
  • very low mean error over each 8×8-pixel block
  • very low mean square error over the whole image
  • extremely low mean error over the whole image

These assertions are tested on a large set of randomized input images to handle the worst cases. See the IEEE 1180-1990 standard for reference. This constrains the implementation of decoders, and it is extremely critical because some encoding processes (notably those used for encoding sequences of images, like MPEG) need to be able to construct, on the encoder side, a reference decoded image. In order to support 8-bit precision per pixel component in the output, dequantization and inverse DCT transforms are typically implemented with at least 14-bit precision in optimized decoders.

Effects of JPEG compression

JPEG compression artifacts blend well into photographs with detailed, non-uniform textures, allowing higher compression ratios. Notice how a higher compression ratio first affects the high-frequency textures in the upper-left corner of the image, and how the contrasting lines become fuzzier. A very high compression ratio severely affects the quality of the image, although the overall colors and image form remain recognizable. However, the precision of colors suffers less (to a human eye) than the precision of contours (based on luminance). This justifies transforming images into a color model that separates the luminance from the chromatic information before subsampling the chromatic planes (which may also use lower-quality quantization), so that more information bits are preserved for the luminance plane.

Sample photographs

For reference, the uncompressed 24-bit RGB bitmap image below (73,242 pixels) would require 219,726 bytes (excluding all other information headers). The file sizes indicated below include the internal JPEG information headers and some metadata. For full-quality images (Q=100), about 8.25 bits per color pixel are required. On grayscale images, a minimum of 6.5 bits per pixel is enough (comparable Q=100 color information requires about 25% more encoded bits). The full-quality image below (Q=100) is encoded at 9 bits per color pixel; the medium-quality image (Q=25) uses 1 bit per color pixel. For most applications, the quality factor should not go below 0.75 bits per pixel (Q=12.5), as demonstrated by the low-quality image. The image at lowest quality uses only 0.13 bits per pixel and displays very poor color; it would only be usable after subsampling to a much smaller display size.

NOTE: These images are not IEEE / CCIR / EBU test images, and the encoder settings are not specified or available.

  • Full quality (Q = 100): 83,261 bytes, compression ratio 2.6:1 – extremely minor artifacts
  • Average quality (Q = 50): 15,138 bytes, 15:1 – initial signs of subimage artifacts
  • Medium quality (Q = 25): 9,553 bytes, 23:1 – stronger artifacts; loss of high-resolution information
  • Low quality (Q = 10): 4,787 bytes, 46:1 – severe high-frequency loss; artifacts on subimage boundaries ("macroblocking") are obvious
  • Lowest quality (Q = 1): 1,523 bytes, 144:1 – extreme loss of color and detail; the leaves are nearly unrecognizable

The medium quality photo uses only 4.3% of the storage space but has little noticeable loss of detail or visible artifacts. However, once a certain threshold of compression is passed, compressed images show increasingly visible defects. See the article on rate distortion theory for a mathematical explanation of this threshold effect.

Lossless further compression

Between 2004 and 2008, new research emerged on ways to further compress the data contained in JPEG images without modifying the represented image.[6][7][8][9] This has applications in scenarios where the original image is only available in JPEG format and its size needs to be reduced for archival or transmission. Standard general-purpose compression tools cannot significantly compress JPEG files.

Typically, such schemes take advantage of improvements to the naive scheme for coding DCT coefficients, which fails to take into account:

  • Correlations between magnitudes of adjacent coefficients in the same block;
  • Correlations between magnitudes of the same coefficient in adjacent blocks;
  • Correlations between magnitudes of the same coefficient/block in different channels;
  • The DC coefficients, taken together, resemble a downscaled version of the original image multiplied by a scaling factor; well-known schemes for lossless coding of continuous-tone images can be applied, achieving somewhat better compression than the Huffman-coded DPCM used in JPEG.

Some standard but rarely-used options already exist in JPEG to improve the efficiency of coding DCT coefficients: the arithmetic coding option, and the progressive coding option (which produces lower bitrates because values for each coefficient are coded independently, and each coefficient has a significantly different distribution). Modern methods have improved on these techniques by reordering coefficients to group coefficients of larger magnitude together;[6] using adjacent coefficients and blocks to predict new coefficient values;[8] dividing blocks or coefficients up among a small number of independently coded models based on their statistics and adjacent values;[7][8] and most recently, by decoding blocks, predicting subsequent blocks in the spatial domain, and then encoding these to generate predictions for DCT coefficients.[9]

Typically, such methods can compress existing JPEG files between 15 and 25 percent, and for JPEGs compressed at low-quality settings, can produce improvements of up to 65%.[8][9]

A freely-available tool called packJPG[10] is based on the 2007 paper "Improved Redundancy Reduction for JPEG Files." There are also at least two companies selling proprietary tools with similar capabilities, Infima's JPACK[11] and Smith Micro Software's StuffIt,[12] both of which claim to have pending patents on their respective technologies.[13][14]

Derived formats

  • JPEG Stereoscopic (JPS, extension .jps) is a JPEG-based format for stereoscopic images. It contains two images of identical size side by side, one for each eye.[15][16]
  • JPEG Multi-Picture Format (MPO, extension .mpo) is a JPEG-based format for multi-view images. It contains two or more images.[17]

Patent issues

In 2002 Forgent Networks asserted that it owned and would enforce patent rights on the JPEG technology, arising from a patent that had been filed on October 27, 1986, and granted on October 6, 1987 (U.S. Patent 4,698,672). The announcement created a furor reminiscent of Unisys' attempts to assert its rights over the GIF image compression standard.

The JPEG committee investigated the patent claims in 2002 and were of the opinion that they were invalidated by prior art.[18] Others also concluded that Forgent did not have a patent that covered JPEG.[19] Nevertheless, between 2002 and 2004 Forgent was able to obtain about US$105 million by licensing their patent to some 30 companies. In April 2004, Forgent sued 31 other companies to enforce further license payments. In July of the same year, a consortium of 21 large computer companies filed a countersuit, with the goal of invalidating the patent. In addition, Microsoft launched a separate lawsuit against Forgent in April 2005.[20] In February 2006, the United States Patent and Trademark Office agreed to re-examine Forgent's JPEG patent at the request of the Public Patent Foundation.[21] On May 26, 2006 the USPTO found the patent invalid based on prior art. The USPTO also found that Forgent knew about the prior art, and did not tell the Patent Office, making any appeal to reinstate the patent highly unlikely to succeed.[22]

Forgent also possesses a similar patent granted by the European Patent Office in 1994, though it is unclear how enforceable it is.[23]

As of October 27, 2006, the U.S. patent's 20-year term appears to have expired, and in November 2006, Forgent agreed to abandon enforcement of patent claims against use of the JPEG standard.[24]

The JPEG committee has as one of its explicit goals that their standards (in particular their baseline methods) be implementable without payment of license fees, and they have secured appropriate license rights for their upcoming JPEG 2000 standard from over 20 large organizations.

Beginning in August 2007, another company, Global Patent Holdings, LLC, claimed that its patent (U.S. Patent 5,253,341) is infringed by the downloading of JPEG images on either a website or through e-mail. If not invalidated, this patent could apply to any website that displays JPEG images. The patent emerged in July 2007 following a seven-year reexamination by the U.S. Patent and Trademark Office in which all of the original claims of the patent were revoked, but an additional claim (claim 17) was confirmed.[25]

In its first two lawsuits following the reexamination, both filed in Chicago, Illinois, Global Patent Holdings sued the Green Bay Packers, CDW, Motorola, Apple, Orbitz, Officemax, Caterpillar, Kraft and Peapod as defendants. A third lawsuit was filed on December 5, 2007 in Southern Florida against ADT Security Services, AutoNation, Florida Crystals Corp., HearUSA, MovieTickets.com, Ocwen Financial Corp. and Tire Kingdom, and a fourth lawsuit on January 8, 2008 in Southern Florida against the Boca Raton Resort & Club. A fifth lawsuit was filed against Global Patent Holdings in Nevada. That lawsuit was filed by Zappos.com, Inc., which was allegedly threatened by Global Patent Holdings, and seeks a judicial declaration that the '341 patent is invalid and not infringed.

The patent owner has also used the patent to sue or threaten outspoken critics of broad software patents, including Gregory Aharonian[26] and the anonymous operator of a website blog known as the "Patent Troll Tracker."[27] On December 21, 2007, patent lawyer Vernon Francissen of Chicago asked the U.S. Patent and Trademark Office to reexamine the sole remaining claim of the '341 patent on the basis of new prior art.[28] On March 5, 2008, the U.S. Patent and Trademark Office agreed to reexamine the '341 patent, finding that the new prior art raised substantial new questions regarding the patent's validity.[29]

In light of the reexamination, the accused infringers in four of the five pending lawsuits have filed motions to suspend (stay) their cases until completion of the U.S. Patent and Trademark Office's review of the '341 patent. On April 23, 2008, a judge presiding over the two lawsuits in Chicago, Illinois granted the motions in those cases.[30] On July 22, 2008, the Patent Office issued the first "Office Action" of the second reexamination, finding the claim invalid based on nineteen separate grounds.[31]

Standards

  • JPEG (lossy and lossless): ITU-T T.81, ISO/IEC IS 10918-1
  • JPEG (extensions): ITU-T T.84
  • JPEG-LS (lossless, improved): ITU-T T.87, ISO/IEC IS 14495-1
  • JBIG (black and white pictures): ITU-T T.82, ISO/IEC IS 11544-1
  • JPEG 2000: ITU-T T.800, ISO/IEC IS 15444-1
  • JPEG 2000 (extensions): ITU-T T.801
  • JPEG XR (formerly called HD Photo): ISO/IEC 29199-2

See also

  • Image compression
  • Image file formats
  • Comparison of graphics file formats
  • Windows Picture and Fax Viewer
  • Exchangeable image file format (Exif)
  • JPEG File Interchange Format (JFIF)
  • Design rule for Camera File system (DCF)
  • JPEG 2000
  • Motion JPEG
  • Graphics editing program
  • GDI+ vulnerability section of GDI article, exploitable bug in JPEG handling code of GDI+ library
  • Comparison of layout engines (graphics)
  • Generation loss
  • PGF
  • PNG
  • Lossless Image Codec FELICS
  • C-Cube, an early implementer of JPEG in chip form
  • libjpeg, the Independent JPEG Group's reference library
  • Deblocking filter (video); similar deblocking methods can be applied to JPEG
  • Lenna, the traditional standard image used to test image processing algorithms

References

  1. http://www.w3.org/Graphics/JPEG/jfif3.pdf JFIF File Format as PDF
  2. Tom Lane (1999-03-29). "JPEG image compression FAQ". http://www.faqs.org/faqs/jpeg-faq/part1/. Retrieved 2007-09-11.  (q. 14: "Why all the argument about file formats?")
  3. "ISO/IEC 10918-1 : 1993(E) p.36". http://www.digicamsoft.com/itu/itu-t81-36.html. 
  4. Thomas G. Lane. "ADVANCED FEATURES: Compression parameter selection". Using the IJG JPEG Library. http://apodeline.free.fr/DOC/libjpeg/libjpeg-3.html. 
  5. The JPEG Still Picture Compression Standard, Summary by Gregory K. Wallace (Gzipped PostScript file)
  6. I. Bauermann and E. Steinbach. Further Lossless Compression of JPEG Images. Proc. of Picture Coding Symposium (PCS 2004), San Francisco, USA, December 15–17, 2004.
  7. N. Ponomarenko, K. Egiazarian, V. Lukin and J. Astola. Additional Lossless Compression of JPEG Images, Proc. of the 4th Intl. Symposium on Image and Signal Processing and Analysis (ISPA 2005), Zagreb, Croatia, pp. 117–120, September 15–17, 2005.
  8. M. Stirner and G. Seelmann. Improved Redundancy Reduction for JPEG Files. Proc. of Picture Coding Symposium (PCS 2007), Lisbon, Portugal, November 7–9, 2007.
  9. Ichiro Matsuda, Yukio Nomoto, Kei Wakabayashi and Susumu Itoh. Lossless Re-encoding of JPEG images using block-adaptive intra prediction. Proceedings of the 16th European Signal Processing Conference (EUSIPCO 2008).
  10. "Latest Binary Releases of packJPG: V2.3a". January 3 2008. http://www.elektronik.htw-aalen.de/packjpg/. 
  11. "Reduce Your JPEG Storage and Bandwidth Cost by up to 80% While Enhancing User's Experience With Infima's JPACK(TM) Compression Solution". Infima Technologies. November 17, 2008. http://www.marketwatch.com/news/story/Reduce-Your-JPEG-Storage-Bandwidth/story.aspx?guid=%7B924ADA85-3E91-40FE-B0A1-162EC91A9467%7D. 
  12. "StuffIt Image Compression White Paper rev. 2.1". 1/5/2006. http://my.smithmicro.com/stuffitcompression/wp_stuffit_imgcomp.pdf. 
  13. "Infima Ultimate Compression". http://www.myinfima.com/images/. 
  14. "Stuffit Image". Smith Micro Software. http://my.smithmicro.com/stuffitcompression/imagecompression.html. 
  15. J. Siragusa, D. C. Swift, “General Purpose Stereoscopic Data Descriptor”, VRex, Inc., Elmsford, New York, USA, 1997.
  16. Tim Kemp, JPS files
  17. Multi-Picture Format (MPF) from JPEG images
  18. Concerning recent patent claims
  19. JPEG and JPEG2000 - Between Patent Quarrel and Change of Technology (Archive)
  20. Kawamoto, Dawn (April 22, 2005). "Graphics patent suit fires back at Microsoft". CNET News. http://news.cnet.com/2100-1025_3-5681112.html. Retrieved 2009-01-28. 
  21. "Trademark Office Re-examines Forgent JPEG Patent". Publish.com. February 3 2006. http://www.publish.com/c/a/Graphics-Tools/Trademark-Office-Reexamines-Forgent-JPEG-Patent/. Retrieved 2009-01-28. 
  22. "USPTO: Broadest Claims Forgent Asserts Against JPEG Standard Invalid". Groklaw.net. May 26 2006. http://www.groklaw.net/article.php?story=20060526105754880. Retrieved 2007-07-21. 
  23. Coding System for Reducing Redundancy
  24. "JPEG Patent Claim Surrendered". Public Patent Foundation. November 2 2006. http://www.pubpat.org/jpegsurrendered.htm. Retrieved 2006-11-03. 
  25. Ex Parte Reexamination Certificate for U.S. Patent No. 5,253,341
  26. Rozmanith: Using Software Patents to Silence Critics
  27. A Bounty of $5,000 to Name Troll Tracker: Ray Niro Wants To Know Who Is saying All Those Nasty Things About Him
  28. Hunting trolls: USPTO asked to reexamine broad image patent
  29. U.S. Patent Office - Granting Reexamination on 5,253,341 C1
  30. Judge Puts JPEG Patent On Ice
  31. JPEG Patent's Single Claim Rejected (And Smacked Down For Good Measure)
