The Ins and Outs of Interpolation: Digital Photography

Whether in Bayer demosaicing or photo enlargement, all digital images use image interpolation at some point. Interpolation happens whenever you resize or remap (distort) your image from one pixel grid to another. Image resizing is thus required whenever the total number of pixels needs to increase or decrease, whereas remapping can occur in various situations, including correcting lens distortion, adjusting perspective, and rotating an image.

Even when the identical image is resized or remapped, the results can differ dramatically depending on the interpolation algorithm. Because interpolation is merely an approximation, an image will always lose some quality each time it is performed. This tutorial aims to provide a better understanding of how the results can differ, and thereby help you minimize any image quality losses caused by interpolation.

Concept

Interpolation works by estimating values at unknown points using known data. For example, if you wanted to know the temperature at noon but only had measurements at 11 a.m. and 1 p.m., you could use linear interpolation to approximate the value:

If you had an extra measurement at 11:30 a.m., you might observe that most of the temperature rise occurred before noon, and you could use quadratic interpolation with this new data point:

The more temperature measurements you have close to midday, the more sophisticated (and ideally more accurate) your interpolation algorithm can be.
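The temperature example above can be sketched in a few lines of code. The measurement values below are hypothetical, chosen only to illustrate the difference between the two techniques: quadratic (Lagrange) interpolation through three points picks up the front-loaded temperature rise that linear interpolation misses.

```python
# Hypothetical temperature readings, for illustration only.

def linear_interpolate(x, x0, y0, x1, y1):
    """Estimate y at x from two known points (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def quadratic_interpolate(x, points):
    """Lagrange interpolation through exactly three known (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = points
    l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
    l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
    l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
    return y0 * l0 + y1 * l1 + y2 * l2

# Times in hours (11:00, 11:30, 13:00); temperatures in degrees C.
linear = linear_interpolate(12.0, 11.0, 20.0, 13.0, 24.0)  # -> 22.0
quad = quadratic_interpolate(12.0, [(11.0, 20.0), (11.5, 22.0), (13.0, 24.0)])
print(linear, quad)
```

Because half of the total rise happened in the first half hour, the quadratic estimate at noon comes out higher than the straight-line estimate.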

Image Resize Example

Image interpolation operates in two directions, attempting to approximate a pixel's colour and intensity as closely as possible from the values of surrounding pixels. The following illustrates how resizing and enlargement work:

In contrast to gradual air temperature changes like those described above, pixel values can shift far more abruptly from one area to the next. As with the temperature example, the more you know about the surrounding pixels, the better your interpolation will be. The results therefore degrade the more you stretch an image, since interpolation can never add detail to an image that isn't already there.

Image Rotation Example

Interpolation also occurs each time you rotate or distort an image. The previous example was somewhat deceptive because it is one that interpolators are particularly good at. The following example demonstrates how quickly image detail can be lost:

The 90° rotation is lossless because no pixel ever has to be repositioned onto the border between two pixels (and thereby split between them). Note how most of the detail is lost in the first rotation, while the image continues to degrade with each additional one. When feasible, avoid rotating your images; if an unleveled photo requires it, rotate no more than once.
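A minimal sketch of why 90° rotations are lossless: every pixel maps exactly onto another pixel position, so no interpolation is needed, and four successive rotations reproduce the original image bit-for-bit.

```python
def rotate90(image):
    """Rotate a 2D list of pixel values 90 degrees clockwise.

    Reversing the rows and transposing moves each pixel to an exact new
    grid position -- no pixel ever lands between two positions.
    """
    return [list(row) for row in zip(*image[::-1])]

original = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]

rotated = original
for _ in range(4):
    rotated = rotate90(rotated)

print(rotated == original)  # four 90-degree turns restore the image exactly
```

A rotation by any non-multiple of 90°, by contrast, maps pixels onto fractional positions and forces the values to be interpolated, which is where the loss occurs.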

The above results, which are based on a "bicubic" algorithm, show significant deterioration. Note the overall loss of contrast, evident in the colour becoming less intense, and the dark haloes surrounding the light blue. Depending on the interpolation algorithm and subject matter, these results could be substantially improved.

Types of Interpolation Algorithms

Interpolation algorithms fall into two broad categories: adaptive and non-adaptive. Non-adaptive methods treat all pixels equally, whereas adaptive methods change depending on what they are interpolating (sharp edges versus smooth texture).

Non-Adaptive Interpolation

Non-adaptive algorithms include nearest neighbour, bilinear, bicubic, spline, sinc, Lanczos and others. Depending on their complexity, these use anywhere from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent pixels they include, the more accurate they can become, but at the cost of much longer processing time. These algorithms can be used to both distort and resize a photo.

Adaptive Interpolation

Adaptive algorithms include many of those found in licensed software such as Qimage, Genuine Fractals, PhotoZoom Pro and others. Many of these apply a different version of their algorithm (on a pixel-by-pixel basis) when they detect an edge, aiming to minimize unsightly interpolation artefacts where they would be most apparent. Because these algorithms are primarily designed to maximize artefact-free detail in enlarged photos, some of them cannot be used to distort or rotate an image.

Nearest Neighbor Interpolation

Nearest neighbour is the most basic of all the interpolation algorithms and requires the least processing time, because it only considers one pixel: the one closest to the interpolated point. This has the effect of simply making each pixel bigger.
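The effect is easy to see in code. This is a minimal sketch of nearest neighbour enlargement: each output pixel takes the value of the single closest input pixel, so the original pixels simply become larger blocks.

```python
def nearest_neighbour_resize(image, new_w, new_h):
    """Resize a 2D list of pixel values using nearest neighbour sampling."""
    old_h, old_w = len(image), len(image[0])
    out = []
    for y in range(new_h):
        # Map each output row/column back to the closest source pixel.
        src_y = min(int(y * old_h / new_h), old_h - 1)
        row = []
        for x in range(new_w):
            src_x = min(int(x * old_w / new_w), old_w - 1)
            row.append(image[src_y][src_x])
        out.append(row)
    return out

image = [[10, 20],
         [30, 40]]
enlarged = nearest_neighbour_resize(image, 4, 4)
for row in enlarged:
    print(row)
# Each original pixel becomes a 2x2 block of identical values.
```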

Bilinear Interpolation

Bilinear interpolation considers the closest 2×2 neighbourhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these four pixels to arrive at the final interpolated value. This results in much smoother-looking images than nearest neighbour.

When the unknown pixel is equidistant from all four known pixels, the interpolated value is simply their sum divided by four.
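A minimal sketch of the weighted average within one 2×2 neighbourhood. Here `fx` and `fy` are the fractional distances (0 to 1) of the unknown pixel from the top-left known pixel; the values are illustrative.

```python
def bilinear(tl, tr, bl, br, fx, fy):
    """Weighted average of the four surrounding pixel values."""
    top = tl * (1 - fx) + tr * fx        # interpolate along the top edge
    bottom = bl * (1 - fx) + br * fx     # interpolate along the bottom edge
    return top * (1 - fy) + bottom * fy  # then blend between the two edges

# Equidistant case: the result is just the sum of the four pixels over four.
print(bilinear(10, 20, 30, 40, 0.5, 0.5))   # -> 25.0

# Closer to the top-left pixel, that pixel dominates the weighted average.
print(bilinear(10, 20, 30, 40, 0.25, 0.25))  # -> 17.5
```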

Bicubic Interpolation

Bicubic goes one step beyond bilinear by considering the closest 4×4 neighbourhood of known pixels, for a total of 16. Since these are at varying distances from the unknown pixel, closer pixels are given a higher weighting in the calculation. Bicubic produces noticeably sharper images than the previous two methods, and is arguably the best balance of processing time and output quality, which is why it is a standard in many image editing programs, printer drivers and in-camera interpolation.
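A hedged sketch of the cubic convolution weighting commonly used for bicubic interpolation (the so-called Keys kernel with a = −0.5; specific implementations vary). Bicubic applies this weighting in both directions across the 4×4 neighbourhood; it is shown here in one dimension for clarity.

```python
def cubic_weight(x, a=-0.5):
    """Weight of a sample at distance x from the interpolated point."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0  # samples more than 2 positions away contribute nothing

def cubic_interpolate(samples, t):
    """Interpolate between samples[1] and samples[2] at fraction t (0..1)."""
    distances = [t + 1, t, 1 - t, 2 - t]  # distance to each of the 4 samples
    return sum(s * cubic_weight(d) for s, d in zip(samples, distances))

# Halfway between the middle two values of a hypothetical intensity ramp:
print(cubic_interpolate([10, 20, 40, 50], 0.5))  # -> 30.0
```

Note the small negative weights just outside the central interval; these are what give bicubic its extra sharpness relative to bilinear, and also what can produce the edge haloes discussed later.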

Higher-Order Interpolation: Spline and Sinc

Many other interpolators take more surrounding pixels into consideration and are thus far more computationally intensive. These algorithms include spline and sinc, and they retain the most image information after an interpolation. They are therefore extremely useful when the image requires multiple rotations or distortions in separate steps. However, for single-step enlargements or rotations, these higher-order algorithms provide diminishing visual improvement as processing time increases.

Interpolation Artifacts To Watch Out For

All non-adaptive interpolators attempt to strike an ideal compromise between three unwanted artefacts: edge halos, blurring and aliasing.

Even the most sophisticated non-adaptive interpolators always have to increase or decrease one of the above artefacts at the expense of the other two, so at least one will be visible. Note also how the edge halo resembles the artefact created by over-sharpening with an unsharp mask, and similarly improves the impression of sharpness by increasing acutance.

Adaptive interpolators may or may not produce the above artefacts, but they can also induce non-image textures or strange-looking pixels at small scales:

On the other hand, some of these adaptive interpolator "artefacts" could also be viewed as advantages. Since the eye expects to see detail down to the smallest scales in fine-textured areas such as foliage, such patterns have been said to fool the eye from a distance (for some subject matter).

Anti-aliasing

Anti-aliasing is a technique for reducing the appearance of aliased or jagged diagonal edges, also known as “jaggies.” These give text or graphics a shabby digital look:

Anti-aliasing smooths out these jaggies and gives the impression of higher resolution and smoother edges. It works by calculating how much an ideal edge overlaps adjacent pixels. The aliased edge simply rounds each pixel up or down with no intermediate value, whereas the anti-aliased edge gives each pixel a value proportional to how much of the edge lies within it:
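The coverage idea can be sketched for the simplest possible case, a vertical ideal edge at a hypothetical position `edge_x`: each pixel's intensity is made proportional to how much of that pixel lies on the filled side of the edge, instead of rounding to fully on or fully off.

```python
def pixel_coverage(pixel_index, edge_x):
    """Anti-aliased value: fraction (0..1) of pixel [i, i+1) left of the edge."""
    overlap = edge_x - pixel_index
    return max(0.0, min(1.0, overlap))

def aliased_value(pixel_index, edge_x):
    """Aliased value: each pixel rounds to fully on or fully off."""
    return 1.0 if pixel_coverage(pixel_index, edge_x) >= 0.5 else 0.0

edge_x = 2.25  # the ideal edge lands a quarter of the way into pixel 2
print([pixel_coverage(i, edge_x) for i in range(4)])  # [1.0, 1.0, 0.25, 0.0]
print([aliased_value(i, edge_x) for i in range(4)])   # [1.0, 1.0, 0.0, 0.0]
```

The intermediate value 0.25 is exactly the information the aliased version throws away: it records where within the pixel the edge actually sits.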

A major challenge when enlarging an image is preventing the interpolator from inducing or exacerbating aliasing. Many adaptive interpolators detect the presence of edges and adjust their settings to minimize aliasing while retaining edge sharpness. Since an anti-aliased edge carries information about its position at higher resolutions, a sophisticated adaptive (edge-detecting) interpolator could at least partially reconstruct that edge when enlarging.

Note on Optical vs Digital Zoom

Many compact digital cameras offer both optical and digital zoom. Optical zoom is achieved by moving the zoom lens so that the magnification of light increases before it reaches the digital sensor. In contrast, digital zoom degrades image quality because it simply interpolates the image after it has been captured at the sensor.

Even though the photo taken with digital zoom contains the same number of pixels as the one taken with optical zoom, its detail is visibly reduced. Digital zoom should therefore be avoided almost entirely, unless it helps you visualize a distant object on your camera's LCD preview screen. On the other hand, if you regularly shoot in JPEG and plan on cropping and enlarging the image afterwards, digital zoom at least has the benefit of performing the interpolation before any compression artefacts set in. If you find yourself using digital zoom too frequently, consider investing in a teleconverter or, better yet, a lens with a longer focal length.