Subpixel rendering and image resizing

Disclaimer

I want to be clear that I’m not claiming to have invented any of this. A lot of modern graphics platforms already support some type of subpixel rendering. And there are other utilities that do LCD-optimized resampling.

I also admit that I’m not exactly a leading expert on this sort of thing, and that anything on this page could be wrong.

Is this the same as ClearType™? you may ask. I don’t think that it is. ClearType uses a slightly more sophisticated algorithm, in which the different color channels have an effect on each other. In my simplistic version of subpixel rendering, red samples never have an effect on green samples, and so on.

Is this patented? you may ask. I don’t know; you’ll have to ask a lawyer. I’ve heard that there are a lot of patents related to subpixel rendering. It’s hard to imagine how adding 1/3 to a number could be patented, but then, I’m no longer surprised by anything the patent office does.

The basics

Most personal computers (as of 2011) and other display devices use LCD screens in which each screen pixel is composed of three small “subpixels”. The subpixels are colored red, green, and blue, and in most cases, they are in that order: red on the left, green in the middle, and blue on the right.

Raster images are often stored in files such as PNG or JPEG files. These files store the red, green, and blue components of pixels, and you can generally assume that those components were meant to be physically in the same place. When a computer program displays such an image, it usually simply copies the pixels directly from the file to the screen. Even if it resizes the image first, it probably assumes that the display’s color components are all in the same place. The problem is that, for LCD screens, that assumption is not quite correct.

To illustrate this, suppose you have an image file that is 4 pixels wide and 1 pixel high. You want to display it on the screen, in an area that is just 3 pixels wide. So you need to resize it from 4×1 to 3×1.

Consider just the blue component. You might calculate the blue component of the first screen pixel by averaging the blue in the first 1.333 pixels of the image, the blue of the second screen pixel from the next 1.333 source pixels, and so on.

(This particular resizing algorithm is sometimes called pixel mixing, but the algorithm used is mostly irrelevant to the issues being discussed here.)
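To make this concrete, here is a minimal Python sketch of pixel mixing for a single channel. The function name and sample values are mine, purely for illustration:

    def pixel_mix_1d(src, dst_len):
        """Resize one channel of a pixel row by area-averaging ("pixel mixing")."""
        src_len = len(src)
        scale = src_len / dst_len            # source pixels per target pixel
        out = []
        for i in range(dst_len):
            left, right = i * scale, (i + 1) * scale
            total = 0.0
            for j in range(src_len):
                # Overlap of source pixel j (covering [j, j+1]) with the
                # window [left, right] that target pixel i draws from.
                total += max(0.0, min(j + 1, right) - max(j, left)) * src[j]
            out.append(total / scale)        # normalize by the window width
        return out

    row = [0.0, 1.0, 1.0, 0.0]               # e.g. the blue channel of a 4x1 image
    print([round(v, 3) for v in pixel_mix_1d(row, 3)])   # [0.25, 1.0, 0.25]

The middle target pixel’s window is [1.333, 2.667], so it averages the second and third source pixels equally.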

Notice how, for an R-G-B LCD display, there is a consistent bias: the blue colors are shifted to the right. Likewise, the red colors are shifted to the left. Looking just at the middle screen pixel, its blue component is an equal mix of the second and third source pixels, even though the blue subpixel that displays it sits a third of a screen pixel to the right of the pixel’s center.

But if the image viewer knows how your screen’s subpixels are arranged, it can do better than that. To calculate the blue components, it can shift each sampling window a third of a screen pixel to the right, so that each blue value comes from the part of the image that will physically sit under a blue subpixel.

(And similarly for the red components, shifted a third of a pixel to the left.) Looking just at the middle pixel, the blue component now draws about 1/6 of its value from the second source pixel, 3/4 from the third, and 1/12 from the fourth.

By calculating the red and blue components based on where the subpixels are actually physically located, you can usually create a better-looking image.

If you’re going to resize an image to display it, you should try to do the subpixel processing while you resize it, and not in a separate pass afterward. If you resize it first, you lose information that could have been used to produce a better image. It is usually very easy to modify a resizing algorithm to support this type of subpixel rendering – it’s literally as easy as adding or subtracting 1/3 at the right spot, to change the way that the source pixels are aligned with the target pixels.
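As a sketch of what that looks like in code, here is the function from above with an offset parameter added, in target-pixel units; for an R-G-B panel, blue would use +1/3 and red −1/3, with green left alone. Samples that fall off the ends are clamped to the nearest edge pixel, one of the strategies discussed in the next section:

    def pixel_mix_1d(src, dst_len, offset=0.0):
        """Pixel-mix resize of one channel; `offset` shifts where each target
        pixel samples the source, in units of target pixels."""
        src_len = len(src)
        scale = src_len / dst_len
        out = []
        for i in range(dst_len):
            left = (i + offset) * scale
            right = (i + 1 + offset) * scale
            total = 0.0
            for j in range(src_len):
                # Extend the first and last source pixels to cover any part of
                # the window that hangs off the ends ("clamp to edge").
                a = j if j > 0 else min(left, 0.0)
                b = j + 1 if j < src_len - 1 else max(right, float(src_len))
                total += max(0.0, min(b, right) - max(a, left)) * src[j]
            out.append(total / scale)
        return out

    row = [0.0, 1.0, 1.0, 0.0]
    print([round(v, 3) for v in pixel_mix_1d(row, 3)])               # green: [0.25, 1.0, 0.25]
    print([round(v, 3) for v in pixel_mix_1d(row, 3, offset=1/3)])   # blue:  [0.583, 0.917, 0.0]
    print([round(v, 3) for v in pixel_mix_1d(row, 3, offset=-1/3)])  # red:   [0.0, 0.917, 0.583]

For the middle pixel, the +1/3 window works out to exactly the 1/6, 3/4, 1/12 weighting described earlier.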

Edge effects

You may have noticed a potential difficulty regarding the edges of the image. The rightmost pixel’s blue component should be based partly on the nonexistent 5th pixel in the 4-pixel source image. And part of the leftmost source pixel is completely ignored.

This is generally not a very serious problem, and there are various ways to deal with it. For example, you could give extra weight to those source pixels that are available. Or you could pretend that any missing pixel has the same color as the nearest pixel that does exist. Or you could expand the size of the target image, and possibly make the extreme pixels partially transparent (but this can be difficult; see below).
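The “extra weight” strategy is a similarly small change to the sketch above: track how much of the window actually landed on real pixels, and divide by that instead of by the full window width:

    def pixel_mix_1d_renorm(src, dst_len, offset=0.0):
        """Like pixel_mix_1d above, but renormalizes by the weight actually
        collected, giving extra weight to the source pixels that exist."""
        src_len = len(src)
        scale = src_len / dst_len
        out = []
        for i in range(dst_len):
            left = (i + offset) * scale
            right = (i + 1 + offset) * scale
            total = weight = 0.0
            for j in range(src_len):
                w = max(0.0, min(j + 1, right) - max(j, left))
                total += w * src[j]
                weight += w
            out.append(total / weight if weight > 0.0 else 0.0)
        return out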

ImageWorsener

You can use my image processing utility, ImageWorsener, to create images that have been optimized in this way. Typically, you should use the option “-offsetrb 1/3”.
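For example, an invocation to shrink an image for an R-G-B display might look something like the following; the -width option and the file names here are illustrative, so check ImageWorsener’s documentation for the exact syntax:

    imagew -width 150 -offsetrb 1/3 input.png output.png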

Caution

Subpixel optimization should only be done on the computer on which the image is going to be displayed, or if you somehow have specific knowledge of how it is going to be displayed. Doing it inappropriately can make your images look much worse than not doing it at all. By using subpixel rendering, you’re making a lot of assumptions: that the display is an LCD, that its subpixels run in the order you expect, and that the image will be shown at exactly its intended size, with no further resizing or scaling.

Example

Here’s an example. Consider this image:

On an LCD display, if you look closely, you should be able to see faint blue and orange fringes at some of the edges of the black line. Most likely, the left edges will have a blue fringe, and the right edges will have an orange fringe. The colors are not present in the image itself; they are an artifact created by your monitor.

Now I’m going to resize the image and make it a little smaller. Resizing isn’t required to use subpixel rendering, but it makes it more effective.

[Three images: a normal resize; a resize optimized for R-G-B LCDs; a resize optimized for B-G-R LCDs]

The first image above is still a grayscale image, and should have the same color fringes as the original. In one of the other two, the fringes should be mostly gone (and in the remaining image they will be accentuated).

The improvement is not large, but there definitely is an improvement.

This optimization is not just about removing color fringes. It’s really about making an image that more faithfully represents the original. The optimized image will be smoother and more accurate. Reducing the color fringes is just one of the most easily visible effects.


Gradients

You might think this technique only affects small details, but it can sometimes have a larger effect. If we enlarge a thin line, using a resampling algorithm that does interpolation, we’ll get something like this:

The gray fringe to the left of center probably appears to be a slightly different color than the one to the right. To most people, the left fringe will look slightly bluish, and the right one slightly reddish or brownish.

With good subpixel rendering, the differences in color should disappear, and the grays should look purely gray:

[Two images: a normal resize; a resize optimized for R-G-B LCDs]


What’s going on? Consider a gradient from light to dark gray. Without subpixel rendering, each pixel has its color components set to the same intensity, so each pixel is gray, so you would expect the gradient to appear gray:

But the pixel boundaries are an arbitrary convention. If I simply change where I imagine the pixel boundaries to be, it could look like this instead:

Now, each pixel (except for one at the beginning) is clearly blue. So you would expect the gradient to appear blue.

Well, which is it? Gray or blue? The actual appearance will be sort of an average of the three possible interpretations. But the average of gray, blue, and blue is a grayish blue, so the gradient will still appear slightly blue.
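Here is a small Python sketch of that argument, using made-up gradient values: flatten a gray ramp into its physical subpixel intensities, then regroup the same row under each of the three possible boundary placements:

    ramp = [90, 87, 84, 81, 78]                 # a gray ramp, one value per pixel
    sub = [v for v in ramp for _ in range(3)]   # physical R,G,B subpixels, all equal

    # roles[k] names the physical subpixel color that lands in slot k of a group
    for shift, roles in [(0, "RGB"), (1, "GBR"), (2, "BRG")]:
        groups = [sub[i:i + 3] for i in range(shift, len(sub) - 2, 3)]
        rgb = [tuple(g[roles.index(c)] for c in "RGB") for g in groups]
        print(shift, rgb)

With the boundaries in their official place (shift 0), every group is gray; in both shifted groupings, red is always dimmer than blue, so every group reads as bluish.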

To counteract this effect, we can use subpixel rendering, which in this case makes most of the pixels slightly red, canceling the blue bias:

Color images

Although I used grayscale images in my examples, there is nothing about this that is specific to grayscale images. It works fine with full color images, though it can be harder to see the difference.

[Two images: a normal resize; a resize optimized for R-G-B LCDs]


Transparency

Transparency poses a problem when doing subpixel rendering. Most image processing uses a single “alpha channel”, which stores a transparency level for each pixel. But when dealing with images at the subpixel level, that isn’t enough. The desired transparency level of the red subpixel will often be different from that of the green subpixel, for example.

So, if you want to do this properly, you need to have at least three alpha channels: one for red, one for green, and one for blue. And that can be a problem, because no common image file format has a standard way to store multiple alpha channels. I suspect that most graphics libraries and APIs have the same limitation. Realistically, transparency cannot be done perfectly unless you’re using your own code for everything.
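To sketch what the extra channels buy you, here is ordinary “source over” compositing generalized to one alpha per channel; the function and values are illustrative, not from any particular library:

    def over_per_channel(src, src_alpha, dst):
        """Composite src over dst with a separate alpha for each of the
        R, G, B channels instead of one shared alpha value."""
        return tuple(s * a + d * (1.0 - a)
                     for s, a, d in zip(src, src_alpha, dst))

    # A dark gray pixel whose blue subpixel is only half covered, over white:
    print(over_per_channel((0.2, 0.2, 0.2), (1.0, 1.0, 0.5), (1.0, 1.0, 1.0)))
    # -> (0.2, 0.2, 0.6): the blue channel keeps half of the background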

(Actually, while multiple alpha channels are definitely necessary, I’m not entirely sure they are sufficient. I need to think about this some more.)

(Multiple alpha channels can be a good thing, anyway, because they let you render colored translucent objects more realistically. They are not often used, though, because it’s rarely worth the extra storage space and processing time. Colored translucent objects aren’t very common in the real world.)

Gamma correctness

As with most image processing, you should expect that this type of subpixel rendering works best when performed in a linear colorspace. If you don’t do that, you won’t be adjusting the colors by the right amount. All the examples on this page were created with a colorspace-aware application, so the calculations were done in a linear colorspace.
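As a sketch of why this matters, assuming the image uses standard sRGB encoding: averaging two encoded values gives a noticeably different result than converting to linear light, averaging there, and re-encoding.

    def srgb_to_linear(v):
        # standard sRGB decoding, for v in [0, 1]
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(v):
        # standard sRGB encoding, the inverse of the above
        return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

    a, b = 0.0, 1.0                            # two encoded sample values
    naive = (a + b) / 2                        # average in the encoded space
    linear = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
    print(round(naive, 3), round(linear, 3))   # 0.5 vs ~0.735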

Is 1/3 of a pixel the optimal offset?

I find that, on my monitor, an offset of 1/3 leaves very faint color fringes that are the negatives of the fringes seen with no offset. The offset that eliminates the fringes entirely is closer to 0.25.

I don’t know why this is. Is there some space between the pixels, so that the distance between the subpixels is less than 1/3? Does my monitor use some technique that partially corrects for the problem that I’m trying to correct for, causing me to overcorrect?

Again, I don’t know. But the point is that, depending on the monitor, it’s possible that 1/3 is not the optimal offset.