Imagine a red circle on a white background:
Now imagine a similar image, but with the red part being 90% transparent:
A pixel in the white area has a color of RGBA(1.00, 1.00, 1.00, 1.00). A pixel in the red area has color RGBA(1.00, 0.00, 0.00, 0.10). If you average those numbers together, you get (1.00, 0.50, 0.50, 0.55). That’s the color a typical pixel on the boundary would have if you resize the image in the straightforward way.
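The arithmetic above can be checked directly. A minimal sketch of the naive per-channel average (the pixel values are the ones from the example; the tuple layout is just an illustration, not any particular library's format):

```python
# A white pixel and a 90%-transparent red pixel, as RGBA in [0, 1].
white = (1.00, 1.00, 1.00, 1.00)
red   = (1.00, 0.00, 0.00, 0.10)

# Naive resize: average each channel independently, alpha included.
naive = tuple((w + r) / 2 for w, r in zip(white, red))
print(naive)  # (1.0, 0.5, 0.5, 0.55)
```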
But that color – a half-opaque light red – is the wrong color! You can see a faint red halo around the border, which shouldn’t be there:
Since the red is nearly transparent, it should have much less effect on the average color.
To resize it correctly, you should first convert to associated alpha by multiplying the RGB values by the alpha value. That changes the red pixels to RGBA(0.10, 0.00, 0.00, 0.10). Then average with the white pixel to get RGBA(0.55, 0.50, 0.50, 0.55). Then convert back to unassociated alpha by dividing the RGB values by the alpha value, to get RGBA(1.00, 0.91, 0.91, 0.55). That makes the border pixels much closer to white, as they should be, and the halo disappears:
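The premultiply/average/unpremultiply steps can be sketched as follows (function names are my own; the numbers match the worked example):

```python
def premultiply(r, g, b, a):
    """Convert unassociated (straight) alpha to associated (premultiplied) alpha."""
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    """Convert associated alpha back to unassociated alpha."""
    if a == 0:                        # fully transparent: color is undefined
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

white = (1.00, 1.00, 1.00, 1.00)
red   = (1.00, 0.00, 0.00, 0.10)

pw = premultiply(*white)              # (1.00, 1.00, 1.00, 1.00)
pr = premultiply(*red)                # (0.10, 0.00, 0.00, 0.10)

# Averaging is safe in associated-alpha space.
avg = tuple((x + y) / 2 for x, y in zip(pw, pr))   # (0.55, 0.50, 0.50, 0.55)

result = unpremultiply(*avg)          # approx (1.00, 0.909, 0.909, 0.55)
```

The border pixel comes out nearly white with 55% opacity, as the text describes.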
Doing the resize correctly unfortunately tends to be less efficient than doing it wrong: the premultiply and unpremultiply steps add work, and the color channels can no longer be resized independently of the alpha channel.
If your image data is already in an associated alpha format, then this problem doesn’t occur. That’s one of the advantages of associated alpha. But it has disadvantages, too: it’s harder to understand, stores less color information (nearly transparent pixels retain little color precision), and makes it possible to store invalid values like (1.00, 1.00, 1.00, 0.50) that your program will have to defend against.
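In associated alpha, no color channel may exceed the alpha value. A sketch of the kind of validation a program accepting such data might need (the function name is my own):

```python
def is_valid_associated(r, g, b, a):
    """Check that an associated-alpha pixel is well-formed:
    alpha in [0, 1] and every color channel no greater than alpha."""
    return 0.0 <= a <= 1.0 and all(0.0 <= c <= a for c in (r, g, b))

print(is_valid_associated(0.10, 0.00, 0.00, 0.10))  # True
print(is_valid_associated(1.00, 1.00, 1.00, 0.50))  # False: RGB exceed alpha
```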
With some resampling filters, you can end up with opacity values larger than 100% or smaller than 0%. See this page for more information about this issue.
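One common remedy for out-of-range opacity is simply clamping it back into [0, 1] after resampling; a minimal sketch (this is one possible fix, not the only one):

```python
def clamp_alpha(a):
    """Clamp an alpha value produced by an overshooting filter into [0, 1]."""
    return min(1.0, max(0.0, a))

print(clamp_alpha(1.03))   # 1.0
print(clamp_alpha(-0.02))  # 0.0
```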