
If you're trying to show linear ("raw space") RG (50, 50), then when you show (100, 0) and (0, 100) in adjacent pixels, exactly 50 × 2 units of light are distributed over each pair of pixels, so your eyes will see (50, 50) if the pixels are small or far away enough.
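A minimal sketch of that averaging claim (the intensity units and pattern are illustrative, not from the thread): photons sum linearly, so the spatial mean of a dither pattern expressed in linear light lands exactly on the target.

```python
# Working directly in linear light intensity units (0..100), per the comment.
target = 50
pattern = [100, 0] * 8            # alternating dither pattern approximating 50
mean = sum(pattern) / len(pattern)
print(mean)                       # 50.0: the eye's spatial average hits the target
```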

Another way to put this: dithering works because photons blend physically (the eye's lens is not sharp enough to resolve the individual pixels) before any perceptual mechanism in the brain comes into play.



You are considering the case where dithering is done independently for each color channel. Dithering can also be done across color channels, which can be useful for displays with more than three primary colors. Even in those cases, I have found dithering to work better in physical (linear) space, not in perceptual space. I am trying to understand why.

Further, the question still remains: why does the spatial mixing of photons you describe work better when the individual pixels are imperceptible, and yet we need these non-linear color spaces when dealing with larger areas?

It goes without saying that the intensity for 50 need not be the midpoint of the intensities for 0 and 100, given the gamma curve and the display's actual mapping from pixel value to intensity.
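To make that concrete, here is a sketch assuming a simple power-law gamma of 2.2 (real sRGB has an extra linear segment, so the numbers differ slightly):

```python
GAMMA = 2.2  # assumed power-law transfer curve; illustrative only

def intensity(code):
    """Encoded value in [0, 100] -> linear light intensity in [0, 1]."""
    return (code / 100) ** GAMMA

mid = (intensity(0) + intensity(100)) / 2   # 0.5 in linear light
print(intensity(50))                        # ~0.218: well below the midpoint
print(100 * mid ** (1 / GAMMA))             # ~72.97: the encoded value that IS the midpoint
```

So a (100, 0)/(0, 100) dither pattern is perceived not as encoded 50 but as roughly encoded 73, which is why dithering against encoded values without linearizing first goes wrong.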


Dithering is best done linearly with respect to light intensity.
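A sketch of what "dithering in linear light" can mean in practice: a 1-D error-diffusion dither that decodes to linear intensity before quantizing, and carries the quantization error in linear units. The gamma value and 1-bit output are my assumptions for illustration.

```python
GAMMA = 2.2  # assumed power-law transfer curve

def dither_row(codes):
    """Quantize encoded values (0..255) to 0 or 255, diffusing error in linear light."""
    linear = [(c / 255) ** GAMMA for c in codes]   # decode to light intensity first
    out, err = [], 0.0
    for v in linear:
        v += err
        q = 1.0 if v >= 0.5 else 0.0               # nearest of the two output levels
        err = v - q                                # carry the *linear* error forward
        out.append(int(q * 255))
    return out

row = dither_row([128] * 16)
# Encoded 128 is ~0.22 in linear light, so only ~22% of the output pixels turn on;
# a naive dither on the encoded values would turn on ~50% and look far too bright.
print(sum(1 for p in row if p == 255), "of", len(row), "pixels on")
```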

But if you zoom in enough, any smooth curve looks linear.



