
Not that familiar with XYB and its properties. Is there anywhere I can read more? Found some specifications, but not anything on its properties.

I think this might be a case where the requirements for image editing and image compression are different.

For image editing, especially when working with HDR images, I think it is better to just have a simple power function, since this makes fewer assumptions about the exact viewing conditions. E.g. a user might want to adjust exposure while editing an image, and if the predicted hue changes when the exposure is altered, that would be confusing (which happens if more complex non-linearities are used). When compressing final images, though, that wouldn't be an issue in the same way.



https://gitlab.com/wg1/jpeg-xl/-/blob/master/lib/jxl/opsin_p... has the numbers for sRGB to XYB.

Basically, the M1 matrix maps homogeneous linear sRGB [linearR, linearG, linearB, 1] to approximate cone responses:

(I think in this normalization 1 means 250 nits, but not completely sure at this stage of optimizations -- we changed normalizations on this recently.)

M1 = [ [0.300, 0.622, 0.078, 0.0038], [0.240, 0.682, 0.078, 0.0038], [0.243, 0.205, 0.552, 0.0038], [0, 0, 0, 1] ]

Then a cube-root non-linearity is applied (a cube in decoding), see: https://gitlab.com/wg1/jpeg-xl/-/blob/master/lib/jxl/dec_xyb...

The LMS values after the cube root are coded by this matrix M2:

M2 = [[1, -1, 0], [1, 1, 0], [0, 0, 1]]
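Putting the steps above together, a minimal Python/NumPy sketch of the forward transform might look like this. The matrix entries are the ones quoted above; the exact normalization in libjxl may differ (it has changed over time), and real decoding also has to undo the bias, so treat this as illustrative only:

```python
import numpy as np

# M1: homogeneous linear sRGB -> approximate cone responses (LMS).
# The 4th column is the bias term from the comment above.
M1 = np.array([
    [0.300, 0.622, 0.078, 0.0038],
    [0.240, 0.682, 0.078, 0.0038],
    [0.243, 0.205, 0.552, 0.0038],
])

# M2: cube-rooted LMS -> X (red-green), Y (luma), B (blue-yellow).
M2 = np.array([
    [1.0, -1.0, 0.0],
    [1.0,  1.0, 0.0],
    [0.0,  0.0, 1.0],
])

def srgb_linear_to_xyb(rgb):
    """rgb: three linear-light sRGB components in [0, 1]."""
    lms = M1 @ np.append(rgb, 1.0)   # homogeneous transform folds in the bias
    return M2 @ np.cbrt(lms)         # cube-root non-linearity, then M2
```

A neutral gray input gives equal L and M responses (the first three entries of each M1 row sum to the same value), so its X component is zero.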

In practice the Y->X and Y->B correlations are removed, so M2 looks more like this:

M2 = [[1+a, -1+a, 0], [1, 1, 0], [b, b, 1]]

After decorrelation, a is often around zero and b is around -0.5.
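The effect of the extra terms can be read off directly: the decorrelated matrix computes X' = X + a*Y and B' = B + b*Y, i.e. a fraction of luma is folded into each chroma channel. A small sketch (parameter names a, b as above):

```python
import numpy as np

def decorrelated_M2(a, b):
    """M2 with Y->X and Y->B decorrelation folded in, as in the comment."""
    return np.array([
        [1.0 + a, -1.0 + a, 0.0],
        [1.0,      1.0,     0.0],
        [b,        b,       1.0],
    ])

base = decorrelated_M2(0.0, 0.0)  # plain M2
```

With b around -0.5, roughly half the luma is subtracted from the S channel, which is what removes the Y->B correlation.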

The first dimension in this formulation is X (red-green), second Y (luma), third B (blueness-yellowness).

For quantization, the X, Y and B channels are multiplied by constants representing their psychovisual strength. The X and B channels (the chromaticity channels) are less important when quantization is low, and the X channel in particular increases in strength when more quantization is done.

The cube is beautiful in the sense that it allows scaling the intensity without further considerations, but it is quite awful in near-black psychovisual performance. That is why sRGB added a linear ramp, and why I added biasing (a homogeneous transform instead of a 3x3).


Thanks!

Regarding: “Cube is beautiful in the sense that it allows scaling the intensity without considerations, but it is quite awful in the near black psychovisual performance.”

Yeah, that is the tradeoff, and the same goes for dealing with HDR values. The idea with Oklab is to avoid having to know what luminance the eye is adapted to, by basically treating all colors as if they are within the normal color vision range. That makes it simpler and more predictable to use, but makes predictions at the extreme ends worse than they would be if knowledge of the viewing conditions were taken into account (given that you can do so accurately).

E.g. a linear ramp for near-black values would not be good if you are in a dark room, viewing only very dark values full screen on a monitor (so there isn't anything bright around to adapt to).


BTW, just in case it didn't become clear from all the proposals I made: I adore your work on Oklab. Humanity would benefit a lot if more scientists and engineers were able to think like you -- from first principles and completely out-of-the-cargo-cult-box. What you propose with Oklab is practical and a huge improvement over current practice.


Thanks a lot for the kind words!


I would consider just continuing to use the CIELAB adjusted cube-root function, with a linear part near zero. It has been used widely for 45 years and people understand it pretty well. It is plenty fast to implement (it just takes one extra conditional move or the like and one FMA).
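For reference, the CIELAB non-linearity being referred to is the standard CIE 1976 f(t): a cube root with a linear segment below t = (6/29)^3, joined so the function is continuous. A straightforward Python version:

```python
def cielab_f(t):
    """CIELAB non-linearity: cube root with a linear part near zero."""
    delta = 6.0 / 29.0
    if t > delta ** 3:            # the one extra conditional mentioned above
        return t ** (1.0 / 3.0)
    return t / (3.0 * delta ** 2) + 4.0 / 29.0  # linear ramp, one FMA
```

At the join point t = (6/29)^3 both branches evaluate to 6/29, so there is no discontinuity.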


We don't need to use something just because it is old. CIELAB is based on 2 degree samples of color. Colors work differently at smaller angles due to the different densities of receptors, particularly the larger size and lower density of S receptors. Pixels on most recent monitors are about 0.02 degrees, 100x smaller in angle, 10'000x smaller in area than what the old color research is based on.



