AcademySoftwareFoundation / OpenPBR

Specification and reference implementation for the OpenPBR Surface shading model

Emission units

anderslanglands opened this issue · comments

In the emissive section it says "emissive properties are specified in photometric units", but then doesn't say what units they're actually specified in. Assuming this means nits, this should be specified.

Agree we should be more specific about that in section 2.4 "Emission model" (which is supposed to be a discussion of how emission fits into the slab formalism, before we get to the specifics of the layer structure and parametrization).

We just say:

Each slab also has an emission distribution function (EDF), though it corresponds by default to no emission at all. Emissive properties are expressed in photometric units, since reference values are widely available. Thus the EDF is a directional luminance function Le. One can imagine the luminance emitted homogeneously from the slab interior (by some unspecified physical process).

The "thus" doesn't really follow I guess, since there are different kinds of photometric quantities/units, not just luminance/nits. Also we didn't really define what an emission distribution function is supposed to mean -- presumably mathematically it gives the photometric luminance (in nits) at the surface point, as a function of direction.

Note we do specify that the emission_luminance parameter has units of nits, in section 3.7 "Emission" about the specific emission parametrization in the uber-shader.

[Screenshot of spec section 3.7, showing the emission_luminance parameter specified in nits]

But I think this needs rationalization in the earlier text.

We also say:

The emission_luminance parameter controls the luminance the emissive layer would have when emission_color is set to (1, 1, 1) and in the absence of coat and fuzz. The emission_color acts as a multiplier, thus the resulting luminance may be less than the input parameter, or even zero if the color multiplier is set to (0, 0, 0).

I'm not sure I fully understand how this translates into the process to obtain the RGB emission radiance in the renderer's color space, i.e. what do we compute inside the renderer given:

  • a surface point with emission_luminance and emission_color parameters
  • the assumed color space of the colors in the spec
  • the working color space of the renderer

Currently e.g. in Arnold, we just take the given emission_luminance * emission_color as the RGB radiance in the working color space, but that can't be right since it should differ depending on both the renderer working color space and the color space used in the spec (ACEScg by default).

I assume the luminance specifies the Y coordinate of the XYZ tristimulus. So perhaps we then construct XYZ (how exactly?), map that to the spec color space RGB and apply the tint color (which is also in the spec color space), then map that to the working color space RGB' ?

@anderslanglands or @AdrienHerubel, do you know the details of how this should work?

All lights therefore are implicitly emitting the spectrum corresponding to the rendering colour space white point, normalised to 1 nit.

Does this mean that the emission spectrum depends on whatever the renderer working color space is then? That can be anything in principle, so it seems like that would make it (technically) undefined what the surface is emitting.

It's definitely defined, but it's defined to be dependent on the rendering space.

So if I understand the proposed process correctly:

  • assume the surface is emitting a spectrum "corresponding to the rendering colour space white point", normalized so that the total luminance is emission_luminance nits. This will correspond to some RGB1 in the renderer working color space. (What is that exactly? Maybe it just means (L, L, L) in the working color space, where L = emission_luminance, assuming the renderer radiance units are nits?).
  • map the emission_color from the shader color space (ACEScg by default) to the working color space, producing RGB2.
  • Multiply RGB1 * RGB2 to get the final emission RGB (see the sketch below).
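
A minimal numpy sketch of this three-step process (an editorial illustration; `M_model_to_working` is a placeholder for whatever 3×3 matrix converts the spec/model color space, ACEScg by default, to the renderer's working space):

```python
import numpy as np

# Placeholder: 3x3 matrix converting the model/spec color space (ACEScg by
# default) to the renderer's working color space; identity if they coincide.
M_model_to_working = np.eye(3)

def emission_rgb(emission_luminance, emission_color):
    # Step 1: working-space white scaled to the requested luminance (nits).
    rgb1 = emission_luminance * np.ones(3)
    # Step 2: transform the tint from the model space to the working space.
    rgb2 = M_model_to_working @ np.asarray(emission_color, dtype=float)
    # Step 3: multiply to obtain the final emission RGB.
    return rgb1 * rgb2
```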

That sounds reasonable. If the working RGB color spaces are different, I'm not even sure it's necessarily technically possible to ensure the photometric luminance is exactly the same (is it?). So maybe this is the best we can do.

I do think it would be beneficial to explain this in detail in the spec though.

@KelSolaar do you have any thoughts on this? My point was mostly that I am not entirely clear what the exact process should be to map the emission_luminance (in nits) and emission_color to the RGB values inside the renderer, given assumed color spaces of model and renderer. Is what we say in the spec unambiguous enough to make sense of this? (I may be overthinking the issue, but I'd like to make it as clear as it could be).

Yes, it's just (L,L,L) * RGB2

For the spec you could say something like "exitant radiance in an RGB renderer should be computed as emission_luminance * emission_color" to make it really explicit.

I have a slightly wordier version in the doc comments of lightAPI.h here: PixarAnimationStudios/OpenUSD@f70949a

Thanks @anderslanglands! I will attempt to make a PR clarifying it, and point you to it for review.

Can we even talk about radiance with an RGB renderer?

Everything has already been integrated by the colour matching functions and is happening in the Standard Human Observer space. All the values are weighted by its sensitivity to light, so it is very much the photometric domain and not the radiometric one.

It has the benefit of at least pushing the terminology into the photometry realm; maybe more emphasis should be placed on the difference between radiometry and photometry, explaining where and how the "conversion" happens.

In the UsdLux proposal I linked above I give a brief overview of that in the "Quantities and Units" section. @portsmouth you could crib that if you like?

The other terms I've used you can see there: "integrated radiance" and "tristimulus weight"

assume the surface is emitting a spectrum "corresponding to the rendering colour space white point", normalized so that the total luminance is emission_luminance nits. This will correspond to some RGB1 in the renderer working color space. (What is that exactly? Maybe it just means (L, L, L) in the working color space, where L = emission_luminance, assuming the renderer radiance units are nits?).

Yes, it's just (L,L,L) * RGB2

The rendering color space white-point will in general be at some chromaticity $(x_0, y_0)$ (not necessarily equal to (1/3, 1/3)). We want the spectrum $S(\lambda)$ to be such that

$$X = \frac{Y}{y_0} \, x_0 , \quad Z = \frac{Y}{y_0} (1 - x_0 - y_0)$$

where the $Y$ tristimulus is equal to emission_luminance.
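
For concreteness (an editorial example: if the rendering space were ACEScg, whose white point has the commonly quoted chromaticity $(x_0, y_0) \approx (0.32168, 0.33767)$, and emission_luminance were 100 nits):

$$X = \frac{100}{0.33767} \times 0.32168 \approx 95.3 , \quad Z = \frac{100}{0.33767} \times (1 - 0.32168 - 0.33767) \approx 100.9$$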

So mathematically, don't we require (according to your definition) the SPD $S(\lambda)$ of the emission to be such that it satisfies:

$$X = Y \; \frac{\int \bar{x}(\lambda) \; S(\lambda) \; \mathrm{d}\lambda}{\int \bar{y}(\lambda) \; S(\lambda) \; \mathrm{d}\lambda}$$ $$Z = Y \; \frac{\int \bar{z}(\lambda) \; S(\lambda) \; \mathrm{d}\lambda}{\int \bar{y}(\lambda) \; S(\lambda) \; \mathrm{d}\lambda}$$

for those specific $X$, $Z$?

It's not even clear that such an $S(\lambda)$ exists.

Instead, perhaps one can just say the emission is e.g. D65 (or in general the illuminant of the model color space) with the specified $Y$ and the chromaticity $(x_0, y_0)$ of the model color space white-point, which together define $(X, Y, Z)$. Then transform that to the color space of the model (ACEScg by default), and multiply by the emission_color tint, assuming it lives in that space?

This then makes no reference to the "renderer working color space"; it's defined entirely in terms of the model color space, so is unambiguous.

what's the "model color space"?

what's the "model color space"?

Ah sorry, by "model" I meant just the color space assumed by the OpenPBR shading model (which can vary by asset). We say currently:

We recommend therefore, for the purposes of asset exchange, that the parameters be packaged with certain metadata that provides the following missing information...
[including] the assumed color space of all the color parameters. If unspecified, following MaterialX, by default this color space is assumed to be ACEScg.

I suppose in practice, the colors may be driven by texture inputs which specify their own color space (potentially different per color).

Actually, the most obvious way to define it is just to compute the chromaticity $(x, y)$ of emission_color (in the model color space); that, combined with setting the luminance $Y$ = emission_luminance, specifies the $(X, Y, Z)$ of the emitted light via

$$X = \frac{Y}{y} x , \quad Z = \frac{Y}{y} (1 - x - y)$$

and job done (can then transform to any desired RGB space).
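
A sketch of that construction (an editorial addition; the AP1 primaries and ACES white chromaticities below are commonly quoted values and should be checked against the relevant specifications):

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Build an RGB-to-XYZ matrix from primary and white-point chromaticities."""
    xy = np.asarray(primaries, dtype=float)
    # XYZ of each primary, normalized to Y = 1, stored as matrix columns.
    P = np.array([xy[:, 0] / xy[:, 1],
                  np.ones(3),
                  (1.0 - xy[:, 0] - xy[:, 1]) / xy[:, 1]])
    wx, wy = white
    W = np.array([wx / wy, 1.0, (1.0 - wx - wy) / wy])
    # Scale each primary so that RGB = (1, 1, 1) maps to the white point.
    return P * np.linalg.solve(P, W)

# ACEScg: AP1 primaries, ACES white point (commonly quoted chromaticities).
M_ACEScg = rgb_to_xyz_matrix([[0.713, 0.293], [0.165, 0.830], [0.128, 0.044]],
                             (0.32168, 0.33767))

def emitted_xyz(emission_luminance, emission_color):
    # Chromaticity (x, y) of emission_color in the model color space.
    XYZ_c = M_ACEScg @ np.asarray(emission_color, dtype=float)
    x, y = XYZ_c[0] / XYZ_c.sum(), XYZ_c[1] / XYZ_c.sum()
    # XYZ of the emitted light: luminance Y = emission_luminance at (x, y).
    Y = emission_luminance
    return np.array([Y / y * x, Y, Y / y * (1.0 - x - y)])
```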

I feel like you're overcomplicating this. Why not just define it in terms of a multiplication of the rendering colour space white?

I think that doesn't really make sense, since then you're not defining a specific color of emission; you're tying the definition of the object's emission color to whatever the renderer chose for its working color space, so the same model parameters would lead (technically) to different physical emission in different renderers. I guess you assume this isn't an issue in practice? If we want to use this model to share assets between different renderers, which can use whatever internal color space they like, I think it would be problematic.

Also I don't really understand what doing the RGB multiplication of emission_color and emission_luminance with the RGB white-point means in a color theory sense, though it seems plausible.

But anyway, what I just said above doesn't work, since we said we need emission_color to scale the luminance. It can be fixed by multiplying the luminance of emission_color into emission_luminance.

So to be precise:

  • the model defines some color space (ACEScg by default)
  • the emission chromaticity is $(x, y)$, given by that of emission_color $(R, G, B)$ in that space.
  • the emission luminance is given by the luminance of that emission_color times $Y$ = emission_luminance.

In this interpretation, the emission_color $(R, G, B)$ components don't have units of luminance then; they are basically scale factors. Presumably we should require they live in $[0,1]$.

Maybe this actually corresponds to what you propose if we just work in the model color space, which (if I understand correctly) is just to make the emission be $Y (R, G, B)$ in that space.
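
A quick numerical check of that equivalence (editorial; any invertible linear RGB-to-XYZ matrix for the model space works, so a stand-in matrix is used rather than the real ACEScg one): reconstructing XYZ from the chromaticity of emission_color with luminance lum(emission_color) * Y, then converting back to model-space RGB, returns exactly $Y (R, G, B)$.

```python
import numpy as np

# Stand-in RGB<->XYZ matrix for the model space (not the real ACEScg matrix);
# the equivalence below holds for any invertible choice.
M = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.6, 0.1],
              [0.0, 0.1, 1.0]])

Y, rgb = 100.0, np.array([1.0, 0.5, 0.2])  # emission_luminance, emission_color

XYZ_c = M @ rgb
x, y = XYZ_c[0] / XYZ_c.sum(), XYZ_c[1] / XYZ_c.sum()
L = Y * XYZ_c[1]                           # luminance of emission_color times Y
XYZ_e = np.array([L / y * x, L, L / y * (1.0 - x - y)])

print(np.linalg.inv(M) @ XYZ_e)            # ~[100.  50.  20.], i.e. Y * rgb
```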

so the same model parameters would lead (technically) to different physical emission in different renderers. I guess you assume this isn't an issue in practice?

There is no "physical emission" in an RGB renderer. It's all just multipliers on white. In a spectral renderer the emission spectrum could be user-defined, but in most cases would be a reference white, e.g. D65.

In both cases the expectation should be that you transform your emission_color to the rendering colour space before multiplying with the "emission white"

In this interpretation, the emission_color components don't have units of luminance then; they are basically scale factors. Presumably we should require they live in [0, 1].

Yes, they're scale factors (same as diffuse colour). They should be positive, but no reason for them to be bounded otherwise.

There is no "physical emission" in an RGB renderer.

Agreed, I should have said "perceptual emission" since we're dealing with photometry.

I only mean that e.g. if the OpenPBR model says the emission color is (1, 0.5, 0.2) in ACEScg, and emission luminance is 100 nits, that this should produce the same perceptual color in renderer A as renderer B.

That won't work if the meaning of the emission color is defined to depend on each renderer's working color space choice; it needs to be unambiguously defined by the data "emission color is (1, 0.5, 0.2) in ACEScg, and emission luminance is 100 nits".

I think it's as simple as saying that the RGB color in the model color space (which is not a free choice of the renderer, but specified by the model) is emission_color * emission_luminance.

OK, I think we're ultimately saying the same thing here. The discussion of SPDs above was confusing. As we said above, the process needs to be:

L = emission_luminance
emission = color(L) * ctransform(model, render, emission_color)
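
A runnable sketch of this recipe (an editorial addition: `ctransform` is built here from primary/white chromaticities via the standard matrix construction, ignoring chromatic adaptation between differing white points, which a production implementation would handle with e.g. a Bradford transform; the chromaticities below are commonly quoted values, and Rec.709 is only an example rendering space):

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """RGB-to-XYZ matrix from primary and white-point chromaticities."""
    xy = np.asarray(primaries, dtype=float)
    P = np.array([xy[:, 0] / xy[:, 1],
                  np.ones(3),
                  (1.0 - xy[:, 0] - xy[:, 1]) / xy[:, 1]])
    wx, wy = white
    W = np.array([wx / wy, 1.0, (1.0 - wx - wy) / wy])
    return P * np.linalg.solve(P, W)

# Commonly quoted chromaticities (verify against the relevant specifications).
ACESCG = rgb_to_xyz_matrix([[0.713, 0.293], [0.165, 0.830], [0.128, 0.044]],
                           (0.32168, 0.33767))  # AP1 primaries, ACES white
REC709 = rgb_to_xyz_matrix([[0.640, 0.330], [0.300, 0.600], [0.150, 0.060]],
                           (0.3127, 0.3290))    # sRGB primaries, D65 white

def ctransform(M_src, M_dst, rgb):
    """Convert an RGB triple between spaces (no chromatic adaptation)."""
    return np.linalg.inv(M_dst) @ (M_src @ np.asarray(rgb, dtype=float))

def emission(L, emission_color, M_model=ACESCG, M_render=REC709):
    # emission = color(L) * ctransform(model, render, emission_color)
    return L * ctransform(M_model, M_render, emission_color)

print(emission(100.0, (1.0, 0.5, 0.2)))  # emission RGB in the rendering space
```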

Yep, I think that makes sense. It deserves a couple of extra sentences in the spec to make it totally clear, so I will put together a small PR for that.

Just to note, we're assuming a color with the chromaticity of the white-point and luminance $Y$ has RGB value $Y(1, 1, 1)$. I think that requires some proper normalization of the standard illuminant luminance.

So we probably want the color corresponding to emission_color = $(1,1,1)$ to actually be (1, 1, 1) * emission_luminance / $Y_\mathrm{D65}$ (assuming D65). The formula you wrote above would work only if you assume the standard illuminant is normalized to $Y_\mathrm{D65} = 1$ nit.
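
One way to write that convention out (an editorial formalization, with $L$ = emission_luminance, $(R, G, B)$ = emission_color, and $Y_\mathrm{illum}$ the luminance assigned to the model color space illuminant):

$$\mathbf{C}_\mathrm{emission} = \frac{L}{Y_\mathrm{illum}} \, (R, G, B)$$

Choosing the normalization $Y_\mathrm{illum} = 1$ nit recovers the simple product emission_luminance * emission_color.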

You're defining emission_luminance to be in nits, right? Therefore the implementation is responsible for ensuring correct normalization (if it's a spectral renderer)

I just mean, if the standard illuminant (of the model color space) is D65 say, then color (1, 1, 1) means light with the white-point color and luminance $Y_\mathrm{D65}$.

It seems that the exact normalization of the illuminant is arbitrary though, so we can just specify that $Y_\mathrm{D65}$ = 1 nit (or in general, that the color space illuminant is normalized to emit 1 nit). If you do that, it follows that the color emission_luminance * (1, 1, 1) has a luminance of emission_luminance nits.

Technically I think you do have to state the normalization you're assuming for the illuminant, to be complete. (Alternatively, maybe "emission_luminance is in nits" is equivalent to that for you, and enough.)

Yes, I think saying emission_luminance is in nits is enough, and more to the point is simple enough to avoid confusion. Anyone who's just using an RGB renderer doesn't need to think about it any further, and anyone who's writing a spectral renderer will know what that means for their implementation.

@anderslanglands Please check out the linked PR, to make sure it's looking like you would expect. 🙏