How does HDR lighting actually work, internally?

Pocket

Half a Lambert is better than one.
aa
Nov 14, 2009
4,699
2,581
Something that's been bugging me for the longest time: how does generating HDR lighting take the same amount of compile time as LDR? I used to assume that VRAD ran two lighting passes, one for the minimum exposure level and one for the maximum, but then I realized it would also need a third for people who have HDR off, which it clearly doesn't do.

So then I thought maybe it takes the LDR lighting data and stores the HDR as some sort of modifier, where the channels represent how much brighter or darker the range gets relative to the base. After all, if you've ever seen the HDR skybox textures, they clearly don't look anything like what you actually see on screen; it's more akin to a bump map in how it uses its RGB values.

But then I realized that you can compile a map with only HDR lighting. If the HDR lightmaps worked that way, there'd be no base levels to refer to. So I'm completely stumped. Has anyone out there ever dug into what the lightmaps actually look like, or found some in-depth documentation (perhaps one of Valve's SIGGRAPH papers) on how VRAD actually works?
 

Idolon

they/them
aa
Feb 7, 2008
2,123
6,137
I'm not totally sure how it works, but my guess is that an exposure level is just a way of remapping brightness values. Tell the camera that white is 200 instead of 255 and everything becomes brighter.

HDR may also store lighting data with values above 255 for added dynamic range, but I don't really know.
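
Something like this, just to sketch the remapping idea (rough C++, not the engine's actual tonemapper):

// Exposure just decides which linear value counts as "white" on screen.
#include <algorithm>
#include <cstdint>

// hdrValue is linear light and can go above 1.0; whitePoint is the linear
// value that should map to full brightness. Lowering it brightens the image.
uint8_t ToDisplay(float hdrValue, float whitePoint)
{
    float normalized = std::clamp(hdrValue / whitePoint, 0.0f, 1.0f); // anything above "white" clips
    return static_cast<uint8_t>(normalized * 255.0f + 0.5f);
}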
 

B!scuit

L5: Dapper Member
Aug 12, 2016
205
267
I actually spent a fair chunk of today looking at how vrad works (I want to 'paint' lightmaps).

Wikipedia: HDR
Light values are recorded in a higher dynamic range, with more bits per pixel (bpp).
The engine takes all of those values and tonemaps the image down to LDR so a physical screen can draw it, dynamically clamping/white-balancing everything into the 0-255 RGB range the screen needs.
So it's not multiple passes, it's wider passes (Idolon's pretty spot on here).

If you've already calculated your HDR lightmaps, you can clamp them down to LDR just by dividing each pixel by an arbitrary amount; there's no need to re-render the entire map.
The actual lightmap calculation is most of the work, but it only needs to be done once. Compressing the HDR lightmap down to LDR afterwards would take almost no time.
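
To make that concrete, the down-conversion step would be roughly this (a sketch that assumes the HDR samples are already linear floats; the scale is the arbitrary divisor I mentioned):

// Compute lighting once in HDR, then squash a copy down to LDR.
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<uint8_t> DownconvertToLDR(const std::vector<float>& hdrSamples, float scale)
{
    std::vector<uint8_t> ldr;
    ldr.reserve(hdrSamples.size());
    for (float s : hdrSamples)
    {
        float v = std::clamp(s / scale, 0.0f, 1.0f); // divide, then clip to the displayable range
        ldr.push_back(static_cast<uint8_t>(v * 255.0f + 0.5f));
    }
    return ldr;
}

Compared to the raytracing VRAD does for the lightmaps themselves, a pass like that is basically free.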

The closest we have to official vrad documentation: VDC article on the BSP format
"Version 20 [TF2 .bsp] files containing HDR lighting information have four extra lumps, the contents of which are currently uncertain.
Lump 53 is always the same size as the standard lighting lump (Lump 8) and probably contains higher-precision data for each lightmap sample.
Lump 54 is the same size as the worldlight lump (Lump 15) and presumably contains HDR-related data for each light entity."
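
For what it's worth, the same article documents the standard lighting lump (Lump 8) as an array of ColorRGBExp32 samples (three colour bytes plus a shared signed exponent), which is presumably how a single 4-byte sample can cover such a wide brightness range. Decoding one would look roughly like this (my reading of the article, untested):

// ColorRGBExp32 as described on the VDC BSP page: RGB bytes plus a shared
// signed exponent.
#include <cmath>
#include <cstdint>

struct ColorRGBExp32
{
    uint8_t r, g, b;
    int8_t  exponent;
};

// Decode one lightmap sample into linear float values.
void DecodeSample(const ColorRGBExp32& c, float out[3])
{
    const float scale = std::ldexp(1.0f, c.exponent); // 2^exponent
    out[0] = c.r * scale;
    out[1] = c.g * scale;
    out[2] = c.b * scale;
}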


Oh, also: SIGGRAPH06_Course_ShadingInValvesSourceEngine.pdf
 

Pocket

Half a Lambert is better than one.
aa
Nov 14, 2009
4,699
2,581
OK, so theoretically they could have made it so you only need the HDR lighting pass and then the lightmaps are quickly downconverted to their 8-bit counterparts automatically, but they didn't because reasons.
 

B!scuit

L5: Dapper Member
Aug 12, 2016
205
267
Pretty much, but that's mostly because HDR allows you to have lighting that's totally different from your LDR lighting.
And for some reason lump 58 lets you store different face data for HDR, and I have no idea what you could do with that.
 

Pocket

Half a Lambert is better than one.
aa
Nov 14, 2009
4,699
2,581
This also means that env_tonemap_controller doesn't affect the lightmap data, and if I wanted to test different tonemap setups, I could recompile with -entsonly to save time.
 

Tumby

aa
May 12, 2013
1,087
1,196
You can literally change the tonemap settings in-game.
In console: ent_fire [tonemap name] [settings n stuff]
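For example (entity name made up, input names as I remember them from the VDC page for env_tonemap_controller, so double-check):
ent_fire my_tonemap_controller SetAutoExposureMin 0.5
ent_fire my_tonemap_controller SetAutoExposureMax 2.0
ent_fire my_tonemap_controller SetBloomScale 1.0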

Btw has anybody mentioned that hdr cubemaps always have the last image looking all weird?