Normal maps taken into account when lightmapping, is that even possible?

Started by RemiD, October 17, 2019, 18:13:55

Previous topic - Next topic

RemiD

I have read several posts about this on the web; it seems to be a common problem in several 3D engines (Unity and Unreal included):
A normal map is normally used with per-pixel lighting, and in that case there is no problem: whatever the resolution (= texel size) of the normal map, for nearby albedo + normal-mapped surfaces a texel is usually bigger than a screen pixel, so each pixel can take the normal of the corresponding normal-map texel into account when computing the lighting/shading.

However, when generating a lightmap, the lightmap's texel size is usually much bigger than the normal map's texel size, so I don't see how a lightmap can take a normal map into account for lighting/shading... unless the lightmap has the same or higher resolution (= same or smaller texel size) than the normal map. But then the lightmapping would take a lot of time, and the resulting lightmaps would be really heavy (in file size and in RAM).

An example from my current project:

Resolution of the normal map (here a diffuse texture with fake lighting/shading, but a normal map would have the same resolution):
[image]

Resolution of the lightmap:
[image]
With my lightmap resolution (1 texel = 0.1 unit):
for 1 cell, the lightmaps would have 8,100 texels and would use 24.3 KB (in RAM)
for 1000 cells, the lightmaps would have 8,100,000 texels and would use 24.3 MB (in RAM)

With a lightmap resolution similar to the color/albedo texture (1 texel = 0.01 unit):
for 1 cell, the lightmaps would have 810,000 texels and would use 2.43 MB
for 1000 cells, the lightmaps would have 810,000,000 texels and would use 2.43 GB!

And that is without even considering the time it would take to light/shade such high-resolution lightmaps!
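The memory estimates above can be reproduced with a small sketch. I am assuming a cell covers 9 x 9 world units (which gives 90 x 90 = 8,100 texels at 0.1 unit/texel) and 3 bytes per texel (uncompressed RGB); the function name is just for illustration.

```python
# Reproduces the lightmap memory estimates above.
# Assumptions: one cell covers 9 x 9 world units, and each lightmap
# texel stores 3 bytes (RGB, uncompressed).
CELL_SIZE_UNITS = 9.0
BYTES_PER_TEXEL = 3

def lightmap_bytes(texel_size_units, cells):
    """Return (texels per cell, total bytes) for the given texel size."""
    texels_per_side = round(CELL_SIZE_UNITS / texel_size_units)
    texels = texels_per_side * texels_per_side
    return texels, texels * BYTES_PER_TEXEL * cells

# 0.1 unit/texel: 8,100 texels/cell, 24.3 KB/cell, 24.3 MB for 1000 cells
print(lightmap_bytes(0.1, 1))      # (8100, 24300)
print(lightmap_bytes(0.1, 1000))   # (8100, 24300000)
# 0.01 unit/texel: 810,000 texels/cell, 2.43 GB for 1000 cells
print(lightmap_bytes(0.01, 1000))  # (810000, 2430000000)
```

The hundredfold jump in memory comes from texel size shrinking by 10 in both dimensions, which is why matching the albedo resolution blows up so fast.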

So you see the problem... ???

GaborD

Yes, you'd need directional lightmaps or some similar approach, because otherwise, like you said, your lightmaps would have to be full res. In that case using lightmaps doesn't make sense in the first place: you could just use fully baked textures with surface and lighting combined instead and not need to blend anything.

Valve used directional lightmaps long ago with their 3-direction system; they called it radiosity normal mapping (they baked lightmaps for 3 directions, and normal maps were converted to special maps with baked-in self-shadowing for those 3 directions).
You can read about their approach here: https://developer.amd.com/wordpress/media/2013/02/Chapter1-Green-Efficient_Self-Shadowed_Radiosity_Normal_Mapping.pdf
I tinkered with that approach back then; it was really useful and simple to implement.

Nowadays I generally just keep it simple with 4 directions (because those fit nicely into a double width/height lightmap) and a stock normal map, but anything goes, I suppose; it just depends on what specs you are aiming for.

With normal maps you'll also need geometry with tangent space, so that you not only have the normal but also a binormal or tangent. They can be calculated at runtime, but that's a lot of math in the shader; better to have it baked into the geometry, in my opinion.
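For reference, the usual per-triangle tangent/bitangent derivation from positions and UVs looks like this; a minimal sketch with illustrative names, the kind of thing you would run at bake time and store per vertex.

```python
# Standard per-triangle tangent-space computation: solve the triangle's
# two edges against its two UV deltas.
def triangle_tangent_space(p0, p1, p2, uv0, uv1, uv2):
    e1 = [b - a for a, b in zip(p0, p1)]   # edge p0 -> p1
    e2 = [b - a for a, b in zip(p0, p2)]   # edge p0 -> p2
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)      # assumes non-degenerate UVs
    tangent   = [(dv2 * a - dv1 * b) * r for a, b in zip(e1, e2)]
    bitangent = [(du1 * b - du2 * a) * r for a, b in zip(e1, e2)]
    return tangent, bitangent
```

In practice you would accumulate these per vertex across adjacent triangles, normalize, and orthogonalize against the vertex normal before baking them into the mesh.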

RemiD

Quote
you'd need directional lightmaps or some similar approach because otherwise like you said your lightmaps would have to be full rez,
OK, thanks for confirming what I thought...

Quote
you could just use full baked textures with surface and lighting combined instead and not need to blend anything
Well, I had the idea to do that before reading about normal mapping, but the lightmapping would take too much time, or the file size of the lightmaps would be too big (for what I am trying to do).

But I am curious to code something like that just to see how it renders... (a kind of simplified normal map used to help calculate each texel's normal, and therefore a more precise lighting/shading of the texel...)