
The answer is still probably "no", however. I say "probably" because it's possible, but you shouldn't do it. Lightmaps are just way too low-resolution. I'm not talking half-resolution or even quarter-resolution; it's on the order of one-sixteenth of the texture resolution on each axis! You don't see it in WinQuake because of the way it calculates its blend from the colormap, and you don't see it in GLQuake because gl_texturemode values don't affect lightmaps, but for a standard 64 x 64 Quake texture the lightmap equivalent is only roughly 4 x 4.
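For reference, this is roughly how a Quake-style engine sizes a surface's lightmap from its texture-space extents; the shift-by-4 matches what GLQuake's R_BuildLightMap does, but the function wrapper and names here are just for illustration.

```c
/* Minimal sketch of Quake-style lightmap sizing. extents[] is the
 * surface's texture-space size in texels; one light sample covers a
 * 16x16-texel block, with an extra sample so adjacent blocks share
 * their corner samples. Function name is illustrative only.
 */
static void LightmapSize (const int extents[2], int *smax, int *tmax)
{
	*smax = (extents[0] >> 4) + 1;	/* light samples across */
	*tmax = (extents[1] >> 4) + 1;	/* light samples down   */
}
```

That 16:1 ratio per axis is where the one-sixteenth figure above comes from.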
Increasing the lightmap resolution (by filtering upwards during upload, for example) is one option, but it would seriously hurt dynamic light update times. The same applies to mipmapping lightmaps, which would be another requirement. Encoding a normal map in a lightmap would take 3 channels, leaving you a single channel for colour, which means losing coloured light. You could take an alternative tack and bake a height map in instead, then convert it to a normal map in a pixel shader, but that's going to be slow and take a lot of instructions.
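To give an idea of what that conversion involves, here's the height-to-normal maths written as plain C rather than shader code; everything in it (the function name, the single-channel `height` array, the `scale` factor) is a hypothetical sketch, not engine code. A pixel shader would have to do the equivalent central differences and a normalize for every fragment, which is where the instruction count comes from.

```c
#include <math.h>

/* Sketch: derive per-texel normals from a single-channel height map of
 * size w x h. 'scale' controls bump strength; 'normals' holds w*h*3
 * floats. All names here are illustrative, not from any real engine.
 */
static void HeightToNormal (const float *height, int w, int h,
                            float scale, float *normals)
{
	for (int y = 0; y < h; y++) {
		for (int x = 0; x < w; x++) {
			/* neighbouring texel indices, clamped at the borders */
			int x0 = (x > 0)     ? x - 1 : x;
			int x1 = (x < w - 1) ? x + 1 : x;
			int y0 = (y > 0)     ? y - 1 : y;
			int y1 = (y < h - 1) ? y + 1 : y;

			/* central differences of the height field */
			float dx = (height[y * w + x1] - height[y * w + x0]) * scale;
			float dy = (height[y1 * w + x] - height[y0 * w + x]) * scale;

			/* normal = normalize (-dx, -dy, 1) */
			float len = sqrtf (dx * dx + dy * dy + 1.0f);
			float *n = &normals[(y * w + x) * 3];
			n[0] = -dx / len;
			n[1] = -dy / len;
			n[2] = 1.0f / len;
		}
	}
}
```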