using gl2 terminology, your vertex shader takes per-vertex inputs ('attributes'), and writes out 'varyings'.
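a minimal gl2-era vertex shader (glsl 110) looks something like this; the a_/v_/u_ names are just placeholders of mine, not anything standard:

// 'attribute' = per-vertex input, 'varying' = output handed on to the rasterizer
attribute vec3 a_pos;     // per-vertex position
attribute vec4 a_colour;  // per-vertex colour
uniform mat4 u_mvp;       // combined model-view-projection matrix
varying vec4 v_colour;    // will be interpolated across each triangle

void main()
{
    gl_Position = u_mvp * vec4(a_pos, 1.0);
    v_colour = a_colour;
}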
the gpu takes those vertices and assembles triangles. those triangles are clipped to the screen and rasterized, resulting in a series of 'fragments'.
your fragment shader is then run once for each of those fragments (essentially once per pixel the triangle covers), taking those varyings, which get interpolated across the triangle as they go from one vertex to the next. your fragment shader writes out the colour for that fragment's pixel.
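the matching fragment shader is just as small; it reads the interpolated varying and spits out a colour:

// runs once per covered pixel; v_colour arrives already interpolated
varying vec4 v_colour;

void main()
{
    gl_FragColor = v_colour;  // the blend unit takes over from here
}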
the blend unit takes those shaded fragments and blends them with the value already in the framebuffer, and may still discard a fragment if another fragment wrote a closer value to the depth buffer.
there are more steps in the pipeline, for instance if geometry shaders or tessellation control+evaluation shaders are involved, but most people can still ignore those.
of course, your api might refer to any of these concepts with different names. the basic approach is the same.
you might also encounter flat shading, in which case each primitive uses a single value taken from its 'provoking' vertex instead of interpolating. the rest of the pipeline doesn't change.
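in glsl 130+ syntax (where attribute/varying became in/out), flat shading is just a qualifier on the varying, roughly like this:

// vertex shader: 'flat' means no interpolation,
// every fragment in the triangle gets the provoking vertex's value
#version 130
in vec3 a_pos;
in vec4 a_colour;
uniform mat4 u_mvp;
flat out vec4 v_colour;
void main()
{
    gl_Position = u_mvp * vec4(a_pos, 1.0);
    v_colour = a_colour;
}

// fragment shader: the varying has to be declared flat on this side too
#version 130
flat in vec4 v_colour;
out vec4 fragcolour;
void main()
{
    fragcolour = v_colour;
}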
conceptually, texture sampling is just bolted on the side. think of it as a subroutine that does a lookup into a 2d grid, with the sampler deciding whether to interpolate between texels or just use the nearest one. your varyings can store a colour or a texture coord or a position or really anything, so long as it's a float (or four of them, yay simd).
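in gl2-era glsl that subroutine is the texture2D() call; the nearest-vs-interpolate choice lives in the sampler/filter state, not in the shader. u_diffuse and v_texcoord here are placeholder names:

// sample a 2d grid at the interpolated texcoord. whether this snaps to the
// nearest texel or blends between them is decided by the texture's filter
// state, not by anything written here.
uniform sampler2D u_diffuse;
varying vec2 v_texcoord;

void main()
{
    gl_FragColor = texture2D(u_diffuse, v_texcoord);
}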
this is little different from how 'uniforms' work (these started out as individual constants wedged into the inactive parts of the gpu's fixed-function hardware, but moved to actual blocks of memory once the hardware lost all of its special-case transistors). both uniform buffers and textures are just blocks of memory, and modern GPUs can even write back to them.
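to make the comparison concrete, here's a loose uniform next to a uniform block (the block syntax needs glsl 140+ or the uniform-buffer extension; the names and members are made up for illustration):

// a loose uniform: a single constant the driver uploads on your behalf
uniform vec4 u_fogcolour;

// a uniform block: just a chunk of memory you fill in yourself (std140 layout)
layout(std140) uniform FrameData
{
    mat4 viewproj;
    vec4 skyscroll;
};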
the rest is just maths, meshes, and materials. or something.
basically, figure out the texture coords from the world coords, instead of trying to insert extra geometry, which would give you much the same thing with less precision and more overdraw.
if you want a skybox, use a cubemap. don't draw the 6 faces yourself, just pass the view direction as the texture coord. the gpu will normalize it (or squarize it, or whatever you want to call it) automatically.
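a cubemap sky fragment shader (glsl 110 style) is basically a one-liner; u_skybox and v_dir are placeholder names, with v_dir being the world-space direction from the eye:

uniform samplerCube u_skybox;
varying vec3 v_dir;  // direction from the eye, no need to normalize it

void main()
{
    // textureCube picks the face and the texel straight from the direction
    gl_FragColor = textureCube(u_skybox, v_dir);
}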
if you want a scrolling 2d sky, project the direction into 2d texture coords yourself. quake projected the sky onto a simple flat plane, but pulled the coords closer to 0 the more vertical the direction got. my glsl should give you a formula equivalent to quake's, but there are plenty of other ways to project a flat texture onto a sphere (like a map of the earth).
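to give the gist, here's a rough sketch of quake's projection, re-derived from glquake's EmitSkyPolys rather than copied from my shader, so treat the exact constants and the u_time/v_dir/u_*layer names as illustrative:

uniform sampler2D u_solidlayer;  // back sky layer
uniform sampler2D u_alphalayer;  // front cloud layer (alpha-masked)
uniform float u_time;
varying vec3 v_dir;              // view direction, doesn't need normalizing

void main()
{
    vec3 dir = v_dir;
    dir.z *= 3.0;                          // flatten the sphere, pulling coords towards 0 near vertical
    dir.xy *= (6.0 * 63.0) / length(dir);  // project onto the flat sky plane
    vec2 base = dir.xy / 128.0;
    vec4 solid = texture2D(u_solidlayer, base + u_time / 16.0);
    vec4 clouds = texture2D(u_alphalayer, base + u_time / 8.0);
    gl_FragColor = mix(solid, clouds, clouds.a);
}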
calculating the coords in your fragment shader means you get less overdraw, more precision, and fewer depth issues/glitches. it's also simpler, seeing as you'll need special shaders anyway (assuming you're comfortable writing shaders, but if you're not then you kinda need to learn in order to get any decent effects).