Real Flash Quake


  • element buffer == index buffer.
    different names in different apis.

    the general rule is to have as few 'draw calls' (whatever that is in your api) as is possible.
    like glDrawArrays or glDrawElements etc.
    this means that any texture, vbo, or uniform changes will be separate batches, but nothing else should be.

    'temporal coherence' means algorithms that depend upon things that don't change very much with time. pvs is an example - you can easily cache the ebos from one frame to the next, so long as the view leaf does not change.
    you can usually get away with using the previous frame's pvs until the ebo for the current view leaf has been generated. such cases are temporally incoherent, and that's fine because it only lasts for a single frame and you probably used quake's fatpvs thing anyway.
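That caching idea can be sketched in a few lines. This is a hypothetical illustration, not engine code: `ebCache`, `indicesForFrame`, and `buildIndices` are all invented names, and it assumes the builder returns `null` while the current leaf's index list is not ready yet.

```javascript
// Invented sketch: cache the per-leaf index list; while the current
// leaf's list is still being built, reuse last frame's list.
const ebCache = new Map(); // viewLeaf -> flat index array

function indicesForFrame(viewLeaf, prevIndices, buildIndices) {
  if (ebCache.has(viewLeaf)) return ebCache.get(viewLeaf); // leaf unchanged: temporally coherent
  const built = buildIndices(viewLeaf); // e.g. walk this leaf's pvs; null if not ready yet
  if (built) { ebCache.set(viewLeaf, built); return built; }
  return prevIndices; // temporally incoherent, but only for this one frame
}
```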
    Some Game Thing



    • I've almost got this working but, I am having a problem getting the results I expect from the new indices.

      let's make something up:

      Let's say I have a face with 4 points; I make indices like this:

      0,1,2, 0,2,3

      Now let's say I concat another face to the vertex buffer which also has 4 points and make my new indices this:

      0,1,2, 0,2,3, 4,5,6, 4,6,7

      the results are a mess.

      Why? Is there something I'm missing? Does every triangle have to be connected to something? Do three points not equal a closed triangle? The way I understand it, 0,1,2 is a complete triangle, but it seems to be acting more like 0,1,2 is just three points and the next 0,2,3 is closing the first triangle and adding a point that will "magically" be connected to wherever the buffer started.
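For what it's worth, under plain triangle-list semantics the indices above really do describe two independent quads: every group of three indices is its own triangle. A tiny sketch, helper name invented:

```javascript
// Triangle-list semantics: every 3 indices form one independent triangle.
function toTriangles(indices) {
  const tris = [];
  for (let i = 0; i + 2 < indices.length; i += 3)
    tris.push([indices[i], indices[i + 1], indices[i + 2]]);
  return tris;
}

// toTriangles([0,1,2, 0,2,3, 4,5,6, 4,6,7])
// -> [[0,1,2], [0,2,3], [4,5,6], [4,6,7]]  (two quads, nothing shared between them)
```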

      If I could figure out how to properly index the final vertex buffer I'd be golden.

      what do I need to do between 0,2,3 and 4,5,6 to stop the board from joining itself to the other side of the board (like my image)

      Also, I am completely unconcerned with vis. Is it absolutely necessary that I use leafs/vis to get this to work properly? Due to the results I've been getting, I'm fine with simply turning each "texture" into its own solid model. I had a 1fps loss on my BSPUtility version running in the debugger. Move that to my actual engine, switch out the super expensive BitmapData my utility uses for the GPU-optimized images my engine uses, and there will be a what... 0fps drop? @ > 6000 polys? I opine that calculating the PVS every time the player is moving is more expensive than simply getting rid of it.
      Last edited by MadGypsy; 04-24-2016, 06:10 PM.
      http://www.nextgenquake.com



      • Are you drawing with triangle fans?



        If so, there is one fixed point at all times and it is the first. And the order matters.
        Quakeone.com - Being exactly one-half good and one-half evil has advantages. When a portal opens to the antimatter universe, my opposite is just me with a goatee.

        So while you guys all have to fight your anti-matter counterparts, me and my evil twin will be drinking a beer laughing at you guys ...



        • yes I am drawing triangle fans... clockwise

          I am trying to draw multiple, possibly and probably, not-connected faces to one vertex buffer. So, I would have multiple fixed points in one buffer.

          That actually seems to be the whole problem. Somehow my indices are not clear to the buffer as containing multiple fixed points.

          if I index v1,v2,v3 - v3 will automatically draw back to v1... correct? If so, why does this not work: a1,a2,a3, b1,b2,b3? Would a1 and b1 not both be considered fixed points? If not, how the hell do I combine faces into one buffer? Also, if it doesn't work that way, why do indices have to be 0,1,2,0,2,3 as opposed to 0,1,2,2,3 (assuming 0 is truly a fixed point, why do I need to keep indexing it)?

          PS> I'm not challenging you. I'm trying to understand this.
          Last edited by MadGypsy; 04-24-2016, 05:07 PM.


          • For triangle fans you would feed p0

            Then p1, p2, p3, p4, p5, ...

            It would draw ...
            *) p0, p1, p2
            *) p0, p2, p3
            *) p0, p3, p4
            *) p0, p4, p5
            *) p0, p5, p6
            ...

            If you are using triangle fans, and not something else like triangle strips ...
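One way to sidestep the multiple-fixed-point problem entirely is to rewrite each fan as an independent triangle list before concatenating, so one index buffer can hold many faces. A sketch under that assumption (helper name invented), matching the 0,1,2, 0,2,3 pattern used earlier in the thread:

```javascript
// Rewrite one triangle fan (vertices base .. base+count-1) as an
// independent triangle list, so many fans can share one index buffer.
function fanToTriangleList(base, count) {
  const out = [];
  for (let i = 1; i + 1 < count; i++)
    out.push(base, base + i, base + i + 1); // the fixed point is always `base`
  return out;
}

// fanToTriangleList(0, 4) -> [0,1,2, 0,2,3]
// fanToTriangleList(4, 4) -> [4,5,6, 4,6,7]
```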


            • It absolutely does work as I understood it. I hand coded all the model data to make damn sure. This is good, it means I am fucking something else up. My indices are correct.


              There are 2 completely different faces sharing a buffer. You can see from my handtyped indices that I am moving the fixed point without issue.


              Edit: Let's even remove a point and make sure the faces don't even share common axis points (not that any of this should matter)
              Last edited by MadGypsy; 04-24-2016, 06:30 PM.


              • ZOMG! .jewhfaliruwqhgfliuabfalkjreswbfasjk

                It helps when you concat the buffers in the right fuckin order!



                I have lightmaps turned off. There's no reason they won't be perfect when I

                master = merge(master, extra)

                instead of

                master = merge(extra, master)

                LOL! shitty.

                edit: Just to grind in how shitty this is... Last night I wrote the perfect code the first time through EXCEPT for my switcheroo order in the merge. That was THE only mistake and it took me like 16 hours to figure that out. That's what's sucky about not being 100% sure of what you are doing and then making a major change... you can't even be sure of where the mistake is in the first place.
                Last edited by MadGypsy; 04-24-2016, 10:16 PM.


                • Watch out, spike. I'm comin for you and your little dog ~err... engine, too. (hee hee)




                  Basically no loss of FPS... no_vis renders. For anybody that doesn't know, the little statistics panel in the upper left has my current FPS (60/120) and the number to the right of that is my average FPS. The /120 part is something Flash won't even try to run on my computer, totally governed. I put the /120 there for people that can run Flash's max FPS (120).
                  Last edited by MadGypsy; 04-25-2016, 12:31 AM.


                  • one more time

                    3fps average loss with all of e1m1 being rendered. Every real Quake map image I have posted is also with my renderer at full screen, in debug mode (more overhead), using ram expensive bitmap data instead of gpu optimized ATFs. Just by exporting this, doing the image conversions, and loading it in my engine, in release mode, I could probably pop right back to a full and solid 60fps.



                    Last edited by MadGypsy; 04-25-2016, 12:58 AM.


                    • I turned a lemon into lemonade. The whole reason I ever found my switcheroo mistake was because I decided to just write the entire map constructor from scratch. My BSP utility had like 3 or 4 different ways things could be done. I narrowed it all down to 1 and made it very very direct. There are a few things I need to add back in but, mostly the heart of the map constructor is complete. So, my brief little problems became brand new concise code and a far superior product to my previous constructor/renderer. This is probably where things are going to start picking up. I can render all these real maps so, I don't have to spend time making shitty tester ones. I'll just make the quake maps work, for now.



                      Last edited by MadGypsy; 04-25-2016, 12:55 AM.


                      • Look at this. I added my inline vertex stripper back in. I tried to take this image from about the same spot as the last image in my previous post. The number in the white output box (roughly bottom left) is how many vertices it removed. If you compare poly counts there is a 19% reduction (18.8 but who's that picky?), for a total removal of 2967 polygons. It's not like I have had a lot of time to run thousands of tests or anything but, I can't find anything obviously wrong with this map after stripping it.



                        I don't know if inline vertex stripping is something that other quake engines do. All I know is I actually read struct results and I found a lot of (ex) 0,10,90 / 0,20,90 / 0,30,90. Considering that is a completely straight line which is only pretending to be a triangle, I got the idea to simply remove the middle vertex and any uv/index data associated with it. It seems to work quite well. If anybody cares, I do it like this:

                        Code:
                        j = 0, e = 0;
                        do
                        {
                            if ((face.vertices.length / 3) > 4)              // keep at least a quad's worth of verts
                            {
                                if ((j + 9) <= face.vertices.length)         // need 3 consecutive vertices to compare
                                {
                                    s = 0, ++e;
                                    // count how many of the 3 components (x,y,z) are identical
                                    // across this vertex and the next two
                                    do s += ((face.vertices[j] == face.vertices[j + 3]) && (face.vertices[j] == face.vertices[j + 6])) ? 1 : 0;
                                    while ((++j) % 3);
                                    if (s > 1)                               // 2+ shared components: an axis-aligned straight run
                                    {
                                        face.vertices.splice(j, 3);          // drop the middle vertex...
                                        face.uvs.splice((j - e), 2);         // ...and its uv pair (j - e == 2e here)
                                        face.sts.splice((j - e), 2);
                                        ++cnt;
                                    }
                                } else break;
                            } else break;
                        } while (face.vertices.length - j);

                        This is how I am reconcocting the indices. I just let each face get its regular indices, then process them by texture all at once, once all faces have been given their base data. This means the indices can't just be merged; they have to be reconsidered. My little function below does a good job. It simply accepts the current texture name and the set of indices to be added to the "master" indices vector for that texture. All this really does is add (the last number in the master buffer + 1) to whatever index is being processed. Too simple.

                        Code:
                        private static function reindice(name:String, indices:Vector.<uint>):void
                        {
                            var l:int = texture_models[name].indices.length;
                            l = (l) ? texture_models[name].indices[l - 1] + 1 : l;

                            indices.forEach(function(i:uint, n:int, self:Vector.<uint>):void
                            {
                                texture_models[name].indices.push(i + l);
                            });
                        }
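A plain-JS mirror of the same offset merge, mostly to spell out the assumption it leans on: taking "last index + 1" as the offset only works when the master's final index is also its largest, which holds for fan-ordered face indices (each face ends on its highest vertex, as in ...0,2,3 / ...4,6,7). Names here are invented:

```javascript
// Invented-name sketch of the same merge: shift each incoming index by
// (last index in master + 1). Assumes the master's last index is its
// largest, which holds for fan-ordered face indices like 0,1,2, 0,2,3.
function appendIndices(master, incoming) {
  const offset = master.length ? master[master.length - 1] + 1 : 0;
  for (const i of incoming) master.push(i + offset);
  return master;
}

// appendIndices([0,1,2, 0,2,3], [0,1,2, 0,2,3])
// -> [0,1,2, 0,2,3, 4,5,6, 4,6,7]
```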

                        This is all still using my faux linear filter. It might not be "right" but, not bad, right?
                        Last edited by MadGypsy; 04-25-2016, 02:04 AM.


                        • regarding stripping verts:
                          gpu precision is guaranteed for two lines that share points, even if they're lines.
                          if you have two polys next to each other that do NOT share points, then the gpu is free to round those different points differently.
                          'but they're straight lines'... well, no. the raw vertices are, but after you transform them by the model matrix... and the view matrix... and then the projection matrix, that middle vertex may have ended up nudged sideways/up/in/whatever by a pixel.
                          this results in tiny cracks between the polys, which can be quite obvious with non-black backgrounds - enough so that they can serve as wallhacks.
                          glquake stripped them, controlled via the gl_keeptjunctions cvar (with less axial maths to detect them), but quakeworld engines opted to remove the feature due to the wallhack issue.
                          on modern systems, an extra few triangles is not a real performance concern, and ideally qbsp ought to be the one doing the stripping. but then modern systems tend to have enough precision for the issue to be much less obvious.
                          so yeah, that's why those silly triangles exist.


                          now you need to add collision support.


                          • @collision support

                            1) increase all limits and handle buffer overflows before they overflow
                            2) PVS texture models - might as well check it out
                            3) render entities
                            4) real shaders + lit file
                            5) collision detection

                             I may even put sky back in. IMO quake sky is total bollocks and I will probably never use it but, maybe I should put it back in anyway.

                            I almost loaded jam6_daya. It broke my lstedge limit but, it was moving right along with no sweat til it hit that point.
                            Last edited by MadGypsy; 04-25-2016, 12:17 PM.


                            • CompactSubGeometry

                              According to the docs CompactSubGeometry puts all model data into one buffer. All of it. Even secondary uvs.

                              Brush models are generally almost nothing, 6 or so faces. I was thinking about how I will basically be creating all these new buffers per brush entity and considering how I could truncate the data. I have no idea how CompactSubGeometry will affect my performance but even if it doesn't affect it at all, that's great cause, it will still be 3 * brushModelCount fewer buffers that I am using. There isn't an unlimited amount of buffers. There are lots and lots of them but, not unlimited. I want to consider stuff like this BEFORE I write something that is way too resource hungry.

                              An example of this is my vertex stripper. I get what Spike is saying and due to it giving me 0 extra performance I will probably remove it. I actually pretty much figured that I was stripping vertexes that were intended for other faces. However, the "couple of faces" is actually equivalent to over 40 mdl instances. In other words you could populate a quarter of a more modern map with the amount of polys it strips. Also, less commonly, it might strip just enough for that "one map" that barely breaks the limits to sneak in. I'm still going to remove it cause it's not really making any positive difference at this point but, the point is that I want to make things as clean as possible now, rather than going back later and possibly having to cascade some change through my entire engine.

                              @Spike &| Baker - tell me something about DLights. You know what I do and don't know (cause I don't know anything ). Tell me something I need to know.
                              DLights = original quake style (flicker, slow, gentle pulse, etc)
                              Last edited by MadGypsy; 04-25-2016, 06:03 PM.


                              • by dlights, you mean lightstyles?
                                each frame, walk through the 255 total lightstyles and update the style's value for that frame ((stylestring[time%stylestringlength]-'a')*22); if it's empty just treat it as 'm'. you can interpolate that later if you can cope with that.
                                then your surface has up to 4 lightstyles on it, each with its own lightmap. you just need to ensure that the lighting values on the surface are added together.
                                you can get the style strings from vanilla quake. switchable lights just change the lightstyle strings. easy.
                                by the looks of it, you're reading only the first lightmap from the surfaces and assuming that it's style 0. if you're only going to support a single style per face, you should either add them all, or find style 0 and use that instead. either way you won't get lightstyle animations.
                                if you want to get fancy about it, you can use a dotproduct in your fragment shader to do all the multiply-and-add stuff in a single operation!... so long as your vertex program does the lookups to provide the per-frame/face values. that approach has no slowdown at all from the entire world animating, but it's hard to add lit support efficiently.
                                glquake does the adding together at run time, so there's humongous slowdown if the entire world has some animated lightstyle, and interpolation is a massive destroyer of framerates.
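The per-frame formula above is easy to sanity-check in isolation. A sketch using that exact expression; the one assumption added is that `time` is the 10Hz lightstyle tick (vanilla quake advances lightstyle animation at 10 steps per second), not seconds:

```javascript
// Spike's formula: value = (stylestring[time % len] - 'a') * 22,
// with an empty string treated as "m" (normal brightness).
// Assumption: `time` is the 10Hz lightstyle tick, not seconds.
function lightstyleValue(styleString, time) {
  const s = styleString.length ? styleString : "m";
  return (s.charCodeAt(time % s.length) - "a".charCodeAt(0)) * 22;
}

// lightstyleValue("m", 0)  -> 264  ('m' - 'a' = 12, and 12 * 22 = 264)
// lightstyleValue("az", 1) -> 550  ('z' - 'a' = 25, and 25 * 22 = 550)
```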
