Real Flash Quake

  • Woah, this is crazy. It's obviously wrong, but it is also oddly correct.


    I had an idea where I do...

    red[curr] = Math.round( Math.sqrt( (red[lastOriginal] + diff ) * lightdata[curr] ) );

    the entire premise was based on 255 = Math.sqrt( (255) * 255);
    in other words, if the top equals the top, anything else is on its way to the bottom. That led me to believe I was basically going to multiply the shadow into the colormap "by hand".

    The results are wrong(ish) but absolutely genius for an idiot.
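    To see why the result is "oddly correct": with both inputs at 255 the formula is an identity, and for anything dimmer it behaves like a geometric mean, which darkens less aggressively than the usual linear colormap multiply. A minimal sketch (simplified to two inputs, ignoring the lastOriginal/diff bookkeeping from the post; JS stand-in for the AS3):

```javascript
// Sketch only: comparing the sqrt blend with the standard linear
// colormap multiply. Both map (255, 255) -> 255, but they diverge
// for dimmer light values.
function sqrtBlend(color, light) {
  // geometric mean of color and light, clamped to a byte
  return Math.min(255, Math.round(Math.sqrt(color * light)));
}

function linearBlend(color, light) {
  // the usual "multiply the shadow into the colormap" formula
  return Math.round(color * light / 255);
}

console.log(sqrtBlend(255, 255), linearBlend(255, 255)); // 255 255
console.log(sqrtBlend(128, 64), linearBlend(128, 64));   // 91 32
```

    The sqrt version keeps midtones visibly brighter, which matches the "wrong(ish) but oddly correct" look described above.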
    http://www.nextgenquake.com



    • @spike - sorry I missed your post.

      I do have mipmaps. They are inside the .atf. Maybe there is a var I need to set, but they are definitely at least there. How do you look at a texture and know if it is "mipping"?

      My offset should be fine cause I just

      old:Vector.<int>
      to
      new:Vector.<Vector.<int>>

      and
      (0xFF << 24 | old[n] << 16 | old[n] << 8 | old[n])
      to
      (0xFF << 24 | new[n][0] << 16 | new[n][1] << 8 | new[n][2])

      I have my regular lightmaps inside the exact same loop and when I turn off color mapping the lightmaps are perfect. I am using the identical formulas for both. I just skip the first 8 bytes of the lit.
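      The two packing formulas above can be sanity-checked outside the engine. A sketch (JS rather than AS3; in both languages 0xFF << 24 overflows a signed 32-bit int, hence the unsigned shift to print it cleanly):

```javascript
// Sketch of the two packing formulas: a grayscale lightmap byte
// replicated into R, G and B vs. a colored .lit RGB triple,
// both packed as 0xAARRGGBB.
function packGray(v) {
  return ((0xFF << 24) | (v << 16) | (v << 8) | v) >>> 0;
}

function packRGB(rgb) {
  return ((0xFF << 24) | (rgb[0] << 16) | (rgb[1] << 8) | rgb[2]) >>> 0;
}

console.log(packGray(0x80).toString(16));              // "ff808080"
console.log(packRGB([0x10, 0x20, 0x30]).toString(16)); // "ff102030"
```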


      • if you're getting moire patterns, then you're NOT using mipmapping.
        it's one thing to provide mipmaps, but you do actually need to use them too - check your docs for 'trilinear filtering' or something; ideally use anisotropic filtering.
        Some Game Thing



        • @its one thing to provide mipmaps, but you do actually need to use them too

          I'll check the docs to see if I need to explicitly set something...

          Nope, the var is already set to true by default. Hmmm, maybe I need to make more mips. I only have it making 3 from however big it is down.


          • No moire problems. (ba-dump tsss)


            It turns out that atf is mini broke. If you don't generate all mip levels it won't honor mips. Now I need to go change back all the shit that I changed, which wasn't the problem... fun

            edit: I changed it back and it broke everything. Final solution: all miplevels need to be included in the ATF, and you can't use an ATF with no miplevels (lightmaps) as a LightMapMethod for mipped textures. OK, shitty mcshitterton: ATF with mips for textures, PNG for lightmaps. Suck it! I still get a huge increase. My ATFs with all possible mips are slightly smaller than the PNG. It would be nice to compress my lightmaps too but they will receive an automatic decrease in size when I stop faux filtering them and store them at 1x resolution.
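            For reference, "all miplevels" means a chain all the way down to 1x1. A quick sketch of how many levels that takes for a given texture size (so only generating 3, as earlier in the thread, leaves the chain incomplete):

```javascript
// Sketch: a full mip chain for a w x h texture runs down to 1x1,
// which is floor(log2(max(w, h))) + 1 levels.
function fullMipCount(w, h) {
  return Math.floor(Math.log2(Math.max(w, h))) + 1;
}

console.log(fullMipCount(256, 256)); // 9 levels: 256, 128, ..., 1
console.log(fullMipCount(512, 128)); // 10
```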
            Last edited by MadGypsy; 04-19-2016, 02:00 PM.


            • Check out my water animation!

              hah hah, I'm totally fucking with you, plus my animation sucks anyway
              Last edited by MadGypsy; 04-19-2016, 02:11 PM.


              • break;

                Why haven't I included norms/specs maps? After all, it's built right into my API and as simple as thisTexture.normalMap = thisTexture_norm.png (basically).

                The answer is simple. Dynamic Light.

                1) Dynamic light is an FPS murderer and I don't have a lot of FPSs to spare. Absolute max is 120 on the best hardware. Average computers will run at 60.
                2) I don't know how to determine where to put the lights because I primarily use surface textures
                3) Too much bullshit. I'm not remaking darkplaces. Most of the time all that stuff looks like glossy dog-shit anyway. There will be light/shadow and color. I feel this is sufficient for any game I would end up making.

                This is what happens when you add dlights/norms/specs in my API. It looks OK at best and it is basically unusable for a game. 145.3 MB of RAM for one stupid room. My current entire map sits around 20 MB and it will actually be even less when I stop faux filtering shadows.
                http://infiniteturtles.co.uk/project...ponzaDemo.html

                Not to mention that, IMO, even my garbage looks better than their kitchen-sink demo. At least in terms of potential.
                Last edited by MadGypsy; 04-19-2016, 03:24 PM.


                • Just fukkin around


                  The more I play with using HD replacement textures, the more I want to ditch them. There are a LOT of floor anomalies regarding textures with stark lines in them. Hovering the camera further from the floor mitigates them a bit, but everything about that (as a solution) smacks of garbage. The image above does not reflect the issues I'm facing.


                  • I officially have 4 projects going simultaneously.

                    1) My engine
                    2) My BSP utility
                    3) I actually have almost completed my "unfinished/unbalanced" map (DM and SP)
                    4) game code for my engine

                    #4 is what I am working on today. I want to get a clear and solid connection going between the engine and game logic. I have various ideas on how to go about this and I am maybe 100 lines from getting one version working. My current game logic is going to be incredibly simple. I made a "func_click" brush. When I click it, it should turn red. While as an actual game entity that would be basically useless, it encompasses all the base elements of an entity: interaction and indication. Once it works, the interactions and indications can be easily switched with something more useful.

                    A couple of years ago I rewrote most of the QC source in a very different way. One example is func_move. My func_move brush replaced ALL other moving brushes and even had the ability to move twice. Consider the first secret on e1m1. That's 2 brushes. One door moves back past the second door and the second door slides to the side. It gives the illusion of the door moving in 2 stages. My func_move obsoleted that method. It could be a door, plat, train... any moving brush entity. I am going to revisit these ideas in the gamecode for my engine.

                    edit: Well, I tried to post movebrush.qc for "show and tell" but it far exceeds 12000 chars (post max).
                    Last edited by MadGypsy; 04-23-2016, 11:51 AM.


                    • exceed this!







                      • according to the FZIP docs I can...

                        var importedSound:Class = lib.getDefinition("data.swf", "SoundClass") as Class;

                        Or more clearly, I can have the FZip library return classes from an swf contained in the zip, by stating which swf and requesting the class by name. That being said, there is no reason why I can't...

                        entityMesh:Mesh = someBrushWaitingToBeMoreInThisWorld();
                        entDefinition[i] = lib.getDefinition("progs.swf", entities_t[n].classname) as Class;
                        ent[t] = new entDefinition[i](entityMesh);

                        I used 3 different vars (i,t,n) to illustrate that this isn't happening all at one time. I'm not gonna request "func_door" from the swf for every instance of a "door"...
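                        A sketch of that caching idea (JS stand-in; makeEntityFactory and the shape of `lib` are hypothetical, only the two-argument getDefinition lookup comes from the post): look up each classname in the progs swf once, then construct every instance from the cached class.

```javascript
// Hypothetical sketch: cache getDefinition results per classname so
// "func_door" is fetched from progs.swf once, however many doors spawn.
function makeEntityFactory(lib) {
  const cache = new Map();
  return function spawn(classname, mesh) {
    let Def = cache.get(classname);
    if (!Def) {
      Def = lib.getDefinition("progs.swf", classname); // one lookup per class
      cache.set(classname, Def);
    }
    return new Def(mesh); // many instances per lookup
  };
}
```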




                        • I know, you're looking at that thinking something like "WTF is this shit?".

                          What that is, is me finally unlocking the core of how my 3d API works. That is over 6000 polys, textured with the most RAM-expensive images, running 1fps slower than my maximum potential, at full screen. In other words, this method is blazing fast for my API.

                          "Why does it look like complete shit"

                          I realized that my API is creating a buffer for every instance of a subgeometry. In other words it was creating a buffer for every damn face... * 4 (indices, vertices, uvs, lm uvs). I started concatting all this data together and dumping it all at once into one subgeometry. The only problem is... indices become fucked. With every concat I wind up 3 indices behind. When I realized this I had it make a new indices vector based on the new combined vertex data. I knew it was going to make this mess but I wanted to see what the performance was going to be. There is no doubt. One way or another I need to figure out how to get faces that share a texture into one subGeometry.
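                          For what it's worth, the "3 indices behind" drift is the classic offset problem when concatenating per-face buffers: each face's local indices must be shifted by the number of vertices already merged. A minimal sketch (hypothetical data layout, xyz-only vertices):

```javascript
// Sketch of merging per-face geometry into one buffer set. The drift
// disappears if each face's indices are offset by the vertex count
// already accumulated in the combined buffer.
function mergeFaces(faces) {
  const vertices = [], indices = [];
  for (const face of faces) {
    const base = vertices.length / 3; // verts already merged (xyz triples)
    vertices.push(...face.vertices);
    for (const i of face.indices) indices.push(i + base);
  }
  return { vertices, indices };
}

// Two triangles, each with local indices 0,1,2:
const tri = { vertices: [0, 0, 0, 1, 0, 0, 0, 1, 0], indices: [0, 1, 2] };
const merged = mergeFaces([tri, tri]);
console.log(merged.indices); // [0, 1, 2, 3, 4, 5]
```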


                          • Originally posted by MadGypsy View Post
                            When I realized this I had it make a new indices vector based on the new combined vertex data.
                            if you had read my onedraw stuff, you would have realised that my code builds pretty much the entire bsp into a single draw call...
                            make one vbo per texture/lightmap atlas/65k verts combo (two passes, first to count them, second to build the actual data).
                            generate new index buffers only whenever the view leaf changes.
                            the result gets good framerates even with r_nopvs enabled, as you're now discovering.
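                            A sketch of the two-pass batch build described above (data layout assumed; the lightmap-atlas and 65k-vertex splitting are omitted for brevity, so this only shows the per-texture counting and filling):

```javascript
// Sketch: pass 1 counts vertex floats per texture so each batch buffer
// can be allocated once at its final size; pass 2 copies the data in.
function buildBatches(surfaces) {
  // pass 1: count floats per texture
  const counts = new Map();
  for (const s of surfaces)
    counts.set(s.texture, (counts.get(s.texture) || 0) + s.verts.length);

  // allocate one fixed-size buffer per texture
  const batches = new Map();
  for (const [tex, n] of counts)
    batches.set(tex, { data: new Float32Array(n), used: 0 });

  // pass 2: copy the actual data into place
  for (const s of surfaces) {
    const b = batches.get(s.texture);
    b.data.set(s.verts, b.used);
    b.used += s.verts.length;
  }
  return batches;
}
```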
                            Some Game Thing



                            • I did read it and I thought I was doing it. I thought subgeometries combine into a single geometry, but it's more like a geometry holds multiple geometries that are just called subgeometries. I guess I still don't understand something about your method. I'm going to get it right. The performance is outstanding; thousands of verts are nothing.

                              I DO read your stuff, bro... religiously. I don't always understand it, and even when I do, I don't always understand the away3d end to accomplish it. Notice I haven't given up... no matter how much I have to keep going backwards. This is because I know 1) I AM going to succeed and 2) This is to be expected when you start something that you have basically no education in. 3) I am a stubbornly determined S.O.B. I mean, take out common 3D terms and their definition and whatever is left is what I had/have to learn to achieve this. You can add on learning an entire full-featured 3d API.

                              I think I am doing awesome for someone that woke up one day and ignorantly decided "I have a GED. I'm gonna be a heart surgeon starting today"
                              Last edited by MadGypsy; 04-24-2016, 01:16 PM.


                              • Here, let me break down what you told me so you understand where it all falls apart for me. Maybe even doing this will spark an Aha! moment for me.

                                you're thinking along the right sort of lines, but forgetting that the pvs isn't malleable enough for that to be practical.
                                SO COME UP WITH A HYBRID SOLUTION!

                                if you want good performance, you need to submit geometry in large batches. you cannot (normally) batch between multiple textures, so you need to subdivide the world into per-texture batches.


                                fully understand that.


                                if you have per-texture batches, it's fairly trivial to move the batchid into your face struct instead of your texture struct. then you can easily split it so you have multiple batches for textures with over 65k verts in surfaces that use them.

                                I'm basically with you here although I have no clue what you mean by moving the batchid from texture to face. I ASSUME this basically means instead of getting textures by face you get faces by texture (which is what I am currently doing)


                                if you do that, then you find that you have only a few vertex buffers. with that done, you don't have to shuffle any more vertices around at all. all that then remains is the index/elements buffer. this is significantly cheaper to update, it's just a few shorts. you can also calculate the maximum index count for each batch.

                                index/element buffer? I'm guessing index is indices. I have no clue what an elements buffer is.

                                the rest of the below is where this all falls apart for me (aka: temporal incoherence). I mean, I understand specks of it but as a whole I am lost.

                                this means you can then easily loop through the surfaces and just add their indexes to that face's batchgroup's list of indexes.
                                then just submit each set of dynamic indexes along with their static vertex buffers.
                                if the pvs hasn't changed then you can just reuse your dynamic indexes from the previous frame. even if it has changed, you can usually get away with using the last frame's indexes anyway while you have a thread working out the new indexes (aka: temporal coherence).

                                really, the only complication is lightmaps, although you're ignoring lightstyles in a q3-esque way anyway, so that'll trivialize it for the most part. the only remaining problem there is splitting your batches according to both the texture and lightmap-atlas-texture, as well as vertex overflows.

                                ignoring the frustum and using only the pvs is easy enough. you'll end up with a load of geometry behind the camera, but the gpu can cope with that more easily than you can via checking if leafs are actually onscreen - assuming at least one surface from that batch is onscreen anyway.

                                you're running inside a VM, so it's fairly safe to assume that the gpu will be noticeably faster than the cpu.
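                                As far as I can follow it, the per-frame work Spike describes boils down to regrouping surface indices by batch. A sketch (field names hypothetical; the vertex buffers stay static, and only these index lists are rebuilt when the view leaf changes):

```javascript
// Sketch: rebuild each batch's dynamic index list from the currently
// visible surfaces. Indices are assumed to already be offsets into the
// batch's static vertex buffer, so no vertex data moves at all.
function rebuildIndexes(visibleSurfaces) {
  const perBatch = new Map();
  for (const s of visibleSurfaces) {
    let list = perBatch.get(s.batchId);
    if (!list) perBatch.set(s.batchId, (list = []));
    list.push(...s.indices);
  }
  return perBatch; // submit each list with its static vertex buffer
}
```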
