Real Flash Quake


  • You read my code? Now, I'm not sure if I want to post it anymore... too much pressure (jk)

    My code should be getting lightstyles until it hits 255, just like you said many pages back in this thread: if(maps < 4 && lightdata[maps] != 255). I am going to pursue the hard way that has no performance issues.
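To make the style loop concrete, here is a minimal sketch of accumulating a face's lightmap across its (up to 4) lightstyles, stopping at the 255 terminator exactly like the guard above. TypeScript for illustration only; `Face`, `accumulateLight`, and `styleScale` are hypothetical names, not the engine's.

```typescript
// A face references up to 4 lightstyles; 255 terminates the list.
interface Face { styles: number[]; lightofs: number; }

// Sum each style's lightmap block, scaled by that style's current brightness,
// stopping at the 255 terminator -- the same condition as
// if(maps < 4 && lightdata[maps] != 255).
function accumulateLight(face: Face, lightdata: number[],
                         samples: number, styleScale: (s: number) => number): number[] {
  const out: number[] = new Array(samples).fill(0);
  for (let maps = 0; maps < 4 && face.styles[maps] !== 255; maps++) {
    const scale = styleScale(face.styles[maps]);
    const ofs = face.lightofs + maps * samples; // styles stored back to back
    for (let i = 0; i < samples; i++) {
      out[i] = Math.min(255, out[i] + lightdata[ofs + i] * scale);
    }
  }
  return out;
}
```

With all styles at full brightness this just sums the per-style blocks, which is why ignoring styled lights leaves shadows "incomplete" rather than wrong.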

    -------

    It's a shame it would be such a pain in the butt to atlas lightmaps according to where they are in relation to an adjoined face. I ran weld on e1m1 and it got rid of a lot of data but, of course, it destroyed the lightmaps. I have a poly-reduction script I want to try out, but I can't run it properly without a fully connected mesh. Of course I can run it right now on the welded map, but whatever results I get will be useless due to the lightmap UVs being destroyed. One day I'm gonna turn a 15000-poly map into about 1000 polys with no visual difference between the two. I don't really care if it's not a performance booster for me. Removing 14000 arbitrary vertices from a BSP could be a serious performance booster for somebody else's hobby engine. It could also be another way around upgrading limits. Plus it's just awesome.

    Side note: I removed the little script that sorts faces according to size before sending them to be allocated, and it broke every Quake map. That little sort function is apparently quite necessary. I just sort by height, tallest first. My lightmap atlases are generally pretty compact. Maybe not "Quake" compact, due to me completely ignoring blocklights, but certainly not sprawled out. In the below atlas, black does not necessarily indicate a gap. I still have one shitty spot in jmappy where it's making completely black shadows (under the entire ledge on the red armor side of the map).
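Here is a sketch of why that sort matters for a simple shelf-style allocator: without tallest-first ordering, short rects open short shelves that tall rects later can't fit into, and placements overlap or sprawl. This is a hypothetical minimal packer in TypeScript, not the engine's actual allocation code.

```typescript
interface Rect { w: number; h: number; x?: number; y?: number; }

// Shelf packer: place rects left to right; when a row fills, start a new
// shelf below the tallest rect of the current row.
function shelfPack(rects: Rect[], atlasW: number): number {
  // tallest first -- the "little sort function" that turned out to be essential
  rects.sort((a, b) => b.h - a.h);
  let x = 0, y = 0, shelfH = 0;
  for (const r of rects) {
    if (x + r.w > atlasW) { y += shelfH; x = 0; shelfH = 0; } // open a new shelf
    r.x = x; r.y = y;
    x += r.w;
    shelfH = Math.max(shelfH, r.h);
  }
  return y + shelfH; // total atlas height used
}
```

Because each shelf is as tall as its first (tallest) rect, sorting guarantees later rects on the same shelf always fit vertically.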



    Hmmm, my code that draws the lightmap atlas to the smallest possible dimensions from (256, 512, 1024, 2048) doesn't look like it's working. Surely that atlas could be at least one size smaller.

    Edit: actually, it is working. The next size down would be a quarter the size of that image, and the maps would not fit.
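The "quarter the size" point is easy to miss: halving the dimension quarters the area. A sketch of picking the smallest candidate dimension whose area can hold the packed maps (illustrative only; the real check would pack, not just compare areas):

```typescript
// Pick the smallest power-of-two atlas dimension whose area covers the
// packed lightmaps. One step down means a quarter of the area, which is
// why 512 can fail even when the 1024 atlas looks half empty.
function smallestAtlas(usedArea: number,
                       sizes: number[] = [256, 512, 1024, 2048]): number {
  for (const s of sizes) {
    if (s * s >= usedArea) return s;
  }
  return sizes[sizes.length - 1]; // clamp to the largest supported size
}
```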

    One thing I like about external lightmaps is their editability. One scenario is the secret doors in jmappy. Even after I suppressed dirt on the doors, the world around the door gets strange dirt. You can even see these exact lines in the atlas above (if you know what you're looking for). I could just paint them out before release.
    Last edited by MadGypsy; 04-25-2016, 07:39 PM.
    http://www.nextgenquake.com



    • I utilized CompactSubGeometry and nothing fell apart. I made one mistake when describing it: it does not include indices in the buffer. That makes sense, though. Indices are the only buffer whose length cannot be derived from the vertices' length. UVs, secondary UVs, tangents & normals all have a length that is locked to the vertices' length in some way.

      Getting it working was trivial. I had one little slip-up where I was trying to dump everything straight to CompactSubGeometry. It turns out that I have to interleave the buffers before I can instantiate them as ComSubGeo. The Away3D docs don't tell you this kind of stuff, so I had to pull out some major Google hackerzor skills to find maybe the only sentence on the entire internet that had the answer.


      The null, null below within interleaveBuffers is for tangents and normals, neither of which I have any data for ATM.
      Code:
      if (!compact)
      {
          // plain SubGeometry: one separate buffer per attribute
          l = _tsubgeo.length;
          _tsubgeo.push(new SubGeometry());
          _tsubgeo[l].updateVertexData(model[name].vertices);
          _tsubgeo[l].updateIndexData(model[name].indices);
          _tsubgeo[l].updateUVData(model[name].uvs);
          _tsubgeo[l].updateSecondaryUVData(model[name].lmuvs);
      
          _tgeo[j].addSubGeometry(_tsubgeo[l]);
      } else {
          // CompactSubGeometry: everything but indices interleaved into one buffer
          l = com_tsubgeo.length;
          com_tsubgeo.push(new CompactSubGeometry());
          com_tsubgeo[l].updateData(GeomUtil.interleaveBuffers(
              (model[name].vertices.length / 3),  // vertex count
              model[name].vertices,
              null, null,                         // no tangents/normals data ATM
              model[name].uvs,
              model[name].lmuvs));
          com_tsubgeo[l].updateIndexData(model[name].indices);
      
          _tgeo[j].addSubGeometry(com_tsubgeo[l]);
      }
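For anyone unfamiliar with interleaving, this is conceptually what happens: per-vertex position (3 floats), UV (2) and lightmap UV (2) get woven into one stride-7 buffer. This is a TypeScript sketch of the idea only, not the actual Away3D GeomUtil source, and it ignores the normal/tangent slots (the null, null above).

```typescript
// Weave separate position/uv/lmuv arrays into one interleaved buffer:
// [x y z  u v  lu lv | x y z  u v  lu lv | ...], stride 7 floats per vertex.
function interleave(count: number, verts: number[],
                    uvs: number[], lmuvs: number[]): number[] {
  const stride = 3 + 2 + 2;
  const out: number[] = new Array(count * stride);
  for (let i = 0; i < count; i++) {
    out[i * stride + 0] = verts[i * 3 + 0];
    out[i * stride + 1] = verts[i * 3 + 1];
    out[i * stride + 2] = verts[i * 3 + 2];
    out[i * stride + 3] = uvs[i * 2 + 0];
    out[i * stride + 4] = uvs[i * 2 + 1];
    out[i * stride + 5] = lmuvs[i * 2 + 0];
    out[i * stride + 6] = lmuvs[i * 2 + 1];
  }
  return out;
}
```

The GPU then reads one contiguous chunk per vertex instead of hopping between four buffers, which is exactly why it's easier on the GPU cache.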
      This little feature gave me an idea. I'll build a subset of my engine in the future to test it. It goes basically like this:

      CompactSubGeometry has 2 distinct features. First, it allows me to crush everything but indices into one buffer. Secondly, it automatically handles buffer overflow... carrying the remainder over to a new instance of itself within the same Geometry instance.

      Spike's method for PVS replacement is basically to make one big model out of all the faces that share a given texture and store an id on each face that represents the batch it belongs to. Then, when the PVS is considered, it makes visible all the batches that contain textures which can currently be seen.

      What if we took that a step further? Use ComSubGeo and increment the face's batch id by decimals every time the buffer overflows. When vis is run, instead of making all of a particular texture's batch visible, it will make only the mini-batch which contains that particular face visible. If I'm not mistaken, this is about as optimized as you can get. Instead of 1 big batch with 4 buffers, you end up with (something like) 4 small batches, each utilizing 1 buffer.
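The core of both versions of the idea is the same mapping: each face carries a batch id, and vis marks a batch visible when any of its faces is in the current PVS set. A minimal TypeScript sketch with hypothetical data layout (the mini-batch refinement just means the ids become finer-grained; the lookup is identical):

```typescript
// Given faceBatch[f] = batch id of face f, and the set of faces visible
// from the current leaf, return the set of batches to draw this frame.
function visibleBatches(faceBatch: number[], pvsFaces: Set<number>): Set<number> {
  const vis = new Set<number>();
  for (const f of pvsFaces) vis.add(faceBatch[f]);
  return vis;
}
```

With per-texture batches the ids are integers; with the overflow "mini batches" they would simply be subdivided ids, so fewer unneeded faces ride along with each visible batch.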

      Most of what I just said is experimental ideas for another day. The current method I am using (Spike's) has perfectly good performance. My one change is that all brush models are built with CompactSubGeometry. So far, it appears to work perfectly, and I have no reason to believe that will change. Any manipulations I do will be to the mesh that contains the CSG. I seriously doubt moving a brush model mesh around is going to damage the CSG.
      Last edited by MadGypsy; 04-26-2016, 11:42 AM.



      • interleaved vertex data is easier on the gpu cache, and hard to avoid if you're using d3d9.
        ideally each batch would have a single interleaved vertex buffer, and a single index buffer. ideally different batches would reuse the same vertex buffer too, and possibly even the same index buffer (with different offsets into it).

        my 'method' is to use static vertex buffers, and dynamically generate index buffers by just concatenating the per-surface index data of the surfaces that are visible to the current leaf, and recalculating the index buffer only when the view leaf has actually changed.
        this has very low cpu overhead, but the 'unused' parts of the vertex buffers may increase the demands on the gpu's cache, although I doubt by a significant amount.
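That static-vertex / dynamic-index scheme can be sketched like this (hypothetical names, TypeScript for illustration): the vertex data never moves, and only the index buffer is rebuilt, by concatenation, when the view leaf changes.

```typescript
// Rebuild one index buffer by concatenating the per-surface index lists of
// the surfaces visible from the current leaf. Called only when the view
// leaf changes, so per-frame CPU cost is near zero.
function rebuildIndices(surfaceIndices: number[][], visible: boolean[]): number[] {
  const out: number[] = [];
  for (let s = 0; s < surfaceIndices.length; s++) {
    // indices already point into the shared, static vertex buffer
    if (visible[s]) out.push(...surfaceIndices[s]);
  }
  return out;
}
```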
        by ignoring the view frustum, you improve cpu cache performance, but will lose any depth ordering benefits (which is a problem when you're sorting by textures first anyway, and is mitigated by the fact that you're still using pvs), and you end up with many surfaces being submitted to the gpu which are offscreen (the gpu will need to run through your vertex shaders more, but will otherwise cull those offscreen surfaces with less fuss than walking through them on the cpu especially in a non-native language).
        the one concern is that by including the geometry behind the camera, you may be generating batches which would not otherwise be visible - with gl3 hardware you can use texture arrays to mitigate that, or even bindless textures and not care about it at all. in my experience, the benefits of skipping all the cpu frustum checks outweigh the performance lost due to extra batches in the majority of expensive scenes (especially so with texture arrays).
        Some Game Thing



        • @spike - I'm letting what you said stew in my brain. I'll revisit it when some brain cells start firing

          -----

          I made another standalone build of my engine for you guys to check out. It's packed with a converted e1m1 (converted from BSP to an AMF3 Object) utilizing all the replacement textures that ATF didn't complain about the squareness of. Same deal: double-click the .exe and use the same controls as last time. There isn't animated water in this, because cascading the BSPUtility changes through my engine turned out to be a royal pain in the ass and I stopped when I got the world to render.

          https://drive.google.com/open?id=0B_...mQ0UEh1QmlsVW8

          Something I would like to note here. A lot of people in this world say "flash is dead, flash is obsolete, html5 will replace flash". Well, for one, my engine smokes WebQuake under identical situations (i.e. a map with no entities or anything else beyond what my engine can currently do). There is absolutely no contest. The one situation that isn't identical is that WebQuake uses a classic PVS and I'm rendering the entire world... I still smoke WebQuake.

          See for yourself. Download my example and fly around an entirely and continuously rendered e1m1 without even a burp. All these HTML5 people can kiss my ass for the next X years. To think you are going to outperform me with javascript is extremely laughable. As laughable as me outperforming CryEngine with flash. It simply isn't realistic. Before you can outperform me with javascript, javascript itself will have to see a major increase in its read/execute speed. Bottom line: you can have all the GPU whatever you want but, if your language is slow as fuck by design, you have already lost.

          Side note: you may notice that some shadows look weird. I am positive I am properly atlasing lightmaps and calculating the UVs. I believe the problem is lightstyles. Since I am not using them, areas around styled lights may be showing a lightmap that is not in sync with the ones not affected by a lightstyle. In short, they aren't wrong... more like incomplete. It's also possible that my faux linear filter is making messes BUT I really doubt this. Something strange would have to be there before it could exaggerate it. I'm sticking with lightstyles being ignored.
          Last edited by MadGypsy; 04-26-2016, 03:22 PM.



          • heh, I was just reading some results data and +numTextname hit my eye. Which got me thinking about animated textures. I don't want to google it, be given hints or clues or anything. This one is tricky.

            I'm going to figure this one out with nothing but the Away3D source. Not even the docs... just the real code. Maybe it will turn out to be real easy. As of this moment though, I'm stumped.

            Really, this WOULD be simple if I could just turn it all into a spritesheet but, if that doesn't fuck up repeat textures, I don't know what will. Going "flashy" and putting a spritesheet in a masked sprite that replaces the texture is probably a bad idea regarding memory. Constantly assigning a new texture also sounds stupid to me.

            I need to figure out texture arrays. Of course I could google it and probably find something pertaining to my API. You just learn so much more by making mistakes that need to be solved and then solving them. If it weren't for errors, there would be no engine (worth a crap).



            • ignore texture arrays. they're a gl3 / dx10 feature, and your api is probably too limited to support them - especially if it doesn't even support non-square textures.

              for animated textures all you need to do is switch the textures separately from the geometry. this might end up with multiple batches using the same texture, but isn't worth optimising for. obviously you'll still need to write the code that figures out which texture to use for each batch each frame (this is noticeably easier without texture arrays, lucky you).



              • @spike - Doing it the way you suggested is probably exactly right but, I hate the entire idea. You are correct, texture arrays are not supported by my API or its dependencies but, that doesn't mean they can't be invented with some cunning. I dove deep into my API to try and find where it implements its AGAL (Adobe Graphics Assembly Language, i.e. the shader language). I have not googled, researched or studied anything but a bunch of AGAL opcodes straight out of my API source. This is what I believe is possible:

                1) All +num textures can be stored in a Vector.<Vector.<BitmapData>>, where Vector[n] = texture arrays and Vector[n][c] = +ctexname

                2) Every timed pass of my shader will simply use the next image in the array as the source and the current image as the destination, overwriting every pixel with the new data.
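Whichever mechanism swaps the pixels, some clock has to pick the current frame from each +num set. A tiny sketch of that part (TypeScript for illustration; classic Quake steps animated textures at roughly 10 Hz via (int)(time*10) % total, which is assumed here, not taken from this engine):

```typescript
// animSets[n] would hold the +0..+k frames of texture n; a shared clock
// selects the frame index every render tick.
function currentFrame(frameCount: number, timeSec: number, hz: number = 10): number {
  return Math.floor(timeSec * hz) % frameCount;
}
```

Every batch using an animated texture can then be pointed at animSets[n][currentFrame(...)] each frame, regardless of whether the swap happens via texture reassignment or a shader trick.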

                Now, is this even possible? I don't know. I do know that I can modify every pixel of an image within the image in a more generic way.

                ex: where x is actually the red channel, and I have a line or two in my actual code that is feeding the shader fc0.x
                Code:
                // ***** vertex shader - the simplest around *****
                
                op = mul4x4(va0, vc0);
                v0 = va1;
                
                ####
                
                // ***** fragment shader *****
                
                ft0 = tex<2d,repeat,linear,nomip>( v0, fs0 );
                ft0.x = ft0.x * fc0.x;
                oc = ft0;
                The tricky part is, I believe this is a one-shot deal. In other words, I can only feed the shader something like 10, one time per pass, and that 10 affects everything in the image, where I really need to feed the shader unique data per pixel. If this doesn't work, I have another idea, but it might be really expensive: simply skin the geometry with a gif. I have a feeling that will be terrible, though. I really, really don't want to use the way you suggested, and I hope I find a more elegant solution.

                My only problem with your suggestion is the disaster code I will have to write to make it work. I don't want 100 lines that do nothing but dump and reassign textures to a spot of geometry. I have to at least try to get a shader to do the job. If I fail, meh... it will just result in me figuring out an even better way. As it stands right now, I have crushed everything a Quake engine does to get to an equal point with my engine down to a couple hundredish lines. Even if I added in lightstyles, my code might get 10 or so lines longer. I can't bring myself to dump a procedural mess in the middle of my code.

                One of the hardest parts about getting the scope of a "real" Quake source (for me) is how many times it passes up data and then writes a new function to get that data later. It's all over the place, and in many cases I can't find any good reason why. My code is very, very different. It passes up nothing but things I currently decided to skip entirely. Someone like me, who could greatly benefit from a clear and concise source, will praise god when they find my source, cause I can basically replace an entire client engine (i.e. everything necessary for SP) with about 5 classes. No repeating code, none.


                I don't know how this could be made any more concise and direct. I could get rid of all those new face fields, but I need them for BSP-to-AMF conversion. Ah, I left the vertex stripper in; that can go. The right screen split is a continuation of the left.



                That was my BSP utility. Wanna see the actual engine? The below is 100% why I convert from BSP to AMF. That one image is EVERYTHING. No parsing, figuring, waiting...just "go". This is why it does not take a month for my map to load. It's already ready to go.
                Last edited by MadGypsy; 04-27-2016, 02:02 PM.



                • mobile test... totally works. I have an old-school LG Optimus. If it runs on this, it will probably run on anything. Anybody want to test an iOS version for me?



                  • This is bad-ass. My primary focus when I started this engine was small 3D mobile games. Seeing my engine running this smooth on a crap phone with 12000+ polys visible is a huge boost in determination to get this done. I even have some pretty decent invisible controls. I can currently fly all over this map on mobile without losing control or getting lost. It was actually pretty tricky to make it work. There is currently no backpedal or strafe in my mobile version but, all I have done so far is start a new mobile project, move all of my code to it, add mobile stuff to my Main document class, and tweak the mouse controls from my PC version to work more mobiley. In other words, I haven't even really begun to work on this.

                    Last edited by MadGypsy; 04-28-2016, 01:28 PM.



                    • I packaged it up so you guys can check it out. Simply download the apk, dump it somewhere on your android device, go to apps, open the file manager, navigate to the apk, select it, and when prompted choose package installer. Done.

                      The controls are simple. Touching the screen moves you forward. Moving your finger in any direction on the screen points the camera in that direction. If you run out of screen to move your finger, simply lift your finger and place it somewhere else on the screen.

                      https://drive.google.com/open?id=0B_...mQ0UEh1QmlsVW8

                      *note: android devices have a confusing folder hierarchy that doesn't always contain what you'd expect. Personally, I have found that the easiest way to do this is to dump the apk in your downloads folder. Alternately, you could probably just visit my link on your device and possibly install it directly... There is a setting on your android device under settings/security: "untrusted sources". You should be able to install straight from my google drive if you have "untrusted sources" checked.

                      *note2: There is a small wait before the map completes its load (on my device, anyway). It will load and it will work; you just have to be patient for a minute. When you see the stats pop up, the map should be just a few seconds behind. Once it loads, it runs perfectly smooth. I say these things with a dab of confidence cause your device is probably a lot better than mine. If for some reason it doesn't run, or doesn't run smoothly, there is nothing I can do to help you at this point.
                      Last edited by MadGypsy; 04-28-2016, 02:29 PM.



                      • And now I have a pure flash version as well, running on my website. I think that covers everything except Mac but, I'm guessing the iOS version would also be compatible with Mac? IDK, I know nothing about Mac. I spent time today making all these different builds in order to see how consistent everything is from one platform to another. All the versions seem pretty consistent but, the pure flash version is a little clunky. For some reason you have to click the screen twice before it will gain focus in the flash version. I see no reason why this is true, but it is true...

                        WorldSpawnBSP_flash

                        Aside: Apparently you can size the browser however you like and my engine will automatically fill the entire page. No scrollbars or movie overflowing out the border or any of that. However, I can't take any credit for that; Away3D must be handling it somewhere in their API. From now on, I am mostly going to use my pure flash version to display my progress. This way there is nothing to download; just visit my page. I don't feel like the pure flash version is quite as good as my other versions but, it will suffice while I am still developing.
                        Last edited by MadGypsy; 04-28-2016, 05:23 PM.



                        • Originally posted by MadGypsy View Post
                          [ Never mind. Was going to say it didn't work for me. Now it does. Go figure, maybe longer load time than I expected or something.]

                          That's not bad at all.
                          Quakeone.com - Being exactly one-half good and one-half evil has advantages. When a portal opens to the antimatter universe, my opposite is just me with a goatee.

                          So while you guys all have to fight your anti-matter counterparts, me and my evil twin will be drinking a beer laughing at you guys ...



                          • Yeah, there is about a 20-second lag before the game loads. :/ I can't really do anything about that. The same thing happens on mobile. My PC version has like a 1-second lag (for me). But hey, once it loads, it runs pretty smooth.

                            I'm about to do my final rewrite of how I am constructing a BSP. After reading what Spike said about interleaved buffers, I decided I'm going to utilize that method. The main reason is that it will force my batch meshes into smaller batches. Couple that with the PVS and I'll have this as optimized as I can possibly make it. I like how it renders the whole world with no issue but, how long will that last once I start adding all the other stuff (enemies, etc.) into the render... I'm thinking my current method is only awesome if you never intend to add a single thing to the world, much less hundreds of things.

                            Also, I need to invent something like SuperMesh. I realized that while my entities are singled out (i.e. not part of any other mesh), technically they are no better than world ATM. Consider a brush entity with 3 textures. For me this comes out as

                            _brush[n]
                            {
                            texture1{},
                            texture2{},
                            texture3{}
                            }

                            where each texture Object contains the mesh data for each texture. This is a big problem because _brush[n] does not equal a brush at all; it equals a collection of meshes. Of course you can't have more than one texture per mesh, so I can't just simply join these but, leaving them unjoined means I will have to apply code to every mesh of the brush. I need to find a way to create a "SuperMesh" (wow, I just had some pretty aggressive déjà vu) that can act as a one-shot container for controlling all of its child meshes.

                            To put this in perspective. A lift utilizing three textures, as it is now, is no different than moving 3 lifts. I need to encapsulate these meshes some way.


                            Edit: WOW! I need sleep so fucking bad. This is so super easy it's embarrassing. ObjectContainer3D. Or even better...

                            for (var n:int = 0; n < _brush.length; n++)
                            {
                                // container is an ObjectContainer3D instance
                                for (var name:String in _brush[n]) container.addChild(_brush[n][name]);
                            }

                            I like SuperMesh better.
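The container idea boils down to this: the per-texture meshes become children of one object, so one transform moves them all. A minimal TypeScript sketch with hypothetical classes (not Away3D's actual Mesh/ObjectContainer3D, which also handle rotation, scale, and scene-graph parenting):

```typescript
// Stand-in for a per-texture mesh: just a name and a position.
class Mesh3D {
  constructor(public name: string, public x = 0, public y = 0, public z = 0) {}
}

// The "SuperMesh": a one-shot container controlling all of its child meshes.
class SuperMesh {
  private children: Mesh3D[] = [];
  addChild(m: Mesh3D): void { this.children.push(m); }
  // moving the container moves every child at once -- a 3-texture lift
  // stops being "3 lifts" and becomes one object
  translate(dx: number, dy: number, dz: number): void {
    for (const c of this.children) { c.x += dx; c.y += dy; c.z += dz; }
  }
  get numChildren(): number { return this.children.length; }
}
```

A real scene graph stores the offset on the parent and composes transforms at render time instead of rewriting child coordinates, but the controlling-many-through-one effect is the same.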


                            Honestly, everything leading up to now has been me figuring out what the hell I'm even doing AND how to do this thing that IDKWTF I am doing. My next rewrite will be final. I have learned so much in the past couple months. Having Baker and Spike helping me is pretty much the only reason I have gotten this far. I feel like I'm growing a familiarity with my API and Quake in general, to where I should be a lot more independent in the coming revisions. I have a lot of ideas, but I can't utilize them til I make an absolute and final method for constructing a BSP. I believe I may have finally come to that method. I don't regret the other gazillion rewrites, though. Every last one of them taught me something. I will not stop doing this part over and over and over again until my method is elite. I may not be "remaking darkplaces" but my stuff is going to garner respect if it's the last thing I do in this world. If most people don't even realize/believe it's flash, then I've finally done something right.
                            Last edited by MadGypsy; 04-28-2016, 06:57 PM.



                            • iOS does not support Flash. It was a day-zero decision from Apple. Don't ask me why. It's about the only thing I could imagine working against this whole project. Sorry about that.



                              • I can absolutely export an iOS app from my API. I'm telling you guys, every damn company in the world could suppress flash and I could still make apps for their devices. The companies can't suppress wrappers, because they would have to get rid of their own executables to do so. It's simple.

                                device executable[ AIR captive runtime [ my app ] ]

                                AIR (adobe integrated runtime) is just "Super Flash"...flash with operating system level access.

                                My environment is up to date and the current release was only 4 days ago. They left in the iOS export. It's still possible.



                                Build an App for iPhone or iPad with FlashDevelop

                                FlashDevelop can handle a lot of languages so, let's skip to step 2 in that article to determine what we use to make this app.

                                2) Create a new project with FlashDevelop selecting AIR Mobile AS3 App

                                Adobe is a huge company. Do you really think they are going to let other companies obsolete their tech/products?

                                -------

                                I was curious about something. My environment is 3rd-party, so I wondered whether Adobe directly supports ways to export to iOS. They do. It's built into Flash Professional (a CS product) and Flash Builder (Adobe's "version" of FlashDevelop).

                                http://www.adobe.com/devnet/air/arti...n-ios-faq.html
                                Last edited by MadGypsy; 04-29-2016, 11:32 AM.

