The Dark Mod Forums

greebo

Everything posted by greebo

  1. Ok, this one is mine, I guess. You're saying the shared_ptr assertion arises when you have the text entry field empty and tick the option?
  2. This is a good decision; we would have asked you for that anyway if you applied for beta-mapping. You don't need to have The Dark Mod installed - you can map with vanilla Doom 3 just as well as far as creating textures, models and such is concerned, which is enough to show off your capabilities. The textures can be very large, although our texture guidelines describe an upper limit of 1024x1024 (mainly to reduce the overall texture file size). However, it's recommended to save your ultra-high-resolution textures for future purposes, so don't discard them right away. Textures at 1024 are fine for diffusemaps; for normalmaps and specularmaps you might consider sticking with 512. The poly limits can't be carved in stone, because they heavily depend on the number of shadow-casting lights in your scene. Add one light in a stupid way and your framerate hits the ground.
  3. I'm currently working on it. When it's finished, it can be found here: http://www.thirdfilms.com/darkwiki/index.p...mpilation_Guide edit: the basic tutorial is there, please tell me if problems occur when following this guide.
  4. I get that GTK assertion as well. I thought it was about my specific GTK+ setup, but it seems that there are some non-standard items used here?
  5. The X1600 is definitely ok for Doom3/Darkmod, which I suppose is the only game you'd ever play on this thing.
  6. Yeah, it's committed to the SourceForge code repository, but not released yet. You can always build DarkRadiant from source, if you're inclined to do such things (I could post instructions on the Wiki if there is demand). If you don't want to mess around with compilers, you're better off waiting for the next release.
  7. I will do a full compile on angua's machine and see if it's reproducible there. I assume you cleared the usual suspects (user.xml and consorts)?
  8. That issue should be resolved on SVN. (However, I opened another issue in the bugtracker concerning the redesign of the handling of the various mainframe layouts, as there are a tad too many if-clauses around for my taste.)
  9. Strange, nothing here. I even tried to change the background image, but everything seems ok. Maybe there is an old module residing in your install folder (textures.dll)? I removed that one lately.
  10. When does it occur? I'll see if I can reproduce it with the latest SVN. edit: Everything seems to work on my end.
  11. I got a linker error caused by references to g_FileChooser_nativeGUI, which only applied to Win32 builds, so I disabled those lines. What are your plans regarding the file chooser? Will it be replaced in the future by a VFS file chooser? If not, I could imagine that the native Win32 file dialog has some charm for a lot of Win32 mappers (including me, though I could live with the GTK one as well). Having the Win32 context menu during file open is one of its admittedly few advantages.
  12. Ah, interesting. For me the question is why the system is designed in such a way that the destruction of an object triggers its re-insertion; I don't know any of the implementation details, of course, so it may really be necessary. Sounds weird nonetheless. This means there is only one HashedCache left, in renderstate.cpp, if I recall correctly.
  13. I know a German site that's listing some 3DMark statistics for the various mobile graphics cards (http://www.notebookcheck.com/Vergleich-mob...rten.358.0.html). They are listed like this:
  14. Okay, the TexturesCache is gone now. All the important functionality has been moved into the Shaders module, including a method to load a texture from disk, as needed by the Overlay module. A bit of tidying remains to be done in the CShaders class, but that should be all that's left.
  15. I'm finished with the image post-processing; this is what I've changed: The texture gamma is now applied after the texture has been scaled, to reduce the number of pixels to be scanned (it won't have much impact, but it's a start). I'm pretty sure the GtkRadiant mipmap creation code was broken (it appeared to pass the wrong dimensions, so it ended up with a full-sized texture and a single 1x1 mip, but nothing in between). I changed this to use the gluBuild2DMipmaps command, which lets the driver/hardware take care of the calculations. This should reduce the delay when loading textures and may even have some influence on rendering speed. The next step is to remove the old TexturesCache module, as it's no longer needed.
  16. Exactly. The TexturePtrs are stored in a std::map, so the use_count never reaches 0. When a shader is destroyed, it calls the checkBindings() method, which throws out the TexturePtrs with unique()==true, which in turn triggers the destructor. True, the rendering itself is not affected by this, only the mode switching. Once I'm through with the changes, the very first switch between rendering modes will still take some time, but afterwards it happens at almost no cost. A 16x16 image has 256 pixels in total, and I step over them with a 20-pixel increment. I already had that thought when choosing the increment, and I figured there wouldn't be any 4x4 images. Even if there were, it wouldn't do any harm, as the flatShadedColour will take the colour of the very first pixel.
  17. Time to report what I've done so far: I implemented the GLTextureManager and the ShaderLibrary as indicated by the diagram. The CShader object realises/instantiates its textures as soon as they are demanded by the rendering system, using the TextureConstructors. I replaced all qtexture_t* with TexturePtr (a boost::shared_ptr, of course), and all IShader* pointers have been converted to IShaderPtr. Textures are now unloaded from OpenGL in the destructor of the Texture structure, which makes more sense IMO. The contents of shaders.cpp have been refactored into the Doom3ShaderSystem (which owns the ShaderLibrary and the GLTextureManager, btw) and the ShaderFileLoader class (parseShaderDecl); the file shaders.cpp is gone. The internal reference counting of Textures is gone and the refcounting of CShaders is deactivated; this is de facto handled by the boost::shared_ptr class now (via unique() or use_count()). Shader textures are no longer removed from and reloaded into graphics memory when switching render modes >> major performance boost, see below. What's missing: the TextureConstructors are very basic, with no MapExpression evaluation yet (that's a separate task). The image post-processing is only partly implemented (I'm working on that right now). The TexturesCache is still there (and soon to be removed, once I'm done with the latter point). Regarding the image post-processing: the "old" system realised and unrealised every single texture when switching between rendering modes, which is the main reason for the huge delay. Not only did the shader system reload every texture from disk, the image post-processing was also performed each time, which is insane IMO. This is resolved now, but I still have to test for possible "texture leaks". The image processing itself was definitely not optimised in the old system.
The gamma for each texture was applied even at a gamma value of 1.0 (which means no change), so every single pixel was read and re-saved into the pixel storage without any change (and of course this was done before a possible texture downsize). Multiply that by the number of textures in bonehoard or mansion_alpha and you know what to expect. Another thing was the calculation of the flatshade colour, which basically takes the mean value of the RGB channels of the image. The old system took every single pixel into account, which is totally unnecessary - I changed it to sample every 20th pixel (this could still be reduced, but I haven't run any test passes) and it yields good results. So far, so good.
  18. I know, I was the one who transferred it ;-)
  19. @Ishtvan: Let me know when you're finished with posting this, then it really should go into the Wiki, as this thread will surely get lost.
  20. I also saw that announcement, but I gave it a try anyway, because I didn't want to bother with ZoneAlarm any longer (and I don't care about the discontinued support, I don't need it). It'll do for the next few months; maybe I'll switch to something else when it's really outdated.
  21. It's on FTP now: http://208.49.149.118/TheDarkMod/Files/ATI...nator_v1.21.zip
  22. I wouldn't recommend ZoneAlarm, it tends to bloat and become slower and slower over time. Last month I switched to Sygate and I'm rather pleased so far.
  23. Should I upload this version to our FTP in case the links die or the 1.21 Compressonator becomes hard to obtain?
  24. @Ascottk: Which version of The Compressonator have you been using? There is only one specific version of the ATI tool that's working for Doom3. (You probably knew this, but I wanted to make sure.)
  25. The camera drag was always there and should exist in DoomEdit as well. @Paste Texture Coordinates: this casts the UV coords from one patch onto another patch of equal dimensions (without changing the shader). I never used it in practice, but it was easy to implement and it may come in handy. There is already a DarkRadiant section on DarkWiki, and I guess it makes most sense to leave those two in the same wiki.