The Dark Mod Forums


Everything posted by nbohr1more

  1. The default design ignores depth in an attempt to address cases where the player can frob things inside containers but cannot see them ( see the many missions where cash boxes are under the bar counter at a pub and you cannot see all the coins ). To avoid this "frob highlight through objects" behavior, simply set "r_frobIgnoreDepth 0"
  2. That seems to be a reasonable option. It still doesn't fully answer how we should handle image program behavior. Ideally this would incorporate Orbweaver's idea of pre-processing the raw data with image programs prior to compression, and I guess we would need to generate new images for every permutation required by alternate material defs included in mission packages ( could be pretty bloated storage-wise ).
  3. Yes, it shouldn't be hard. We are first asking that players and developers test the frob color settings to see if an ideal or agreeable new default can be found; here is my current configuration:
  4. @stgatilov When you converted normals to BC5, did you generate mipmaps? If so, may I kindly request that you commit these converted textures to SVN in batches? Did you get a chance to measure mission load times with BC5?
  5. I'll have to test, but I believe the entire Doom 3 workflow is currently available, including the forceHighQuality material keywords, so option 1 may already be in place as long as we keep at least some of the uncompressed images around. Probably glass and water normal maps would be the most important, but I guess there might be a few AI with heightmap materials that would need to be kept this way?
  6. Yes, there are 3 options to resolve this:
     1) Retain an uncompressed copy for the subset of materials that need image programs against DDS ( the default Doom 3 workflow )
     2) Decompress on image load if image programs are found in the material
     3) Replace the image programs with GLSL shaders that accept DDS
     I have no objections to option 2 as long as you can also force loading of the uncompressed image if desired. I guess whoever implements a solution gets the privilege of choosing.
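Option 2 above can be sketched as a simple load-time decision. This is a minimal illustration, not actual TDM engine code: `decompress_dds` and the image-program callables are stand-ins, and real image programs would be things like heightmap() or addnormals() operating on pixel data.

```python
def decompress_dds(data):
    # stand-in: pretend the compressed bytes expand to raw RGBA
    return b"RGBA:" + data

def load_texture(data, is_dds, image_programs, force_uncompressed=False):
    """Decompress a DDS payload only when image programs must run on it
    (or when uncompressed loading is explicitly forced); otherwise hand
    the compressed data through untouched."""
    needs_raw = bool(image_programs) or force_uncompressed
    if is_dds and needs_raw:
        data = decompress_dds(data)
    for program in image_programs:
        data = program(data)  # each program transforms the raw data
    return data

# usage: a material with an image program forces decompression first
processed = load_texture(b"BC5", True, [lambda d: d[::-1]])
untouched = load_texture(b"BC5", True, [])  # no programs: stays compressed
```

The key property is that materials without image programs pay no decompression cost, while the `force_uncompressed` flag preserves the escape hatch requested above.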
  7. It is hard to know when a trap is being created, and springing to action without consideration is a recipe for entrapment. Both of the participants in this "argument" have also exhibited "troll style" posts. It could very well be that whatever action we take here might be used as propaganda to claim "The TDM community is toxic because, etc..." or "TDM Admins are draconian..." We have typically followed a forum moderation model similar to TTLG, where discussions are largely unfettered unless they are outright offensive to people. Up to this point, most TDM members have been sophisticated enough that even heated discussions are filled with informative debate. There have been newer participants who have made things more challenging and have provoked discussions about whether stricter moderation practices need to be enforced. In the "good ole days", if anything controversial was posted, the community would quickly swoop in to hash it out logically, either agreeing to disagree or debating until boredom set in.
  8. @OrbWeaver @greebo can the release include the new RGTC preview fix? The sooner we have all the pieces in the wild, the sooner we can fully validate it for production use.
  9. Not sure how detrimental such a change would be. Missions like "The Painter's Wife" and "Behind Closed Doors" have an auto-map, so what is the distinction about this approach that makes it "better"? Conversely, if the capability is added to core TDM for either "mini-map" or "auto-map" ( or both ), with the option for mappers to offer this via a clean implementation, does it really harm the overall project? Won't the community reaction to authors' use of these features determine whether they see wide adoption? If the fear is "too much hand-holding", can we not just ensure that there is a cvar to disable this feature completely regardless of its inclusion in a mission? I guess my point of view is: "If having an optional feature makes a number of mission authors happy and that encourages them to create more content, then it may be worth it regardless of any negative feedback from hardcore fans. The hardcore are smart enough to use settings to disable such features." Of course, I would be 100% against the ability for players to add this to existing missions via a menu setting. If it were ever implemented, its availability should depend on both the mission author's preference and the player's setting together.
  10. As long as you aren't hiding monster clip, path finding should work as expected.
  11. It looks like the auto-generated rotation hack models aren't loading. Try deleting and re-installing the FM. Make sure that TDM has read \ write permissions in the folder where it's deployed.
  12. Well, here is the reference anyway: https://docs.microsoft.com/en-us/windows/win32/direct3d10/d3d10-graphics-programming-guide-resources-block-compression
  13. Ouch. Yeah, probably not worth it for whatever quality uptick you supposedly get.
  14. @OrbWeaver can you also add "GL_COMPRESSED_SIGNED_RG_RGTC2" import support? According to Microsoft, the signed format is better suited for normals.
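A small numeric sketch of why the signed RGTC variant can suit normal maps better than the unsigned one: an 8-bit snorm channel represents 0.0 exactly, while the usual unorm remap (n * 0.5 + 0.5) cannot. This is generic 8-bit quantization arithmetic, not TDM engine code.

```python
def unorm8_roundtrip(x):
    # encode [-1, 1] into [0, 255] via the *0.5+0.5 remap, then decode
    v = round((x * 0.5 + 0.5) * 255)
    return v / 255 * 2 - 1

def snorm8_roundtrip(x):
    # encode [-1, 1] directly as a signed 8-bit value
    v = max(-127, min(127, round(x * 127)))
    return v / 127

flat_error_unorm = abs(unorm8_roundtrip(0.0))  # ~0.0039: slightly off
flat_error_snorm = abs(snorm8_roundtrip(0.0))  # 0.0: exact
```

A zero component corresponds to a perfectly flat spot on a normal map, which is extremely common, so representing it exactly reduces visible shimmer on smooth surfaces.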
  15. Can you try the latest SVN? Orbweaver committed a fixed BC5 loader
  16. I believe that the current design works like this:
      image_useNormalmapCompression 0 — the image is loaded uncompressed, an environment variable is passed to glprogs, and the GLSL standard branch renders the shader.
      image_useNormalmapCompression 1 ( default ) — the image is compressed on load, an environment variable is passed to glprogs, and the GLSL RGTC branch renders the shader.
      The open question is: what happens if the cvar is set to 1 and the texture is already compressed? If the detection code bypasses the attempt to compress the image, or the compressor recognizes the image as already matching the desired destination format, then the shader should do its job, since the correct branch is activated.
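For context on what the RGTC branch has to do that the standard branch does not: BC5/RGTC stores only the X and Y channels of a tangent-space normal, so the shader reconstructs Z from the unit-length constraint. A pure-math illustration of that reconstruction (not the actual glprogs code):

```python
import math

def reconstruct_normal(x, y):
    """Rebuild a unit normal from its two stored BC5 channels.
    z = sqrt(1 - x^2 - y^2); the clamp guards against small negative
    values caused by compression error."""
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# a flat surface stores (0, 0) and decodes to the straight-up normal
flat = reconstruct_normal(0.0, 0.0)  # (0.0, 0.0, 1.0)
```

Since the sign of Z is assumed positive, this only works for tangent-space normal maps, which is why the engine needs to know it is dealing with a compressed normal map and select the RGTC branch.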
  17. Isn't that what r_shadowMapSinglePass 2 currently does for normal lights? Maybe create a cvar that does the same but only for ambient lights?
  18. OK, sorry to be a pain, but I did ponder one other sort of ugly use-case. The mission "Sir Talbot's Collatoral" uses many small and subtle lights, referred to as "bounce lights", to improve the look of ambient-lit areas. If some author went crazy with this lighting method, performance could be very bad in a large open area such as a warehouse, courtyard or foyer. Turning off these bounce lights at a distance would be far less noticeable, since they only add a subtle effect anyway.
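The distance cutoff suggested above can be sketched as a simple cull pass: drop small "bounce" lights once the viewer is far enough away that their subtle contribution is invisible. The cvar name and the light tuple layout here are hypothetical, purely for illustration.

```python
import math

BOUNCE_LIGHT_CUTOFF = 2048.0  # hypothetical "r_bounceLightCutoff", in world units

def visible_lights(lights, view_pos):
    """Keep every normal light; drop bounce lights whose volume lies
    entirely beyond the cutoff distance. Each light is
    (origin, radius, is_bounce)."""
    kept = []
    for origin, radius, is_bounce in lights:
        dist = math.dist(origin, view_pos)
        if is_bounce and dist - radius > BOUNCE_LIGHT_CUTOFF:
            continue  # subtle fill light, too far away to matter
        kept.append((origin, radius, is_bounce))
    return kept
```

Subtracting the radius keeps the test conservative: a bounce light is only dropped when no part of its volume is within the cutoff, so nearby pools of fill light never pop out.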
  19. A little "No Honor Among Thieves" v4 preview. Not too revealing. See if you can spot the new texture I created...
  20. Sorry, forgot one critical edit. Still this answer makes sense. Doom 3 is a fillrate monster and continues to be one in the TDM engine era.
  21. In all previous Doom 3 discussions, one of the main areas of concern was that this renderer is "not deferred", meaning that regardless of whether the entities are visible, the engine will attempt to render them. This means that if a light touches entities that are obstructed by geometry but are not culled via visportal, then the engine will perform all the setup for that unseen geometry. This is generally offered as one explanation for why a scene with good visportal design performs dramatically better than a visually identical one with no visportals.
      Of course, we would wish that all maps have perfect visportal design, but even if this comes true, what about the case where shadow casters exist behind obstructive geometry? As I recall, vanilla Doom 3 treats lights as having a possibly infinite shadow casting range ( I believe Doom 3 BFG constrains the potential shadowed area to the light volume cube ). This means that many shadows that are invisible, but extend away from the player behind an occluder, can consume resources.
      It has been said that there is no "solution" to shadows behind occluders, since there is the potential for incorrect shading on the unobstructed sides behind the occluder, but I would offer that mappers might want to try that tradeoff? Especially if the difference is a negligible sliver of light or shadow being removed. ( See a previous discussion about some mappers wanting "antiportals" \ "func_occluder" entities. )
      Does this make sense? Of course, Shadow Maps also remove some of these factors, but since shadow map mode is still hybrid, the stencil shadow concerns still arise.
  22. To be clear, it is my understanding that lights still perform the following actions:
      1. If a light is visible in the view frustum: visleaf, portal view, and scissor calculations
      2. Gather all vertex data within the bounding box of the light
      3. Skin all vertices gathered in step 2 ( expensive CPU-wise )
      4. Upload all skinned triangles to the GPU ( expensive regarding bus data )
      5. Fill pixels for all triangles uploaded in step 4 via a depth pass
      6. Calculate object silhouettes on the CPU
      7. Extrude the shadow vertices on the GPU
      8. Fill the stencil buffer
      9. Fill the pixels that are now visible after steps 1 through 8 via the light shader
      If so, is it your assertion that step 9 is more expensive than steps 3, 4, 5, 6, 7, 8?
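To make the cost question above concrete, here is a toy model (all numbers illustrative, not measurements): steps 2 through 8 scale with the triangle count inside the light's bounds, whether or not those triangles are ultimately seen, while step 9's light-shader fill only pays for pixels that survive the depth and stencil tests.

```python
def light_cost(tris_in_light_bounds, visible_pixels,
               skin_cost=4, upload_cost=2, shadow_cost=6, shade_cost=1):
    """Toy per-light cost model: geometry-side work (skin, upload,
    shadow extrusion) is paid per triangle touched; shading-side work
    is paid per visible pixel. Unit costs are made-up placeholders."""
    geometry_side = tris_in_light_bounds * (skin_cost + upload_cost + shadow_cost)
    shading_side = visible_pixels * shade_cost
    return geometry_side, shading_side

# a light mostly hidden behind an occluder: many triangles, few visible pixels
geom, shade = light_cost(tris_in_light_bounds=20000, visible_pixels=5000)
```

Under these placeholder weights the geometry side dwarfs the shading side for an occluded light, which is the scenario the question is probing: whether step 9 really dominates steps 3 through 8 in practice.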
  23. WOO!!! @Dragofer can you test BC5 normal maps again?