The Dark Mod Forums

OrbWeaver

Active Developer
  • Posts: 8577
  • Days Won: 63

Posts posted by OrbWeaver

  1. 16 hours ago, Zerg Rush said:

    Yes, I'm familiar with this sort of junk-science "analysis" assembled by journalists or random tech companies counting stuff in a database and using it to form some kind of conclusion.

    Side note: one of the dumbest articles I ever read was some lazy tech journalist trying to decide which Steam games were popular based entirely on the average total play time (in hours and minutes). He concluded that everybody hated "HL2: The Lost Coast" because the average play time was about 15 minutes, without bothering to check that The Lost Coast is actually a short tech demo that can be completed in a few minutes, so obviously people aren't going to rack up hundreds of hours playing it.

    For example, consider these numbers:

    Quote

    And Debian, a flavor of Linux, was top of the table with 3,067 vulnerabilities over the last two decades. Reasonably close behind was Android on 2,563 vulnerabilities, with the Linux kernel in third place having racked up a count of 2,357. Apple’s macOS was only slightly behind that with 2,212, with Ubuntu in fifth place on 2,007.

    So they count "Debian", which is an entire distro with thousands of packages, separately from "the Linux kernel" which is one component of a Linux system and already included in every other Linux distro. Does that mean the 2357 kernel vulnerabilities need to be subtracted from the 3067 Debian vulnerabilities, or have they already done that? Do the Debian vulnerabilities include only the kernel, core packages, or every package in the distribution (including Firefox, Thunderbird etc)? The article doesn't say, and the source data is not available since this is just a second-hand report of an "analysis" done by a random VPN company, not a proper scientific study.

    In any case, comparing an entire Linux distro with just "Windows" isn't a valid comparison, because a Linux distro includes thousands of third-party packages. In order to make that a fair comparison you'd also need to include Microsoft Office and everything in the Microsoft store under the "Windows" heading.

    Quote

    As for Microsoft’s operating systems, Windows 7 bore 1,283 vulnerabilities, and Windows 10 carried 1,111. If you add those together, you get a total of 2,394 for the past decade, roughly – given that Windows 7 came out in 2009, and handed the baton to Windows 10 in 2015.

    I realise that everybody hated Windows 8, but I'm fairly sure that it didn't somehow magically vanish from history.

    Quote

    Although note that some of the other figures mentioned represent a full two decades of existence – like Debian, which has been around since 1993

    So they're potentially attributing a full 16 years of extra vulnerabilities to Debian, while ignoring all versions of Windows released before 2009? Yeah, I'm sure that makes absolutely no difference to the analysis.

    Quote

    so it’s difficult to make direct comparisons in that respect.

    No shit, Sherlock.

    Quote

    Still, this serves to underline that Windows security is perhaps not as shaky as you might believe, at least historically, and indeed that Linux and Mac users shouldn’t be complacent.

    They got something right at least. Nobody should be complacent about security, since all modern operating systems and software are affected by vulnerabilities, and need to be kept up-to-date with security patches.

    • Like 3
  2. On 4/27/2023 at 11:26 AM, Zerg Rush said:

    Currently, although it may not seem like it, Windows is the most resistant OS against Viruses and Malware

    I never realised Bill Gates was a member of these forums. Welcome to the community! I hope you enjoy The Dark Mod. Perhaps your Foundation could help pay for the server hosting or fund the development of some new features?

    • Like 1
    • Haha 2
  3. On 4/22/2023 at 4:42 PM, stgatilov said:

    I guess footstep sounds are played from within source code.
    So their volume can only be changed there without information loss.

    There are already many cvars about that, but a brief glance shows that perhaps they are ignored...

    That's odd, because when I was working on the footsteps years ago, I was definitely adding volume decls to lower the volume of sounds. Perhaps something has changed since then regarding how the code interacts with sound shader keywords.

    I do recall that there are problems with using sound shaders to increase volume, as others have reported, which is why it's a good idea to make sure your original sound files are fully normalised (volume maximised) before they go into the mod.

  4. I'm not really up to speed on exactly what goes into an xData file, but do you mean that each readme would include its own copy of the scroll buttons and their required functionality? Because that's definitely the wrong solution to this particular problem from an engineering perspective.

    If a readme is only intended to include text, then that's all that should appear in the file, not text plus a load of GUI boilerplate which will be identical in every readme and will probably just have to be copy-pasted from somewhere else. It should be up to the game engine to display the text in an appropriate way, including adding a scroll mechanism if it is needed.

  5. On 4/8/2023 at 6:34 PM, Daft Mugi said:

    @kin Here are more details about how I reduce footstep sound volumes.

    I extract the footstep sounds from tdm_sound_sfx02.pk4.

    While that may be an acceptable solution for you, it is the worst possible way to reduce the volume of sounds. You are introducing serial recompression artifacts for no benefit, and the process is unnecessarily cumbersome if you want to experiment with several different volume levels.

    Instead, you should just edit (or add) the volume field in the respective .sndshd files, which changes the volume in-game without touching the sound files. For example, "volume -3" reduces the level by 3 dB, roughly halving the sound's power. This is a one-line change which is quick and easy to test and does not introduce any compression artifacts.
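    As a sketch, such an edit might look like this (the shader name and sound path here are invented for illustration; use the real names from the .sndshd files in the pk4):

```
// Illustrative only: the shader name and file path are invented
tdm_footstep_stone_walk
{
    minDistance 0.5
    maxDistance 15
    volume -3       // 3 dB quieter than the original
    sound/sfx/movement/footsteps/player/stone_walk01.ogg
}
```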

    • Like 3
  6. Language models are a mirror, reflecting the collected works of humanity back at us.

    Some people look in that mirror, see their own reflection, and conclude "there is an artificial person behind this sheet of glass that looks and behaves exactly like me... our days as humans are numbered!". But it's not true. It's just a reflection. It can't create anything that humans couldn't have (or haven't already) created to begin with.

    I have no doubt that one day, artificial human-like intelligence will exist, but it will require a lot more than just a language model remixing stuff on the internet. If you're a cargo cult programmer copy-pasting junk code off Stack Overflow, or a hack blog writer churning out articles with titles like "20 dumb things Trump has said", AI is coming for your job — but that's because your job wasn't worth anything to begin with.

  7. On 2/22/2023 at 10:40 PM, MirceaKitsune said:

    Does anything come to mind in terms of things I can try myself? I always compile DR from Git: If a specific line of code comes to mind feel free to suggest a change and I could recompile and test with it.

    The relevant code is in CamWnd.cpp (starting with the CamWnd::startCapture method) which makes use of a class called FreezePointer which can be found in the wxutil library. It's not possible to be any more specific than that because I have no idea what would be causing the problem.

    Presumably it's either (1) a logic error in our code which is being exposed by Wayland handling mouse events in a subtly different way, or (2) a fundamental incompatibility between wxWidgets, GTK and Wayland over which we have no control.

  8. I don't generally use Wayland myself due to the numerous applications which have problems with it, but I tried logging into my Ubuntu GNOME desktop using the Wayland session and I do not see any problems with the view rotation or mouse capturing in DR. Which unfortunately means this will probably only be solved if a developer with a system similar to yours is able to investigate it.

  9. Does the bright area appear to move over the texture as you move the camera around, or does it remain in the same place?

    The defining feature of specular lighting is how it varies based on the positions of the viewer, surface and light source. If the bright area appears to move, then this is specular lighting; if it is fixed in place, it is diffuse lighting.

    Specular lighting is not defined by brightness. It is possible to have a very bright diffuse texture which will max out to full white under a light source (as in your image), just as it is possible to have a very dull specular texture which is difficult to see even in darkness.

    5 minutes ago, Epifire said:

    I find this is mainly due to only being able to set light color (not actually refining it with an intensity setting).

    From a rendering perspective, there is no real distinction between "intensity" and "color" other than the fact that "intensity" affects RGB channels equally, without changing the apparent hue. There would be no increase in render quality by having a separate intensity value that was tracked and calculated independently of color.

    However, recent DarkRadiant versions add a slider into the Light Inspector which allows mappers to vary the brightness of one or more lights without having to use the color chooser or risk changing the hue. This is purely a user convenience feature, and does not unlock any new rendering possibilities (the intensity changes are just baked into the RGB color applied to the light entity).

    • Like 1
  10. Actually I might be confusing two different things.

    What the latest LWO exporter fixes is the smoothing angle. Previously this was hard-coded at some weird value slightly less than 90°, but this can now be configured to smooth everything, smooth nothing, or use the Autosmooth Angle setting on the object.

    I have no idea if explicit smooth groups are supported, or if this is even a thing in Blender.

  11. On 1/23/2023 at 7:27 PM, motorsep said:

    Is there an LWO exporter that works with Blender 3.4.1 ? 

    As far as I know the most up-to-date one is the script I maintain (there is a single tdm_export script which supports both ASE and LWO export). However I haven't specifically tested with the latest Blender 3.4 series, so it's possible that it will need an update.

    On 1/23/2023 at 7:53 PM, stgatilov said:

    As far as I remember, the engine drops smoothing information from LWO file and applies automatic determination of smooth groups depending on some hardcoded angle.
    So I'm not sure these smoothing settings will help in TDM or Doom 3.

    I believe this information is out of date. The problem of LWO losing smoothing information was caused by the Blender exporter itself ignoring object-specific data and enforcing a hard-coded smoothing angle. This is now fixed in my latest version, although the old behaviour is selectable at the time of export if you don't want to deal with object smooth groups. As far as I can recall, when I was testing this, the smoothing options did take effect in the engine (although I couldn't say whether they were 100% mathematically correct).

    • Like 1
  12. Would you be able to attach or upload the map (or any other map which shows the same issue, if you don't want to share a WIP)?

    Even if the map has become corrupted, we ought to be able to handle this more gracefully and perhaps recover whatever data we can read even if some is missing. Under no circumstances should we hard crash regardless of what sort of corruption is in the map file.

  13. 4 hours ago, stgatilov said:

    I think @OrbWeaver tried to load precompressed RGTC in TDM recently and failed, and after that we fixed something.

    This was the relevant SVN change:

    r9525 | orbweaver | 2021-07-30 21:17:43 +0100 (Fri, 30 Jul 2021) | 11 lines
    
    Use correct format for uploading precompressed ATI2 normal maps
    
    When the 'ATI2' FOURCC is seen in a precompressed DDS file, use the
    GL_COMPRESSED_RG_RGTC2 internal format (matching the behaviour of
    image_useNormalCompression on uncompressed source images). The legacy code was
    using GL_COMPRESSED_LUMINANCE_ALPHA_LATC2_EXT.
    
    This is enough to get plausible looking normal maps with DDS files exported in
    3Dc/BC5 format using the GIMP DDS plugin. Not tested with files exported from
    any other tools; it is conceivable that other FOURCCs might need handling too.

    Note in particular the comment about this only being tested with the GIMP DDS plugin, and that formats used by other tools might need additional changes.

  14. This exact behaviour can be achieved with a blendLight material that uses the blend filter blend mode.

    The light becomes a volume which simply multiplies (darkens) its projected texture with the texture of contained objects, almost like a 3D volumetric decal.
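    For reference, such a material might look roughly like this (the material name and texture path are invented for illustration; blendLight and blend filter are the relevant idTech4 keywords):

```
// Illustrative only: the name and texture path are invented
lights/example_darken_volume
{
    blendLight                // volume blends with surfaces instead of illuminating them
    {
        blend filter          // multiply with the surface colour, so it can only darken
        map textures/example/darken_gradient
    }
}
```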

    • Like 4
  15. The actual crash occurs on this line of LightInspector.cpp:

       85  	void LightInspector::shaderSelectionChanged()
       86  	{
       87  	    // Get the selected shader
    -> 88  	    auto ishader = _texSelector->getSelectedShader();

    The _texSelector member is still NULL here, so this is a segfault.

    The initialisation of _texSelector happens in setupTextureWidgets(), which is called unconditionally in the LightInspector constructor, so a missing method call is not the issue. However, the value assigned to _texSelector is a newly-constructed MaterialSelector object, and this object receives a callback bound to shaderSelectionChanged() in its constructor:

        _texSelector = new MaterialSelector(parent,
                                            std::bind(&LightInspector::shaderSelectionChanged, this),
                                            MaterialSelector::TextureFilter::Lights);

    It appears that this is creating a race condition: the constructor of MaterialSelector is able to call the shaderSelectionChanged callback (presumably as a result of selection changes during its Populate() method), but it isn't actually safe to call shaderSelectionChanged() until after the MaterialSelector is fully constructed and assigned to the _texSelector member.

    I think my preferred solution would be to remove the "selectionChanged" function parameter from the MaterialSelector constructor entirely, and replace it with a public signal_selectionChanged() which client code could connect to. This would have two advantages:

    1. No possibility of a race condition, since you would have to construct the MaterialSelector before connecting to the signal.
    2. Signals are more powerful than manual std::function callbacks (e.g. they can auto-disconnect if the target object is destroyed) and are widely used throughout the codebase.

    However, if anything currently relies on receiving selectionChanged callbacks during the MaterialSelector construction, this would no longer work. @greebo what do you think?

    • Thanks 1