The Dark Mod Forums

Search results for tag 'reason'

  1. I think there is no problem with noshadows parallel lights: they are well-defined in the current engine. They can be used as local lights to brighten a window, as @HMart does. I hope there is no reason to have shadows in that case?

parallel and parallelSky are almost the same thing, except that parallelSky traces light beams from areas containing portalsky world surfaces, while parallel traces light beams from the area where the light origin is located, which is never what you need for a global parallel light (like moonlight). There are many missions where this issue is hacked around, and all of them end up with problems like double lighting when a door is open, or missing lighting in some outdoor areas. And there is no way to fix the engine to make these missions work properly --- the maps themselves are wrong. If you want a global parallel light, parallelSky is surely what you want, and parallel most likely is not. But note that to make parallelSky work you also need to follow some rules.

The issue with a local shadow-casting parallel light is that objects outside the light volume can cast shadows over objects inside the light volume. This is pretty weird by itself: you move an object closer to the light volume, and at some point its shadow instantly turns on. The engine determines whether an object intersects the light volume approximately (using bounding boxes and such), so whether you get a shadow from an object close to the light volume or not is implementation-defined. Today you have no shadow and it looks nice; tomorrow the culling is changed and the scene gets an unexpected shadow.

So the bottom line is: global parallel lights should use parallelSky and follow some rules. Local parallel lights should be noshadows. Everything else is not well-defined: you'll avoid a lot of trouble by avoiding it altogether.
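To make that bottom line concrete, here is a minimal sketch of the two well-defined setups as light spawnargs, roughly as they would appear in DarkRadiant's entity inspector. The spawnarg names (especially "parallelSky") are written from memory rather than taken from the post above, so verify them against the wiki before copying:

    // Global moonlight: traced from the portalsky areas, as described above
    "classname"     "light"
    "parallelSky"   "1"
    "light_radius"  "8192 8192 8192"
    "_color"        "0.25 0.28 0.35"

    // Local parallel light, e.g. brightening a window: keep it noshadows
    "classname"     "light"
    "parallel"      "1"
    "noshadows"     "1"
    "light_radius"  "256 256 128"
    "_color"        "0.55 0.55 0.45"

Everything else (e.g. a shadow-casting local parallel light) falls into the "not well-defined" bucket described above.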
  2. Well, yeah, the problem of slow adoption is precisely what I'm complaining about. This is kind of a circular argument - if nobody adopts it, nobody is going to use it and it's going to have little support. I do realize that this is a difficult problem to solve, because companies have to basically volunteer to "passively" support it and hope it brings them advantages in the long term. There are other aspects that I omitted as well, like for example HE-AACv2 being slightly more demanding to decode (which is also the reason why Bluetooth historically used the poor SBC encoding), but those are largely moot nowadays as well (even integrated Bluetooth chips and the cheapest SoCs can decode pretty much any audio format with no issue).

Your point regarding storage and network costs does not cover the whole situation; I see three issues with it. Firstly, mobile internet is still slow in many places in the world and it's often limited. Where I live it's difficult to get more than a 5 GB monthly limit for a reasonable price. Secondly, most websites are bloated as hell and any way to speed up their loading helps. This was the main reason for webp's creation and adoption. Thirdly, for services like music and video streaming, bandwidth is expensive and the difference between using 160 kbps and 96 kbps for the same quality can save a lot of money (roughly 72 MB versus 43 MB per hour of streaming, a 40% cut). That was the reason why Youtube and Soundcloud switched to more modern codecs. And not even that, Netflix in some apps already switched to one of webp's successors for movie thumbnails, because at such a large scale even serving the thumbnails in a decent image quality costs a lot of money.

Btw in my case webp was certainly not useless, it was just an annoying situation. I use it as a backup for my own use (it seems to be about 1/5 of png size so far for photo-like files), so I just downloaded a codec from Google to get Explorer thumbnails and I don't need other people to support it. Although the fact that Facebook Messenger converts webp files to... static gifs of all things, is kind of annoying (and funny).
  3. Hello, everyone! In this multi-part, comprehensive tutorial I will introduce you to a new light type that has been available in The Dark Mod since version 2.06, what it does, why you would want to use it and how to implement it in your Fan Missions. This tutorial is aimed at the intermediate mapper. Explanations of how to use DarkRadiant, write material files, etc. are outside of its scope. I will, however, aim to be thorough and explain the relevant concepts comprehensively. Let us begin by delineating the sections of the tutorial:

Part 1 will walk you through four distinct ways to add ambient light to a scene, the last of them using irradiance environment maps (or IEMs). Lighting a scene with an IEM is considered image-based lighting. Explaining this concept is not within the scope of this tutorial; rather, we will compare and contrast our currently available methods with this new one. If you already understand the benefits IBL confers, you may consider this introductory section superfluous.

Part 2 will review the current state of cubemap lights in TDM, brief you on capturing an environment cubemap inside TDM and note limitations you may run into. Three cubemap filtering applications will be introduced and reviewed.

Part 3 will go into further detail on the types of inputs and outputs required by each program and give a walkthrough of the simplest way to get an irradiance map working in-game.

Part 4 will guide you through two additional, different workflows for converting your cubemap to an irradiance map and unstitching it back into the six separate image files that the engine needs.

Part 5 will conclude the tutorial with some considerations as to the scalability of the methods hitherto explained and will enumerate some good practices in creating IEMs. Typical scenes will be considered. Essential links and resources will be posted here, and a succinct list of the steps and tools needed for each workflow will be summarized, for quick reference.

Without further ado, let us begin.

Part 1

Imagine the scene. You’ve just made a great environment for your map, you’ve got your geometry exactly how you want it… but there’s a problem. Nobody can appreciate your efforts if they can’t see anything!

[Fig. 1] This will be the test scene for the rest of our tutorial — I would tell you to “get acquainted with it” but it’s rather hard to, at the moment.

The Dark Mod is a game where the interplay between light and shadow is of great importance. Placing lights is designing gameplay. In this example scene, a corridor with two windows, I have decided to place 3 lights for the player to stealth his way around. Two lights from the windows streak down across the floor and a third, placeholder light for a fixture later to be added is shining behind me, at one end of the corridor. Strictly speaking, this is sufficient for gameplay in my case. It is plainly obvious, however, that the scene looks bad, incomplete. “Gameplay” lights aside, the rest of the environment is pitch black. This is undesirable for two reasons.

It looks wrong. In real life, light bounces off surfaces and diffuses in all directions. This diffused, omni-directional lighting is called ambient lighting and its emission can be termed irradiance. You may contrast this with directional lighting radiating from a point, which is called point lighting and its emission — radiance. One can argue that ambient lighting sells the realism of a scene.
Be that as it may, suppose we disregard scary, real-life optics and set concerns of “realism” aside… It’s bad gameplay. Being in darkness is a positive for the player avatar, but looking at darkness is a negative for the player themselves. They need to differentiate obstacles and objects in the environment to move their avatar. Our current light level makes the scene illegible. The eye strain involved in reading the environment in these light conditions may well give your player a headache, figurative and literal, and greatly distract them from enjoying your level.

This tutorial assumes you use DarkRadiant or are at least aware of idtech4’s light types. From my earlier explanation, you can see the parallels between the real life point/ambient light dichotomy and the aptly named “point” and “ambient” light types that you can use in the editor. For further review, you can consult our wiki. Seeing as how there is a danger of confusing the terms here, I will hereafter refer to real life ambient light as “irradiant light”, to differentiate it from the TDM ambient lights, which are our engine’s practical implementation of the optical phenomenon. A similar distinction between “radiant light” and point lights will be made for the same reason.

Back to our problem. Knowing, now, that almost all your scenes should have irradiant light in addition to radiant light, let’s try (and fail, instructionally) to fix up our gloomy corridor.

[Fig. 2] The easiest and ugliest solution: ambient lights.

Atdm:ambient_world is a game entity that is basically an ambient light with no falloff, modifiable by the location system. One of the first things we all do when starting a new map is putting an ambient_world in it. In the above image, the darkness problem is solved by raising the ambient light level using ambient_world (or via an info_location entity). Practically every Dark Mod mission solves its darkness problem¹ like this. Entirely relying on the global ambient light, however, is far from ideal, and I argue that it solves neither of our two aforementioned problems. Ambient_world provides irradiant light and you may further modulate its color and brightness per location. However, said color and brightness are constant across the entire scene. This is neither realistic, nor does it reduce eye strain. It only makes the scene marginally more legible. Let’s abandon this uniform lighting approach and try a different solution that’s more scene-specific.

[Fig. 3] Non-uniform, but has unintended consequences.

Our global ambient now down to a negligible level, the next logical approach would be hand-placed ambient lights with falloff, like ambient_biground. Two are placed here, supplementing our window point lights. Combining ambient and point lights may not be standard TDM practice, but multiple idtech4 tutorials extol the virtues of this method. I, myself, have used it in King of Diamonds. For instance, in the Parkins residence, the red room with the fireplace has ambient lights coupled to both the electric light and the fire flame. They color the shadows and enrich the scene, and they get toggled alongside their parent (point) lights whenever they change state (extinguished/relit). This is markedly better than before, but to be honest anything is, and you may notice some unintended side-effects. The AI I’ve placed in the middle of the ambient light’s volume gets omnidirectionally illuminated far more than any of the walls, by virtue of how light projection in the engine works.
Moving the ambient lights’ centers closer to the windows would alleviate this, but would introduce another issue — the wall would get lit on the other side as well. Ambient lights don’t cast shadows, meaning they go through walls. You could solve this by creating custom ambient light projection textures, but at this point we are three ad hocs in and this is getting needlessly complicated. I concede that this method has limited use cases, but illuminating big spaces that AI can move through, like our corridor, isn’t one of them. Let’s move on.

[Fig. 4] More directional, but looks off.

I have personally been using this method in my WIP maps a lot. For development (vs. release), I even recommend it. A point light instead of an ambient light is used here. The texture is either “biground1” or “defaultpointlight” (the latter here). The light does not cast shadows, and its light origin is set at one side of the corridor, illuminating it at an angle. This solves the problem of omnidirectional illumination for props or AI in the middle of the light volume; you can now see that the AI is lit from the back rather than from all sides. In addition, the point light provides that which the ambient one cannot, namely specular and normal interaction, two very important features that help our players read the environment better. This is about as good as you can get, but there are still some niggling problems. The scene still looks too monochromatic and dark. From experience, I can tell you that this method looks good in certain scenes, but this is clearly not one of them. Sure, we can use two non-shadowcasting point lights instead of one, aligned to our windows like in the previous example; we can even artfully combine local and global ambient lights to furnish the scene further, but by this point we will have multiple light entities placed, which is unwieldy to work with and possibly detrimental to performance. Another problem is that a point light’s movable light origin helps combat ambient omnidirectionality, but its projection texture still illuminates things the strongest in the middle of its volume. I have made multiple experiments with editing the Z-projection falloff texture of these lights and the results have all left me unsatisfied. It just does not look right. A final, more intellectual criticism against this method is that it does not, in a technical sense, supply irradiant light. Nothing here is diffuse; this is just radiant light pretending the best it can.

[Fig. 5] The irradiance map method provides the best looking solution to imbuing your scene with an ambient glow.

This is the corridor lit with irradiance map lights, a new lighting method introduced in The Dark Mod 2.06. Note the subtle gradients on the left wall and the bounced, orange light on the right column. Note the agreeable light on the AI. Comparing the previous methods and this, it is plainly obvious that an irradiance environment map looks the most realistic and defines the environment far better than any of the other solutions. Why exactly does this image look better than the others? You can inform yourself on image-based lighting and the nature of diffuse irradiance, but images speak louder than words. As you can see, the fact of the matter is that the effect, subtle as it may be, substantially improves the realism of the scene, at least compared to the methods previously available to us. Procuring irradiance environment maps for use in lighting your level will hereafter be the chief subject of this tutorial.
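Before moving on to IEMs, and purely for reference, here is roughly how the “fake ambient” point light from Fig. 4 can be set up as spawnargs. This is my own sketch rather than the tutorial’s actual entity, and the material path is the stock Doom 3 one, so treat the names and numbers as placeholders to adjust for your scene:

    "classname"     "light"
    "texture"       "lights/defaultpointlight"
    "noshadows"     "1"
    "light_radius"  "320 320 192"
    "light_center"  "256 0 64"
    "_color"        "0.12 0.13 0.16"

Dragging light_center toward one end of the volume is what produces the angled, directional look described above, while noshadows keeps the light cheap and lets it pass through geometry much like an ambient light would.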
The next part will review environment cubemap capture in TDM, the makeIrradiance keyword and three external applications that you can use to convert a TDM cubemap into an irradiance map.

¹ “Note that the color buffer is cleared to black: Doom3 world is naturally pitch black since there is no "ambient" light: In order to be visible a surface/polygon must interact with a light. This explains why Doom3 was so dark!” [source]

Part 2

Cubemaps are not new to The Dark Mod. The skybox materials in some of our prefabs are cubemaps, and some glass and polished tile materials use cubemaps to fake reflections for cheap. Cubemap lights, however, are comparatively new. The wiki page linked earlier describes these two new light types that were added in TDM 2.05. cubicLight is a shadow-casting light with true spherical falloff. An example of such a light can be found in the core files, “lights/cubic/tdm_lampshade_cubic”. ambientCubicLight is the light type we will be focusing on. Prior to TDM 2.06, it acted as a movable, on-demand reflection dispenser, making surfaces in its radius reflect a pre-set cubemap, much like glass. After 2.06, the old behavior was discarded and ambientCubicLight was converted to accept industry-standard irradiance environment maps.

Irradiance environment maps (IEMs) are what we want to make, so perhaps the first thing to make clear is that they aren’t really “handmade”. An IEM is the output of a filtering process (convolution) which requires an input in the form of a regular environment cubemap. In other words, if we want to make an IEM, we need a regular cubemap, ideally one depicting our environment — in this case, the corridor. I say a snapshot of the environment is ideal for lighting it because this emulates how irradiant light in the real world works. All radiating surfaces are recorded in our cubemap, our ambient optic array as it were, then blurred, or convolved, to approximate light scatter and diffusion; then the in-game light “shines” this approximation of irradiant light back onto the surfaces. There is a bit of a “chicken and the egg” situation here: if your scene is dark to begin with, wouldn’t you just get a dark irradiance map and accomplish nothing? In the captured cubemap faces in Fig. 6, you may notice that the environment looks different than what I’ve shown so far. I used two ambient lights to brighten up the windows for a better final irradiance result. I’ve “primed the pump”, so to speak. You can ignore this conundrum for the moment; ways to set up your scenes for better results, or priming the pump correctly, will be discussed at the end of the tutorial.

Capturing the Environment

The wiki has a tutorial on capturing cubemaps by angua, but it is woefully out of date. Let me run you through the process for 2.07 really briefly. To start with, I fly to approx. the center of the corridor with noclip. I then type “envshot t 256” in the console. This outputs six .tga images in the <root>/env folder, simply named “t”, sized 256x256 px, constituting the six sides of a cube and depicting the entire environment. This is how they look in the folder:

[Fig. 6] The six cube faces in the folder.

Of note here is that I do not need to switch to a 640x480 resolution, neither do I need to rename these files; they can already be used in an ambientCubicLight.

Setting Up the Lights

For brevity’s sake, I’ll skip explaining material definitions; if you’ve ever added a custom texture to your map, you know how to do this. Suffice it to say, it is much the same with custom lights.
In your <root>/materials/my_cool_cubemaps.mtr file, you should have something like this:

    lights/ambientcube/my_test_IEM_light
    {
        ambientCubicLight
        {
            forceHighQuality
            //cameraCubeMap makeIrradiance(env/t)
            cameraCubeMap env/t
            colored
            zeroClamp
        }
    }

We’ll play with the commented out line in just a bit. Firstly, let’s place the actual light in DarkRadiant. It’s as simple as creating a new light or two and setting them up in much the same way you would a regular ambient light. I select the appropriate light texture from the list, “my_test_IEM_light” in the “ambientcube” subfolder, and I leave the light colored pure white.

[Fig. 7] The corridor in DR, top view, with the ambient cubic lights highlighted.

I can place one that fills the volume or two that stagger the effect somewhat. Remember that these lights still have a spherical falloff. Preference and experimentation will prove what looks best to you. Please note that the material we defined loads a regular cubemap, while we established that ambientCubicLights only work with irradiance maps. Let’s see if this causes any problems in-game. I save the map and run it in game to see the results. If I already have TDM running, I type “reloadDecls” in the console to reload my material files and “reloadImages” to reload the .tga images in the /env folder.

[Fig. 8] Well this looks completely wrong, big surprise.

Wouldn’t you know it, putting a cubemap in the place of an irradiance map doesn’t quite work. Everything in the scene, especially the AI, looks to be bathed in slick oil. Even if a material doesn’t have a specular map, it won’t matter; the ambientCubicLight will produce specular reflections like this. Let’s compare our cubemap .tga files with the IEM .tgas we’ll have by the end of the tutorial:

[Fig. 9] t_back.tga is the back face of the environment cubemap, tIEM_back.tga is the back face of the irradiance map derived from it.

As you can see, the IEM image looks very different. If I were to use “env/tIEM” instead of “env/t” in the material definition above, I would get the proper result, as seen in the last screenshot of part 1. So it is that we need a properly filtered IEM for our lights to work correctly. Speaking of that mtr def though, let’s not invoke an irradiance map we haven’t learned to convert yet. Let’s try an automatic, in-engine way to convert cubemaps to IEMs, namely the makeIrradiance material keyword.

makeIrradiance and Its Limitations

Let’s uncomment the sixth line in that definition and comment out the seventh:

    cameraCubeMap makeIrradiance(env/t)
    //cameraCubeMap env/t

Here is a picture of how a cubemap run through the makeIrradiance keyword looks:

[Fig. 10] Say ‘Hi’ to our friend in the back, the normalmap test cylinder. It’s a custom texture I’ve made to demonstrate cubemap interactions in a clean way.

Hey now, this looks pretty nice! The scene is a bit greener than before, but you may even argue it looks more pleasing to the eyes. Unfortunately, the devil is in the details. Let’s compare the makeIrradiance keyword’s output with the custom-made irradiance map setup seen at the end of part 1.

[Fig. 11, 12] A closer look at the brick texture reveals that the undesired specular highlighting is still present. The normal map test cylinder confirms that the reason for this is the noisy output of the makeIrradiance keyword.

The in-engine conversion is algorithmic and happens at load time, so it gives us no .tga files to compare directly like we did above.
Were we able to, however, I'm sure the makeIrradiance IEM would look grainy and rough compared to the smooth gradient of the IEM you’ll have by the end of this tutorial. The makeIrradiance keyword is good for quick testing, but it won’t allow you fine control over your irradiance map. If we want the light to look proper, we need dedicated cubemap filtering software.

A Review of Cubemap Filtering Software

Here I’ll introduce three programs you can produce an irradiance map with. In the coming parts, I will present you with a guide for working with each one of them. I should also note that installing all of these is trivial, so I’ll skip that instructional step when describing their workflows. I will not relay any ad copy to you, as you can already read it on these programs’ websites. I’ll just list the advantages and disadvantages that concern us.

Lys
https://www.knaldtech.com/lys/
Advantages: Good UI, rich image manipulation options, working radiance/specular map filtering with multiple convolution algorithms.
Disadvantages: $50 price tag, limited import/export options, only available on Windows 64-bit systems.

cmftStudio
https://github.com/dariomanesku/cmftStudio
Advantages: Available on Windows, OSX and Linux; free, open source software; command line interface available.
Disadvantages: Somewhat confusing UI, limited import options, missing features (radiance/specular map filtering is broken, fullscreen doesn’t work), 32-bit binaries need to be built from source (I will provide a 32-bit Windows executable at the end of the tutorial).

Modified CubeMapGen
https://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/
Advantages: Free software, quickest to work with (clarified later).
Disadvantages: Bad UI, only Windows binaries available, subpar IEM export due to bad image adjustment options.

Let’s take a break at this point and come back to these programs in part 3. A lot of caveats need to be expounded on as to which of these three is the “best” software for making an irradiance map for our purposes. None of these programs has a discrete workflow of its own; rather, the workflow will include or exclude certain additional programs and steps depending on which app you choose to work with. It will dovetail and be similar in all cases.

Part 3

The aim of this tutorial is twofold. First, it aims to provide the most hands-free and time-efficient method of converting an envshot environment cubemap to an IEM and getting it working in-game. The second is using as few applications as possible and keeping them all free software that is available for download, much like TDM itself. The tutorial was originally going to only cover IEM production through Lys, as that was the app I used to test the whole process with. I soon realized that it would be inconsiderate of me to suggest you buy a fifty dollar product for a single step in a process that adds comparatively little to the value of a FM, if we’re being honest (if you asked me, the community would benefit far more from a level design tutorial than a technical one like this, but hey, maybe later, I’m filling a niche right now that nobody else has filled). This led me to seek out open-source alternatives to Lys, such as Cubemapgen, which I knew of, and cmftStudio, which I did not. I will preempt my own explanations and tell you right away that, in my opinion, cmftStudio is the program you should use for IEM creation. This comes with one big caveat, however, which I’m about to get into.
Six Faces on a Cross and the Photoshop Problem

Let’s review. Taking an envshot in-game gives you six separate images that are game-ready. Meaning, you get six split cubemap faces as output, and you need six split irradiance map faces as input. This is a problem, because neither Lys nor cmftStudio accepts a sequence of images as such. They need to be stitched together into a cube cross, a single image of the unwrapped cube, like this:

[Fig. 13] From Lys. Our cubemap has been stitched into a cross and the “Debug Cube Map Face Position” option has been checked, showing the orientations of each face.

In Lys, only panoramas, sphere maps and cube maps can be loaded into the program. The first two do not concern us; the third specifically refers to a single image file. Therefore, to import a TDM envshot into Lys you need to stitch your cubemap into a cross. Furthermore, Lys’ export also outputs a cubemap cross, therefore you also need to unstitch the cubemap into its faces afterwards if you want to use it in TDM.

In cmftStudio you can import single map faces! Well… no, you can’t. The readme on GitHub boasts “Input and output types: cubemap, cube cross, latlong, face list, horizontal and vertical strip.” but this is false. The UI will not allow you to select multiple files on import, rendering the “face list” input type impossible.² Therefore, to import a TDM envshot into cmftStudio you need to stitch your cubemap into a cross. Fortunately, the “face list” export type does work! Therefore, you don’t need to unstitch the cubemap manually; cmftStudio will export individual faces for you.

In both of these cases, then, you need a cubemap cross. For this tutorial I will use Adobe Photoshop, a commercial piece of software, to stitch our faces into a cubemap in an automated fashion (using Photoshop’s Actions). This is the big caveat to using cmftStudio: even if you do not want to buy Lys, PS is still a prerequisite for working with both programs. There are, of course, open source alternatives to Photoshop, such as GIMP, but it is specifically Photoshop’s Action functionality that will power these workflows. GIMP has its own Actions in the form of Macros, but they are written in Python. GIMP is not a software suite that I use, nor is Python a language I am proficient with. Out of deference for those who don’t have, or don’t like working with, Photoshop, I will later go through the steps I take inside the image editor in some detail, in order for the studious reader to reconstruct them, if they so desire, in their image editing software of choice. At any rate, and at the risk of sounding a little presumptuous, I take it that, as creative types, most of you already have Photoshop on your computers.

² An asterisk regarding the “impossibility” of this. cmftStudio is a GUI for cmft, a command line interface that does the same stuff but inside a command prompt. I need to stress that I am certain multiple faces can be inputted in the command line, but messing with unwieldy prompts or writing batch files is neither time-saving nor user-friendly. This tutorial is aimed at the average mapper, but a coder might find the versatility offered in cmft interesting (a hedged command-line sketch is appended at the end of this post).

The Cubemapgen Workflow

You will have noticed that I purposefully omitted Cubemapgen from the previous discussion. This is because working with Cubemapgen, wonderfully, does not need Photoshop to be involved! Cubemapgen both accepts individual cubemap faces as input and exports individual irradiance map faces as output.
Why, then, did I even waste your time with all the talk of Lys, cmftStudio and Photoshop? Well, woefully, Cubemapgen’s irradiance maps look poor at worst and inconsistent at best. Comparing IEMs exported from Lys and cmftStudio, you will see that both look practically the same, which is good! An IEM exported from Cubemapgen, by default, is far too desaturated, and the confusing UI does not help in bringing it to parity with the other two programs. If you work solely with Cubemapgen, you won’t even know what ‘parity’ is, since you won’t have a standard to compare to.

[Fig. 14] A comparison between the same irradiance map face, exported with the different apps at their respective default settings. Brightened and enlarged for legibility.

This may not bother you, and I concede that it is a small price to pay for those not interested in working with Photoshop. The Cubemapgen workflow is so easy to describe that I will in fact do just that, now. After I do so, however, I will argue that it flies in the face of one of the aims of this tutorial, namely: efficiency.

Step 1: Load the cubemap faces into Cubemapgen

Returning to specifics, you will remember that we have, at the moment, six .tga cubemap faces in a folder that we want to convert to six irradiance map faces. With Cubemapgen open, direct your attention to these buttons:

[Fig. 15] You can load a cubemap face by pressing the corresponding button or using the hotkey ‘F’.

To ensure the image faces the correct way, you must load it in the corresponding “slot”, from the Select Cubemap Face dropdown menu above, or by pressing the 1-6 number keys on your keyboard. Here is a helpful list:

X+ Face <1> corresponds to *_right
X- Face <2> corresponds to *_left
Y+ Face <3> corresponds to *_up
Y- Face <4> corresponds to *_down
Z+ Face <5> corresponds to *_forward
Z- Face <6> corresponds to *_back

...with the asterisk representing the name of your cubemap. With enough practice, you can get quite proficient in loading cubemap faces using keyboard shortcuts. Note that the ‘Skybox’ option in the blue panel is checked; I recommend you use it.

Step 2: Generate the Irradiance Map

[Fig. 16] The corridor environment cubemap loaded in and filtered to an irradiance map. The options on the right are my attempt to get the IEM to look right, though they are by no means prescriptive.

Generating an IEM with Modified CubeMapGen 1.66 is as easy as checking the ‘Irradiance cubemap’ checkbox and hitting ‘Filter Cubemap’ in the red panel. There are numerous other options there, but most will have no effect with the checkbox on. For more information, consult the Sébastien Lagarde blog post that you got the app from. I leave it to you to experiment with the input and output gamma sliders; you really have no set standard for how your irradiance map is supposed to look, so unfortunately you’ll have to eyeball it and rely on trial and error. Two things are important to note. The ‘Output Cube Size’ box in the red panel is the resolution that you want your IEM to export to. In the yellow panel, make sure you set the output as RGB rather than RGBA! We don’t need alpha channels in our images.

Step 3: Export Irradiance Map Faces

Back in the green panel, click the ‘Save CubeMap to Images’ button. Save the images as .tga with a descriptive name.

[Fig. 17] The exported irradiance map faces in the folder.

These files still need to be renamed with appropriate suffixes in order to constitute a readable cubemap for the engine.
The nomenclature is the same as the table above: “c00” is the X+ face, to be renamed “right”, “c01” is the X- face, and so on. Right, left, up, down, forward and back — that’s the order! In full:

c00 → *_right
c01 → *_left
c02 → *_up
c03 → *_down
c04 → *_forward
c05 → *_back

This is all there is to this workflow. A “cameraCubeMap env/testshot” in the light material will give us a result that will look, at the very least, better than the inbuilt makeIrradiance material keyword.

[Fig. 17] The map ended up being a little bright. Feel free to open Fig. 4 and this in separate tabs and compare the Lys/cmft export with the Cubemapgen one.

A Review of the Workflow

Time for the promised criticism of this workflow. I already stated my distaste for the lack of a standardised set of filtering values with this method. The lack of any kind of preset system for saving the values you like makes working with Cubemapgen even more slipshod. Additionally, in part 2, I said that Cubemapgen is the fastest to work with, but this needs to be qualified. What we just did was convert one cubemap to an irradiance map, but a typical game level ought to use more than a single IEM. Premeditation and capturing fake, “generic” environment cubemaps (e.g. setting up a “blue light on the right, orange on the left” room or a “bright skylight above, brown floor” room, then capturing them with envshot) might allow for some judicious reuse and keep your distinct IEM light definition count down to single digits, but you can only go so far with that. I am not arguing here for an ambient cubic light in every scene either, certainly only those that you deem need the extra attention, or those for which the regular lighting methods enumerated in Part 1 do not quite work. I do tentatively assume, though, that for an average level you would use between one and two dozen distinct IEMs. Keep in mind that commercial games, with their automated probe systems for capturing environment shots, use many, many more than that.

With about 20 cubemaps to be converted and 6 faces each to load into Cubemapgen, you’ll be going through the same motions 120 whole times (saving and renaming not included). If you decide to do this in one sitting (and you should, as Cubemapgen, to reiterate, does not keep settings between sessions), you are in for a very tedious process that, while effective, is not very efficient. The simple fact is that loading six things one by one is just slower than loading a single thing once! The “single thing” I’m referring to is, of course, the single, stitched cubemap cross texture.

In the next part, I will go into detail regarding how to make a cubemap cross in Photoshop in preparation for cmftStudio and Lys. It will initially seem a far more time-consuming process to you than the Cubemapgen workflow, but through the magic of automation and the Actions feature, you will be able to accomplish the cubemap stitch process in as little as a drag-and-drop into PS and a single click. The best thing is that after we go through the steps, you won’t have to recreate them yourself, as I will provide you with a custom Actions .atn file and save you the effort. I advise you not to skip the explanations, however. The keen-eyed among you may have noticed that you can also load a cube cross in Cubemapgen. If you want to use both Cubemapgen and Photoshop together to automate your Cubemapgen workflow, be aware that Cubemapgen takes crosses that have a different orientation than the ones Lys and cmftStudio use.
My macros (actions) are designed for the latter, so if you want to adjust them for Cubemapgen you would do well to study my steps and modify them appropriately. For the moment, you’ve been given the barebones essentials needed to capture an envshot, convert it to an irradiance map and put it in your level at an appropriate location, all without needing a single piece of proprietary software. You can stop here and start cranking out irradiance maps to your heart’s content, but if you’re in the mood for some more serious automation, consider the next section.
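As promised in the footnote to part 3, here is a rough sketch of what driving cmft directly from the command line could look like, for the coders among you. I have not verified these exact flag names against the current cmft build, so treat every option below (the “facelist” output layout in particular) as an assumption to check against cmft --help rather than a working recipe:

    REM Hypothetical batch sketch: filter a stitched cross into an irradiance map
    REM and export it as a face list (six separate .tga files)
    cmft --input "t_cross.tga" ^
         --filter irradiance ^
         --dstFaceSize 64 ^
         --outputNum 1 ^
         --output0 "tIEM" ^
         --output0params tga,bgra8,facelist

If those flags hold up, the whole stitch-filter-unstitch round trip could be collapsed into a single script run over a folder of envshots, which is exactly the kind of automation the Photoshop Actions in the next part aim for.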
  4. Well it's not that bad is it? I thought multiple people have finished playing it already. No reason to remove a mission that has some issues, but can still be played and finished. You can put a warning in your mission description, but let players decide if it's worth it or not. I think removal is not something anyone will be happy with. I think it would be better if you start on a new mission and with the (hopefully positive) experience that you gained, you can fix things in this mission later.
  5. First of all, ChatGPT, independent of the version, is a language model built to interact with the user, imitating being intelligent. It has a knowledge base that dates back to 2021 and adds what users contribute in their chats. This means, first of all, that it is not valid if you are looking for correct answers, since if it does not find the answer in its base, it has a tendency to invent it with approximations or directly with false or obsolete answers. With this, the future will not change; that will come from AI of a different nature. On the one hand, search engines with AI, since they have access to information in real time without needing such complex language models; for this reason search engines are gradually going to add AI, not only Bing or Google, but before these there was Andisearch, the first of them all, then Perplexity.ai, Phind.com and You.com. Soon there will also be DuckDuckGo AI. On the other hand, generative AI to create images, videos and even applications, music and more, like game assets or 3D models. The risk with AI came up with Auto GPT, initially a tool that seemed useful, but it can be highly dangerous, since on the one hand it has full access to the network and on the other hand it is capable of learning on its own initiative to carry out tasks that are introduced as if it were just another Text2Image app. This was demonstrated with ChaosGPT, the result of an order introduced into Auto GPT to destroy humanity, which it immediately began to pursue with extraordinary efficiency: first trying to access nuclear missile silos and, failing that (luckily), trying to get followers on Twitter with a fake account that it created, where it got more than 6000 followers, later hiding once it realized the danger of being blocked or deactivated on the network. Currently nothing is known about it, but it is still a danger not exactly to be ruled out; it could really become Skynet. AI is going to change the future, but not ChatGPT, which is nothing more than a nice toy.
  6. Google mobile is on the forums' online user list (in addition to Google)... never seen that before

    1. nbohr1more: He's a pretty cool guy. You ask him to find stuff and he'll usually turn up the goods. A little nosy though...
  7. FXAA is cheap but looks awful. Supersampling AA is strictly more expensive than multisampling, so there is no reason to use it if multisampling looks OK. Temporal AA requires major changes in the engine in order to be used, plus it kinda requires motion blur to hide its uselessness on fast camera motions. So don't expect multisampling to be replaced soon.
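For anyone who wants to experiment with the tradeoff themselves, MSAA in TDM is exposed through the usual idTech4 cvar; this is the stock Doom 3 cvar name, which TDM inherits, so double-check it in your build:

    r_multiSamples 4
    vid_restart

Setting it to 0 disables MSAA, while 2/4/8 trade increasing cost for smoother edges; vid_restart reinitializes the renderer so the new sample count takes effect.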
  8. https://www.theregister.com/2023/06/15/amazon_echo_disabled_allegation/?td=rt-3a The cloud strikes again! I.e. You can be instantly locked out of an entire ecosystem at any time for any reason, even if you have done nothing wrong at all.
  9. I'm seeing the improvement with shadow maps too, albeit I keep their quality at the lowest setting. Indeed, MSAA is still costly even so; I kept it disabled even now for that reason. Fingers crossed the next release may get shader-based anti-aliasing: from my tests in Tesseract / Redeclipse, FXAA / SSAA / TAA all tend to be cheaper, so I hope we get at least one someday.
  10. Thanks for clarifying. Unfortunately there's still a bug as the barrier never goes away: When I first enter the apartment the husband says "is she hurt, please put her on the bed"... I do so and the objective is completed, but after that the husband says nothing else and the barrier never goes away. I looked around the room and tried frobbing him, but nothing ever happens for some odd reason. Wonder if some triggers or signals got broken in latest dev?
  11. For some reason I thought this FM was just released, didn't realize it's 3 years old already! Just finished it at last with 5100 loot and 3 secrets found, not bad for something of this scale. I was stuck on one of the objectives but finally found it, so I removed my question. This has to be the most structurally intense FM ever made: I don't think a city of quite this size and complexity was ever done for TDM in one map before, the parkour and little hidden areas are insane! This is nice albeit mentally straining, as it's impossible not to miss something or properly keep track; in many cases I had to noclip to discover how to get to certain areas, then reloaded and went there without cheats afterward, but I don't know how I could have found some areas otherwise. Easy to get lost, but this is compensated by the extremely useful feature of the map highlighting where you are so you don't have to guess using signs. Ran into a few bugs. Most noteworthy is a game-breaking glitch that makes it impossible to continue without noclip: Other than that nothing too significant: Managed to catch a case of a door that opens too wide and goes through the railing, there was a hatch that did it too but I forgot that one. I can also confirm the black box bug... I first thought it was caused by my mod to remove spiders (because I have arachnophobia), but that sets the entity to null, so it shouldn't be a box per se.
  12. you remind me of shadowhide for some reason, shadowhide is that you? :D

  13. The spoiler tags aren't for you, they're for the rest of us reading this thread so WE don't get spoiled. Thanks for spoiling the frobbable book on a shelf for the rest of us. EDIT: I'll be unsubscribing from this thread to avoid further spoilers. Maybe I'll check back after I've finished the mission Amadeus
  14. You can try my alternative footstep sounds package, which addresses the things you described along with a lot of other footstep sounds for both the player and AI, if you want: https://forums.thedarkmod.com/index.php?/topic/17631-new-footstep-sounds/
  15. It's worth trying whatever you want to try and then seeing what the beta testers think. Just have a plan B ready to implement in case it goes over like a lead balloon. If there is no loot goal, the first thing people are going to ask is 'where is the loot goal?', so you just need to have an explanation that makes sense and provide that in the release thread, or even in the mission briefing (e.g. don't rob innocent people or something). The only reason I make loot goals optional is to make players like myself happy, who are terrible at finding loot and don't want to spend hours searching everywhere before the mission completes.
  16. Mods, can this be moved again? @Acolytesix - can you make sure you post in the beta thread instead of this one please (this one is public, the beta thread is only for logged-in forum members): https://forums.thedarkmod.com/index.php?/topic/21822-beta-testing-high-expectations/
  17. Welcome to the New Mappers Workshop! This is a communal workshop for new mappers who have never made a TDM mission before. Each week or two I will make a tutorial video and help to guide everyone through the process of creating a small, complete mission. I'm hoping the participants will feel free to ask questions, no matter how small--we're all here to learn from and encourage each other. Since I expect this thread to get fairly busy, I'm going to be heavy-handed about removing off-topic content.

======================================================================

Lessons often include links or other written instructions; direct links to the lessons are collected below:

Lesson 1: Planning http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=407999
Lesson 2: Visportals http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=408253
Lesson 3: Your First Room http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=408484
Lesson 4: Decorating Your Rooms http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=408785
Lesson 5: Connecting Your Rooms http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=409215
Lesson 6: Outdoor "Rooms" http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=409322
Lesson 7: Creating Doors http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=409547
Lesson 8: Functional Props (ie, entities) http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=409731
Lesson 9: Immersive Details (sound/particles) http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=410258
Lesson 10: Advanced Brushwork (ladders/water) http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=410667
Lesson 11: AI http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/?p=411192
  18. Not all players respond to loot the same way, I suspect. For players who principally enjoy exploring, the loot objective doesn't serve a reward function at all. Instead, for them it is mostly a handy barometer for how close they are to seeing the whole level. For this group, having a specific number to target that is at least 70-80% of the total loot on the map is important, but they wouldn't care if it is optional. Then there are the power-fantasy roleplayers whose joy is living out the dream of being a master thief. I think those players actually do want an obligatory objective and a specific target number, but they don't care as much about what that number is. They just get satisfaction from hitting a required target. Conversely, players who come to roleplay or otherwise experience the story might be annoyed by having a loot goal at all. Picking up treasure gets in the way of them experiencing the story. In their minds it should be entirely up to them what they do or don't want to pick up. And of course there are also completionists, who don't need loot goals for the exact opposite reason. They will grab absolutely everything in the level of their own accord. You can't make all of these groups happy no matter what you do. In the Thief games I'd wager loot objectives existed partially to make sure everyone picked up enough money to buy gear for the next level, but in TDM that mostly does not apply. So unless you are putting equipment sellers in your mission like Iris and rewarding looting that way, I don't think there is a right answer. People will do what they want and someone will feel like their toes are being stepped on no matter what you do. So do whatever makes you happy.
  19. sure - I would only ask that you follow the thread to make sure you don't report stuff that has already been mentioned: https://forums.thedarkmod.com/index.php?/topic/21822-beta-testing-high-expectations/
  20. I refer to Doom 3 in general, at least the original game engine from 2003. All comparisons are operations which are part of expressions. In order to evaluate an expression, you need to store temporaries somewhere, and combine them with each other through a sequence of operations. In the case of GUI scripts, the temporaries are called "registers". These registers are floats; they simply cannot hold a string. Actually, I have just found the commit where I removed the comparison, and here is what it says:

Revision: 16537
#5869. Removed incorrect comparisons with empty string ( == "" ).
The scripting language does NOT have string values, it only has float values. So when you reference some string variable in an expression, it gets value 0.0 if the string is empty and 1.0 otherwise. For this reason, there is no sense in comparing to "" (which is actually replaced with 0 because no variable with empty name is found).

So comparing with != "" is just a weird way of writing != 0; the doublequotes don't change anything. A string value inside an expression is immediately converted into a float, 0 or 1, depending on whether it is empty or not. There is no string comparison, but there is checking for emptiness.

Multiline macros work as they did; in fact, the change is about fixing them! There are several different but related things, and they get into a confusing mix here:

- Token concatenation used for multiline macros, which comes directly from the C/C++ language.
- String literal concatenation with backslash, an optional feature of idLexer --- it does not exist in the C/C++ language.
- String literal concatenation by writing the literals one after the other --- a standard feature of C/C++, disabled in D3 GUI.

Here is the full commit message:

Revision: 10031
#5869. Don't enable string concatenation by backslash (\) in GUI code.
GUI code uses doublequotes everywhere to make names like gui::my_var atomic, since otherwise the parser would break them into many tokens. For this reason, string concatenation is disabled in the lexer (that's when you can write "hello" " " "world", and C will treat it as the single string "hello world"). However, ID enabled special string concatenation via backslash to substitute for it, like this:

    "hello" \
    " " \
    "world"

The problem with such usage is that it breaks multiline macros. Due to limitations of GUI code, we have to write a whole window (even with subwindows) into a macro in order to make it reusable. If any line in such a macro ends with a string literal (that's very likely in GUI code), then parsing breaks and the whole macro does not work. This can be worked around by moving the first token on the next line to the end of the current line, but it is very messy and totally not obvious. Better just forbid string concatenation altogether. By the way, the following way of concatenating strings works (at least inside a macro):

    "hello" ## " " ## "world"

Note that ## is a standard C way of concatenating tokens, although in C it works on TOKENS, not on string literals --- but here they are mixed anyway. Also, it is even possible to do this:

    #define HELLO_MESSAGE(index) "Hello, " ## #idx ## "-th user!"

Which is very useful to construct variable names inside macros =)

The work is finished, although it might get some changes if any related bugs are reported e.g. during beta. I don't think we can give access to a single thread. If you miss some info, just ask me and I can copy/paste or explain in my own words.
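To illustrate the emptiness check described above, here is a minimal, hypothetical .gui sketch (the window and variable names are mine, not from the actual game files). The comparison goes through float registers, so it can only tell you whether the string is empty, never what it contains:

    windowDef ObjectiveHint {
        rect    0, 0, 320, 48
        visible 0
        text    "gui::objective_text"

        onTime 0 {
            // "gui::objective_text" is read through a float register here:
            // it evaluates to 0 if the string is empty and 1 otherwise.
            // The old-style comparison against "" was just a confusing way
            // of writing the same emptiness check.
            if ("gui::objective_text" != 0) {
                set "visible" "1";
            }
        }
    }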
The two references you gave here are about: the expression evaluation order fix, expressions in the Set command, and the change in register disabling behavior. They are described in this public post too:
  21. That was immediately obvious to me when I saw it, which was part of the reason I asked the original question: it seemed to me like an old mission that he had continued to work on.
  22. Don't want to comment on that chatbot / spambot thing; to me it's just the latest mass hysteria created overnight to further shove the world into madness. But like I said, the main reason for my idea is that I find the lack of a permanent alert level too unrealistic, even by game character standards; it would be nice if this could be solved not by altering difficulty but universally for all FMs. I'm just hoping there's a satisfactory way to avoid having guards literally chase you, then you hide and wait 3 minutes for the whole crew to calm down, and 5 minutes after you were just being chased a guard will calmly go "what was there in the shadows, probably just the rats"... that behavior makes them almost as dumb as "chat GPT". For now I wonder: the current behavior to boost NPC acuity after a level 3 alert... is there a spawnarg to customize the amount or is it hard-coded? It would help if at least the FM could increase the offset and make a guard super-alert once they saw you. I believe another suggestion I made long ago might also be relevant: we have difficulty settings for AI sight and hearing in the menu, but could we have a third option to multiply how quickly enemies give up on searching for you? If you're impatient you could set it to low so AI forget you in just a minute, whereas if you want maximum realism have them still looking even 10 minutes later! It wouldn't be a fix to the unawareness issue once they calm down, but it could improve it.
  23. Heh, I was thinking the same, though it might just have been a slip when writing - the names are pretty similar. But for correctness, it is called the Dark Engine, and the newer version that allows us to run these beauties on Win10/11 is called NewDark. NewDark is kinda interesting as it just suddenly popped up on a French forum some time ago, posted by an anonymous developer with the alias Le Corbeau, who allegedly got his hands on the original source code and started updating it for modern OSes. This was the original thread, I believe -> https://www.ttlg.com/forums/showthread.php?t=140085 Bikerdude was on that forum too when the patch hit, I noticed, hehe.