The Dark Mod Forums

Search the Community

Search results for tag 'input'.

Found 6150 results

  1. Thought it would be a good idea to collate a useful list for new and old mappers alike; this post will be updated as we go.

     Abandoned works:
       • Any WIP projects that were abandoned by the original author - http://forums.thedarkmod.com/topic/12713-abandoned-works/

     DarkRadiant & Dark Mod shortcut settings:
       • Some example settings for new mappers - http://forums.thedarkmod.com/topic/15152-darkradiant-and-darkmod-shortcut-folder-settings/
       • DarkRadiant how-tos, must-knows, tips and FAQs - http://forums.thedarkmod.com/topic/12558-usefull-important-editing-links/?do=findComment&comment=272581

     Info for Beginners:
       • Newbie DarkRadiant Questions - http://forums.thedar...iant-questions/
       • DarkRadiant Must Know Basic Intro - http://wiki.thedarkm...now_Basic_Intro
       • Editing Tips for Beginners - http://wiki.thedarkm...s_for_Beginners
       • Editing FAQ (Troubleshooting & How-To) - http://wiki.thedarkmod.com/index.php?title=Editing_FAQ_-_Troubleshooting_%26_How-To
       • Sotha's excellent Mapping Tutorial series - http://forums.thedarkmod.com/topic/18680-lets-map-tdm-with-sotha-the-bakery-job/
       • Springheel's New Mapper's Workshop - http://forums.thedarkmod.com/topic/18945-tdm-new-mappers-workshop/

     Inspiration:
       • Collection of screenshots and images people have found online - http://forums.thedarkmod.com/topic/11610-darkmod-inspiration-thread/

     Mapping Resources:
       • List of voice actors available for voice recording - http://modetwo.net/d...6-voice-actors/
       • Lengthy collection of city reference pictures - http://modetwo.net/d...rence-pictures/
       • Collection of texture resource sites - http://modetwo.net/d...ture-resources/
       • Free ambient tracks - http://skeksisnetlabel.wordpress.com/2009/12/30/10-songs-for-free-download-vol-10-full-moon-over-noricum/

     Mapping Tools:
       • 3 useful tools for texture creation - http://forums.thedarkmod.com/topic/18581-must-have-tools-for-the-descerning-mapper/

     Modular Building:
       • What is modular building - http://forums.thedarkmod.com/topic/14832-modular-building-techniques/
       • Working example tutorial on modular building - http://forums.thedarkmod.com/topic/18680-lets-map-tdm-with-sotha-the-bakery-job/
       • Springheel's new modular models - http://forums.thedarkmod.com/topic/18683-using-springheels-205-modules/
       • Some related mapper recipes:
         • Easy Vaults - http://forums.thedarkmod.com/topic/14859-easy-vault-recipe/?hl=%2Beasy+%2Brecipe
         • Easy Outdoors - http://forums.thedarkmod.com/topic/16159-easy-outdoors-recipe/?hl=%2Beasy+%2Brecipe
         • Easy Caverns - http://forums.thedarkmod.com/topic/14469-quick-caverns-recipe/?hl=recipe
         • Easy Alert AI - http://forums.thedarkmod.com/topic/17157-easy-alert-ai-recipe/?hl=%2Beasy+%2Brecipe
         • Easy Alert AI Custom Behavior - http://forums.thedarkmod.com/topic/17160-easy-alert-ai-custom-behavior-recipe/?hl=recipe

     Tutorials:
       • Collection of video tutorials for DR - http://modetwo.net/d...in-darkradiant/
       • Using lighting and detail effectively - http://forums.thedar...l-and-lighting/

     Voice Actors list:
       • List of available voice actors - http://forums.thedarkmod.com/topic/12556-list-of-available-voice-actors/

     Useful Console commands:
       • A list of console commands for testing in-game - http://wiki.thedarkm...Useful_Controls
  2. So I hear that id has dropped the megatexture approach despite it becoming more feasible with the next-gen consoles (using very fast SSDs and dedicated I/O features that are like adding multiple cores beyond the nominal 8x Zen 2 core count found in both XSX and PS5). Next-Gen SSD Tech Could Mean the Resurgence of Megatextures Technique, Says Splash Damage VP of Tech Could Megatextures really make a comeback now that hardware has caught up with Carmack's original vision? Not only are the SSDs in PlayStation 5 and Xbox Series X much faster, both consoles also feature a fully customized Input/Output interface and onboard compression/decompression blocks whose purpose is to virtually eliminate any run-time decompression overhead. The PS5 supports both zlib and the slightly faster Oodle Kraken protocol from RAD Game Tools, while the Xbox Series X supports zlib for general textures and a new, reportedly very fast compression system called BCPack, tailored specifically to handle GPU textures. All of this may do wonders for a revised Megatextures-like technique. Will any major studio make this bet, though? Time will tell, but the possibility is certainly intriguing. Inside PlayStation 5: the specs and the tech that deliver Sony's next-gen vision
  3. In order to thank the team (and other mappers) for their relentless efforts in contributing, I hereby give the community.... Ulysses: Genesis A FM By Sotha Story: Read & listen to it in game. Link: https://drive.google.com/file/d/0BwR0ORZU5sraTlJ6ZHlYZ1pJRVE/edit?usp=sharing http://www4.zippysha...95436/file.html Other: Spoilers: When discussing, please use spoiler tags, like this: [spoiler] Hidden text. [/spoiler] Mirrors: Could someone put this on the TDM ingame downloader? Thanks!
  4. DarkRadiant 2.8.0 is ready for download. Alongside a considerable number of bugfixes and improvements, a nice amount of new editing features made it into this build: a new way to select items by filter, the ability to retain grouping information when copy-pasting between DarkRadiant sessions, a new "Select Parent Entities" command, and more. A new DarkRadiant User Guide is being worked on and is available on the website https://www.darkradiant.net/userguide (also accessible through the About > User Guide menu option). Windows and Mac downloads are available on GitHub: https://github.com/codereader/DarkRadiant/releases/tag/2.8.0 and of course linked from the website https://www.darkradiant.net Thanks go out to all who helped test this release! Please report any bugs or feature requests here in these forums, following these guidelines: Bugs (including steps for reproduction) can go directly on the tracker. When unsure about a bug/issue, feel free to ask. If you run into a crash, please record a crashdump: Crashdump Instructions. Feature requests should be suggested (and possibly discussed) here in these forums before they may be added to the tracker.
Changes since 2.7.0:
  • Feature: Selection by Filter
  • Feature: Preserve grouping information when copy-pasting between multiple open instances of DR
  • Feature: Add Portable Map Format storing map and additional data in one single file
  • Feature: Preserve grouping information in prefabs
  • Feature: Model chooser now lists .md3 models
  • Feature: The Dark Mod: Let DR look for either 32-bit or 64-bit TDM .exe's
  • Feature: The Dark Mod: Added ability to edit difficulty names
  • Feature: The Dark Mod: Add Move up/down buttons to Conversation Editor
  • Feature: Add type-to-search ability to AI vocal set and head chooser dialogs
  • Feature: Let DR remember the shader in ShaderClipboard after closing
  • Feature: Add option to show "Other Materials" in Texture Browser
  • Fixed: Edit Objectives dialog doesn't adjust its height when selecting a component
  • Fixed: DR can't find materials whose names start with table
  • Fixed: It's possible to delete the classname spawnarg through the context menu
  • Fixed: All spawnargs with the same value as the "name" spawnarg iterate when the entity is cloned
  • Fixed: Removing a Stim/Response entry can break the remaining entries
  • Fixed: Loading error caused by non-Latin character in filename
  • Fixed: Texture Tool: crashes on brush surfaces with texture scale 0
  • Fixed: Undo after thickening a cylinder cap along vertex normals causes crash
  • Fixed: Readable Editor crash
  • Fixed: Python 3.8 does not get linked (Linux)
  • Fixed: ConnectNamespacedWalker warning when cloning selection containing links and targets
  • Fixed: Show English difficulty names in the Difficulty Editor instead of the #strNNNNN IDs
  • Fixed: 'Reload models' reveals hidden models
  • Fixed: 'Show help' etc. checkboxes in entity inspector aren't remembered by DR
  • Fixed: 'Show help' tooltip text updates incorrectly
  • Fixed: Changing classname ties entity's visibility to 'Default' layer
  • Fixed: Resized models lose their scale in auto-saved maps
  • Fixed: Autosaves don't save last camera angle & position
  • Fixed: ToggleFullScreenCamera command for Embedded and RegularLeft layouts
  • Tweak: Removed/disabled close button from all dialogs
  • Tweak: Allow fixed Subdivisions to go higher than 32
  • Tweak: Targeting projected lights: let the line point to the light source instead of the light volume's midpoint
  • Tweak: Collapsing a folder in a treeview now collapses all child elements too

The full list of changes can be found in our bugtracker changelog. Have fun mapping!
  5. Ah, thank you for so much input on FMs that use scripted sequences and on the creation process. I remember doing some main character voice sets when I was building my FM, but those didn't involve NPCs, so I only had to record them (actually a helpful member of the forums did it - thank you @betpet) and write a shader. I read through the wiki and saw that there is a convenient tool for creating NPC dialogues, which at least from the looks of it doesn't really seem to be that much work considering the huge benefits those conversations can give to an FM's atmosphere and storytelling. However, somehow I never noticed some of the FMs you were mentioning. The Accountant is a very nice example of how you can use NPC dialogues or even give new objectives ingame without text boxes or books. I will make sure to check out the other FMs as well. Thank you for all your suggestions! What is your opinion on making "pre-defined" conversations? Would it make sense to have some audio files available for mappers to set up custom conversations? I remember in the game Half-Life NPCs had some kind of stock material for random chatter in the way of "question from NPC 1 -> answer from NPC 2". For TDM there could be audio material for each of the current voice sets, and mappers could set up small conversations apart from the automatic monologues we currently have for NPCs. So a big chunk of the work of generating dialogues would be erased (recording and normalizing speeches and creating their shaders). To try to keep them non-repetitive across multiple FMs, these pre-recorded lines could be like in Half-Life, where each chunk of a dialogue is interchangeable within the question -> answer setting. I think this would make it easier for mappers to create a little variety with NPCs and make them more lifelike without a big hassle. What do you guys think?
  6. Inn Business It's business, at an inn, over three nights. Development screenshots: Download: https://drive.google...dit?usp=sharing Update 1.48 uploaded March 8th, 2014, one change: patches the key rarely not being frobable in one of its possible spots. Big thanks to my beta testers: Airship Ballet, Kyyrma and AluminumHaste! Development supporters of note: Sotha, Springheel and Obsttorte. Also thanks Sotha, for urinating in my mission. ;-) And thanks Kyyrma for the title screen! My appreciation to all forum/wiki contributors, without whom this wouldn't exist. Thanks to positive commenters on my previous mission too, extra motivation helps! :-) Note this uses campaign features: what you use the first night impacts subsequent nights. And to quote a tester, "...the level is maybe best experienced in more than one sitting". If you do pause between nights, please be sure to save; you can't effectively begin partway through. (If you accidentally start a night you already completed, just fail the kill objective to switch to another night.) If your frame rates are too low facing the cemetery, please reduce your "Object Details LOD" setting. It was designed with "AI Vision" set to "Forgiving", to be able to sneak through with minimal reactions; if you want more/less, adjust your settings accordingly. There are several random, conditional aspects, and ways of going about things, so others might have slightly different experiences. Post here if you discover hidden objectives for extra points! My condolences to loot completionists, I made a bit on the third night hard, you've got your challenge cut out for you! Speaking of which, there's a TDM bug that makes mission-complete totals too high; here are the real amounts per night: 2026/970/202. Oh, there is something that in the U.S. would be rated PG, in case you play with kids in earshot.
I hope you enjoy playing it, feel free to let me know you did, and I'm glad to respond to inquiries (like how stuff was done, nothing was scripted). (Note which night you are referring to if it's something specific.) (Please remember spoiler tags to not expose things meant to be discovered by playing.) Like so: [spoiler]secrets[/spoiler] Developed for TDM 2.01. PS: Thiefette, good news, no spiders! Springheel, if you find an optional objective you can skip...you might find it immersion breaking. Others, no undead! There are a couple other interactive critters though. :-) Edit note: Some posts below were from users of an unreleased version of TDM 2.02 which broke several things, they do not reflect regular game-play.
  7. We didn't make the holidays (such a busy time of year) so here's a New Year's gift, an unusual little mission. Window of Opportunity Recover an item for a regretful trader out in a wilderness setting, and discover more! Available within the in-game mission downloader or: Download: http://www.thedarkmo...ndetails/?id=79 Alternative: https://drive.google...WTMzQXZtMVFBSG8 Some unorthodox gameplay on regular/ghost difficulties. (Arachnophobes might prefer short mode...) Please expect to need your lantern in regular and ghost modes! Short ("easy") mode is a smaller map, so if you are looking for areas others reference below, or 100% of the loot, you'll need to play on another mode. I wanted to create my first mission before I became influenced by too many others' ideas and limited myself to what has been done before. As such, this mission is not set in a city/town, and has some features that are likely to be provocative. There's a section some really like, which others don't; either way I kept it short so it doesn't last too long. That being said, I hope you do find it fun! :-) Special thanks to those who provided valuable testing and feedback: Goldwell, Kyyrma, plotzzz, 161803398874989, PPoe & Bikerdude (who also contributed a sound). (Please remember spoiler tags to not expose things meant to be discovered by playing.) Like so: [spoiler]secrets[/spoiler] If you are having trouble finding the main objective, here's what to pay attention to in the mission for hints: There is a spot where it's possible to get stuck on the ground in the corner by the cliff/rockfall where there's a rope lying on the ground, please take care if you poke around there!
  8. Description: Fleece the mansion of a former colleague who betrayed you. This is a relatively small, straightforward thieving mission set in an upper-class neighborhood. Version: 1.0 File: The mission is now available via the in-game downloader. If you need to download it separately, the link is below: PK4: https://drive.google...dit?usp=sharing Beta Testers: Airship Ballet RJFerret Thanks: First, to my excellent beta testers, who were prompt with responses and thorough with their testing. Several small but important gameplay tweaks came about as a result of beta testing, and the mission is better off for them. Second, to Fidcal, whom I've not had the pleasure of "meeting" on the forums. His incredible tutorial made this mission possible. Third, to everyone who helped me out in small ways on the forums. Grayman graciously fixed a couple of problems for me via PM, and several of you answered my questions in the Newbie thread. Notes: This is my first FM for any Thief game, and it incorporates many of the techniques taught in Fidcal's A-Z tutorial (as well as a couple of others I picked up along the way or during beta testing). It was intended as equal parts playable, fun mission and a way for me to learn how to make a functional level using DR. I'm happy with the result, and I hope you are too! As a secondary note, from installing DarkRadiant to typing this sentence, it couldn't have been more than two months. My hobby-level skills as a graphic designer and programmer undoubtedly helped some, but honestly most of it was just slogging through Fidcal's tutorial in my free time. Anyone can learn this quickly and create working, fun levels. Preview Shots: {note, some of these locations will look slightly different in the final version, as these were taken before beta testing} http://imgur.com/a/AuxWJ
  9. I can confirm all is working as intended! I've literally unplugged the GeForce from the monitor input, but I'm getting solid 55-56 FPS in TDM (in the scene above) with NO wizardry in Windows at all; it's just there.
  10. So just to be sure: you've made transcripts for multiple vocal sets (there are no audios & shaders yet) for a chatter system, but you can't find them currently? If you could find them, the next step would be to reach out to the still-active forum members who contributed voice sets and ask them if they could record the necessary lines. After this, someone has to put them into their respective folders and write shaders for them. Finally, they would be usable for DarkRadiant map creators. Is that correct? If yes, then could we help search for the transcripts (if they're online somewhere), or are they somewhere on your personal computer? Thank you for your input on this matter anyway!
  11. There's now an editable fan mission list on the wiki, for the sake of tracking missions made for The Dark Mod. Please read and follow the guidelines, and help keep the list up to date! Direct link, but also accessible from the wiki title page: http://wiki.thedarkmod.com/index.php?title=Fan_Missions_for_The_Dark_Mod Discussion of changes (format, policies, entries, etc.) can take place in this thread. --------------------------------------- There is also now a wiki page to track upcoming fan missions: http://wiki.thedarkmod.com/index.php?title=Upcoming_Fan_Missions Submissions, progress, and any discussion for missions under construction can take place here: http://forums.thedarkmod.com/topic/11639-upcoming-fan-missions/
  12. Screenshots:
      Mission: Breaking out the Fence
      Version: 1.00
      Released: 17/03/2014
      Theme: City
      Author: kyyrma
      Testing and additional content: RJFerret, Airship Ballet, Goldwell

      Breaking out the Fence continues the story from my first FM, In a Time of Need. The player once again takes the role of an anonymous thief who is getting tangled up with an organization called the League of Seafaring Merchants. This time you will be sneaking your way into an import shop down at the South Docks and getting your fence Terry out of a pinch you have indirectly got him into. Prior knowledge of the story is not strictly necessary, and the level works just as well as a one-shot stand-alone mission. Note that the story briefing is quite long - if you don't appreciate walls of text in your game, all the relevant information is on the last page. However, if you want to set the mood and enjoy a good tale of skullduggery, then give it a read. The difficulty level is moderate to high, and compared to my last mission it is much larger. The average play time should be around 1 - 2 hours depending on how much you decide to explore the level. You will start the level with only a blackjack, compass, spyglass and a map. You have to find the rest of your equipment while on the job. If you can't find a pair of lockpicks, remember to keep your eyes peeled for keys! The only differences between difficulty settings are the objectives, with higher difficulties requiring either a no-kill or a full ghost playthrough. Injection needles have been made a loot item, so remember to pick them up if you are going for loot completion. I've succumbed to peer pressure and decided to have a poll where you can rate the level, but I would really appreciate your comments as well! Please get back to me if you have any questions or critique regarding the mission. Help! I'm stuck! How do I find the key to the closed-off area in the sewers?
Download links: If the mission is not yet available from the in-game downloader, you can use the following link: Private mirror

Installation:
  • Drop breakingout.pk4 in your darkmod/fms/ folder
  • Run TDM and select New Mission. Breaking out the Fence should now appear on the list of downloaded missions. Install and play.

Known issues:
  • AI pathfinding can be slightly erratic at times
  • Outdoors can cause performance issues on low-end hardware
  • Sleeping AI can glitch through the bed if KO'd or killed
  • There remain some audio issues such as pop-in and incorrect sound propagation

Thanks:
  • My testers RJFerret and Airship Ballet for their invaluable input and help during the creation of the level
  • Goldwell for providing his vocal talent for the character Terry
  • All the TDM devs for creating such a great game
  • All the people who have answered my newbie questions at the Editor's Guild
  13. Hello, everyone! In this multi-part, comprehensive tutorial I will introduce you to a new light type that has been available in The Dark Mod since version 2.06, what it does, why you would want to use it and how to implement it in your Fan Missions. This tutorial is aimed at the intermediate mapper. Explanations of how to use DarkRadiant, write material files, etc. are outside of its scope. I will, however, aim to be thorough and explain the relevant concepts comprehensively. Let us begin by delineating the sections of the tutorial: Part 1 will walk you through four, distinct ways to add ambient light to a scene, the last way using irradiance environment maps (or IEMs). Lighting a scene with an IEM is considered image-based lighting. Explaining this concept is not in the scope of this tutorial; rather, we will compare and contrast our currently available methods with this new one. If you already understand the benefits IBL confers, you may consider this introductory section superfluous. Part 2 will review the current state of cubemap lights in TDM, brief you on capturing an environment cubemap inside TDM and note limitations you may run into. Three cubemap filtering applications will be introduced and reviewed. Part 3 will go into further detail of the types of inputs and outputs required by each program and give a walkthrough of the simplest way to get an irradiance map working in-game. Part 4 will guide you through two additional, different workflows of how to convert your cubemap to an irradiance map and unstitch it back to the six separate image files that the engine needs. Part 5 will conclude the tutorial with some considerations as to the scalability of the methods hitherto explained and will enumerate some good practices in creating IEMs. Typical scenes will be considered. Essential links and resources will be posted here and a succinct list of the steps and tools needed for each workflow will be summarized, for quick reference. 
Without further ado, let us begin.

Part 1

Imagine the scene. You’ve just made a great environment for your map, you’ve got your geometry exactly how you want it… but there’s a problem. Nobody can appreciate your efforts if they can’t see anything! [Fig. 1] This will be the test scene for the rest of our tutorial — I would tell you to “get acquainted with it” but it’s rather hard to, at the moment. The Dark Mod is a game where the interplay between light and shadow is of great importance. Placing lights is designing gameplay. In this example scene, a corridor with two windows, I have decided to place 3 lights for the player to stealth his way around. Two lights from the windows streak down across the floor and a third, placeholder light for a fixture later to be added, is shining behind me, at one end of the corridor. Strictly speaking, this is sufficient for gameplay in my case. It is plainly obvious, however, that the scene looks bad, incomplete. “Gameplay” lights aside, the rest of the environment is pitch black. This is undesirable for two reasons. It looks wrong. In real life, lights bounce off surfaces and diffuse in all directions. This diffused, omni-directional lighting is called ambient lighting and its emitment can be termed irradiance. You may contrast this with directional lighting radiating from a point, which is called point lighting and its emitment — radiance. One can argue that ambient lighting sells the realism of a scene. Be that as it may, suppose we disregard scary, real-life optics and set concerns of “realism” aside… It’s bad gameplay. Being in darkness is a positive for the player avatar, but looking at darkness is a negative for the player, themselves. They need to differentiate obstacles and objects in the environment to move their avatar. Our current light level makes the scene illegible.
The eye strain involved in reading the environment in these light conditions may well give your player a headache, figurative and literal, and greatly distract them from enjoying your level. This tutorial assumes you use DarkRadiant or are at least aware of idtech4’s light types. From my earlier explanation, you can see the parallels between the real life point/ambient light dichotomy and the aptly named “point” and “ambient” light types that you can use in the editor. For further review, you can consult our wiki. Seeing as how there is a danger of confusing the terms here, I will hereafter refer to real life ambient light as “irradiant light”, to differentiate it from the TDM ambient lights, which are our engine’s practical implementation of the optical phenomenon. A similar distinction between “radiant light” and point lights will be made for the same reason. Back to our problem. Knowing, now, that almost all your scenes should have irradiant light in addition to radiant light, let’s try (and fail, instructively) to fix up our gloomy corridor. [Fig. 2] The easiest and ugliest solution: ambient lights. Atdm:ambient_world is a game entity that is basically an ambient light with no falloff, modifiable by the location system. One of the first things we all do when starting a new map is putting an ambient_world in it. In the above image, the darkness problem is solved by raising the ambient light level using ambient_world (or via an info_location entity). Practically every Dark Mod mission solves its darkness problem1 like this. Entirely relying on the global ambient light, however, is far from ideal and I argue that it solves neither of our two aforementioned problems. Ambient_world provides irradiant light and you may further modulate its color and brightness per location. However, said color and brightness are constant across the entire scene. This is neither realistic, nor does it reduce eye strain. It only makes the scene marginally more legible.
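As a reminder of how that per-location modulation is done, an info_location entity can override the ambient for its zone. The sketch below is illustrative only — the entity name is made up and the spawnarg name is as I recall it from the wiki's location-settings article, so verify it against your TDM version:

```
{
"classname" "info_location"
"name" "loc_corridor"
"ambient_light" "0.03 0.03 0.05"
}
```

Here "ambient_light" would give this one zone a dim, slightly cool ambient while other locations keep their own values.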
Let’s abandon this uniform lighting approach and try a different solution that’s more scene-specific. [Fig. 3] Non-uniform, but has unintended consequences. Our global ambient now down to a negligible level, the next logical approach would be hand-placed ambient lights with falloff, like ambient_biground. Two are placed here, supplementing our window point lights. Combining ambient and point lights may not be standard TDM practice, but multiple idtech4 tutorials extol the virtues of this method. I, myself, have used it in King of Diamonds. For instance, in the Parkins residence, the red room with the fireplace has ambient lights coupled to both the electric light and the fire flame. They color the shadows and enrich the scene, and they get toggled alongside their parent (point) lights, whenever they change state (extinguished/relit). This is markedly better than before, but to be honest anything is, and you may notice some unintended side-effects. The AI I’ve placed in the middle of the ambient light’s volume gets omnidirectionally illuminated far more than any of the walls, by virtue of how light projection in the engine works. Moving the ambient lights’ centers closer to the windows would alleviate this, but would introduce another issue — the wall would get lit on the other side as well. Ambient lights don’t cast shadows, meaning they go through walls. You could solve this by creating custom ambient light projection textures, but at this point we are three ad hocs in and this is getting needlessly complicated. I concede that this method has limited use cases but illuminating big spaces that AI can move through, like our corridor, isn’t one of them. Let’s move on. [Fig. 4] More directional, but looks off. I have personally been using this method in my WIP maps a lot. For development (vs. release), I even recommend it. A point light instead of an ambient light is used here. The texture is either “biground1” or “defaultpointlight” (the latter here). 
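Set up this way, such a light might look roughly like the following in the .map file. This is an illustrative sketch rather than the tutorial's actual entity; the name and numbers are placeholders:

```
{
"classname" "light"
"name" "corridor_fake_ambient_1"
"noshadows" "1"
"texture" "lights/defaultpointlight"
"light_radius" "320 160 128"
"light_center" "140 0 48"
}
```

"noshadows" lets the light pass through geometry like an ambient would, while "light_center" pulls the apparent light origin off-center so surfaces are lit at an angle rather than from the volume's midpoint.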
The light does not cast shadows, and its light origin is set at one side of the corridor, illuminating it at an angle. This solves the problem of omnidirectional illumination for props or AI in the middle of the light volume, you can now see that the AI is lit from the back rather than from all sides. In addition, the point light provides that which the ambient one cannot, namely specular and normal interaction, two very important features that help our players read the environment better. This is about as good as you can get but there are still some niggling problems. The scene still looks too monochromatic and dark. From experience, I can tell you that this method looks good in certain scenes, but this is clearly not one of them. Sure, we can use two, non-shadowcasting point lights instead of one, aligned to our windows like in the previous example, we can even artfully combine local and global ambient lights to furnish the scene further, but by this point we will have multiple light entities placed, which is unwieldy to work with and possibly detrimental to performance. Another problem is that a point light’s movable light origin helps combat ambient omnidirectionality, but its projection texture still illuminates things the strongest in the middle of its volume. I have made multiple experiments with editing the Z-projection falloff texture of these lights and the results have all left me unsatisfied. It just does not look right. A final, more intellectual criticism against this method is that this does not, in a technical sense, supply irradiant light. Nothing here is diffuse, this is just radiant light pretending the best it can. [Fig. 5] The irradiance map method provides the best looking solution to imbuing your scene with an ambient glow. This is the corridor lit with irradiance map lights, a new lighting method introduced in The Dark Mod 2.06. Note the subtle gradients on the left wall and the bounced, orange light on the right column. 
Note the agreeable light on the AI. Comparing the previous methods with this one, it is plainly obvious that an irradiance environment map looks the most realistic and defines the environment far better than any of the other solutions. Why exactly does this image look better than the others? You can inform yourself on image-based lighting and the nature of diffuse irradiance, but images speak louder than words. As you can see, the effect, subtle as it may be, substantially improves the realism of the scene, at least compared to the methods previously available to us. Procuring irradiance environment maps for use in lighting your level will hereafter be the chief subject of this tutorial. The next part will review environment cubemap capture in TDM, the makeIrradiance keyword and three external applications that you can use to convert a TDM cubemap into an irradiance map. 1 “Note that the color buffer is cleared to black: Doom3 world is naturally pitch black since there is no "ambient" light: in order to be visible a surface/polygon must interact with a light. This explains why Doom3 was so dark!” [source]

Part 2

Cubemaps are not new to The Dark Mod. The skybox materials in some of our prefabs are cubemaps, and some glass and polished tile materials use cubemaps to fake reflections for cheap. Cubemap lights, however, are comparatively new. The wiki page linked earlier describes the two new light types that were added in TDM 2.05. cubicLight is a shadow-casting light with true spherical falloff. An example of such a light can be found in the core files, “lights/cubic/tdm_lampshade_cubic”. ambientCubicLight is the light type we will be focusing on. Prior to TDM 2.06, it acted as a movable, on-demand reflection dispenser, making surfaces in its radius reflect a pre-set cubemap, much like glass. After 2.06, the old behavior was discarded and ambientCubicLight was converted to accept industry-standard irradiance environment maps.
Irradiance environment maps (IEMs) are what we want to make, so perhaps the first thing to make clear is that they aren’t really “handmade”. An IEM is the output of a filtering process (convolution) which requires an input in the form of a regular environment cubemap. In other words, if we want to make an IEM, we need a regular cubemap, ideally one depicting our environment — in this case, the corridor. I say a snapshot of the environment is ideal for lighting it because this emulates how irradiant light in the real world works. All radiating surfaces are recorded in our cubemap, our ambient optic array as it were, then blurred, or convolved, to approximate light scatter and diffusion; the in-game light then “shines” this approximation of irradiant light back onto the surfaces. There is a bit of a “chicken and the egg” situation here: if your scene is dark to begin with, wouldn’t you just get a dark irradiance map and accomplish nothing? In the captured cubemap faces in Fig. 6, you may notice that the environment looks different than what I’ve shown so far. I used two ambient lights to brighten up the windows for a better final irradiance result. I’ve “primed the pump”, so to speak. You can ignore this conundrum for the moment; ways to set up your scenes for better results, or priming the pump correctly, will be discussed at the end of the tutorial.

Capturing the Environment

The wiki has a tutorial on capturing cubemaps by angua, but it is woefully out of date. Let me run you through the process for 2.07 really briefly. To start with, I fly to approximately the center of the corridor with noclip. I then type “envshot t 256” in the console. This outputs six .tga images in the <root>/env folder, simply named “t”, sized 256x256 px, constituting the six faces of a cube and depicting the entire environment. This is how they look in the folder:

[Fig. 6] The six cube faces in the folder.
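The “blurring” (convolution) step described above can be pictured with a little pseudo-implementation. This is not the code any of the tools in this tutorial use, just a sketch of the underlying idea: for each output direction, incoming radiance is averaged over the hemisphere, weighted by the cosine of the angle to that direction.

```python
# Sketch of diffuse irradiance convolution. A real filter iterates over every
# texel of the input cubemap; here, "samples" is just a list of
# (unit_direction, rgb_radiance) pairs standing in for those texels.

def irradiance(normal, samples):
    """Cosine-weighted average of radiance over the hemisphere around 'normal'."""
    total = [0.0, 0.0, 0.0]
    weight_sum = 0.0
    for direction, radiance in samples:
        # Lambert's cosine lobe: back-facing samples contribute nothing.
        w = max(0.0, sum(n * d for n, d in zip(normal, direction)))
        weight_sum += w
        for c in range(3):
            total[c] += w * radiance[c]
    if weight_sum == 0.0:
        return (0.0, 0.0, 0.0)
    return tuple(t / weight_sum for t in total)
```

Because every output texel averages a whole hemisphere of input texels, the result is a smooth, low-frequency image, which is exactly why a finished IEM face looks so soft compared to the cubemap it came from.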
Of note here is that I do not need to switch to a 640x480 resolution, nor do I need to rename these files; they can already be used in an ambientCubicLight.

Setting Up the Lights

For brevity’s sake, I’ll skip explaining material definitions; if you’ve ever added a custom texture to your map, you know how to do this. Suffice it to say, it is much the same with custom lights. In your <root>/materials/my_cool_cubemaps.mtr file, you should have something like this:

lights/ambientcube/my_test_IEM_light
{
    ambientCubicLight
    {
        forceHighQuality
        //cameraCubeMap makeIrradiance(env/t)
        cameraCubeMap env/t
        colored
        zeroClamp
    }
}

We’ll play with the commented-out line in just a bit. Firstly, let’s place the actual light in DarkRadiant. It’s as simple as creating a new light or two and setting them up in much the same way you would a regular ambient light. I select the appropriate light texture from the list, “my_test_IEM_light” in the “ambientcube” subfolder, and I leave the light colored pure white.

[Fig. 7] The corridor in DR, top view, with the ambient cubic lights highlighted.

I can place one that fills the volume or two that stagger the effect somewhat. Remember that these lights still have a spherical falloff. Preference and experimentation will prove what looks best to you. Please note that the material we defined loads a regular cubemap, while we established that ambientCubicLights only work with irradiance maps. Let’s see if this causes any problems in-game. I save the map and run it in game to see the results. If I already have TDM running, I type “reloadDecls” in the console to reload my material files and “reloadImages” to reload the .tga images in the /env folder.

[Fig. 8] Well, this looks completely wrong, big surprise.

Wouldn’t you know it, putting a cubemap in the place of an irradiance map doesn’t quite work. Everything in the scene, especially the AI, looks to be bathed in slick oil.
Even if a material doesn’t have a specular map, it won’t matter; the ambientCubicLight will produce specular reflections like this. Let’s compare our cubemap .tga files with the IEM .tgas we’ll have by the end of the tutorial:

[Fig. 9] t_back.tga is the back face of the environment cubemap, tIEM_back.tga is the back face of the irradiance map derived from it.

As you can see, the IEM image looks very different. If I were to use “env/tIEM” instead of “env/t” in the material definition above, I would get the proper result, as seen in the last screenshot of part 1. So it is that we need a properly filtered IEM for our lights to work correctly. Speaking of that mtr def though, let’s not invoke an irradiance map we haven’t learned to convert yet. Let’s try an automatic, in-engine way to convert cubemaps to IEMs, namely the makeIrradiance material keyword.

makeIrradiance and Its Limitations

Let’s uncomment the sixth line in that definition and comment out the seventh:

cameraCubeMap makeIrradiance(env/t)
//cameraCubeMap env/t

Here is how a cubemap run through the makeIrradiance keyword looks:

[Fig. 10] Say ‘Hi’ to our friend in the back, the normalmap test cylinder. It’s a custom texture I’ve made to demonstrate cubemap interactions in a clean way.

Hey now, this looks pretty nice! The scene is a bit greener than before, but you may even argue it looks more pleasing to the eyes. Unfortunately, the devil is in the details. Let’s compare the makeIrradiance keyword’s output with the custom-made irradiance map setup seen at the end of part 1.

[Fig. 11, 12] A closer look at the brick texture reveals that the undesired specular highlighting is still present. The normal map test cylinder confirms that the reason for this is the noisy output of the makeIrradiance keyword.

The in-engine conversion happens on the fly, which means there are no output .tga files for us to compare directly like we did above.
Were we able to, however, I'm sure the makeIrradiance IEM would look grainy and rough compared to the smooth gradient of the IEM you’ll have by the end of this tutorial. The makeIrradiance keyword is good for quick testing, but it won’t allow you fine control over your irradiance map. If we want the light to look proper, we need dedicated cubemap filtering software.

A Review of Cubemap Filtering Software

Here I’ll introduce three programs you can produce an irradiance map with. In the coming parts, I will present you with a guide for working with each one of them. I should also note that installing all of these is trivial, so I’ll skip that instructional step when describing their workflows. I will not relay any ad copy, as you can already read it on these programs’ websites. I’ll just list the advantages and disadvantages that concern us.

Lys
https://www.knaldtech.com/lys/
Advantages: Good UI, rich image manipulation options, working radiance/specular map filtering with multiple convolution algorithms.
Disadvantages: $50 price tag, limited import/export options, only available on Windows 64-bit systems.

cmftStudio
https://github.com/dariomanesku/cmftStudio
Advantages: Available on Windows, OSX and Linux, free, open source software, command line interface available.
Disadvantages: Somewhat confusing UI, limited import options, missing features (radiance/specular map filtering is broken, fullscreen doesn’t work), 32-bit binaries need to be built from source (I will provide a 32-bit Windows executable at the end of the tutorial).

Modified CubeMapGen
https://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/
Advantages: Free software, quickest to work with (clarified later).
Disadvantages: Bad UI, only Windows binaries available, subpar IEM export due to bad image adjustment options.

Let’s take a break at this point and come back to these programs in part 3.
A lot of caveats need to be expounded on as to which of these three is the “best” software for making an irradiance map for our purposes. None of these programs has a discrete workflow; rather, the workflow will include or exclude certain additional programs and steps depending on which app you choose to work with, though it will dovetail and be similar in all cases.

Part 3

The aim of this tutorial is twofold. First, it aims to provide the most hands-free and time-efficient method of converting an envshot environment cubemap to an IEM and getting it working in-game. Second, it aims to use as few applications as possible and to keep them all free software that is available for download, much like TDM itself. The tutorial was originally going to cover IEM production only through Lys, as that was the app I used to test the whole process with. I soon realized that it would be inconsiderate of me to suggest you buy a fifty-dollar product for a single step in a process that adds comparatively little to the value of an FM, if we’re being honest (if you asked me, the community would benefit far more from a level design tutorial than a technical one like this, but hey, maybe later; I’m filling a niche right now that nobody else has filled). This led me to seek out open-source alternatives to Lys, such as Cubemapgen, which I knew of, and cmftStudio, which I did not. I will preempt my own explanations and tell you right away that, in my opinion, cmftStudio is the program you should use for IEM creation. This comes with one big caveat, however, which I’m about to get into.

Six Faces on a Cross and The Photoshop Problem

Let’s review. Taking an envshot in-game gives you six separate images that are game-ready. Meaning, you get six split cubemap faces as output, and you need six split irradiance map faces as input. This is a problem, because neither Lys nor cmftStudio accepts a sequence of images as such.
They need to be stitched together in a cube cross, a single image of the unwrapped cube, like this:

[Fig. 13] From Lys. Our cubemap has been stitched into a cross and the “Debug Cube Map Face Position” option has been checked, showing the orientations of each face.

In Lys, only panoramas, sphere maps and cube maps can be loaded into the program. The first two do not concern us; the third specifically refers to a single image file. Therefore, to import a TDM envshot into Lys you need to stitch your cubemap into a cross. Furthermore, Lys’ export also outputs a cubemap cross, so you also need to unstitch the cubemap into its faces afterwards if you want to use it in TDM. In cmftStudio you can import single map faces! Well… no, you can’t. The readme on GitHub boasts “Input and output types: cubemap, cube cross, latlong, face list, horizontal and vertical strip.”, but this is false. The UI will not allow you to select multiple files on import, rendering the “face list” input type impossible.2 Therefore, to import a TDM envshot into cmftStudio you need to stitch your cubemap into a cross. Fortunately, the “face list” export type does work! Therefore, you don’t need to unstitch the cubemap manually; cmftStudio will export individual faces for you. In both of these cases, then, you need a cubemap cross. For this tutorial I will use Adobe Photoshop, a commercial piece of software, to stitch our faces into a cubemap cross in an automated fashion (using Photoshop’s Actions). This is the big caveat to using cmftStudio: even if you do not want to buy Lys, PS is still a prerequisite for working with both programs. There are, of course, open-source alternatives to Photoshop, such as GIMP, but it is specifically Photoshop’s Action functionality that will power these workflows. GIMP has its own Actions in the form of Macros, but they are written in Python. GIMP is not a software suite that I use, nor is Python a language I am proficient with.
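For the curious, the stitching that the Photoshop Actions will perform can be expressed in a few lines of code. This is a toy sketch, not part of any workflow in this tutorial: faces are plain 2D arrays of pixels, the vertical-cross layout and face orientations are my own assumptions, and a real script would additionally need an image library to read and write the .tga files.

```python
# Toy sketch: paste six square cubemap faces into a single vertical-cross
# image. Faces are 2D lists of pixel values; the chosen layout is an
# assumption and any per-face rotations/flips are ignored here.

def stitch_cross(faces, size):
    """faces: dict keyed 'right','left','up','down','forward','back',
    each a size x size 2D list. Returns a (4*size)-row, (3*size)-column
    grid with 0 marking the empty corners."""
    layout = {                # (column, row) in face-sized units:
        'up':      (1, 0),    #        [ up  ]
        'left':    (0, 1),    # [left][front][right]
        'forward': (1, 1),
        'right':   (2, 1),
        'down':    (1, 2),    #        [down ]
        'back':    (1, 3),    #        [back ]
    }
    cross = [[0] * (3 * size) for _ in range(4 * size)]
    for name, (cx, cy) in layout.items():
        for y in range(size):
            for x in range(size):
                cross[cy * size + y][cx * size + x] = faces[name][y][x]
    return cross
```

Unstitching (for a Lys export) is just the same index arithmetic run in reverse: copy each face-sized window of the cross back out into its own image.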
Out of deference to those who don’t have, or don’t like working with, Photoshop, I will later go through the steps I take inside the image editor in some detail, in order for the studious reader to reconstruct them, if they so desire, in their image editing software of choice. At any rate, and at the risk of sounding a little presumptuous, I take it that, as creative types, most of you already have Photoshop on your computers.

2 An asterisk regarding the “impossibility” of this. cmftStudio is a GUI for cmft, a command line tool that does the same work inside a command prompt. I need to stress that I am certain multiple faces can be inputted in the command line, but messing with unwieldy prompts or writing batch files is neither time-saving nor user-friendly. This tutorial is aimed at the average mapper, but a coder might find the versatility offered by cmft interesting.

The Cubemapgen Workflow

You will have noticed that I purposefully omitted Cubemapgen from the previous discussion. This is because working with Cubemapgen, wonderfully, does not involve Photoshop at all! Cubemapgen both accepts individual cubemap faces as input and exports individual irradiance map faces as output. Why, then, did I even waste your time with all the talk of Lys, cmftStudio and Photoshop? Well, woefully, Cubemapgen’s irradiance maps look poor at worst and inconsistent at best. Comparing IEMs exported from Lys and cmftStudio, you will see that both look practically the same, which is good! An IEM exported from Cubemapgen, by default, is far too desaturated, and the confusing UI does not help in bringing it to parity with the other two programs. If you work solely with Cubemapgen, you won’t even know what ‘parity’ is, since you won’t have a standard to compare to.

[Fig. 14] A comparison between the same irradiance map face, exported with the different apps at their respective default settings. Brightened and enlarged for legibility.
This may not bother you, and I concede that it is a small price to pay for those not interested in working with Photoshop. The Cubemapgen workflow is so easy to describe that I will in fact do just that, now. After I do so, however, I will argue that it flies in the face of one of the aims of this tutorial, namely efficiency.

Step 1: Load the cubemap faces into Cubemapgen

Returning to specifics, you will remember that we have, at the moment, six .tga cubemap faces in a folder that we want to convert to six irradiance map faces. With Cubemapgen open, direct your attention to these buttons:

[Fig. 15] You can load a cubemap face by pressing the corresponding button or using the hotkey ‘F’.

To ensure the image faces the correct way, you must load it into the corresponding “slot”, from the Select Cubemap Face dropdown menu above, or by pressing the 1-6 number keys on your keyboard. Here is a helpful list:

X+ Face <1> corresponds to *_right
X- Face <2> corresponds to *_left
Y+ Face <3> corresponds to *_up
Y- Face <4> corresponds to *_down
Z+ Face <5> corresponds to *_forward
Z- Face <6> corresponds to *_back

...with the asterisk representing the name of your cubemap. With enough practice, you can get quite proficient at loading cubemap faces using keyboard shortcuts. Note that the ‘Skybox’ option in the blue panel is checked; I recommend you use it.

Step 2: Generate the Irradiance Map

[Fig. 16] The corridor environment cubemap loaded in and filtered to an irradiance map. The options on the right are my attempt to get the IEM to look right, though they are by no means prescriptive.

Generating an IEM with Modified CubeMapGen 1.66 is as easy as checking the ‘Irradiance cubemap’ checkbox and hitting ‘Filter Cubemap’ in the red panel. There are numerous other options there, but most will have no effect with the checkbox on. For more information, consult the Sébastien Lagarde blog post that you got the app from.
I leave it to you to experiment with the input and output gamma sliders; there is no set standard for how your irradiance map is supposed to look, so unfortunately you’ll have to eyeball it and rely on trial and error. Two things are important to note. The ‘Output Cube Size’ box in the red panel is the resolution that you want your IEM to export at. In the yellow panel, make sure you set the output as RGB rather than RGBA! We don’t need alpha channels in our images.

Step 3: Export Irradiance Map Faces

Back in the green panel, click the ‘Save CubeMap to Images’ button. Save the images as .tga with a descriptive name.

[Fig. 17] The exported irradiance map faces in the folder.

These files still need to be renamed with the appropriate suffixes in order to constitute a readable cubemap for the engine. The nomenclature is the same as the table above: “c00” is the X+ face, to be renamed “right”, “c01” is the X- face, and so on. Right, left, up, down, forward and back. That’s the order! This is all there is to this workflow. A “cameraCubeMap env/testshot” in the light material will give us a result that will look, at the very least, better than the inbuilt makeIrradiance material keyword.

[Fig. 17] The map ended up being a little bright. Feel free to open Fig. 4 and this in separate tabs and compare the Lys/cmft export with the Cubemapgen one.

A Review of the Workflow

Time for the promised criticism of this workflow. I already stated my distaste for the lack of a standardised set of filtering values with this method. The lack of any kind of preset system for saving the values you like makes working with Cubemapgen even more slipshod. Additionally, in part 2, I said that Cubemapgen is the fastest to work with, but this needs to be qualified. What we just did was convert one cubemap to an irradiance map, but a typical game level ought to use more than a single IEM. Premeditation and capturing fake, “generic” environment cubemaps (e.g.
setting up a “blue light on the right, orange on the left” room or a “bright skylight above, brown floor” room, then capturing them with envshot) might allow for some judicious reuse and keep your distinct IEM light definition count down to single digits, but you can only go so far with that. I am not arguing here for an ambient cubic light in every scene either; only for those that you deem need the extra attention, or those for which the regular lighting methods enumerated in Part 1 do not quite work. I do tentatively assume, though, that for an average level you would use between one and two dozen distinct IEMs. Keep in mind that commercial games, with their automated probe systems for capturing environment shots, use many, many more than that. With about 20 cubemaps to be converted and 6 faces each to load into Cubemapgen, you’ll be going through the same motions 120 whole times (saving and renaming not included). If you decide to do this in one sitting (and you should, as Cubemapgen, to reiterate, does not keep settings between sessions), you are in for a very tedious process that, while effective, is not very efficient. The simple fact is that loading six things one by one is just slower than loading a single thing once! The “single thing” I’m referring to is, of course, the single, stitched cubemap cross texture. In the next part, I will go into detail on how to make a cubemap cross in Photoshop in preparation for cmftStudio and Lys. It will initially seem a far more time-consuming process than the Cubemapgen workflow, but through the magic of automation and the Actions feature, you will be able to accomplish the cubemap stitch in as little as a drag-and-drop into PS and a single click. The best thing is that after we go through the steps, you won’t have to recreate them yourself, as I will provide you with a custom Actions .atn file and save you the effort. I advise you not to skip the explanations, however.
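As a small taste of automation that needs no Photoshop at all, the renaming chore from Step 3 above (c00 to right, c01 to left, and so on) is easy to script. Here is a hedged sketch; it assumes the exported file names end in “c00” through “c05” before the extension, and the output base name “tIEM” is just an example.

```python
# Hypothetical helper for Step 3 of the Cubemapgen workflow: map CubeMapGen's
# exported face names (assumed to end in c00..c05) to the suffixes TDM
# expects. The 'tIEM' base name is an arbitrary example, not a convention.
import os

# Same correspondence as the face table in Step 1.
SUFFIXES = {
    'c00': 'right',    # X+
    'c01': 'left',     # X-
    'c02': 'up',       # Y+
    'c03': 'down',     # Y-
    'c04': 'forward',  # Z+
    'c05': 'back',     # Z-
}

def tdm_face_name(filename, basename='tIEM'):
    """E.g. 'corridor_c00.tga' -> 'tIEM_right.tga'."""
    stem, ext = os.path.splitext(filename)
    return f"{basename}_{SUFFIXES[stem[-3:]]}{ext}"
```

Looping this over the six exported files with os.rename would finish the renaming step in one go.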
The keen-eyed among you may have noticed that you can also load a cube cross into Cubemapgen. If you want to use Cubemapgen and Photoshop together to automate your Cubemapgen workflow, be aware that Cubemapgen takes crosses with a different orientation than the ones Lys and cmftStudio use. My macros (actions) are designed for the latter, so if you want to adjust them for Cubemapgen you would do well to study my steps and modify them appropriately. For the moment, you’ve been given the barebones essentials needed to capture an envshot, convert it to an irradiance map and put it in your level at an appropriate location, all without needing a single piece of proprietary software. You can stop here and start cranking out irradiance maps to your heart’s content, but if you’re in the mood for some more serious automation, consider the next section.
  14. Not so long ago I found what could make a pretty good profile picture and decided to try it out on these new forums. But I couldn't find a button anywhere that would let me change it. I asked on Discord and it seems Spooks also couldn't find anything anywhere. So I logged into an old alternative account and, lo and behold, that account has a button. This is on the first screen I get when I: 1) click on my account name in the top-right of the browser -> 2) click on 'profile'. Compared to my actual account: Are you also missing this button on your account? It'd be very much appreciated if that functionality could be restored to any of the affected accounts.
  15. Just so this is available in a stickied place https://forums.thedarkmod.com/index.php?/topic/20384-video-introduction-to-dark-radiant-for-new-mappers/
  16. Well, me. 'cause it's how a Windows 10 era application is meant to run (and really, there is no more input lag)
  17. Root cause is: wxgtk also tags incomplete releases, so the latest isn't offered to install.
  18. I had the temporary solution more in mind, in that case, but I don't mean to imply that there'd be a constant cursor update - that is to say, that the texture on surfaces would change as you look around. I meant more like how r_materialoverride does it: if you want another surface under the cursor to get replaced, you have to look at it and input the new command again. Just clearing that up in case there was a misunderstanding. I do agree that another opinion would be valuable; how I would use it is not general enough to be extrapolated. I can see the use-case of this command being comfortably exhausted without the mapper having to reload their map again, but I might be unduly generalizing. How I see its use is the ability to test one surface change (a building facade, road, wallpaper) against the context of others you have already put down in DR. Replacing multiple brush surfaces in-game might be extra flexible, but like you said it would be harder to do, and I think it overlaps with regular DR functionality at that point.
  19. I notice that the Issue has been Fixed. Can I ask which fix was implemented? Also, one of the comments by duzenko in the bugtracker mentions this thread: https://forums.thedarkmod.com/index.php?/topic/20365-script-string-length-128-workaround/&tab=comments#comment-446786. I don't have access to this thread for some reason. Is it important that I read this thread? If so, I'll need access. Thanks
  20. freyk


    A more recent topic about this, see https://forums.thedarkmod.com/index.php?/topic/18701-subtitles-in-intro
  21. So it's been about 4 years since VR's consumer launch and with Valve finally showing 4 HL:Alyx gameplay videos today (one, two, three, and four), I figured it would be a good time to share my current thoughts about the state of VR. I like VR quite a bit myself (it's largely the only thing I play) but I also think it's an immature technology. The hardware, software, and mechanics are very early. Just anecdotally, the userbase appears to be separated into two groups: a core group of enthusiasts that are pretty regular, and a larger, more casual, high-turnover (low retention) group whose usage follows exponential decay. Overall it reminds me of the early phase of other major technologies/new mediums (e.g. smartphones) but with an abnormal degree of hardware subsidization (abnormal for this stage of the tech) and an "improper" amount of hype. VR is also abnormal in the sense that due to the intense spectacle it typically has an extended (between a month and a year) "honeymoon phase", but this phase doesn't seem to be representative of a stable/regular user over the long term. Benefits of gen 1 VR - These are the good aspects of the current technology, but basically all of them have significant downsides as well (noted below). Immersion - This is the obvious one and what most (unfortunately) only focus on--I mean the pure spectacle of VR. It simply gives you a better sense of being "in" the virtual world and amplifies the intensity of many aspects of the experience. For example, combat intensity (Onward), horror (Walking Dead: S&S), heights and speed (Windlands 2 and Jet Island), social connection (VRChat or really anything else with multiplayer), etc etc. Each hardware iteration improves this (e.g. Vive/Rift -> Index is very nice) and playing a flat game after using VR for a while is kind of like "watching yourself" play a game in that it feels "disconnected". 
As an example, I have a friend that lives approximately two hours away and I would typically visit him in person several times a year. However a couple of months ago we both realized that we hadn't actually seen each other in person in almost 4 years, and yet it "felt like" we'd been seeing each other all along. I can only attribute that to the social connection afforded by VR--it's not like we weren't playing games together before that. Perceptual enhancements - Similar to immersion but with respect to things that allow one to engage with the virtual world in useful or consequential ways specific to VR. For example, a head-mapped visual perspective allowing one to look around independently, the ability to better ascertain the depth of things with stereoscopic vision, a correct perspective from which hand interactions are viable/intuitive, proper head-relative 3d audio, and so on. E.g. I find it much more rewarding and natural to communicate with nearby teammates in VR FPSs because the experience is as if they're right there in the same room as me. Interaction and new mechanics - 6 DOF inputs for both your head and hands allows one to interact with the virtual world in some interesting ways. Over time I've personally come to find that this is the most interesting aspect of VR and appreciate today's (rather limited) VR visuals more for how they provide the correct perspective for this kind of interaction rather than for the pure spectacle. A basic example of this is the well tuned firearm interaction model of Onward which can be extremely rewarding to use and changes up the dynamics of first person shooters dramatically. More complex examples would be the physics based melee combat, climbing, and general interaction models of Blade & Sorcery and Boneworks. 
Rather than relying on simple QTEs and a small set of mapped inputs that play out largely the same way every time they are triggered, VR opens up the possibility of accommodating an enormous number of other interaction possibilities--if the simulation allows for them anyway. Over time I've felt like flat gaming was being limited by the interface through which you engage with the virtual world, and VR appears to be a way of overcoming that. Issues with gen 1 VR - It might seem odd but the attributes of VR that I find the most compelling also happen to have aspects about them that I think are the most problematic. I think the biggest issues are in the areas of comfort, "perceptual limitations", and clunky interactions. Discomforts Ergonomic Discomforts - The headsets are hot, heavy, strapped tightly to your face, and tethered. Visual Discomforts - You can't change focus, so there's basically only one depth plane that is perceived correctly at about 2 meters away from the user (closer objects appear out of focus and "medium distance" objects look decent but still "off"). And there are many issues with the visuals that people are sensitive to, e.g. pupil swim, distortion, god rays, glares, chromatic aberration, etc etc. Physical Discomforts - Too many games require you to stand, which is ultimately a losing proposition during the critical end-of-day gaming timeslot. How many gamers are actually going to stand up to play games after work/school? Not many I think. I personally suspect that the default mode of play will eventually settle on a seated mode with smooth vertical translation on the dominant hand's vertical joystick axis, accommodated by a seat that swivels. Simulator sickness - Simulator sickness is a problem and may always be a problem. I know most can overcome it with careful exposure (and thus getting your so called "VR Legs") but getting more casual users to that point is difficult. 
And yet not building up one's VR legs will leave them very limited in how they can experience VR (so limited that it understandably may not even be worthwhile). I think there's a chance that this problem will always constrain VR to the more "hardcore" end of gaming (a notable exception being a more niche group that uses VR for "active gaming" / exercise). Perceptual limitations - Pixel density, SDE, FOV, clarity, the fixed focus, poor black levels, etc etc impose frustrating limits on how you're able to interact with the environment. E.g. devs tend to avoid near field interactions like reading something in your hand because you can't actually change focus to that depth right now. Clunky interactions In Games - In many games, the input is at least as clunky as it is compelling because current controllers are lacking in feedback, i.e. think trying to navigate through everyday life with unfeeling hands. We take for granted how complex and informative the senses of feedback through the hands are and how critical they are to even the most basic of interactions. Object reorientation, grabbing objects, throwing objects, swinging melee objects, opening doors, etc etc are all quite hindered right now. Proprioception hardly compensates, and visual feedback must be largely independent of this for viable input mechanisms that aren't frustrating (imagine trying to carefully watch your hands perform even the most inconsequential interaction). Another problem is with the software itself and how well their interaction systems are being implemented. The same general interaction concepts are being implemented in a multitude of different ways, with some being rather gimmicky/pointless (i.e. you may as well just press a button to execute them) while others truly offer a degree of depth that warrants the use of motion controllers. In Desktop interaction - Outside of and in between games you really feel crippled. 
There's no good way (yet) for key input and we need integrated eyetracking to move past this silly laser pointer/Minority Report UI phase. Right now all of these fatigues and frustrations lead to many people not feeling particularly motivated to use the headsets very much after the honeymoon phase. Until the hardware better replicates human vision and until it is somewhat comparable to the clarity and comfort of a desktop monitor, I think VR will continue to have major retention issues. And yes, to me it's currently worth it--especially with content that does something interesting with VR beyond spectacle, but I'm just a nutty enthusiast. And, sure, when it comes to what's missing from current gen VR, "content is king" and "cost" are the reflexive platitudes you hear from most people. However, at this point VR already has a good deal of engaging content (especially given the age of the medium) and it's also about as cheap as it will get for a decent experience and for the foreseeable future (what most people don't realize is that the Facebook, WMR, and Sony headsets are subsidized), but retention remains an issue. The problem I see is that even an extremely polished and grand experience (like HLA) is difficult to keep coming back to month after month if what you're experiencing it through is fatiguing/uncomfortable and frustrating, especially when it's competing with the wildly successful medium of flat gaming that has been refined over many decades and that has none of VR's comfort problems. There is a very high threshold that must be overcome here (there's a big difference between "tolerable" and "I actually want to use this every evening past the honeymoon phase") and getting to that point will take some incredible technology. I don't think it will be any different for Half-Life: Alyx. 
Future advancements - I'm trying to be really candid about all of the issues VR has but, to be clear, I'm really interested in the technology and it's where I invest much of my free time playing and hacking around. I could see the following things dramatically improving the experience and gradually (with each incremental improvement) making the technology something more and more PC gamers will actually use regularly.

Variable focus - This is the feature I think everyone wants without actually realizing it (though it's not quite the same thing, if you disabled someone's ability to change focus in real life, the value would immediately become apparent). It will dramatically improve comfort, near-field interactivity, and immersion. Facebook detailed their approach at the last Oculus conference, and others hold patents for similar solutions. Unfortunately I think we'll be very lucky to see this in the next major headset refresh around 2024 or so.

General visual improvements and foveated rendering - This covers things like pixel density, screen-door effect, field of view, lens quality, pixel persistence, refresh rate, and so on. Having used the Valve Index for a while, I actually think we're getting pretty close to "good enough" here. With another decent incremental improvement beyond today's specs (for which eye tracking and foveated rendering will be required), I think most users' primary concerns with VR technology will shift elsewhere.

Smaller, lighter, and wireless headsets - Pancake lenses and similar advances will allow headsets to become much smaller and lighter (there are already some examples shown at this year's CES). 60 GHz 802.11ay and foveated compression will allow us to transmit the data wirelessly with negligible latency. This should dramatically improve comfort and make the technology less distracting to the experience. The less conscious you are of the thing strapped to your head, the better.
Interaction (hardware) - Haptic feedback is a major area that I really hope the industry leaders (Valve, Facebook, Sony) are focusing on internally. The single dimension of vibration in each controller is insufficient for the reasons noted above, and I think the lack of feedback is the major source of motion-controller clunkiness. Right now I think you need something that gives users a sense of positional and rotational forces relative to at least a single point within the motion controller (e.g. something like this: https://www.roadtovr.com/miraisens-3dhaptics-directional-haptic-feedback/ ). This would give the user a vision-independent spatial sense of how their hand is interacting with the environment, how wielded objects are behaving, and how to resolve "violations" of the simulation (e.g. when one's virtual hand intersects a solid object). Like any other input/feedback system used in gaming, I don't think the feedback manifested in one's hand needs to be 1-to-1 with the simulation--the brain just needs the information in an intuitive form over a sufficient number of dimensions. However, I don't think things like full hand simulation at the finger level (e.g. per-finger force feedback and touch sensation) will be viable for gaming until such an input device can adequately simulate joysticks, triggers, and buttons itself--and that is so far in the future that I don't think it's even worth thinking about in the consumer space at this point. I've seen the early implementations of such things, but I don't think they're anywhere near viable.

Interaction (software) - When the hardware interface through which one interacts with a virtual world gives the user a higher capacity for control, the software simulation must expand in tandem to respond to that capacity (not doing so can make a VR experience feel rather lifeless).
Over the past few years developers have been fleshing out VR interaction systems across a wide variety of contexts to show how motion controllers can be used (with each game typically focusing on one or two core mechanics, e.g. firearm simulation, flying, swimming, climbing, swinging, etc.). More recently you see developers like SLZ (Boneworks) and WarpFrog (Blade & Sorcery) trying to generalize their interaction systems through the physics engine, so that all aspects of the simulation behave consistently and a much larger space of potential interactions opens up. As these mechanics improve (and there is a lot of room for improvement), I can really see VR interaction becoming something that is highly coveted.

Anyway, I generally write things like this up just to help organize my own thoughts, but hopefully it's of use to someone else. Right now I'm enjoying what VR has to offer and I'm optimistic about future tech, but I'm also a bit nervous about hardware and software developers sticking this out for another five years or so, given the expectations they came in with. Only about 1.3 million VR headsets show up as connected in the Steam Hardware Survey, and I suspect many more have been purchased but just aren't being used. I think that back in 2016 the expectations about the speed of progress were completely unrealistic. When you look at the consumer launch of other mediums and how long it takes for them to get established, 10+ years isn't that unusual. The impression I got from VR marketing and tech journalists was that this was being treated like the launch of a new console, but it was obviously something quite different.
  22. See also our reactions in this topic: https://forums.thedarkmod.com/index.php?/topic/19755-nice-game-development-series-war-stories/
  23. Things are pretty hectic over here; I wasn't on the forums for a week, so even that ping might not have been a saving grace if I hadn't logged in to divert responsibilities from other things I've got to do... Anyway, yeah! The command seems to work alright: reloading the game properly resets the materials, but reloadModels all doesn't reset material changes on brushes, aptly enough. I'm not sure that's all that important, though.
  24. The number of mirrors is not too high today, and a new mirror would definitely be welcome. However, we have been split on the question of whether we could use a server as an official mirror when its owner has zero posts on these forums. The TDM code does not properly verify downloads, so we must be sure that mirror owners can be trusted. Sorry.
  25. Announcing the Release of 'Requiem' for The Dark Mod!

Download

Download the latest version of The Dark Mod here: http://www.thedarkmo...wnload-the-mod/

Download the mission here:
Mediafire: http://www.mediafire...u89/requiem.pk4
Southquarter: http://www.southquar...ons/requiem.pk4
Fidcal.com: http://www.fidcal.co...ons/requiem.pk4

Create a folder in your Dark Mod install with the path "darkmod/fms/requiem" and place the downloaded .pk4 file inside. When you load up The Dark Mod, the mission will appear on the "New Mission" page. Requiem can also be found directly using the in-game loader.

Gameplay Notes

While this mission is playable in TDM 1.8, for an optimal experience please download and play it in TDM 2.0 (or higher). Most inventory items in the game can be dropped, so there's no need to carry them around once they are no longer of any use. Note that if you use noclip or other console commands while playing, there is a good chance you will break the intended flow of gameplay.

Credits

Mapping and Readables: Gelo R. Fleisher
Voice Acting: Goldwell
Additional scripting: Obsttorte
Additional textures and assets: Flanders, Sotha, Grayman, Springheel, Bikerdude, Obsttorte
Additional map optimizations: Bikerdude
Testers: Bikerdude, Obsttorte, Gnartsch, AluminumHaste, Baal, nbohr1more, PPoe
Custom Soundtrack: Leonardo Badinella - http://leonardobadinella.com/
Additional Music: Lee Rosevere - http://freemusicarch...c/Lee_Rosevere/
Marianne Lihannah - http://www.funeralsinger.net/
Vox Vulgaris - http://www.last.fm/music/Vox+Vulgaris/

A note from the author

Hi all. While I've been involved in indie game development for a while now, I'm first and foremost a writer. My most recent project has been a novella that tries to capture the visual feel and tone of the Thief series (you can find the link below).
As I was writing, I found myself playing a lot of Thief and Dark Mod fan missions, and got to thinking that maybe I wanted to make one myself as a companion piece to the book. When I finished writing, I had a bit of downtime and decided to take the plunge. Having never done any serious mapping before, my plan was to make a small mission that I could bang out in a month or two and call it a day. Well, as sometimes happens, the project got a little bigger than I had planned. Ten months and lots of elbow grease later, Requiem is finally ready for you to play.

I'd like to thank everyone who pitched in to make Requiem come alive, from those who took the time to answer my many questions on the forums to those who actively contributed to the FM. I especially want to thank Bikerdude, who served as my mapping mentor, and Obsttorte, whose clever scripts really turned what was in my head into the game you are playing. Above all, I want to thank you for downloading and playing Requiem; I hope you enjoy it.

Links of Interest

Author's Blog: http://gfleisher.blogspot.com/
Companion Novella (Amazon): http://www.amazon.co...k/dp/B00BYEW02M
Companion Novella (Smashwords): http://www.smashword...oks/view/298956