The Dark Mod Forums


Search results for tag 'blender lwo export'.

  1. Is this still true now that 2.11 is out with this feature? I've been wondering whether this might open up the possibility of using larger models for things other than terrain, for example modeling one building at a time in Blender and then assembling them into a street in DR. I'll probably at least test it out someday...
  2. I have seen this addon mentioned on the idTech 4 Discord regarding exporting Doom 3 brushes from Blender in the .map format, which might let you accomplish some of what you want as far as building level geometry in Blender. I haven't tested it personally with TDM: https://github.com/c-d-a/io_export_qmap
  3. Yes, this is possible, but a few things make it a little impractical:
1) Models do not seal the void, so you need BSP brushes behind or inside models to properly seal a map. As I recall, there are Doom 3 and Quake map exporters that can export your initial block-out geometry as standard brushes, so you could do both modeling and brushwork in Blender; you'd just have to do it in phases.
2) Visportal placement is crucial to performance, so you will need to ensure your Blender models are cut at portal boundaries, so that you can set up brushes and visportals at choke points to prevent rendering outside the player's view area.
3) Model geometry is often too complex for the physics engine. You will need to make invisible clip models or clip brushes with a more simplified structure, to ensure AI pathing and overall collision physics perform well.
4) Models need to be split at light boundaries. If a single light touches a model, the engine will do the light geometry calculation for the whole model; if enough lights hit a large model, it can severely impact performance. Brushes auto-split on light boundaries, but often split in less optimal locations, so it has become the convention to use func_static geometry to force splits in the preferred locations.
5) Blender has no knowledge of specialty brushes like fog volumes, water, SEED auto-dispersal, etc.
So it's possible to do the majority of mapping in Blender, but ultimately you will need to use DarkRadiant for the finishing touches. The current paradigm is to create modular content in Blender and then assemble the modules in DarkRadiant, along with BSP caulking and visportal placement.
  4. Hi. I'm a long-time Blender user and was wondering if it's possible, in part or in whole, to use Blender to make levels, mostly the level geometry. I understand that NPCs, scripting, partitioning, etc. would be better done in DarkRadiant, but I'd like to use Blender to make the buildings, streets, terrain, etc. and to place them.
  5. Thank you for a great puzzle FM! I'm having real trouble with TDM v2.10 64-bit on Linux Mint 21.1. Link to the guide supplied by V-Man339: https://forums.thedarkmod.com/index.php?/topic/15844-fan-mission-the-gatehouse-by-bikerdude-goldchocobo-updated-02112014/page/7/#comment-429405
  6. I agree. I'm copying this idea over to the https://forums.thedarkmod.com/index.php?/topic/21741-subtitles-possibilities-beyond-211/ thread. Related: I'll probably do some more experiments with the current TDM font & size, to understand max char count versus field width.
  7. Some years ago I successfully downloaded the wiki with wget, for the TDM DVD. I don't know if this method will work now. https://forums.thedarkmod.com/index.php?/topic/19998-tdm-collection-dvd/#comment-437887
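For anyone retrying this, a typical wget mirroring invocation would be something along these lines (an untested sketch; the current wiki software may need extra rate limiting or reject rules):

      wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://wiki.thedarkmod.com/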
  8. @datiswous, made that correction fm_test.subs --> fm_conversations.subs @stgatilov, about srt naming and file location, would you be OK with the following edit? New/changed stuff in italics: srt command is followed by paths to a sound sample and its .srt file, typically with matching filenames. An .srt file is usually placed either with its sound file or in a "subtitles" folder. The .srt file format is described e.g. [1]. The file must be in engine-native encoding (internationalization is not supported yet anyway) and have no BOM mark. It contains a sequence of text messages to show during the sound sample, each with start and end timestamps within the sample's timeline. It is recommended to use common software to create .srt files for sound samples, instead of writing them manually. This way is more flexible but more complicated, and it is only necessary for long sounds, for instance the sound sample of a briefing video. It's a simple enough standard that it can be shown as a short example, demonstrating that subtitle segments can have time gaps between them. And the example can show correct TDM usage, without requiring a trip off-site and picking through features that TDM doesn't support. Specifically, the example shows how to define two lines by direct entry, rather than using unsupported message location tags (X1, Y1, etc.), and skips other unsupported SRT font markup like italics, mentioned in the Wikipedia description. The example would also show the TDM-specific path treatment. The example could be inserted before the sentence "It is recommended to use common software...."
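For illustration, a minimal .srt of the kind proposed above might look like this (timestamps and text are invented; note the time gap between the two segments and the absence of position tags or font markup):

      1
      00:00:01,000 --> 00:00:03,500
      A first subtitle message,
      wrapped onto two lines.

      2
      00:00:05,250 --> 00:00:08,000
      A second message, after a gap in the sample's timeline.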
  9. I guess the best image-to-normal conversions I've seen here in the forums are via njob. I am curious about this AI thing though: https://github.com/HugoTini/DeepBump has to be installed into Blender as a plugin?
  10. DarkRadiant 3.8.0 is ready for download. What's new:
• Feature: Support new frob-related material keywords
• Improvement: Mission selection list in Game setup is not alphabetically sorted
• Improvement: Better distinction between inherited and regular spawnargs
• Improvement: Silence sound shader button
• Improvement: Add Reload Definitions button to Model Chooser
• Fixed: Model Selector widgets are cut off and flicker constantly on Linux
• Fixed: DarkRadiant will not start without Dark Mod plugins
• Fixed: GenericEntityNode not calculating the direction correctly with "editor_rotatable"
• Fixed: RenderableArrow not drawing the tip correctly for arbitrary rotations
• Fixed: Light Inspector crashes on Linux
• Fixed: Models glitch out when filtering then showing them
• Fixed: Skin Editor: models not centered well in preview
• Fixed: "Copy Resource Path" includes top level folders
• Fixed: Skin Editor: internal test skins are shown if Material Editor was open previously
• Fixed: Changing Game/Project doesn't update loaded assets correctly
• Fixed: Model Chooser: initially hidden materials aren't revealed when enabling them
• Fixed: Choosing AI entity class 'atdm:townsfolk_commoner_update' causes crash
• Fixed: Sporadic assertion failure on shutdown due to LocalBitmapArtProvider destruction
• Fixed: Prefab Selector spams infinite error dialogs on Linux
Windows and Mac downloads are available on GitHub: https://github.com/codereader/DarkRadiant/releases/tag/3.8.0 and of course linked from the website https://www.darkradiant.net
Thanks to all the awesome people who keep using DarkRadiant to create Fan Missions - they are the main reason for me to keep going. Please report any bugs or feature requests here in these forums, following these guidelines:
• Bugs (including steps for reproduction) can go directly on the tracker. When unsure about a bug/issue, feel free to ask.
• If you run into a crash, please record a crashdump: Crashdump Instructions
• Feature requests should be suggested (and possibly discussed) here in these forums before they may be added to the tracker.
The list of changes can be found on our bugtracker changelog. Keep on mapping!
  11. TTLG? That's Through the Looking Glass Forums, a Looking Glass fan community. It has been around for a long, long time. https://www.ttlg.com/forums/
  12. No, not really, I have only seen him on the forums many years ago. He deserves a lot of credit for providing the SVN infrastructure in the beginnings of the project; it was only later that we transferred this to a hosted server. Memory is blurry, but I think my time working on TDM and his didn't overlap that much - I had been getting more active, and he pulled back a bit. He bootstrapped quite a few important systems; IIRC he worked on the light gem and the first Stim/Response and Inventory implementations. From what I recall, he gave the project an organisational backbone in the technical department, which is crucial to keep things together. Folks like Spring and NH, who joined long before me, could give more insights, I guess. (Looking at my join date makes me feel old either way. From TDM's current vantage point, the year 2006 seems like right at the beginning, but the mod had already existed for two years by the time I joined. With the first release in 2009, 2006 is rather in the middle. Heck, I haven't touched the mod code for at least 10 years - and I can still remember a few things, which is a testament to how much it occupied my thoughts back then.)
  13. Last night we chatted on Discord about Vulkan support and PBR, bringing up a system for adding proper reflections once more. I suggested screen-space reflections, but it was argued that reflection probes would still be better than SSR in our case. A ticket for that is already open, but I'm uncertain whether it's the best way: a manual approach would need new entities to be placed by the mapper, which requires extra effort and would exclude old FMs that are no longer updated, while the result would also be inaccurate and static, meaning you won't see an AI reflected as they walk on a shiny metal plate, for instance. If PBR with realistic graphics can be a hope for 2.12 or later, we'll definitely want to do it right rather than using a limited / limiting system.
A technique came to mind that might just work for our engine and setup, and I wanted to share it here before I forget the specifics. This might already be a common practice and even have a name; for the purpose of this thread I'll just describe it as I originally imagined it, and I feel it would work for our engine. The idea is to use reflection probes in an automated fashion: a probe is automatically spawned in every valid area (within bounds) in the player's view, at a given grid unit size. For example, if the grid scale is 16, a probe may exist at position '0 -48 16', another at '0 -48 32', and so on. Every probe projects its result onto all surfaces in its radius, masked by their specular channel, the best alternative presently available until we convert all textures for PBR support. The cool thing is that the same cubemap can also be projected as a light source, allowing for global illumination in addition to just reflections! This would be similar to how the Irradiance Volume works in Blender / Eevee, except each point renders a little cubemap from its perspective.
I already know what everyone is rightfully thinking: this is going to kill performance! After all, each probe needs to produce a render from its perspective, and being a 360° panorama it will open portals in all directions. Normally that would be insane, but I thought of various ways the impact could be reduced to very bearable amounts:
• The frame buffer of each probe would be at a very small resolution by default, since much detail shouldn't be needed; even 64x64 per cube face might do.
• Each probe only needs a draw distance of double its grid size, given it only has to see as much as is necessary to fill the gaps between it and its neighbors. So if the grid size is set to 64, each probe would only have a draw distance of 128 to cover the space between its neighbors; nothing beyond that would exist to it.
• Only probes the player can see would ever be spawned and calculated: if the view frustum doesn't overlap the virtual cube whose corners touch that probe's neighbors, the probe is dropped from memory. Probes are also only spawned in valid visible space, never out of bounds, including rooms culled by portals.
• A draw distance after which probes are removed or not spawned can also be included. Any probe beyond that distance would be ignored, slowly fading away so as not to noticeably pop in and out of existence; reflections / GI are a discrete effect you'll only see up close.
• Similar to lights and shadows, the result of a probe should be cached and not recalculated unless necessary: unless something moves within radius of a probe, its cubemap won't be rendered again. Probes would only be updated either when they first come into the player's view, or when something touching their cube has moved. Note that particles and lights with animated textures would have to count, as you may see them in a detailed reflection; candles and torches would force per-frame updates for the probes they intersect.
• If performance is still affected after all that, frame skipping is also a solution: reflection probes can update at a lower frame rate to further decrease their impact. If you have a 60 Hz monitor and are running TDM at 60 FPS max, reflections could run at 30 / 20 / 10 FPS without looking out of place. They could in fact be defined as a fraction of your average framerate, so for the FPS you get you can decide whether it's going to be 1/2 or 1/4 and so on; this has the added advantage of giving back proportionally more FPS the lower your framerate goes.
There are several reasons why I believe this would be better than mappers manually placing new probe entities:
• Extra work is required from the mapper, who needs to figure out how to cover each area with probe lights. Every piece of the map would need to be encompassed by a reflection / GI probe, otherwise you won't get shiny surfaces or bounce lighting, which will look out of place.
• Most existing FMs will never be updated to use this: only maps created or updated after the feature is introduced would benefit, and anyone playing old missions would get boring visuals without reflections / GI, inconsistent with new ones. I strongly believe this should be a universal effect like SSAO.
• Single large probes produce inaccurate results: the larger the area a cubemap covers, the more drift and fakery you get with distance from its center. This can be mitigated with parallax-corrected cubemaps, which automated cubemaps should use too; nonetheless a single point of view for a large room makes the result less accurate the further you go. With an automated approach you could have many probes in a dense grid (if your hardware allows it) for a much more accurate result at any position and angle.
What are your thoughts on this solution? Do you think it's realistic and can work out? I do believe it should be either this or screen-space reflections the way they're done in Godot or Blender / Eevee. If SSR isn't the right choice for our engine, reflections and global illumination alike could be captured using a global grid of capture points shining within their respective areas. (A rough sketch of the probe lifecycle follows below.)
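To make the proposed lifecycle concrete, here is a minimal, engine-agnostic sketch in Python. All names (Probe, ProbeGrid, visible_cells) are illustrative assumptions, not actual TDM code; the real thing would live in the renderer:

      import math
      from dataclasses import dataclass

      @dataclass
      class Probe:
          center: tuple              # world-space position on the grid
          cubemap: object = None     # cached low-res cubemap; None = not rendered yet
          dirty: bool = True         # set when something moves inside this cell

      class ProbeGrid:
          """Automated probe lifecycle: spawn in view, cache, invalidate on
          movement, drop beyond a draw distance."""

          def __init__(self, grid_size=64, draw_distance=512):
              self.grid_size = grid_size
              self.draw_distance = draw_distance
              self.probes = {}       # cell coords -> Probe

          def cell_of(self, pos):
              return tuple(int(math.floor(c / self.grid_size)) for c in pos)

          def update(self, view_pos, visible_cells, moved_positions):
              g = self.grid_size
              # 1) spawn probes only in valid cells the player can currently see
              for cell in visible_cells:
                  if cell not in self.probes:
                      center = tuple((c + 0.5) * g for c in cell)
                      self.probes[cell] = Probe(center)
              # 2) drop probes beyond the draw distance so they fade from memory
              for cell in list(self.probes):
                  if math.dist(self.probes[cell].center, view_pos) > self.draw_distance:
                      del self.probes[cell]
              # 3) anything that moved dirties the probe whose cell it touches
              for pos in moved_positions:
                  cell = self.cell_of(pos)
                  if cell in self.probes:
                      self.probes[cell].dirty = True
              # 4) re-render only dirty or never-rendered probes, with a tiny
              #    frustum: draw distance 2 * grid_size, e.g. 64x64 per cube face
              for p in self.probes.values():
                  if p.dirty or p.cubemap is None:
                      p.cubemap = self.render_cubemap(p.center, radius=2 * g)
                      p.dirty = False

          def render_cubemap(self, center, radius):
              # stand-in for the engine's actual cubemap render pass
              return ("cubemap", center, radius)

The point of the sketch is the caching rule in step 4: a cubemap is only re-rendered when its cell is dirty or empty; everything else is a cheap dictionary lookup per frame.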
  14. There are other complications though. How much fall damage should the player take if they decide to jump off a rope with the body? Should the player let go of the body or not? Also, right now it's much harder to jump off the rope with the body than without it. Why? And last but not least, how would you teach players these things, possibly without much hand-holding and text prompts explaining the rules? I guess I'm with @STiFU on this one: if you restrict dropping the body, you'll save yourself (and mappers) a lot of headaches. But even that doesn't solve all the problems. I know I'm in the minority in these forums, but as a player, I really appreciate the beauty and efficiency in the simplicity of a design - not overthinking everything and adding more and more rules for the sake of realism (or anything else).
  15. I think the reason the dev forums exist is to provide a place where the implementation of features can be discussed without getting mixed up with other debates when someone believes what the devs are doing is wrong. We often post public discussion threads for features with subjective elements, like the frob outline, because community feedback is very important. But there will always be vocal defenders with strong views for or against certain features, or about how exactly they should be implemented. At some point a decision has to be made and carried through, which is what the dev forums are for. Almost all of the threads are very technical, basically explaining and discussing recent or potential code changes with other devs. It's hard to say. It's a hobby the devs do in their spare time, so people come and go when they're in the mood and when they have the time. The team page is mostly accurate, except for some relatively newer additions like myself.
  16. This is basically "do include my work ASAP because I worked so hard, or else *sulk*". This is a similar case: https://forums.thedarkmod.com/index.php?/topic/21679-beta-testing-211/page/10/#comment-482352 This is neither a commercial product nor a phishing email; that sense of rush and pressure is artificial. These releases typically do take a long time, and even then, there are often many things broken by mistake or omission. Often there aren't enough people to test stuff, or they're not competent enough, etc., etc. There's little point in hurrying.
  17. Actually I might be confusing two different things. What the latest LWO exporter fixes is the smoothing angle. Previously this was hard-coded at some weird value slightly less than 90°, but this can now be configured to smooth everything, smooth nothing, or use the Autosmooth Angle setting on the object. I have no idea if explicit smooth groups are supported, or if this is even a thing in Blender.
  18. As far as I know the most up-to-date one is the script I maintain (there is a single tdm_export script which supports both ASE and LWO export). However I haven't specifically tested with the latest Blender 3.4 series, so it's possible that it will need an update. I believe this information is out of date. The problem of LWO losing smoothing information was caused by the Blender exporter itself ignoring object-specific data and enforcing a hard-coded smoothing angle. This is now fixed in my latest version, although the old behaviour is selectable at the time of export if you don't want to deal with object smooth groups. As far as I can recall, when I was testing this, the smoothing options did take effect in the engine (although I couldn't say whether they were 100% mathematically correct).
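For anyone trying this, the per-object Autosmooth Angle mentioned above can also be set from Blender's Python console. A small sketch, assuming Blender 2.8x-4.0 (the use_auto_smooth property was removed in 4.1) and an active mesh object:

      import bpy, math

      mesh = bpy.context.active_object.data
      mesh.use_auto_smooth = True                  # let the Autosmooth Angle drive shading
      mesh.auto_smooth_angle = math.radians(35.0)  # edges sharper than 35° stay hard
      for poly in mesh.polygons:
          poly.use_smooth = True                   # shade smooth everywhere else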
  19. Edit: in post 5 I discovered Whisper, which does this task MUCH better, so don't use VOSK. Some of the info up to post 5 is still relevant for subtitle editing in Kdenlive in general. (A sketch of the Whisper route is at the end of this post.) I previously posted about this in a status update; to make it a bit more visible for the future, I'm posting the info in this topic as well. I recently figured out how to make subtitles work for missions following this wiki guide: https://wiki.thedarkmod.com/index.php?title=Subtitles You can type in the subtitle text manually in either the .subs or .srt files (in a text editor) or use a video editor for that (recommended for .srt). Some advanced editors, including the free and open-source multiplatform (Windows, Linux and Intel Mac) editor Kdenlive, can also auto-generate the subtitle text for you from the audio or video file. You can then export to an .srt file that works directly in TDM. If you want to use the .subs files for shorter sentences, you can just copy text from the .srt files. In Kdenlive you can install speech-to-text libraries from VOSK. For this to work you have to download and install Python. The installation and usage process is shown in the following video (6.5 minutes). To sum it up:
First-time configuration:
• Install Python. (On Windows, during setup you have to select Advanced Options and mark "Add Python to environment variables" - super important!)
• In Kdenlive, go to the Settings menu and click Configure Kdenlive. In that configure window, click Speech to text in the left menu.
• There, click the link to download speech models. On the website ( https://alphacephei.com/vosk/models ) click and hold a model download link, then drag it onto the Configure Kdenlive window; Kdenlive then asks to install the model from the URL. vosk-model-en-us-0.22-lgraph is probably decent for most use cases, but you can install and test them all.
To use it:
• First load an audio or video file into the view by dragging the file onto one of the audio or video tracks at the bottom (video: V1, V2; audio: A1, A2).
• Click menu Project > Subtitles > Edit Subtitle Tool. You will see an extra Subtitles track on top.
• Select the audio or video file in its track (it is selected when it is outlined with an orange border), then click menu Project > Subtitles > Speech Recognition.
• In the Speech Recognition dialog, select the correct language model and choose the option "Selected clip".
• After generation, you can preview the generated subtitles in the top-right window; make sure playback is at the start position. With an audio file, you see a black background with the subtitles on top.
• Now you can tweak the positions and edit the text directly in the Subtitles track. This takes up the most time: unfortunately the generation is not flawless, so you have to correct some words. Tweaking the subtitles for Requiem took me hours, because I wanted them to line up differently. Usually the subtitles are not generated as full sentences, which looks sloppy. If you want to add subtitles quickly without spending much time on it, it can be done this way; if you want to do it right, it still takes a lot of time in my experience.
Exporting to .srt is shown in the following video, although it's really just one step: click menu Project > Subtitles > Export subtitle file. Alternatively, you can just save the Kdenlive project and the .srt is exported as well; every save will update the .srt file. I might create a wiki article about it later.
[screenshot: Kdenlive edit window]
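Regarding the Whisper recommendation in the edit above, here is a hedged sketch of that route using the open-source openai-whisper Python package (model choice and file names are illustrative; note that TDM wants engine-native encoding without a BOM):

      import whisper  # pip install openai-whisper

      def to_timestamp(seconds: float) -> str:
          """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
          ms = int(round(seconds * 1000))
          h, ms = divmod(ms, 3_600_000)
          m, ms = divmod(ms, 60_000)
          s, ms = divmod(ms, 1000)
          return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

      model = whisper.load_model("base")          # larger models are more accurate
      result = model.transcribe("briefing.wav")   # hypothetical input file

      # Write plain ASCII with no BOM, as the engine expects
      with open("briefing.srt", "w", encoding="ascii", errors="replace") as f:
          for i, seg in enumerate(result["segments"], start=1):
              f.write(f"{i}\n{to_timestamp(seg['start'])} --> {to_timestamp(seg['end'])}\n")
              f.write(seg["text"].strip() + "\n\n")

You would still want to proofread and re-time the result, as with VOSK, but in my understanding the raw transcription quality is considerably better.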
  20. What?! That is news to me; I use .lwo for all my static models and smoothing imports into idTech 4 just fine! Though I must say, I don't use Blender for anything but MD5 models and still use old Modo 601, the same tool Seneca Menard, an id Software artist, uses; I also use his custom Modo plugins, which he made public, exactly to streamline making static models for Doom 3 and Rage. In Modo you can set a smoothing angle or edge smoothing (hard or soft edge) and it works in idTech 4; perhaps Blender and other tools handle .lwo differently? After all, Modo was made by the same people that created LightWave 3D, so they literally created the .lwo file format.
  21. Solid 1, if no collision surface is available, will use the model geometry itself for collisions (search for combat model in the code also), so yes, it is not the best approach; a custom collision model is always preferable for performance. But making one in DR is something I personally don't do, and IMO it is at best suited for simple box-like collision models. Like I said, the best option is to create the collision surfaces in a 3D tool like Blender: you can use the shadow model for it, just change the material from "shadow" to "collision", or even use the lower-poly LOD model; it can't be easier. Btw, isn't AFEntity for ragdoll physics? I thought that was only for skinned MD5 models, not static ASE or LWO models.
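To illustrate the material swap described above, a small hedged Blender sketch (run in Blender's Python console on a duplicated shadow mesh; the plain material name "collision" follows the post, and the exact material path your project expects may differ):

      import bpy

      obj = bpy.context.active_object      # e.g. the duplicated shadow mesh
      mat = bpy.data.materials.get("collision") or bpy.data.materials.new("collision")

      obj.data.materials.clear()           # drop the old "shadow" material
      obj.data.materials.append(mat)       # every face now exports as collision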
  22. Where can I get an LWO exporter for Blender 3.4.x to give it a try?
  23. As far as I remember, the engine drops smoothing information from the LWO file and applies automatic determination of smooth groups based on some hardcoded angle. So I'm not sure these smoothing settings will help in TDM or Doom 3.
  24. Is there an LWO exporter that works with Blender 3.4.1? I use some rogue ASE exporter, and while it seems to work fine on small low-poly models, it doesn't seem to export larger, higher-poly models properly.
  25. Afaik yes, .cm files (meaning collision model) can be used as a separate collision model for an in-game entity; I think there's even a special func entity for that. But the fact is, it was hardly used by id Software for that purpose: the majority of Doom 3 models have the collision surface incorporated in the model itself (like the shadow model), and they did it in Maya, 3DS Max and Modo. So motorsep, I ask: why would you want to make a separate collision model in DR? Can't you edit the model in Blender, for example?