The Dark Mod Forums

stgatilov

Active Developer
Everything posted by stgatilov

  1. I hoped for something simple, but it indeed looks complicated. As expected. Why do you suggest using pull requests? Reviewing map changes is futile in many cases. Wouldn't it be easier to work on one branch in one repo and push directly? Or is it a workaround for how easily git loses data? Why do you suggest rebasing? That's an easy way to lose data. Better to just merge everything.
  2. And if you start TDM afresh, the paper still shows up when you load that save?
  3. Ok, let's dream a bit. Suppose that we have one server which stores the map in something like an SVN repo: an initial revision plus a sequence of patches. The server works as the only source of truth about the map state. Suppose that all assets are stored on a mounted network disk; as far as I understand, Google Drive can be mounted as a local disk, and I guess such a simple approach is enough. Suppose also that we have a framework for map diffs/patches with the following properties:
     * It is enough to find diffs on the per-entity and per-primitive level: a change in entity spawnargs stores all spawnargs of the updated entity, and a change in a primitive stores the full text of the updated primitive. No need to go down to the level of "add/remove/modify a spawnarg".
     * A modification inside a patch stores both the old and the new state.
     * Simultaneously changing spawnargs on the same entity, or changing the same primitive, results in a conflict, which can be detected reliably by looking at the patches.
     * A sequence of compatible patches can be quickly squashed together, which might be convenient in some cases. Patches can also be reversed easily.
     * It is possible to quickly save a diff between the current DR state and the previous such moment. I guess that's what hot reload already does, except that perhaps it is too careless with primitives now.
     * It is possible to quickly apply a patch, as long as it does not result in a conflict (a conflict happens e.g. when we modify entity spawnargs which differ from the old state stored in the patch).
     Since DarkRadiant is a complex thing, applying patches cannot happen at an arbitrary moment. If a dialog is open and we apply a patch which adds/removes an item shown in this dialog, that could easily result in a crash. Cleaning all the code for such cases is too unreliable and too much work. It is much easier to simply say that patches can only be applied when no dialogs are active, no edit box is active, etc.
     Now, in order to turn DarkRadiant into a multiplayer game, we only need to invent some protocol:
     * The server simply receives patches from all clients and commits them to its repository in order of arrival, creating new revisions. Every diff has a unique identifier (not just a hash of its content, but the identifier of the client plus the sequence number of the diff on that client's side). If applying a patch results in a conflict, a new revision is still created, but it is marked with a "rejected" flag and changes nothing (empty diff). Storing rejected revisions is necessary so that the client learns that its patch was rejected. The server also remembers the last revision sent to each client, and it actively sends patches of newer revisions to the clients.
     * The client has a persistent thread which communicates with the server in an endless loop. It has a queue of outgoing patches, which it sends to the server sequentially, and a queue of incoming patches, which it populates as it receives more data from the server. This thread never does anything else: the main thread interacts with the queues when it is ready.
     * The main thread of the client works mostly as usual. When the user changes something, we modify all the usual data and compute a diff, then add the patch to a local patch queue. Note that this is different from the outgoing patch queue in the communication thread. This local queue is divided into two sections: the beginning of the queue is "already sent to server", and the tail is "not sent to server yet"; new patches go to the "yet-unsent" tail.
     We establish a "sync point" when 1) no dialog is open (i.e. DR is ready to be updated), and 2) the patch queue is not empty or 5 seconds have passed since the previous sync point. The sync point is the complicated part:
     * Pull all the received incoming patches from the communication thread.
     * Compute the patch from the current state to the latest state received from the server: take our patch queue and reverse it, then concatenate the incoming patches, and finally squash the whole sequence into one patch.
     * Apply this patch to the current DR state, and you get the server revision. That's the most critical point, since all data structures in DR must be updated properly.
     * Look which of the "already sent" patches in our queue are contained among the incoming patches. These were already incorporated on the server, so we drop them from our queue. If some of our patches were committed but rejected, we should quickly notify the user about it (change dropped).
     * Go sequentially over the remaining patches in our queue and apply them to the current state of DR, updating all structures again. If some patch cannot be applied due to a conflict, we drop it and quickly notify the user. The already-sent patches remain already-sent, the not-yet-sent patches remain not-yet-sent.
     * Copy all not-yet-sent patches into the outgoing patch queue of the communication thread and mark them as "already sent".
     (A rough sketch of these client-side structures follows this post.)
     Of course, when users try to edit the same thing, someone's changes are dropped. That's not fun, but it should not be a big problem for fast modifications. The real problem is slow modifications, like editing a readable: a user can edit it for many minutes, during which no sync points happen. If someone else edits the same entity during this time, then all of the first user's modifications will be lost when he finally clicks OK. I think this problem can be gradually mitigated by adding locks (on entity/primitive) to the protocol: when a user starts some potentially long editing operation, the affected entity is locked, and the lock is sent to the server, which broadcasts it to all the other clients. If a client knows that an entity is locked, it forbids both locking it with some dialog and editing it directly. Note that synchronization of locks does not require any consistency in time: it can be achieved by sending entity names as fast as possible, without complex stuff like queues, sync points, etc.
     The system sounds like a lot of fun. Indeed, things will often break, in which case DR will crash and the map will become unreadable. Instead of trying to avoid that at all costs, it is much more important to ensure that the server stores the whole history and provides some recovery tools, like "export the map at the revision from 5 minutes before the latest one". I'm pretty sure starting/ending a multiplayer session would be a torture too. Blocking or adjusting the behavior of File menu commands is not obvious either. UPDATE: added illustration:
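     To make the data flow above a bit more concrete, here is a minimal C++ sketch of the client-side patch queue and sync point described in this post. All names (MapPatch, ClientPatchQueue, syncPoint) are hypothetical illustrations, not DarkRadiant code; the squash/apply steps and conflict handling are only stubbed out in comments.

        // Hypothetical sketch of the client-side patch queue and sync point.
        // None of these types exist in DarkRadiant; they only illustrate the idea.
        #include <cstdint>
        #include <deque>
        #include <string>
        #include <vector>

        struct MapPatch {
            std::string clientId;    // who produced the patch
            uint64_t    seqNumber;   // sequence number on that client => unique id
            bool        rejected = false;
            // per-entity / per-primitive old and new state (full text, as described above)
            std::string oldState, newState;
        };

        struct ClientPatchQueue {
            std::deque<MapPatch> patches;
            size_t sentCount = 0;    // patches[0..sentCount) are "already sent to server"

            void addLocalEdit(MapPatch p) { patches.push_back(std::move(p)); }
        };

        // Called only when no dialog / edit box is active.
        void syncPoint(ClientPatchQueue& local,
                       std::vector<MapPatch>& incoming,   // pulled from the communication thread
                       std::vector<MapPatch>& outgoing)   // handed back to the communication thread
        {
            // 1) Reverse our local patches, concatenate the incoming ones, squash into one
            //    patch and apply it to the current DR state -> we are now at the server
            //    revision. (The actual squash/apply is the hard part and is omitted here.)

            // 2) Drop local patches that the server has already committed; this simplified
            //    loop assumes our own patches come back from the server in the order we sent them.
            for (const MapPatch& in : incoming) {
                if (!local.patches.empty() &&
                    local.patches.front().clientId == in.clientId &&
                    local.patches.front().seqNumber == in.seqNumber) {
                    // notify the user here if in.rejected is set (change dropped)
                    local.patches.pop_front();
                    if (local.sentCount > 0) local.sentCount--;
                }
            }

            // 3) Re-apply the remaining local patches on top of the server revision,
            //    dropping any that now conflict (and notifying the user).

            // 4) Hand everything that has not been sent yet to the communication thread.
            for (size_t i = local.sentCount; i < local.patches.size(); ++i)
                outgoing.push_back(local.patches[i]);
            local.sentCount = local.patches.size();
        }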
  4. In principle, there are three known approaches to collaborating on a project:
     * Lock + modify + unlock. When someone decides to change a file, he locks it, then changes it, uploads a new version, and unlocks it. If the file is already locked, then you have to wait before editing it, or you can learn who holds the lock and negotiate with him.
     * Copy + modify + merge. When you want to modify a file, you get a local copy of it, change it, then try to upload/commit the changes. If several people have modified the same file in parallel, then whoever did it last has to merge all changes into a single consistent state. Merging is usually possible only for text documents: sometimes it happens automatically (if the changes are independent), sometimes it results in conflicts that people have to resolve manually.
     * Realtime editing. The document is located on a central server, and everyone can edit it in realtime. Basically, that's how Google Docs works. I'd say internally it is the lock+modify+unlock approach, but with very small changes and fast updates.
     @MirceaKitsune asks for approach 3. I guess it would require tons of work in DarkRadiant. Approach 2 can be achieved by using some VCS like SVN, Mercurial, git, whatever. But it requires good merging in order to work properly. Text assets like scripts, xdata, defs, and materials should merge fine with built-in VCS tools. Binary assets like images, models, and videos are completely unmergeable: for them approach 2 is a complete failure. As far as I understand, @greebo wants to improve merging for map files, which are supposedly the most edited files. Approach 1 can be achieved by storing the FM on a cloud disk (e.g. Google Drive) and establishing some discipline, like posting a message "I locked it" on the forum, or in a text file near the FM on Google Drive. SVN also supports file locking, so it can be used for unmergeable assets only and on a per-file basis. Git also has an extension for file locking, as far as I remember.
  5. SVN also has a file locking feature. I think mappers implement manual locking when they work on a map on Google Drive: someone says "I'm editing it now", then uploads his changes and says "I'm done with it, the map is free". That's exactly the locking paradigm, but on the whole map. SVN locking was designed for working with files which are unmergeable (like models or images). By the way, does anyone know what's the status of the betamapper SVN repo? Who are the intended users of this repo? Speaking of map diffs, wouldn't it be enough to normalize the order of entities and primitives, and run an ordinary diff afterwards (see the sketch after this post)? UPDATE: By the way, the hot-reload feature already computes some diff for the map file, but it is only interested in entities, which can be easily matched by name, so it is very basic. I think asking mappers to learn git is overkill. If you wrap the whole of git into simple commands, then it has some chance. There are some non-programmer VCSs, and they are usually very simple. For instance, Sibelius stores versions in the same file and allows switching between them and viewing diffs; I did not even find any merging there. Word and SharePoint also seem to use a sequence of versions, but Word at least has merging (of two documents without a base version, I suppose). It is strange to suggest that mappers learn the VCS which is the hardest to learn/use out of all the possibilities. Git does not provide any additional capabilities over Mercurial, but pushes onto the user a lot of terms which he regularly bumps into and has to learn eventually. Plus it is very easy to lose data. Git ninjas know that deleted branches are not lost: just look at the reflog... but for a non-ninja it is the same as "lost".
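     As an illustration of the "normalize, then diff" idea, here is a rough standalone C++ sketch. It is not DarkRadiant or TDM code: it splits a .map file into top-level { ... } blocks by brace depth (ignoring comments and quoted braces for simplicity), sorts them by their "name" spawnarg, and prints the result, so that a plain line-based diff of two normalized files shows only real changes.

        // Hypothetical normalizer: sort top-level entity blocks of a .map file by name.
        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Extract the value of `"name" "..."` from an entity block (empty if absent).
        static std::string entityName(const std::string& block) {
            size_t pos = block.find("\"name\"");
            if (pos == std::string::npos) return "";
            size_t q1 = block.find('"', pos + 6);
            size_t q2 = (q1 == std::string::npos) ? std::string::npos : block.find('"', q1 + 1);
            return (q2 == std::string::npos) ? "" : block.substr(q1 + 1, q2 - q1 - 1);
        }

        int main(int argc, char** argv) {
            if (argc < 2) { std::cerr << "usage: normalize_map file.map\n"; return 1; }
            std::ifstream in(argv[1]);
            std::stringstream ss; ss << in.rdbuf();
            std::string text = ss.str();

            // Split into top-level { ... } blocks by tracking brace depth.
            std::vector<std::string> entities;
            int depth = 0; size_t start = 0;
            for (size_t i = 0; i < text.size(); ++i) {
                if (text[i] == '{' && depth++ == 0) start = i;
                else if (text[i] == '}' && --depth == 0)
                    entities.push_back(text.substr(start, i - start + 1));
            }

            // Normalize: order entities by name so reordering alone produces no diff.
            std::sort(entities.begin(), entities.end(),
                      [](const std::string& a, const std::string& b) {
                          return entityName(a) < entityName(b);
                      });

            for (const std::string& e : entities)
                std::cout << e << "\n";
            return 0;
        }

     One would then run an ordinary diff tool over the two normalized outputs; primitives inside each entity could be normalized the same way in a fuller version.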
  6. It is often missed that Grayman was also the Lead Programmer of the TDM team for six years.
  7. Of course, it is only about the two missions which were not released. Nobody ever talked about removing already released FMs. I have added the question to my previous post.
  8. Grayman repeatedly said that if he does not have time to release these two FMs, then the story of William Steele ends on the fifth mission. I specifically asked him about it in March and here is his answer: So unless anyone wants to ignore his will, WS6 and WS7 are dead too.
  9. @Caverchaz, I have added some retry capabilities to the installer. Ideally, it should be able to overcome the problem you had. Unfortunately, I cannot test it myself: I installed TDM afresh without any issue yesterday, so the problem simply does not happen on my machine. But conditions on your side are somewhat unique in that the problem does happen there, so I'd like to ask you for help with testing. Here is the plan:
     * Put tdm_installer into an empty directory and run it without any checkboxes set (just as you did initially when it hung up).
     * If it hangs (does not show any progress for ~10 minutes, or simply does not finish in a few hours), then take a memory dump as I described above.
     * Regardless of whether it succeeds or hangs, please find the logfile of the run and attach it here.
  10. The impact of such a change is like adding a third supported platform at all levels. Any indication that X11 emulation costs performance? I suppose OpenGL calls hit the driver directly regardless of Wayland/X11, and the GPU does not care either. The only difference could be some desktop-level compositing and input handling. I remember we argued a lot about "exclusive" fullscreen mode on Windows, which should theoretically differ by an additional compositing step, and we did not find any evidence of performance impact.
  11. Just to rule out more stupid possibilities and be sure it is actually the downloading that hangs. @Caverchaz, could you please try to fresh-install TDM once again without any tweaks, wait until it hangs reliably, then open Task Manager, find tdm_installer.exe, right-click on it and choose "Create dump file" or something similar. Then find the file, 7zip it and share it via some cloud storage. This wiki article has detailed instructions, but for the case when DarkRadiant crashes, whereas we need to apply the same procedure to tdm_installer when it hangs.
  12. I think it depends. If an HTTP error happens, then indeed it will stop and show an error message. If the connection is closed, then I hope it will detect that too. However, I don't think there is any protection against the server silently falling asleep, i.e. not sending data or sending it too slowly. Maybe some timeout somewhere stops the connection in such a way that libcurl does not detect it. The good approach would be to set an upper limit on the size of one download request, set some lower limit on download speed, and retry (perhaps with progressively smaller chunks and softer limits); see the sketch after this post. But I think I need to at least understand how it happens and be able to reproduce it. Any ideas?
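     Since the installer uses libcurl for downloads, here is a minimal standalone sketch of the kind of protection described above: abort a transfer whose average speed stays too low for too long, cap the size of a single request with a byte range, and retry with smaller chunks. This is not tdm_installer code; the URL and the limit values are made up for illustration.

        // Minimal libcurl sketch: abort slow downloads and retry with smaller chunks.
        #include <curl/curl.h>
        #include <cstdio>
        #include <string>

        static size_t discard(char*, size_t size, size_t nmemb, void*) {
            return size * nmemb;   // a real downloader would write the data to a file here
        }

        static bool downloadChunk(const char* url, long rangeEnd) {
            CURL* curl = curl_easy_init();
            if (!curl) return false;

            std::string range = "0-" + std::to_string(rangeEnd);   // cap request size
            curl_easy_setopt(curl, CURLOPT_URL, url);
            curl_easy_setopt(curl, CURLOPT_RANGE, range.c_str());
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
            // Abort if the average speed stays below 1 KB/s for 30 seconds:
            curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1024L);
            curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 30L);

            CURLcode res = curl_easy_perform(curl);
            if (res != CURLE_OK)
                fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));
            curl_easy_cleanup(curl);
            return res == CURLE_OK;
        }

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            const char* url = "http://example.com/mirror/somefile.pk4";   // made-up URL
            bool ok = false;
            // Retry with progressively smaller chunks on failure.
            for (long chunk = 8 * 1024 * 1024; !ok && chunk >= 1024 * 1024; chunk /= 2)
                ok = downloadChunk(url, chunk - 1);
            curl_global_cleanup();
            return ok ? 0 : 1;
        }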
  13. You can try this as a workaround:
     1) Open tdm_installer.ini and find the following lines there:
     url_2=http://tdm.frydrych.org/mirror/zipsync
     weight_2=300
     Add a sharp character # at the beginning of both of these lines (see the snippet after this post).
     2) Try to use tdm_installer again, but check "Advanced Settings" and "Skip config file download" on the first page.
     This might help to download the game for the first time. If it helps, don't use this trick ever again: most likely you won't have this issue when switching between versions.
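     For clarity, after step 1 those two lines in tdm_installer.ini should look like this (the rest of the file stays untouched):

        #url_2=http://tdm.frydrych.org/mirror/zipsync
        #weight_2=300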
  14. What type of internet connection do you have? What is your typical download speed? Do you have something which uses the network constantly... like a torrent client? The downloading always hangs on the same mirror. Of course, we don't have too many mirrors yet (perhaps it's time to change that), but out of 20 attempts it is always the same mirror. @cabalistic, maybe you could look at something there?
  15. When does it happen? On the first page? Maybe it is just some large pk4 file... How long did you wait? Could you find a file named like "tdm_installer_1621300601.log" near the installer and share it here? I guess the latest one should be enough.
  16. The migration to the new FM database is over. FM additions and updates can now go on as usual.
  17. The hotfix is released: 2.09a is now the default version in tdm_installer.
  18. You mean darkmod.cfg? Are you sure you have installed the hotfix, and not the original 2.09 release? Which revision does the game show in the lower-right corner of the console? Which version is written in .zipsync/lastinstall.ini? Could you attach the log of tdm_installer? It looks like tdm_installer_XXXXXXXXXX.log.
  19. I think it should be possible to extract the core dump with coredumpctl into a local file. When you have the actual core dump file, you can open it in gdb, load debug symbols while inside gdb, and then execute "bt" (backtrace) to print the current stack trace. You can also switch between threads and run "bt" in each to retrieve all call stacks (a rough command sequence follows this post). But it would be much easier if you just compress the core dump and share it: then I'll be able to see the surrounding variables too. P.S. I don't develop on Linux normally, so I only know the bare-bones stuff, like gdb, debug symbols, opening a core dump in gdb...
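     For reference, a rough command sequence; this assumes the dump belongs to the TDM executable, and the exact PID/match argument and binary name will differ on your system:

        coredumpctl list                          # find the crash entry
        coredumpctl dump <PID> --output=core.tdm  # extract the core dump to a file
        gdb ./thedarkmod.x64 core.tdm             # open it against the matching binary
        (gdb) bt                                  # backtrace of the crashing thread
        (gdb) thread apply all bt                 # backtraces of all threads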
  20. Maybe something happened with config removal?
  21. If you take two different versions of the source code and build them, their savegames will be incompatible with each other by default, because someone can add a member somewhere and save it to the savegame (and restore it, of course). The savefiles are plain binary streams, essentially fread/fwrite of all the data (see the sketch after this post). Most importantly, savegames are incompatible between official releases (e.g. 2.08 vs 2.09), and between all dev builds and beta releases too. If you take the same source code and build it in different environments (x86/x64, Windows/Linux, different compilers), the binaries will be compatible with each other in terms of savegames, except for the revision check you bumped into: it is an additional check which can be disabled or worked around (just see the list I posted).
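     A minimal illustration of why this breaks (hypothetical code, not the actual TDM save system): the save and restore sides must write and read exactly the same sequence of fields, so adding one saved member changes the stream layout and older saves no longer line up.

        // Hypothetical example of a plain binary save stream (not actual TDM code).
        #include <cstdio>

        struct Guard {
            int   health = 100;
            float alertLevel = 0.0f;
            // int patrolIndex = 0;   // adding this member and saving it would shift
                                      // everything written after it, breaking old saves

            void Save(FILE* f) const {
                fwrite(&health, sizeof(health), 1, f);
                fwrite(&alertLevel, sizeof(alertLevel), 1, f);
            }
            void Restore(FILE* f) {
                fread(&health, sizeof(health), 1, f);
                fread(&alertLevel, sizeof(alertLevel), 1, f);
            }
        };

        int main() {
            Guard g;
            FILE* f = fopen("guard.save", "wb");
            g.Save(f);
            fclose(f);

            Guard g2;
            f = fopen("guard.save", "rb");
            g2.Restore(f);   // works only if Save/Restore agree on the exact field sequence
            fclose(f);
        }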
  22. The hotfix release for 2.09 is available for testing. You can obtain it with tdm_installer: make sure you check "Get custom version" on the first page, then choose release209a on the second page. If you need 32-bit binaries, you can download them in an additional archive here. After a very short testing period, this hotfix will become the new "default" in tdm_installer, unless any serious issues are detected. Contrary to the usual case, savegames are fully compatible both ways between the original 2.09 release and the 2.09a hotfix release: you don't need to finish playing the current mission in order to update, and you can even roll back in case of any issues. Here is the list of changes compared to the original 2.09:
     * Bindless textures are disabled by default ("r_useBindlessTextures 0"), since AMD drivers don't work properly with them.
     * Fixed in-game mission downloader with the new FM database: you should no longer see obscure warning messages (5551). A bit better debug logging too.
     * Fixed crash when frobbing while having a key/lockpick selected (5542). The best-known case: crash in the decontamination chamber in "Hidden Hands Anomaly".
     * Fixed heap corruption crash in ScriptTask-s (5538). The best-known case: the end of the conversation at the beginning of "William Steele 3".
     * Lowered the requirement on MAX_COMBINED_TEXTURE_IMAGE_UNITS to 32 (link).
     * Fixed saving game from a script.
     * Most likely fixed crash with "r_useDebugGroups 1" on AMD drivers (5280).
     * Fixed out-of-bounds error when currentfm is empty in case of a campaign.
     * Fixed a very hypothetical issue in SSE2 Memcpy.
     * Fixed memory leak in AAS compilation, which had been present since Doom 3 (5562).
     * Updated LICENSE file.
  23. It is mentioned at the end of the compilation guide. Basically, the current SVN revision is automatically embedded into the executable. It is saved to savegame files and checked when loading a savegame. Without such a check, loading an incompatible save would result in a crash in the best scenario (or in data corruption in the worst, but rare, scenario). Here is what you can do:
     * Build TDM from an SVN working copy.
     * Set cvar "tdm_force_savegame_load 1": then you will have an additional "force load" option in the dialog you see.
     * Find the RevisionTracker::GetHighestRevision function in the code and make it return 9108 (see the sketch after this post).
     UPDATE: Yet another option is to not load savegames made by the official TDM binary, if you are OK with that. The savefiles made by your binary with rev 0 will be compatible with your binary.
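     A sketch of that last option; the real signature and body of RevisionTracker::GetHighestRevision in the TDM source may differ, the point is only to hard-code the revision number mentioned above:

        // Hypothetical local hack (check the actual function in the TDM source).
        int RevisionTracker::GetHighestRevision() const {
            // instead of returning the SVN revision embedded at build time...
            return 9108;   // ...pretend to match the official binary's revision
        }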
  24. Yes, a lot of people have noticed it already. I can probably say "use your lantern", although I guess it would make ghosting harder. I guess @cabalistic is still considering highlighting the surface too, but it turned out to be not so easy. If "Uncap FPS" is off, then you are limited to 60 FPS, hence the value you set in "Max FPS" has no effect. The new behavior just makes this more apparent.