ptw23
Oh, I'm maybe confused - if I understand correctly, __declspec is MSVC-specific, so I was unfamiliar with it -- I thought you were talking about syntax for function declarations in the DLL's Doom script header, rather than the actual DLL C++.

On the DLL side, I had just expected the functions to be exported within an `extern "C"` block, so any compiled language (or non-MSVC C++ compiler) can produce a valid DLL (since extern "C" is ANSI standard), and it is up to the addon SDK or the addon writer to make the exposed functions TDM-compatible ("event-like", using ReturnFloat, etc.). E.g. my Rust SDK parses the "normal" Rust functions at compile time with macros and twists them into TDM-event-like functions, but simple C/C++ macros could do the same for C++ addons (or it could be done in the TDM engine, though that adds code).

To avoid discovery logic, DLL_GetProcAddress expects to find tdm_initialize(returnCallbacks_t cbs) in the DLL. This gets called by TDM (Library.cpp), passing a struct of any permitted TDM C++ callbacks, and everything is good to go. It means exactly one addon is possible per DLL, but that seems like a reasonable constraint. Going the other way, the DLL-specific function declarations are read by TDM from the Doom script header (a file starting with #library and containing only declarations) and loaded by name with DLL_GetProcAddress. That (mostly) works great with idClass, but instead of #library, we could call a const char* tdm_get_header() in the DLL to get autogenerated declarations.

I am not 100% sure I understood this part, so apologies if I get this wrong - I think the benefit of DLLs is that they make chunky features optional and avoid recompiling or changing the original TDM C++ core every time someone invents a compiled addon. Also, you wouldn't want (I think) a DLL to be able to override an existing internal event definition. So there isn't a very useful way for TDM C++ to take advantage of the DLL directly (well, maybe). However, as you say, Doom scripts are already a system for defining dynamic logic, with lots of checks and guardrails, so making DLL functions "Just An Event Call" that Doom scripts can use means (a) all the script-compile-time checking adds error handling and stability, and (b) fewer TDM changes are required.

Admittedly, yes, making this useful means exposing a little more from TDM to scripts, mostly as sys events - CreateNewDeclFromMemory, for example: sys.createNewDeclFromMemory("subtitles", soundName, subLength, subBuffer); Right now, that's not useful because scripts can't have strings of more than 120 chars, but if you can generate subBuffer with a DLL as a memory location, that changes everything. So even exposing just one sys event gives lots of flexibility - dynamic sound, subtitles, etc. - and then there are existing sys events that the scripts can use to manipulate the new decl, with no need to expose them to the DLL directly.

Basically, it means the DLL only does what it needs to (probably something quite generic, like text-to-speech or generating materials), and the maximum possible logic is left to Doom scripts, since they are stable, safe, dynamically definable, highly customizable, have plenty of sys events to call, have access to the entities, and so on.
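For concreteness, here is a minimal C++ sketch of the DLL side of that flow. tdm_initialize, returnCallbacks_t and ReturnFloat are the names discussed above; the struct layout, the other field names and the example function are placeholders, not the real ABI:

```cpp
// Illustrative sketch only -- the real callback struct differs in detail.
extern "C" {

// Struct of permitted TDM callbacks handed to the DLL on load
// (field names beyond ReturnFloat are assumed here).
typedef struct returnCallbacks_s {
    void (*ReturnFloat)(float value);
    void (*ReturnInt)(int value);
    void (*ReturnString)(const char *value);
    // ...plus whatever other engine services the ABI permits
} returnCallbacks_t;

static returnCallbacks_t g_cbs;

// The single well-known entry point TDM resolves via DLL_GetProcAddress.
void tdm_initialize(returnCallbacks_t cbs) {
    g_cbs = cbs;
}

// An "event-like" exported function: it does its work, then hands the result
// back through the engine-supplied return callback instead of a C++ return.
void add_two_floats(float a, float b) {
    g_cbs.ReturnFloat(a + b);
}

} // extern "C"
```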
Yes, I think this is what I've got - I'd just added the #library directive to distinguish the "public header" Doom script that TDM treats as a definition for the DLL from the distributed "Doom script headers" that script writers #include within their pk4 to tell the script compiler that script X believes a separate DLL will (already/eventually) be loaded with these function signatures, and to error or pass accordingly. Although I think no C++ headers are needed, as it would be read at runtime, and TDM can already parse the Doom script headers.

I'm not familiar with import libs, but from a quick read, this generates a DLL "manifest" for MSVC. I think it isn't strictly necessary, assuming there is a TDM-readable header, as TDM provides a cross-platform Sys_DLL_GetProcAddress wrapper that takes string names for loading functions? But if it is, then yes.

Yep - bearing in mind that a DLL addon might be loaded after a Doom script pk4 that uses it, the need for the libraryfunction type is:

- firstly, to make sure the compiler remembers that this is func Y from addon X after it is declared, until it gets called in a script (this info is stored as a pair: an int representing the library, and an int representing the function), and then remembers again from compiler to interpreter, i.e. emitting OP_LIBCALL with a libraryfunction argument;
- secondly, to reassure the compiler that in this specific case it is OK to have something that looks like a function but has no definition in that pk4 (as opposed to a declaration, which must be present/#included); and
- thirdly, to make sure that the DLL addon function call is looked up or registered in the right DLL addon, so any missing/incompatible definitions can be flagged when game_local finishes loading all the pk4s (i.e. when all the definitions are known and a matching call is definitely missing).

Right now, each DLL gets a fresh "Library" instance that holds its callbacks, any debug information, and a DLL-specific event function table. It is an instance of idClass, so it inherits all the event logic, which is handy. Having an instance of Library per DLL seems neater to me, as it is easier to debug/backtrace a buggy function to the exact DLL and to keep its events grouped/ring-fenced from other DLLs. The interpreter needs to know which Library instance to pass an event call to, so the libraryNumber (a runtime index representing the DLL) has to be in the OP_LIBCALL opcode. As such, while (for example) a virtualfunction in Doom script is a single index into an event list, a libraryfunction is a pair of the library index and the index of the function in the Library's event list.
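To make that pair idea concrete, a rough C++ sketch (illustrative only, not actual TDM code) of how OP_LIBCALL dispatch could use it:

```cpp
#include <vector>

// A libraryfunction value is conceptually a (library, function) pair,
// rather than a single index into one global event list.
struct libraryFunction_t {
    int libraryNumber;   // runtime index of the loaded DLL / Library instance
    int functionNumber;  // index into that Library's own event table
};

// Stand-in for the per-DLL Library instance and its ring-fenced event table.
struct Library {
    std::vector<void (*)()> events;
    void CallLibraryEvent(int functionNumber) { events[functionNumber](); }
};

std::vector<Library> g_libraries;  // one Library per loaded DLL

// Hypothetical OP_LIBCALL handler: resolve the Library first, then dispatch.
void OpLibCall(const libraryFunction_t &fn) {
    g_libraries[fn.libraryNumber].CallLibraryEvent(fn.functionNumber);
}
```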
But suppose we try to remove libraryfunction as a type. The first issue (above) could be avoided by either: (A) adding DLL events directly to the main sys event list, but since event lists are static (C++ macro generated), different "core" code would have to change; (B) passing the libraryNumber via a second parameter to OP_LIBCALL and (ab)using virtualfunction (or even int) for the function number; or (C) making the Library a Doom script object rather than a namespace, so that the method call can be handled using a virtualfunction.

The second and third issues could be avoided by doing two compiler passes - a primary pass that loads all DLL addons from pk4s, so that during a secondary pass compiling the Doom scripts the compiler can treat any loaded DLL functions as happily-defined normal functions in every pk4, as it is guaranteed to have all valid DLL function definitions before it starts. Sorry, that got quite confusing to describe, but I wasn't sure how to improve on my explanation.

In conclusion, though, I'm not sure that those other approaches are less invasive than just having a new type, but I'm open to thoughts and other options!

Yep! That's nice, sounds practical - maybe this could even eventually evolve to semver, to allow some range flexibility?
On security, I did have a brainwave - the way Helm charts for Kubernetes used to work, and Conda Forge does now, is by having a monorepo where any new "greenlisted" addons get PR'd in. That way the source is always available, the code can be reviewed before merging, the appropriate GitHub Actions can do the cross-platform builds and tests (with PRs rejected on failure), the DLLs are guaranteed to be built from visible code, and any third-party dependencies can be seen immediately on the PR. The comprehensive greenlist of checksums that TDM would allow (without some debugging flag, I guess) could then be exported automatically from the GitHub Releases on that repo.

Clearly, that'd still be billed as a "community-maintained plugins, use at your own discretion" repo, not an official source, but it's a way of gatekeeping for security, like the list of downloadable missions. The big downside is that it means someone OKing each new release, but (a) how many addons are we really talking about? Maybe 10-12 features that script writers could genuinely want to outsource to a DLL and want to maintain for the community; and (b) mostly this seems useful for wrapping third-party libs like Piper or libsvg or something, so they should rarely need to be big or change frequently.

Happy to volunteer to help, or at least PoC that, but I appreciate I am just a randomer, so me saying "looks safe" is not hugely valuable.

(Btw, to be explicit, I'm not expecting anyone to buy into or want to incorporate any ideas/changes any time soon; it's just to see what might be an acceptable direction, if any, for a release down the line.)
Agreed, but arbitrary native extensions are, by definition, insecure. However, wasm would enable text-to-speech, say, and I double-checked that Piper can run as a wasm implementation before responding (and even Phi2-wasm exists), so generating assets on the fly or doing computation is still achievable. They are a good bit slower, but that's the trade-off. Btw, I did say speech-to-text above a couple of times when I meant the reverse, although both are interesting (speech-to-text might seem pointless in a stealth game, but the idea of having to "con" a chatbot receptionist, say, seems maybe kinda cool).

That sounds cool! Strictly, this is essentially the way the experiment works, plus or minus some guardrails. Would that function annotation syntax involve copy-pasting plugin(X) per definition, though? Maybe an "extern myplugina { ... }" namespace could be less ambiguous, as there's never a "is this the add event from myplugina or mypluginb?" question when you have to write myplugina::add (and it fits the familiar namespace semantics/syntax that already exists in Doom scripts). But either would work! The other way, which I did consider, was an object like sys (i.e. "myplugina.add"), but I thought that was a bigger change to the compiler and implied the addon should have state, which seemed an antipattern (state, aside from caching/threading, seems like something TDM should control, and well-written DLLs should assume that their internal state could disappear at any second).

Also, I think we are aligned on this, but to be sure, in case we are talking at cross-purposes - I think scripts would not normally want to package their own DLL when the functionality they want is already supplied by a widely-used DLL addon, for a range of reasons. Therefore scripts would need to shadow-declare external functions from a completely separately supplied DLL addon and expect TDM to confirm they match on load (e.g. my grab log demo is a tiny Doom-script-only pk4 that exports grab logs to mod_web_browser based on @snatcher's naming logic, so it would seem brittle to copy-paste a duplicate libtherustmod_web.so into the pk4 and trust that the signatures matched, without checking, until TDM crashed mid-game). Since namespaces are there already, I guess you mean adding extern as a not-quite-namespace? I was thinking that's a smaller change than the other options above, but happy for feedback!

In my mind, the bigger changes were:

- #library - a directive to create a header file, the equivalent of (e.g.) a SWIG interface file. This could be replaced by mandating a DLL callback that returns the interface declaration. A C++/C#/Rust SDK could even write that call into the DLL automatically by inspecting the exposed function definitions (in retrospect, this seems better than adding a directive).

- libraryfunction as an implicit type - this was to stick to the paradigm of function, virtualfunction and sys events, as implemented in TDM at the moment, which catches signature issues during initialization in the same TDM code that checks every event call right now. This type also lets TDM capture which library is being called (and allows for the fact that a definition might not yet be loaded). It could be simulated with function/virtualfunction, but that might mean more code changes to handle cases, rather than fewer, and I suspect the final check of "have the DLLs supplied all the callbacks with the signatures the scripts wanted?" would become messier.

- bytes type - this seems the most controversial to me. I did think that this could be forced to not be variable-assignable (and so never appear explicitly in a script), ensuring any bytes return value is only ever passed immediately into another event call. However, in terms of having it at all, the type itself seems essential unless DLLs get direct C++ access to many TDM methods (which, as discussed, seems inherently undesirable for both backwards compatibility and runtime stability).

- int type - necessary for a DLL callback signature to ensure it only gets an int for an int parameter. This could be worked around by writing a way for DLLs to declare callbacks with event args directly (since CallLibraryEvent doesn't care about float vs int), but that seems like more code to support two ways of declaring callbacks, and more confusing for a script writer when an extern/plugin declaration can't explicitly state the argument type a DLL callback will cast to (anyway, within a script, float and int remain interchangeable).

[Ed] That said, if the lesser of two evils would be fewer or no interface/stability checks, to minimize compiler/interpreter code changes at least to begin with, that would certainly change my logic above.
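To illustrate the "final check" mentioned under the libraryfunction point above, here is a rough C++ sketch of the kind of deferred verification I mean once all pk4s are loaded - every name here is hypothetical:

```cpp
#include <cstdio>
#include <map>
#include <string>

// Hypothetical bookkeeping: while compiling, scripts record the signatures
// they expect from DLL addons; DLLs record what they actually export on load.
std::map<std::string, std::string> g_scriptDeclared;  // "lib::func" -> signature
std::map<std::string, std::string> g_dllProvided;     // "lib::func" -> signature

// Run once game_local has finished loading every pk4: each declaration a
// script relied on must now have a matching definition, or it is flagged.
bool VerifyLibraryBindings() {
    bool ok = true;
    for (const auto &decl : g_scriptDeclared) {
        auto it = g_dllProvided.find(decl.first);
        if (it == g_dllProvided.end()) {
            std::printf("Missing DLL function: %s\n", decl.first.c_str());
            ok = false;
        } else if (it->second != decl.second) {
            std::printf("Signature mismatch for: %s\n", decl.first.c_str());
            ok = false;
        }
    }
    return ok;
}
```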
No worries - thanks, and I appreciate the follow-up! I just thought it was really important to be absolutely 100% clear that this was not the intent (as I agree I would have been totally out of line if it had been). I think I probably answered most of the questions in the message to @stgatilov (I only saw yours afterwards!), but the key idea was that changes could be made to the game to support DLLs in a way that is safer, as the only approach currently available isn't safe at all. However, @stgatilov has pointed out a different approach that is much safer, and works in any language!
Thanks! I didn't realise I was going to do it until I started doing a piece of it, and then didn't seem to stop.

Honestly, this is a way better idea. I suppose one of my goals was to avoid any indirection overhead, but given that I couldn't see a use case where it would matter, that was more a technical challenge - I agree, wasm seems like the best of both worlds and fits with its intended use case!

Badly phrased on my part - somewhere in between! The idea is that only an extremely restricted set of functions is exposed to the library, not general access to the engine. It is a service specifically for scripts, and rarely talks to the engine directly. The main (only) exception to this in my examples was dynamically loading a sound sample, as there isn't a way to do that from a script (nor can scripts create PCM in memory, so that makes sense). Clearly, that's then the potentially-breakable interface, so the "LibraryABI.h" file, with that one callback (plus the return functions for basic types), is the complete definition of what's available, and only those functions are passed to the DLL (there's a rough sketch of what I mean at the end of this post).

Coming back to your point 1, the main motivation is enabling use of stable third-party libraries that, on one hand, need high performance for audio/visual/text generation on the fly and could involve generating asset-sized data, so copying must be minimized, but which on the other hand are called infrequently (or at least no more frequently than a script could be called). Essentially, to explain better, most motivating examples (like speech-to-text) could theoretically be implemented as out-of-process (fast) servers that scripts hit via sockets (although that brings different downsides).

To 2, yes indeed - I'd emphasize that with my approach, the SDK is a DLL-side tool that enables Rust (and it was fun to build!), but building a DLL in C++ against LibraryABI.h wouldn't even need that SDK (I tested with plain C). In fact, I did see a couple of Rust/C++ FFI approaches that would otherwise have been nicer, but they required Rust-specific C++ code, so I skipped them, as I didn't think the engine should even know Rust (or any other DLL-side language) exists. However, wasm makes the whole thing a moot point.

Presumably (and I genuinely have no skin in the game, just curious), in your point 2 there would be no objection to using Rust/C#/etc. if it's getting compiled to wasm anyway, just that C++ is the most preferred? [Ed: as a side note, I would still suggest requiring green-listing of source-available compiled add-ons, as even within wasm, things like crypto miners are a risk.]
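To give a sense of scale for that "complete definition" point, here is a sketch of roughly what I mean by the whole inward surface fitting in one small header - the sound-loading callback and all the exact signatures are placeholders, not the real file:

```cpp
// LibraryABI.h (illustrative sketch only -- the real header differs in detail).
// Everything a DLL may ever call back into TDM is declared here, nothing else.
#pragma once
#include <cstddef>

extern "C" {

typedef struct returnCallbacks_s {
    // Event-style return helpers for the basic C types.
    void (*ReturnFloat)(float value);
    void (*ReturnInt)(int value);
    void (*ReturnString)(const char *value);

    // The one "real" engine service used in my examples: register an
    // in-memory PCM buffer as a playable sound sample (name and signature
    // are assumptions for illustration).
    int (*RegisterSoundSample)(const char *shaderName,
                               const unsigned char *pcm, size_t pcmBytes);
} returnCallbacks_t;

// Single entry point the engine resolves by name and calls after loading.
void tdm_initialize(returnCallbacks_t callbacks);

}
```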
Thanks @snatcher! I had been really curious about the Visible Player Hands Mod thread, which looks like amazing work by @jivo, so I am hoping for a Linux version to test, pretty please. If there's source, happy to try and help cross-compile?
I don't. It already is "memory-safe", because a group of highly skilled developers have been making it so for over a decade. Quotes from my posts above - note the bold, which was in the original text.

To be again absolutely clear: the reason I picked Rust is not that I (or anyone else) have reservations about the quality of the C++ code the team is writing. It is (a) a curiosity project, and (b) because my assumption - right or wrong - is that people, especially the core team who are responsible for making sure [supported addon workflows and] new code do not break TDM, would not want to support DLLs, because it is impossible to trust that third-party code does not corrupt memory (accidentally; malicious code being a separate problem). The idea is that giving people a starter pack that auto-checks things the team are unlikely to trust is not a bad thing. I also put a list of limitations directly after that, very explicitly stating that "Rust is safe" is a bad assumption. Rust is not a solution, and it may not be sufficient, but I'm more than happy to do a C++ version if that is not an issue in the first place - those changes work just as well for C++, Rust or Fortran for that very reason, and nobody wants TDM to start using Rust in core; that would be nonsensical when there is a solid, stable, well-known codebase in C++ and skilled C++ devs working on it. That aside, the OOP paradigm of C++ is probably (I suspect) better for game engine programming, but I'm not going to fight any Rust gamedevs over that.

The point of this is that perhaps not all features are things that could or should be in the core, and that modularity could allow more experimentation without the related risks and workload of adding them to the central codebase. Happy to be corrected, but I doubt bringing new features into the core engine is just about developing them; it's also deciding when it's worth baking all new code, and functionality, right into the main repo to be maintained forevermore, for every release and every user.

Similarly, I would appreciate it if you responded to the points, and the caveats, I made, rather than stating something clearly unreasonable to suggest it is a bad idea. It misleads others too. For the record, I have no issue with doing a C++ SDK instead, but I'm not sure that's the issue - we could just "script" in C++ if that weren't a concern at all.
Sorry, I missed one other point you made above - dealing with updates to the engine and keeping it all maintained. This is a great point - for any Gnome users, I hate getting attached to plugins on extensions.gnome.org because they have to be re-released for every update of Gnome, and I'm fed up with components of my workflow disappearing. This is why I had forced the LibraryABI.h approach, where there is a very limited number of allowed callbacks to TDM (currently 1, plus the event return calls for the basic C types). Everything else I have done within scripts, admittedly adding a couple of new sys script events, but a tiny inward DLL interface is not a huge limitation for addon writers if they accept it. Obviously, that moves the "event interfaces changing over time" issue to the scripts, but that's not a new situation.
I'd not written that very well, my bad - the idea is that Rust is a compiled language which provides a lot of flexibility, but its main motivating benefit is the decreased risk of accidental, subtle memory bugs, aka being "memory-safe". Essentially, by having a syntax that makes the lifetime of every variable traceable, and making sure you can't accidentally refer to a freed variable or access unallocated memory (or it won't compile), a large class of hard-to-pinpoint failures that would break TDM in awful ways is avoided. As a result, I find that most of my debugging is in getting the code to compile - once it compiles, bugs are more likely logic bugs than coding issues or typos. This is sometimes mis-sold as "Rust is safe", but: (a) interacting with C++ voids those guarantees (at least at the boundary); (b) you can explicitly break the safety rails if you want; (c) if you don't handle an error, the code will still crash (although it's more likely to do so where the error actually is, or somewhere obviously related, and give a useful backtrace, similar to scripting); and (d) if third-party libraries explicitly do unsafe things internally (which is sometimes necessary) and there are bugs there, those can still bite you. So it's not a panacea, but while opinions vary on it as a language, it sidesteps the biggest footguns of most compiled languages.

Fair points. In terms of the platform-agnostic issue, that is maybe the (slightly) more straightforward one - a big use case for Rust is Python extension modules. Obviously, the same issue exists there: when you install a library from PyPI or Conda, it should not matter what machine you run on; it should Just Work. The most common (free) approach is to drop in a standard GitHub Action that, on push, fires up a matrix of architectures to build and test, and can create a GitHub Release with each artifact suffixed by arch automatically. This can be set up without config, just with a file in the repo, so the simplest approach would be to add it to the template project. I noticed that idFileSystemLocal::FindDLL already implements this approach, so we could just piggyback. Tbh, I expect that encouraging cross-compatibility this way, in general, is why Apple lets GitHub spin up free OSX build runners. OTOH, that (a) doesn't help anyone not wanting to use GitHub (e.g. GitLab requires GitLab Premium for Mac builds, at least), and (b) won't cover rarer architectures like PPC or SPARC, which would require a self-hosted runner.

The security point is tougher - I had been thinking about an approach but wasn't sure if it was overkill. If there was an allow-list (like the downloadable missions) and the TDM devs insisted on the use of the template repo and Action for any build, then you could auto-confirm checksums against the allow-list at runtime, so you knew for sure that binary X was built from the code at github.com/blah/Y, because the same SHA is in the automated release (and you can see the CI log for the build, so you can see it's the visible source that produced it). The other route is Reproducible Builds, but then you would need re-builds to verify, rather than just being able to map library+arch to a DLL checksum.

This still isn't perfect, as (a) like the missions, someone on the core team would need to greenlight new library releases by adding the checksum (or a generated JSON array for the archs) to let TDM load them (by default), which is work; (b) people are unlikely to go through the code in detail when signing off a new release from a known contributor, so an xz-style attack is possible (although that's true of any TDM dependency too); and (c) if somebody's GitHub account is compromised and they can convince the team that they are the real person, they could get a checksum for a malicious release greenlit, at least for a while.

The main mitigation for the hassle is that, after two weeks, I can only think of a few genuinely useful use cases for libraries - I'm sure others can think of more, but given that however many scripts can then hook into a few libraries to apply the functionality in different ways, it's hard to see how there would be a growing workload from dozens of new libraries churning out. For instance, how many speech-to-text libraries would there be, given that every use case can be a new script that uses the same STT engine? I'm guessing that doesn't fully address those points for you, but I would imagine that such functionality would necessarily be opt-in or otherwise gated (and hence the point about syntax for being able to conditionally use libraries, so that many scripts can fall back to standard behaviour without them).
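For what it's worth, a minimal sketch of the runtime side of that checksum check, assuming OpenSSL is available for hashing and a plain-text allow-list of hex SHA-256 digests exported from the releases - all names here are hypothetical, not TDM code:

```cpp
#include <openssl/sha.h>

#include <cstdio>
#include <fstream>
#include <iterator>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// Hash a candidate DLL on disk and return the lowercase hex SHA-256 digest.
static std::string Sha256Hex(const std::string &path) {
    std::ifstream f(path, std::ios::binary);
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(f)),
                                    std::istreambuf_iterator<char>());
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(data.data(), data.size(), digest);
    std::ostringstream hex;
    for (unsigned char b : digest) {
        char buf[3];
        std::snprintf(buf, sizeof(buf), "%02x", b);
        hex << buf;
    }
    return hex.str();
}

// A DLL is loadable (by default) only if its digest appears in the greenlist
// generated from the community repo's automated releases.
bool DllIsGreenlisted(const std::string &dllPath,
                      const std::set<std::string> &allowList) {
    return allowList.count(Sha256Hex(dllPath)) > 0;
}
```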
Experiment with Rust in Scripts
Hi folks, and thanks so much to the devs & mappers for such a great game. After playing a bunch over Christmas week, following a gap of many years, I got curious about how it all went together, and decided to learn by picking a challenge - specifically, when I looked at scripting, I wondered how hard it would be to add library calls, for functionality that would never be in core, in a not-completely-hacky way.

Attached is an example of a few rough scripts - one which runs a pluggable webserver, one which logs anything you pick up to a webpage, and one which does text-to-speech and has a Phi2 LLM chatbot ("Borland, the angry archery instructor"). The last is gimmicky, and takes 20-90s to generate responses on my i7 CPU while TDM runs, but if you really wanted something like this, you could host it and just do API calls from the process. The Piper text-to-speech is much more potentially useful IMO. Thanks to snatcher, whose Forward Lantern and Smart Objects mods helped me pull example scripts together. I had a few other ideas in mind, like custom AI path-finding algorithms that could not be fitted into scripts, math/data algorithms, statistical models, or video generation/processing, etc., but I'm really interested if anyone has ideas for use cases.

TL;DR: the upshot was a proof of concept, where PK4s can load new DLLs at runtime, scripts can call them within and across PK4s using "header files", and TDM scripting was patched with some syntax to support discovery and making matching calls, with proper script-compile-time checking.

Why?

Mostly curiosity, but also because I wanted to see what would happen if scripts could use text-to-speech and dynamically-defined sound shaders. I also could see that simply hard-coding it into a fork would not be very constructive or enlightening, so I tried to pick a paradigm that fits (mostly) with what is there. In short, I added a Library idClass (that definitely needs work) that will instantiate a child Library for each PK4-defined external lib, each holding an eventCallbacks function table of callbacks defined in the .so file. This almost follows the normal idClass::ProcessEventArgsPtr flow. As such, the so/DLL extensions mostly behave as sys event calls in scripting. Critically, while I have tried to limit function reference jumps and var copies to almost the same count as the comparable sys event calls, this is not intended for performance-critical code - more for things like text-to-speech that use third-party libraries and are slow enough to need their own (OS) thread.

Why Rust?

While I have coded for many years, I am not a gamedev or modder, so I am learning as I go on the subject in general - my assumption was that this is not already a supported approach due to stability and security. It seems clear that you could mod TDM in C++ by loading a DLL alongside, reaching into the vtable and pulling strings, or do something like https://github.com/dhewm/dhewm3-sdk/ . However, while you can certainly kill a game with a script, it seems harder to compile something that will do bad things with pointers, accidentally shove a gigabyte of data into a string, corrupt disks, run bitcoin miners, etc., and if you want to do this in a modular way, to load a bunch of such mods, then that doesn't seem so great.
So, I thought "what provides a lot of flexibility, but some protection against subtle memory bugs", and decided that a very basic Rust SDK would make it easy to define a library extension as something like:

```rust
#[therustymod_lib(daemon=true)]
mod mod_web_browser {
    use crate::http::launch;

    async fn __run() {
        print!("Launching rocket...\n");
        launch().await
    }

    fn init_mod_web_browser() -> bool {
        log::add_to_log("init".to_string(), MODULE_NAME.to_string()).is_ok()
    }

    fn register_module(name: *const c_char, author: *const c_char, tags: *const c_char, link: *const c_char, description: *const c_char) -> c_int {
        ...
```

...and then Rust macros can handle mapping return types to ReturnFloat(...) calls, etc. at compile time, rather than having to add layers of function call indirection. Ironically, I did not take it as far as building the unsafe wrapping/unwrapping of C/C++ types into the macro, so the addon writer still has to write unsafe calls to take *const c_char to String and vice versa. However, once that's done, the events can then call out to methods on a singleton and do actual work in safe Rust. While these functions correspond to dynamically-generated TDM events, I do not let the idClass get explicitly leaked to Rust, to avoid overexposing the C++ side, so they are class methods in the vtable only to fool the compiler and not break Callback.cpp.

For the examples in Rust, I was moving fast to do a PoC, so they are not idiomatic Rust and there is little error handling, but like a script, when it fails, it fails explicitly, rather than (normally) in subtle user-defined C++ buffer-overflow ways. Having an always-running async executor (tokio) lets actual computation get shipped off fast to a real system thread, and the TDM event calls return immediately, with the caller able to poll for results by calling a second Rust TDM event from an idThread.

As an example of a (synchronous) Rust call in a script:

```
extern mod_web_browser {
    void init_mod_web_browser();
    boolean do_log_to_web_browser(int module_num, string log_line);
    int register_module(string name, string author, string tags, string link, string description);
    void register_page(int module_num, bytes page);
    void update_status(int module_num, string status_data);
}

void mod_grab_log_init() {
    boolean grabbed_check = false;
    entity grabbed_entity = $null_entity;

    float web_module_id = mod_web_browser::register_module(
        "mod_grab_log",
        "philtweir based on snatcher's work",
        "Event,Grab",
        "https://github.com/philtweir/therustymod/",
        "Logs to web every time the player grabs something."
    );
```

On the verifiability point, both because there are transpiled TDM headers and to mandate source-code checkability, the SDK is AGPL.

What state is it in?

The code goes from early-stage but kinda (hopefully) logical - e.g. what's in my TDM fork - through to basic - what's in the SDK - through to rough - what's in the first couple of examples - through to hacky - what's in the fun stretch-goal example, with an AI chatbot talking on a dynamically-loaded sound shader (see below). The important bit is the first, the TDM approach, but I did not see much point in refining it too far without feedback or a proper demonstration of what this could enable. Note that the TDM approach does not assume Rust - I wanted that as a baseline neutral thing. It passes out a short set of allowed callbacks according to a .h file, so any language that can produce dynamically-linkable objects should be able to hook in.

What functionality would be essential but is missing?
- support for anything other than Linux x86 (but I use TDM's dlsym wrappers, so it should not be a huge issue if the type sizes, etc. match up)
- the ability to conditionally call an external library function (the dependencies can be loaded out of order and used from any script, but right now every referenced callback needs to be in place, with matching signatures, by the time the main load sequence finishes, or it will complain)
- packaging a .so/DLL into the PK4, with verification of source and checksum
- tidying up the Rust SDK to be less brittle and (optionally) transparently manage pre-Rustified input/output types
- some way of semantic-versioning the headers and (easily) maintaining backwards compatibility in the external libraries
- right now, a dedicated .script file has to be written to define the interface for each .so/DLL - this could be made dynamic via an autogenerated SDK callback, to avoid mistakes
- maintaining any non-disposable state in the library seems like an inherently bad idea, but perhaps Rust-side Save/Restore hooks
- any way to pass entities from a script, although I'm skeptical that this is desirable at all

One of the most obvious architectural issues is that I added a bytes type (for uncopied char* pointers) to the scripting to be useful - not for the script to interact with directly, but so that, for instance, a lib can pass back a Decl definition that can be held in a variable until the script calls a subsequent (sys) event call to parse it straight from memory. That breaks a bunch of assumptions about event arguments, I think, and likely save/restore. Keen for suggestions - making indexed entries in a global event arg pointer lookup table, say, that the script can safely pass about (see the sketch at the end of this post)? Adding CreateNewDeclFromMemory to the exposed ABI instead?

While I know that there is no network play at the moment, I also saw that somebody had experimented with it and did not want to make that harder, so I'm conscious that it would need thought. One maybe-interesting idea for a two-player stealth mode could be a player-capturable companion to take across the map, like capture-the-AI-flag, and pluggable libs might help with adding statistical models for logic and behaviour more easily than scripts, so I can see ways dynamic libraries and multiplayer would be complementary if the technical friction could be resolved.

Why am I telling anybody?

I know this would not remotely be mergeable, and everyone has bigger priorities, but I did wonder if the general direction was sensible. Then I thought, "hey, maybe I can get feedback from the core team on whether this concept is even desirable and, if so, see how long that journey would be". And here I am.

[EDITED: for some reason I said "speech-to-text" instead of "text-to-speech" everywhere the first time, although tbh I thought both would be interesting]
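On the lookup-table suggestion above, here is the rough C++ shape I have in mind (hypothetical names, and it glosses over ownership, threading and save/restore): the script only ever carries an opaque integer handle, and the engine resolves it back to the pointer when a later event call consumes it.

```cpp
#include <cstddef>
#include <map>

// Hypothetical handle table: the DLL's buffer pointer is stashed engine-side,
// and the script only passes an int handle between event calls.
struct byteBuffer_t {
    const char *data;
    size_t      length;
};

class ByteHandleTable {
public:
    int Register(const char *data, size_t length) {
        buffers_[nextHandle_] = { data, length };
        return nextHandle_++;
    }
    const byteBuffer_t *Lookup(int handle) const {
        auto it = buffers_.find(handle);
        return it == buffers_.end() ? nullptr : &it->second;
    }
    void Release(int handle) { buffers_.erase(handle); }

private:
    std::map<int, byteBuffer_t> buffers_;
    int nextHandle_ = 1;  // 0 reserved as "invalid handle"
};
```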