The Dark Mod Forums

Geep

Everything posted by Geep

  1. Interesting idea, though I wouldn't want the text to be interpreted as objectives by the player. Glad that's working for you. Yeah, getting the data into shape is a pain. I did it with a lot of Excel in/out and text find/replace misadventures. Some potential for process improvement there, next time. Most guidance for subtitles would restrict them to the central 2/3rds of the screen, width-wise. So maybe narrowing the fields a bit throughout would be good. Need to think about this. I guess the lowest field would also obscure the health and breath bars a bit.
  2. The subtitles for The Thug vocals have been posted to the bugtracker (#6240) for review and eventual incorporation into 2.12dev. They are in FM form, embedded into the QA program testSubtitlesThug. The .pk4 is available here: https://drive.google.com/file/d/1VNP2fNge-2Ff3RkdUvCLagRENUESVW40/view?usp=sharing Easy instructions on using this FM can be found in both the Notes and Briefing. That info also sketches how the test program can be adapted by FM authors to present & test sets of their own custom subtitles. (I'm not claiming getting your data into this FM is real easy.) As I refined these subtitles, a particular style evolved, and particular tools/methods were used. I'm working on documenting this, but it won't be ready to release for a while. And things may mutate further as I tackle the next AI's subtitles. I'm thinking I need to do a character with longer speeches next, e.g., a noble.
  3. I don't know if this would solve it, or cause other problems, but you could try enveloping the key area in a force field that pushes the key against the wall.
  4. I wonder if this gawdawful hack works in readables: https://wiki.thedarkmod.com/index.php?title=Text_Decals_for_Signs_etc.#Signs_with_Illuminated_Colored_Letters Possibly a related need: in long term, could use color fonts in subtitles too.
  5. I changed over to a finer-grained characters-per-second metric instead of words-per-minute. Using 20 cps, 3 of the above phrases could be left verbatim.
  6. For the Thug, out of 393 utterances, only 9 required subtitle editing to stay within a 240 words-per-minute reading rate, the highest anyone thinks reasonable. These were (Verbatim --> Shortened):

     Let's have a look. --> Let's look.
     I'm ready for him. --> I'm ready. [only needed for the shortest clip with this phrase; 2 other clips were fine verbatim.]
     How'd you like a taste of this? --> Like a taste of this? [2 clips]
     I'll piss where I want to piss. --> I'll piss where I want to.
     I guess I have to do it. --> Guess I have to do it.
     I guess I have to do it. --> Guess I have to. [shortest clip]
     The son of a whore was right here. Look around! --> (curse) ...was right here. Look around!
     Look, you bastard! --> Look, bastard!

     If I used the still-high but slightly lower reading rate of 200 WPM, about 21 additional edits would be needed. Edit: Actually, those last 2 don't need editing. Spreadsheet calculation problem fixed. Might revisit some criteria too.
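The screening above can be sketched as a tiny reading-rate check. This is a hypothetical helper, not TDM code; the function names are made up, and the 1.0 s clip duration in the example and the 240 WPM ceiling are just the numbers from this discussion:

```python
def reading_rates(subtitle_text: str, clip_seconds: float):
    """Return (words-per-minute, characters-per-second) for a subtitle
    shown for the full duration of its clip."""
    words = len(subtitle_text.split())
    wpm = words / clip_seconds * 60.0
    cps = len(subtitle_text) / clip_seconds
    return wpm, cps

def needs_shortening(subtitle_text: str, clip_seconds: float,
                     max_wpm: float = 240.0) -> bool:
    """True if the verbatim caption exceeds the reading-rate ceiling."""
    wpm, _ = reading_rates(subtitle_text, clip_seconds)
    return wpm > max_wpm

# With an assumed 1.0 s clip, "Let's have a look." (4 words) sits
# exactly at the 240 WPM limit, so verbatim would be acceptable.
print(needs_shortening("Let's have a look.", 1.0))  # False
```

A companion check against a 20 cps limit would use the second value returned by `reading_rates`.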
  7. BTW, under Win11 File Explorer, you can see the duration (called "Length") of sound files in a directory, as well as sort on it. Use the "Details" view, right-click on the column header, and checkmark "Length" to include its column. Resolution is only to the nearest second, though; adequate for making the "inline" vs "srt" choice, but not for calculating WPM.
  8. It's the duration that matters. If the clip exceeds 6 seconds, use srt. Individual segments within srt should be 1-6 seconds long. There's more to it than that; I'll DM you later today with a Word doc, a draft fragment of the style guide under development, that delves into this. Specifically, if there are too many words to read in the time that a subtitle is shown, you may need to edit out some words, that is, move away from a verbatim caption. The fragment has quantitative guidance on this.
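As a sketch of those duration rules (the thresholds are the ones discussed in this thread; the function names are invented, nothing from TDM itself):

```python
def choose_subtitle_format(clip_seconds: float) -> str:
    """Rule of thumb from this thread: clips over 6 s go to srt,
    shorter clips can use a single "inline" subtitle."""
    return "srt" if clip_seconds > 6.0 else "inline"

def segment_ok(start_s: float, end_s: float) -> bool:
    """Individual srt segments should run 1-6 seconds."""
    return 1.0 <= (end_s - start_s) <= 6.0

print(choose_subtitle_format(4.2))  # inline
print(choose_subtitle_format(9.5))  # srt
print(segment_ok(6.612, 10.376))    # True (3.764 s)
```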
  9. @datiswous, made that correction: fm_test.subs --> fm_conversations.subs. @stgatilov, about srt naming and file location, would you be OK with the following edit? New/changed stuff in italics: srt command is followed by paths to a sound sample and its .srt file, typically with matching filenames. An .srt file is usually placed either with its sound file or in a "subtitles" folder. The .srt file format is described e.g. [1]. The file must be in engine-native encoding (internationalization is not supported yet anyway) and have no BOM mark. It contains a sequence of text messages to show during the sound sample, each with start and end timestamps within the sample's timeline. It is recommended to use common software to create .srt files for sound samples, instead of writing them manually. This way is more flexible but more complicated, and it is only necessary for long sounds, for instance the sound sample of a briefing video. It's a simple enough standard that it can be shown as a short example, demonstrating that subtitle segments can have time gaps between them. And the example can show correct TDM usage, without requiring a trip off-site and picking through features that TDM doesn't support. Specifically, the example shows how to define two lines by direct entry, rather than using unsupported message location tags (X1, Y1, etc.). It also skips other unavailable SRT font markup, like italics, mentioned in the Wikipedia description. The example would also show the TDM-specific path treatment. The example could be inserted before the sentence "It is recommended to use common software...."
  10. @stgatilov, in the Subtitles_decls section of the Subtitles wiki page, you show example code that includes an srt reference. Wouldn't it be better to guide people to place the corresponding .srt files within the "subtitles" tree (where their .subs live), rather than the "sound" tree? Also, should FM authors be encouraged (if not required) to prefix .srt files with "fm_"? Like "fm_sound8_long.srt"? Finally, in the "displayed text" section, under the srt command, there's a desperate need to see example .srt content, in this case something made up for sound8_long.srt (or fm_sound8_long.srt), like:

     1
     00:00:06,612 --> 00:00:10,376
     Something's wrong with this crystal ball.

     2
     00:00:15,482 --> 00:00:20,609
     Bugger me! It's not showing the right dream.

     3
     00:00:25,336 --> 00:00:28,167
     Ah! Here we go.

     --end
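For anyone scripting around such files, the SRT timestamp format (HH:MM:SS,mmm, with a comma before the milliseconds) is easy to parse. A minimal sketch, assuming well-formed timestamps; this is illustrative tooling, not part of TDM:

```python
import re

def srt_timestamp_to_seconds(ts: str) -> float:
    """Convert an SRT timestamp like '00:00:06,612' to seconds."""
    m = re.fullmatch(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})", ts)
    if m is None:
        raise ValueError(f"not an SRT timestamp: {ts!r}")
    h, mnt, s, ms = (int(g) for g in m.groups())
    return h * 3600 + mnt * 60 + s + ms / 1000.0

print(srt_timestamp_to_seconds("00:00:06,612"))
```

Combined with a duration check, this could verify that every segment in a file falls in the 1-6 second range mentioned earlier.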
  11. The first draft of the Thug subtitles is done, and also successfully loaded into my test jig with custom soundshaders for the second-draft review/revise. That review will be paused while I work through some issues about best subtitling practices to target, for example, words-per-minute reading constraints and their impacts.
  12. Maybe. I'll keep it in mind to try, if I get a srt workflow going at some point.
  13. @datiswous, just the last phrase. @stgatilov, confirmed that subtitle display stops when sound ends. If I edit contents of an .srt file, neither reloadDecls nor reloadSounds sees that change.
  14. I have a question about the current TDM implementation of "srt"... For an audio clip using srt with multiple subtitle phrases, the start and end timestamps for each phrase are relative to the clip start. For implementation reasons, it would not be reasonable to have negative time. But it could be both reasonable and useful to have a subtitle that continues on-screen a bit longer than the audio clip. Is that currently possible?
  15. The page still loads crisply, so on balance, I think it's better.
  16. For the Thug, all the subtitles I've done so far have been rather short, and so have lent themselves to the simple "inline" approach. I agree that a different character that has more long monologues, and so needs "srt", would benefit from a video editor (even if there's no real video, just audio), as you described earlier. I may need to call on your srt expertise later. I've been creating the subtitles with 3 windows open: 1) a minimal Windows player, to select and play each .ogg; 2) a view of the AI vocal script (e.g., source view of the Thug wiki entry), to use as a copy/paste starting point; 3) the .subs file being built, in a text editor. So, the FM I described is not used in the first place to create subtitles, but rather to do a second-pass review (e.g., by me) or third-pass review (as part of quality control during incorporation into TDM).
  17. Here's a screenshot of an FM I've been building to help test/review subtitles for an AI, in reproducible order with 100% coverage. It uses the TDM-distributed .ogg files, but has custom soundshaders. Each such soundshader wraps exactly one .ogg file, and has a uniform naming that includes an index number. The collection of soundshaders is housed in a single file. The hope is that (prior to embedding in the FM) this file can be easily generated from a directory listing of the .ogg files for a particular AI, plus subsequent manipulation. There are 3 buttons to step through the file. After each button press, you hear the vocal and see the subtitle. Also, the index number within the list appears briefly (a floating "7" near the statue's shoulder in this screenshot). You can also, using a custom CVar in the console, jump to a particular index number. Unsurprisingly, you can't edit the subtitle within the FM. Just note what needs changing and do it with a text editor. Also, this FM is not intended to evaluate TDM's stock soundshaders, nor AI lip syncing (so just a statue speaking here). If you hit the buttons fast enough, multiple sound files play at once, and multiple subtitle fields appear (up to the max of 3 the .gui offers).
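The "generated from a directory listing" step could look something like the sketch below. The shader-name prefix, the sound path, and the function name are all invented for illustration of the one-shader-per-.ogg idea; the real test FM's naming may differ:

```python
import os

def make_test_soundshaders(ogg_dir: str, prefix: str = "tst_thug") -> str:
    """Wrap each .ogg file in ogg_dir in its own soundshader, with a
    uniform indexed name, and return the text of the whole shader file."""
    oggs = sorted(n for n in os.listdir(ogg_dir) if n.endswith(".ogg"))
    shaders = []
    for i, name in enumerate(oggs, start=1):
        shaders.append(
            f"{prefix}_{i:03d}\n"
            "{\n"
            f"    sound/test/{name}\n"
            "}\n"
        )
    return "\n".join(shaders)
```

Sorting the listing first keeps the index numbers stable across runs, which matters for the "jump to a particular index" CVar described above.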
  18. The wiki has AI vocal scripts for most but not all characters. I'm guessing the wiki info is derived from early-draft vocal scripts supplied to the voice artists. Looking at the Thug's script, I see "I'd drink horse piss [etc]". There are occasional differences between an .ogg file and its draft text phrase. Some of that is just the voice artist riffing. Other cases seem to be a more intentional change of plan. For instance, with respect to swearing, "shit" is generally suppressed... "horse shit" became "horse dirt" or "horse filth". "Bloody" became more widely used, probably to make the English more British, less American. I'm about 2/3rds through the Thug transcription. Going slow because I'm also working up related tools and style guides. And I don't plan to upload subtitle sets for any AI until after 2.11 is out, presumably Real Soon Now.
  19. @stgatilov, another variation on that theme: to indicate voice location, just have a fixed-size dot that is constrained to travel around the edge of the subtitle field. When the source is behind the player, the dot would be somewhere on the lower edge of the field. Also, thinking about color, which @datiswous brought up: another use would be just to differentiate the 3 categories of "story", "speech", and eventually "effects". A color tint to the field for each of these makes more sense to me than trying to tie color to a particular AI.
  20. Assuming what you're trying to call is tdm_ai_elemental::onDeath() at tdm_ai_monster_elemental.script(141), then probably you should cast to tdm_ai_elemental, not atdm:ai_elemental. The parsing is choking on the ":".
  21. I agree with that assessment... you could probably get a series of timed slides to appear, but it would be more complicated than if the main menu system supported it natively, and adding image flow would be more complicated still.
  22. The idea of using left/right/center justification comes from captioning of pre-recorded material, but as you indicate is problematic for live action where characters move around unpredictably. A horizontal location bar could be implemented as a solid-color rect that is repositioned, I dunno, every 200 ms. The cheap and easy implementation would be just to determine the location from the origin of the source (AI, fixed speaker, etc.) relative to the viewport. It would be more accurate, but much harder, to take account of sound propagation pathways. If you wanted to be fancy, you could make the bar narrower but taller (thicker) as the source got nearer and visible. I suppose you could even have the bar's vertical location relative to the upper edge of the slot field indicate something about the source's relative vertical location. Probably too fancy, that.
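The cheap-and-easy positioning could be sketched like this. Purely illustrative: the function name, the assumed 90-degree horizontal field of view, and the normalized [0, 1] bar coordinate are all assumptions, not TDM code:

```python
def location_bar_pos(rel_yaw_deg: float, half_fov_deg: float = 45.0):
    """Map a source's horizontal angle relative to the view direction
    (0 = dead ahead, negative = to the left) to a bar position in
    [0, 1] across the subtitle field.  Returns (position, arrow),
    where arrow is '<--', '-->', or '' for an on-screen source."""
    if rel_yaw_deg < -half_fov_deg:
        return 0.0, "<--"      # off screen left: pin bar, add arrow head
    if rel_yaw_deg > half_fov_deg:
        return 1.0, "-->"      # off screen right
    pos = 0.5 + rel_yaw_deg / (2.0 * half_fov_deg)
    return pos, ""

print(location_bar_pos(0.0))    # (0.5, '') : speaker dead ahead
print(location_bar_pos(-60.0))  # (0.0, '<--') : off screen to the left
```

Re-running this every 200 ms or so, as suggested above, would keep the bar tracking a moving speaker.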
  23. Names that were autogenerated could be recognized and shortened, e.g., "guard #7: " or maybe in parentheses "(guard #7) " as some captioning systems prefer. (The post I copied above has other approaches, where the name text is separate from the caption text, or faces are used.)
  24. COPIED FROM "English Language Subtitles for AI Barks". MORE RELEVANT HERE... @datiswous, I imagine the TDM fonts don't support bold, italics, underline, etc. Maybe just different sizes. I don't think color by itself would be helpful for speaker identification, unless there was a color halo or name tag above each vocalizing AI. Might be useful for word emphasis, though. @stgatilov, if there is location data available to the subtitle code, that could be used in various ways, using a different GUI: 1) Let the 3 slots be either left justified, centered, or right justified, depending on relative speaker location (including off screen). This could be implemented by 9 actual windowDefs (all of the same size; 3 for each current slot, overlaid, one for each of the 3 justifications). Alternatively, a 3x3 grid, with all windowDefs the same size but 1/3 the current width. 2) Or instead, at the top edge of each slot, show a short horizontal bar, whose left/right position is moved to be under the relevant AI. If off screen, add an arrow head (<-- or -->). Naming is harder, if the subtitle code can't get at that info. Though at the point the sound engine is passed the sound to render, presumably it knows the speaker, and could independently and in parallel visualize the name information (particularly if the sound engine passed back which GUI slot it was going to use for the subtitle). It is true that associating a name (say, using a small tab-like field) with a slot is less useful when the player doesn't know the AI names (i.e., with no floating names above the characters). If the technical issues of naming could be overcome, then there's still the question of when you'd want to give a name (Rupert) versus type (thug #3) versus generic (speaker #2). Also have to handle special cases of (narrator) and (player). Dreaming... Instead of text names, more fun would be to show a thumbnail face of each AI next to their slot. With a question-mark face when they are off-screen?