The Dark Mod Forums

Search the Community

Search results for tag 'language strings'.


  1. The *DOOM3* shaders are ARB2 ('cause of old GeForce support) carmack plan + arb2 - OpenGL / OpenGL: Advanced Coding - Khronos Forums
  2. The subtitles for The Mature Builder (aka Builder 3) vocal set are now available: testSubstitlesMatureBuilder.pk4
     The wiki's "Vocal script: Builder3" suggests these traits for this cleric: "strong, intimidating voice, determined; archaic language; patient, talks about servitude, orthodoxy." Compared to other builders, this one seems to enjoy quoting scripture more often and at greater length. As usual, the testing FM serves as the vehicle to provide these subtitles for eventual incorporation into TDM.
     Statistics
     In file fm_root.subs there are 315 "inline" subtitles, categorized as:
     - 52 with an explicit linebreak, intending 2 lines
     - 263 without
     3 of the inlines have explicit duration extensions, as follows:
     - 2 from 0.25 to 0.49 seconds, for a 17 cps presentation rate
     - 1 capped at 0.50 seconds, for 17-20 cps
     - none with more than 0.50 seconds
     There are 9 SRTs, including:
     - 8 with 2 messages
     - 1 with 4 messages
     None of these were given a duration extension. Of the 20 total SRT messages, there are:
     - 12 with an explicit linebreak, intending 2 lines
     - 8 without
     In all, in this vocal set captioning, there are 324 voice clips with subtitles, showing 335 messages.
     Corresponding Excel File
     MatureBuilderSubtitles.xlsx
     This is based on Version 5 of the Excel Template for TDM bark subtitles, which was also featured in the preceding work for Average Jack, The Pro, The Maiden, and The Grumbler.
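The duration-extension arithmetic in the statistics above (padding a clip's display time so the subtitle is presented at no more than 17-20 characters per second, with extensions capped at 0.50 s) can be sketched as follows. This is only an illustration of the rule as described in the post; the function name, and treating 0.50 s as a hard cap, are my assumptions, not TDM's actual implementation:

```python
def subtitle_extension(text: str, clip_seconds: float,
                       max_cps: float = 17.0, cap: float = 0.50) -> float:
    """Seconds to extend a subtitle's display so its presentation rate
    does not exceed max_cps characters per second.

    Hypothetical helper: the 0.50 s cap mirrors the cap mentioned in
    the statistics above; real TDM subtitle handling may differ.
    """
    needed = len(text) / max_cps               # minimum readable display time
    extension = max(0.0, needed - clip_seconds)
    return min(extension, cap)                 # never extend past the cap
```

For example, a 34-character line over a 1.75 s clip needs 34/17 = 2.0 s of display time, so it would get a 0.25 s extension.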
  3. I mean names you can display on screen for the general audience when holding an object. Refer to the screenshots I posted.
     ------------------------------------------------
     I would say we should employ the language/translation system (#str_). I expect most entityDef names (plus model names) are quite descriptive already:
     atdm:moveable_bowl_large_pewter (bowl01_large.lwo)
     atdm:moveable_mug_wood_old (mug_wooden01.lwo)
     atdm:moveable_ball_spiked (spiked_ball.lwo)
     atdm:moveable_coffeetable (coffeetable1.lwo)
     atdm:moveable_rock_small01 (smallrock1.lwo)
     ...
     When in doubt, just fire up DR and see the object in question. Thinking of skins, we should avoid including names of materials or colors: for example, instead of a "Red Apple", stick to "Apple"; instead of "Pine Chair", let's name it a "Chair". Many times it will be difficult to settle on a name, so simpler/shorter is better: it's up to mappers to spice things up. Now, if you ask me how we approach existing missions: no idea, I haven't thought about it. And bound items such as candles and candle-holders? I don't know, "Candle"? One case at a time, I guess.
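The #str_ approach suggested above would follow TDM's existing localization pattern: the entity carries a string id rather than literal text, and each language resolves the id from its strings/*.lang file. A minimal sketch, where the string id #str_20001 and the use of an entityDef key for the display name are made-up placeholders for illustration:

```
// In the entityDef (hypothetical string id):
entityDef atdm:moveable_mug_wood_old
{
    "model"     "models/darkmod/junk/mug_wooden01.lwo"
    "inv_name"  "#str_20001"
}

// In strings/english.lang:
{
    "#str_20001"    "Mug"
}

// In strings/german.lang:
{
    "#str_20001"    "Becher"
}
```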
  4. Ameca has the language model of ChatGPT 3.5, but is much more than that. It's a cloud platform for human interaction, under continuous development. It reacts not only to words but also to external stimuli, like a human does. The mirror bit is not a joke but a test of these reactions. Naturally it's not something we can call conscious, but Ameca is a first step toward it.
  5. Seems to confirm: https://bugs.thedarkmod.com/view.php?id=5718 does it happen in the latest dev build: https://forums.thedarkmod.com/index.php?/topic/20824-public-access-to-development-versions/
  6. GPT-3. However I don't think that video was generated by autonomous reactions. Rather it looks to me like the developers having fun with a pre-scripted sequence of expressions. If you did this experiment for real I don't think it would play out that way. GPT-3/4 cannot react with genuine surprise in my experience. Surprise requires having an expectation and then finding it subverted, but these LLMs don't have the neurological hardware to form those sorts of impressions. They have no continuity of non-verbal memory and limited options for introspection. Plus I have a hard time believing image recognition would be able to recognize the robot's reflection as a robot, or convey that information to the language model such that it could figure out it is looking at its reflection.
  7. I just read that @motorsep discovered that you are able to create a brush, then select it and right-click "Create light". Now you have a light that has the radius of the former brush. I read it on Discord and thought it may be of use for some people here on the forums too.
  8. I honestly thought English was your first language. About the lockpicks
  9. Hm I guess there's some kind of language barrier there as I'm currently lost and don't know what to do.
  10. I just found this thread on ttlg listing Immersive Sims: https://www.ttlg.com/forums/showthread.php?t=151176
  11. Recently revisiting the forums after a longer period of time, I wanted to check the unread content. I don't know if I have been doing this wrong since... ever... but on mobile (visiting the unread content page on my smartphone) you have to click on a tiny speech bubble to go to the most recent post in a thread. If you don't click correctly, you'll hit the headline and end up at post 1 at the beginning of the thread. It's terrible on mobile: not only is the speech bubble really small and easy to miss, but the thread headline is just millimeters away from it, so you go right to the first post that was ever made instead of the most recent ones. Am I doing it wrong? I just want to go through unread content and then to the newest post from that topic.
  12. @datiswous Ah yeah, well, sorry, I was quite busy and only visiting Discord. First time here on the forums in months now, I think. Thank you for the subtitles. I encourage everyone who is interested in using them to download them from here, as I'm not sure when I'll be able to implement them myself into the mission. Again, thank you for your work.
  13. @datiswous, I suspect this is possible if rather tedious for signs or 1-page broadsheets. You could arrange that the frob calls a script, and the script causes the player/narrator to say a custom .ogg file. That file would be just silence, of the same length as the desired subtitle duration for the translated text (or at least within 5 seconds of it, then using the duration extension methods). Alternatively, the narrator could actually "read aloud" the text. The complexity of the .gui for books is rather daunting, but maybe you could tie into page-turning events. I don't think you'd want to recommend this approach for general book translation throughout an FM. If for no other reason, making the .ogg files would be a pain. Also, it doesn't seem to me to extend to more than one subtitle language. I could imagine that, with engine changes, books could be made into first-class speakers (i.e., as if AIs), so you could optionally hear as well as read a book, and in any event see the subtitle in any language of the player's choice for which a translation is available.
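The silent placeholder clips described above could be generated with a small script. This is a rough sketch under my own assumptions (function name mine; it writes a WAV via Python's standard wave module, and conversion to the .ogg format TDM actually plays would be done afterwards with an external encoder such as oggenc):

```python
import wave

def write_silent_wav(path: str, seconds: float, rate: int = 44100) -> None:
    """Write a 16-bit mono WAV containing only silence, sized to carry a
    subtitle for the given duration (convert to .ogg afterwards)."""
    frames = int(seconds * rate)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"\x00\x00" * frames)   # all-zero samples = silence
```

The duration would be chosen to match the desired subtitle display time for the translated text, per the 5-second tolerance and extension methods mentioned above.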
  14. Of course; it is one of the reasons for the decline of online forums since the advent of mobile phones. Forums on a mobile are a pain in the ass, but on the other hand, for certain things there is no real alternative to forums with their sequential threads: on social networks it is almost impossible to retrieve the answer to a question that was asked days ago. For devs, for internal communication, the only thing I can offer is a collaborative app such as System D (not to be confused with systemd): FOSS, free and anonymous registration, access for further members by invitation only, fully encrypted and private. https://www.system-d.org
  15. Ignoring is somewhat inadequate, as you still see other members engaging in a discussion with the problematic user, and as Wellingtoncrab says, such discussions displace all other content within that channel. Moderation is also imperfect, as being unpleasant to engage with is not in itself banworthy, so there is nothing more to do if such people return to their old behaviour after a moderator has had a talk with them, except live with it or move away. I'd be more willing to deal with it if it felt like there were more on-topic discussion, i.e. thoughts about recently played fan missions or mappers showcasing their progress, rather than a stream of consciousness about a meta topic that may or may not have to do with TDM. I guess the forums already serve the desired purpose, or they just compartmentalise discussions better.
  16. A search engine with the linguistic capabilities of ChatGPT can of course be useful, but a language model like ChatGPT without access to real-time information, no matter how good the model is, is nothing more than a toy. It can serve as an assistant or as customer support for certain products, given the necessary knowledge base for that product, but no more. Today, information from search engines with AI is infinitely more reliable than that from AI chats, especially if they are not from large monopolies that may reflect certain commercial and/or political interests in their responses, as has already been seen with Google, MS and also Brave AI, the latter with strong influences from the extreme right and whose CEO was already fired from Mozilla for this. Better the independent AIs, which you can try right now: https://andisearch.com (my favorite), https://www.perplexity.ai (also useful as a browser extension), https://phind.com (especially for devs and programmers), https://you.com (the most complete).
  17. First of all, ChatGPT, regardless of version, is a language model built to interact with the user while imitating intelligence. It has a knowledge base that dates back to 2021, plus what users contribute in their chats. This means, first of all, that it is not suitable if you are looking for correct answers: if it does not find the answer in its base, it tends to invent one by approximation, or to give outright false or obsolete answers. The future will not be changed by it, but by AI of a different nature. On the one hand, search engines with AI, since they have access to information in real time without needing such complex language models; for this reason search engines are gradually adding AI, not only Bing or Google, but before them Andisearch (the first of all), Perplexity.ai, Phind.com and You.com. Soon there will also be DuckDuckGo AI. On the other hand, generative AI to create images, videos and even applications, music and other things, like game assets or 3D models. The risk with AI came up with Auto GPT, initially a tool that seemed useful but which can be highly dangerous, since on the one hand it has full access to the network and on the other it is capable of learning on its own initiative to carry out tasks entered as casually as in any Text2Image app out there. This was demonstrated with ChaosGPT, the result of an order entered into Auto GPT to destroy humanity, which it immediately began to pursue with extraordinary efficiency: first trying (and luckily failing) to access nuclear missile silos, then trying to gain followers on Twitter with a fake account it created, where it got more than 6000 followers, and later hiding, having realized that it could be blocked or deactivated on the network. Currently nothing is known about it, but it is still a danger not to be ruled out; it could really become Skynet.
AI is going to change the future, but not ChatGPT, which is nothing more than a nice toy.
  18. My brain is reeling with overload! First, being enthralled with your goblin roleplay, I overlooked your second link to your clockwork story. That is now even more fascinating to me. In particular, the AI's reasoning power (e.g., when it perceives and describes the allegorical element of your story) is staggering. Clearly the AI can reason intelligently. No matter that it isn't conscious and, like all software, is merely a list of instructions & data (though a HUGE one!), it can reason in an intelligent way. In that instance it was better than me! It definitely has (or can represent and function as if it has) an understanding of language. And I say if something looks like a spade, and can be used to dig, then it's a spade, even if constructed differently. Thanks, I've now found the sharing feature and discovered your post is publicly readable, but posting to it requires an OpenAI account. I added a brief message to test whether you can see it. I'm sure it can be seen on the public page. But does that sync with the copy in your private version of it? For further interest, I posted a version of one of my earlier tests that might be of interest at https://chat.openai.com/share/36420a5f-f8d8-4eeb-939b-a1c4c017c2b3 It's a failed story fragment, but I learned a lot about how to work with GPT.
  19. I don't recall a system for noise masking. It sounds like it'd be a good idea, but when you get into the details you realize it'd be complicated to implement. It's not only noise that goes into it, I think. E.g., a high register can cut through even a loud but low-register rumble. And it's not like the .wav file even has data on the register of what it's playing. So either you have to add metadata (which is insane), or you have to have a system that literally checks pitch on the .wav data and parameterizes it in time, to know when it's going to cut through what other parameters from other sounds. For that matter, it doesn't even have data on the loudness either, so you'd have to get that off the file too and time the peaks with the "simultaneous" moment at arbitrary places in every other sound file correctly. And then position is going to matter independently for each AI. So it's not like you can have one computation that works the same for all AI. You'd have to compute the masking level for each one, and then you get into the expense you're mentioning. I know there was a long discussion about it in the internal forums, and probably on the public subforums too, but it's been so long ago now I can't even remember the gist of them. Anyway, the main issue is I don't know if you'll find a champion who wants to work on it. But if you're really curious to see how it might work, you could always try your hand at coding & implementing it. Nothing beats a good demo to test an idea in action. And there's no better way to learn how to code than a little project like that. I always encourage people to try to implement an idea they have, whether or not it may be a good idea, just because it shows the power of an open source game. We fans can try anything we want and see if it works!
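To illustrate why per-AI masking gets expensive, here is a toy sketch of the kind of check that would have to run for every sound pair and every AI. The -6 dB per distance doubling and the same-octave masking rule are crude simplifications I chose for illustration, not anything TDM does; real auditory masking is asymmetric and spreads across frequency bands:

```python
import math

def attenuate(db_at_1m: float, distance_m: float) -> float:
    """Free-field falloff: roughly -6 dB per doubling of distance."""
    return db_at_1m - 20.0 * math.log10(max(distance_m, 1.0))

def is_masked(sig_db: float, sig_hz: float,
              maskers: list[tuple[float, float]]) -> bool:
    """Toy rule: the signal is masked if some other sound in roughly the
    same octave band is clearly louder (3 dB or more) at the listener.
    maskers is a list of (level_db, frequency_hz) pairs."""
    for m_db, m_hz in maskers:
        same_band = abs(math.log2(m_hz / sig_hz)) < 0.5   # within ~one octave
        if same_band and m_db >= sig_db + 3.0:
            return True
    return False
```

This matches the register point above: a footstep at 55 dB / 2 kHz is not masked by an 80 dB rumble at 100 Hz, because the rumble sits several octaves away. Even this toy version needs per-AI attenuated levels for every concurrent sound, which is where the cost comes from.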
  20. OK, after discovering a GPT-4 account can be cancelled anytime (and even renewed later), I decided 20-odd dollars was worth investing to explore it. I found it fascinating, but my main interest was fiction writing, and I found that although GPT-4 was creative and its grammar perfect, its continuity was flawed with anything longer than maybe a couple of thousand words, because amazingly it cannot recall precisely all the text even in one conversation. Somehow it condenses earlier messages, and the further you get into a story, the more it forgets stuff, like: Mr Johnson comes into the room. Erm... didn't Mr Johnson die in Chapter 3? This only applies if you try to help ChatGPT-4 write a story, rather than get ChatGPT-4 to help YOU write a story. That's different. Anyway, I've not given up trying to find a workaround, and in the meantime GPT-4 now provides internet access via Bing. This is even better than Phind. You can type all sorts of instructions in and get GPT-4 to do the searching, like so:
     User: OK, please search for advice on getting ChatGPT-4 to write a long story. List the first 10 website articles. Omit any that only help the user write a story, and only list those where the user helps ChatGPT-4 to write the story. Abandon processing any websites that give errors such as failing to access, and instead display the link here, then continue searching. If you can't find 10 articles within 10 minutes, then stop.
     ChatGPT-4: I'm sorry, I wasn't able to fully complete your request. However, I've gathered some useful information. I found an article titled "How to Write a Novel With ChatGPT (Ultimate Guide in 2023)" on WritingBeginner.com. This article provides a step-by-step guide on how to use ChatGPT to write a novel. This guide includes 13 steps, such as:
     1. Asking ChatGPT to suggest a genre and subgenre.
     2. Using ChatGPT to come up with a story premise.
     3. Expanding the premise into a back cover blurb with the help of ChatGPT.
     4. Creating an outline from the blurb with ChatGPT's help.
     I spent quite a bit of time exploring the details of this guide, which may be helpful if you're looking to use an AI like ChatGPT to assist with writing a long story. Unfortunately, I wasn't able to find 10 separate articles that specifically discuss how to help ChatGPT-4 write a long story, as per your request. The search results included a mix of different kinds of articles, and it took a significant amount of time to explore each one to determine its relevance. I'll list the URLs of the remaining pages that seemed potentially relevant, but I wasn't able to fully explore them within your time limit:
     1. https://www.artificial-intelligence.blog/news/how-to-write-a-story-with-gpt-3
     2. https://www.wired.com/story/ai-language-models-biggest-challenge-yet-long-form-content/
     3. https://www.writersdigest.com/be-inspired/how-artificial-intelligence-can-help-authors-write-a-better-novel
     4. https://www.fastcompany.com/90697938/this-is-how-a-i-will-help-you-write-your-next-novel
     5. https://towardsdatascience.com/writing-with-gpt-3-turning-prompts-into-novels-ba1d109b2a6f
     6. https://www.technologyreview.com/2023/03/20/104107/artificial-intelligence-is-now-writing-novels-heres-an-example/
     Please note that the relevance of these articles to your specific request is uncertain, since I wasn't able to fully assess them within your time limit. As for the technical question about Chrome updates, the latest version of Google Chrome as of June 8, 2023, is 112.0.5615.137/138 for Windows and Mac, and 112.0.5615.165 for Linux. I hope this information is helpful, and I apologize for not being able to fully meet your initial request.
     End... This will change searching the net forever! It's like having an assistant to do the legwork.
  21. FYI, there are now two documents that describe my current workflow for the bark subtitles associated with any AI. The "Overall" Doc: This gives a 1-page summary of how an Excel spreadsheet serves as the hub to manage and tune the inline subtitles, and flag those that need an SRT treatment. SRT is then covered in more detail, as is preparation of an FM for testing the results and releasing them for incorporation into TDM: Subtitling Overall Workflow, SRT, Testing (Jack - June 2023).docx The "Explained" Doc: This concentrates on the latest version of the spreadsheet, beginning with its instantiation from an Excel template, followed by population of the first four columns with program-generated data and with strings copied from the vocal set. Manipulation of each subtitle, and its effect on generated column info and your decision making, is then covered. Finally, export to a text file and further text editing creates the content needed to be incorporated into the testing FM: Explained - SubtitlesTemplate(v5) & AverageJackSubtitles.docx
  22. That sounds like a good proposal to me, since it places responsibility for modulating the difficulty effects fully in the player's hands. I think that is a good solution to the authorial intent problem. However, if you go that route, I think a certain amount of cleverness is needed to really make the feature successful. In most situations, whether guards remain alerted for 30 seconds, 30 minutes, or 30 years makes no difference at all to difficulty. If you successfully get away, you will have plenty of maneuvering room and/or combat tools at your disposal to deal with the heightened threat. It just requires a bit more caution and patience. And if you don't get away, or just can't be bothered, then you will reload and the guards will switch back to unalerted regardless. Absent a carefully engineered scenario where guard activity patterns adjust depending on alert state (and save scumming is inhibited somehow), this proposed behavior amounts to a cosmetic adjustment that most players will never even notice. Right now, as a practical matter, yes. There are currently some open source language models that can run on local hardware, but all the ones I have seen lack the ingenuity and self-awareness of GPT-4, which remains the gold standard. I do not think they could make very good guards. That said, I think it is reasonably likely that we could get to that point with local, consumer-grade hardware within 3-5 years. There are a lot of signs suggesting GPT-4 is poorly optimized as neural networks go. Its brain contains a lot of connections that chew up compute resources, but only a small proportion of them do anything useful.
  23. I'm using the version from kcghost. I just tested and I can't see any difference inside the inventory. On the stats itself it doesn't show the different loot types (still seen in the inventory), but instead gives more info on stealth score. Edit: I see Dragofer made an updated version of his script. I have to check that out. Edit2: That version works: https://forums.thedarkmod.com/applications/core/interface/file/attachment.php?id=21272&key=02755164a3bed10498683771fe9a0453
  24. I looked but didn't see this video posted in these forums. It's pretty cool.
  25. It wasn't a "sacrifice", it was a deliberate decision. People wanted the game to be as close as possible to the original, including pixelated graphics. If you ask me, the former version based on the Unity engine looked and felt better. But, hey... I guess I'm not the right person to judge that, as I never played the original, and always found that the art style of System Shock 2 is much better anyway. This also illustrates the issue with community funded games: Too many cooks spoil the broth. In game design, you need freedom, not thousands of people who want you to do this and this and that. Just take a look at the Steam forums and see how all those wimps complain again about everything. Hopeless.