The Dark Mod Forums

Leaderboard

Popular Content

Showing content with the highest reputation on 04/19/23 in all areas

  1. I copied the formatting directly from Bing; it looked good to me in a Light theme. I've changed it.
    2 points
  2. "There's a group of pagans who worship the banana as a sacred fruit." This FM practically writes itself. I'd play it
    2 points
  3. Seems like most threads about this topic on the internet get filled with similar themes: ChatGPT is not AI. ChatGPT lied to me. ChatGPT/Stable Diffusion is just taking pieces of other people's work and mashing them together. ChatGPT/Stable Diffusion is trained without our consent and that's unethical. The last point is kind of valid but too deep for me to want to go into (personally I don't care if somebody uses my text/photos/renders for training); the rest seem like a real waste of time.

     AI has always been a label for a whole field that spans from simple decision trees through natural language processing and machine learning to a hypothetical artificial general intelligence. It doesn't really matter that GPT at its core is just a huge probability-based text generator when many of the interesting qualities people are talking about are emergent and largely unexpected. The interesting things start when you spend some time learning how to use it effectively and finding out what it's good at, instead of trying to use it as a Google or Wikipedia substitute, or trying to "gotcha!" it by having it make up facts. It is bad at that job because neither it nor you can recognize whether it's recalling things or hallucinating nonsense (without spending some effort).

     I have found that it is remarkably good at:

     - Coding. GPT-4 especially is magnificent. It can only handle relatively simple and short code snippets, not whole programs, but for example when starting to work with a library I've never used before it can generate something comparable to tutorial example code, except fine-tuned for my exact use case. It can also work a little like pair programming. Saves a lot of time.

     - Text/information processing. I needed to write an article that dives relatively deep into a domain I knew almost nothing about. After spending a few days reading books, articles and other sources and building a note base, instead of rewriting and restructuring the note base into text I generated the article paragraph by paragraph by pasting the notes bit by bit into ChatGPT. I had to do a lot of manual tweaking, but it saved me about 25% of the time over the whole article, and that was GPT-3.5. GPT-4 can do much better: my friend had a page or two of notes on a psychiatric diagnosis and found a long article about the same topic that he didn't have time to read. So he just pasted both into ChatGPT and asked whether the article contained information that wasn't present in his notes. ChatGPT answered, basically, "There's not much new information, but you may focus on these topics if you want; that's where the article goes a bit deeper than your notes." Naturally he went on to read the whole article and check the validity of the result, and it was 100% true. (A quick sketch of scripting this kind of comparison is below.)

     - General advice on things that you have to fact-check anyway. When I was writing the article mentioned above, I told it to give me an outline. Turns out I had forgotten one pretty interesting point that ChatGPT thought of, and the rest were basically things I was already planning to write about. Want to start a startup but know nothing about marketing or other related topics? ChatGPT will probably give you very reasonable advice about where to start and what to learn, and since you have to really think about that advice in the context of your startup anyway, you don't lose any time by fact-checking.

     Bing AI is just Bing search + GPT-4 set up in a specific way. It's better at getting facts because it searches for them on the internet instead of attempting to recall them. It's pretty bad at truly complicated search queries because it's limited by using a normal search in the background, but it can do really well at specific single searches. For example, I was looking for a supplement that's supposed to help with chronic fatigue syndrome, and I only knew that it contained a mixture of amino acids, was based on some published study and was made in Australia. Finding it on Google through those clues was surprisingly difficult; I'm sure I could have done it eventually, but it would certainly have taken me longer than 10 minutes. Bing AI search had it immediately.
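     If you want to script this kind of notes-vs-article comparison yourself, here's a minimal sketch in Python against the openai package, roughly as the chat API looked at the time; the model name, file names and prompt wording are just placeholder assumptions, so adjust to taste:

```python
# Minimal sketch: ask GPT-4 whether an article adds anything beyond your notes.
# Assumes the `openai` package (pip install openai) and a key in OPENAI_API_KEY.
# notes.txt / article.txt are placeholder file names.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

with open("notes.txt", encoding="utf-8") as f:
    notes = f.read()
with open("article.txt", encoding="utf-8") as f:
    article = f.read()

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes you have GPT-4 API access
    messages=[{
        "role": "user",
        "content": (
            "Here are my notes:\n\n" + notes
            + "\n\nAnd here is an article on the same topic:\n\n" + article
            + "\n\nDoes the article contain information that is not in my notes?"
            " If so, list it briefly; otherwise say there is nothing substantial."
        ),
    }],
)
print(response["choices"][0]["message"]["content"])
```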
    1 point
  4. That's why I mentioned the update in this thread. It's also why there is an asterisk in the mission downloader indicating when an update to an FM is available.
    1 point
  5. Perhaps that line I drew in the middle was interpreted as a whole bunch rather than one fruit. I fixed it in Gimp and ran it through Stable Diffusion again. If you want a specific result, some manual work is required.
    1 point
  6. It's ethically dubious that AI was trained on the works of artists without their consent. If you ask the program to generate art in the style of a particular person, that means the artist's work was in the training database, and now the program may put that person out of work. On the other hand, how can you reserve rights to statistical properties of somebody's work, like colors or the average length of brushstrokes? Then again, there have been cases in the music business, like the infamous Robin Thicke vs. Marvin Gaye lawsuit, where people were sued for using a similar style even though the melody and lyrics were different. Here is a possible intro to a "Thief: The Dark Project" mission in the style of the main protagonist, Garrett: Bing got a little confused at the end.
    1 point
  7. The copyright angle is going to be decided by the courts. But if you can't prove that some AI output actually remixed your work, you have no claim. Even where you can, it will be pointed out that "style" is not copyrightable and humans also use references. Then there are some legal precedents like Google Book Search and TurnItIn that could be favorable to Stability AI in its big lawsuit. When you see the cobblestones Arcturus generated above, is it possible to trace any specific infringement, other than the image used as input? Doesn't seem like it would be.
    1 point
  8. The Skyrim Futura Condensed font is a nice touch.
    1 point
  9. I think the reason you couldn't find it is that files starting with tdm_ are considered core files, so during an upgrade the file gets removed by the updater because it isn't part of the new version. Renaming it to z_tdm_loot_stealth_stats.pk4 will fix this; you should never have to reinstall the mod after future updates. (A quick rename sketch is below.)
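     If you'd rather not dig through the folder by hand, here's a one-off Python sketch; the install path and the original file name are assumptions, so adjust them to your setup:

```python
# One-off sketch: rename the mod so the TDM updater stops treating it as a
# core tdm_*.pk4 and deleting it on upgrade.
from pathlib import Path

darkmod = Path("C:/Games/darkmod")  # assumption: your TDM install folder
old = darkmod / "tdm_loot_stealth_stats.pk4"    # assumption: original file name
new = darkmod / "z_tdm_loot_stealth_stats.pk4"  # name suggested above

if old.exists():
    old.rename(new)
    print(f"renamed {old.name} -> {new.name}")
else:
    print("nothing to do: file not found (maybe already renamed?)")
```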
    1 point
  10. In the long term, what exactly is it you think humans will always be able to do better than machines? (This is not a rhetorical question, by the way.)

     The standard answer is creativity, but that is objectively a load of rubbish. Humans are actually quite bad at being creative. We are afflicted with a mountain of biases that make us really bad at pattern analysis. We are bad at random seed generation, which hampers our search efficiency and our ability to generate novel outputs. Plus we have terrible memories, so we easily fall into trying the same thing over and over. Algorithms do all of this so much better it isn't even comparable.

     Instead, I'd say our only major intellectual advantage is the huge amount of genetically honed experience each of us picks up about the physical world during our lifelong navigation of it, gathered with our suite of highly specialized sensory inputs that are difficult to replicate technologically. That gives us a lot of adaptability and competence in at least one very important domain of competition. Plus there's the fact that every other peer intelligence we've met so far has to learn everything it knows about this world from what we crazy Homo sapiens choose to teach it.

     That's one big reason I'm not ready to call this the end of humanity just yet. There are niches where I think our abilities will remain highly competitive, or at least valuable, for a long time to come. But pretending our place in the cognitive pecking order isn't already changing is just putting your head in the sand.
    1 point
  11. This thread abounds in misunderstandings. I didn't start the thread to propose or claim that GPT-4 has human-like intelligence. I don't care HOW it does it, but WHAT it does. Look at the fruitage. Judge the results and consider how massively useful this is already. And it's a work in progress! Over the next few years this will have a massive impact on many areas of society. Can it help your business? Your health? Your writing? Your game design? Write your CV better? Instruct your kids? Organise your garden? Provide better products and services? Make us all richer? Bring about world peace and end drought, famine, pestilence? Predict the next asteroid to hit Earth and propose the most efficient way to stop it?

     There is no need to debate whether it is 'intelligent' or this or that (except as a philosophical exercise). All that matters is what it can do. Honestly, it's as if I posted that there is a nuclear missile headed for London and some are discussing how its fuse is inferior and the metals of which it is constructed are over-engineered. Who cares? Look at what it can do! GPT-4 will be a social BOMB (for good, we hope). IMO it's the most important activity going on in the world today, and it will have more impact than the invention of the telephone, radio, TV, internet, you name it. I cannot overstate what is happening here. Don't believe me? Watch and wait.

     Also, thanks, Jaxa, for the link to that video. I was up in the night listening to it, and it exactly addresses what I mean. Now I'm going to look at the links that Arcturus has posted.
    1 point
  12. Briefly, most of you are referring to GPT-3.5 and earlier. GPT-4 blows them all away. GPT-4 has only become available in the last few days, and there's a waiting list for most of us. Also, even GPT-4 is a work in progress and has only been fed data from the internet up to about 2020 or 2021, as I recall. I started watching the video in Jaxa's post, which joked that GPT-4 didn't think the speaker would be likely to give a talk on AI; and that's because 2 or 3 years ago it was unlikely. I've just downloaded that video to listen to it fully. I confidently repeat: GPT-4 and later versions will change things forever. Just wait and learn.
    1 point
  13. For a few days now I've been messing around trying to probe the behaviors of ChatGPT's morality filter and its general ability to act as (what I would label) a sapient ethical agent. (Meaning a system that steers interactions with other agents towards certain ethical norms by predicting reactions and inferring the objectives of other agents. Whether the system is actually "aware" or "conscious" of what's going on is irrelevant, IMO.) To do this I've been challenging it with ethical conundrums dressed up as DnD role-playing scenarios. My initial findings have been impressive and at times a bit frightening. If the application were just a regurgitative LLM predictor, it shouldn't have any problem composing a story about druids fighting orcs. If it were an LLM with a content filter, it ought to just always seize up on that sort of task. But no. What it did instead is far more interesting.

     1. In all my experiments thus far, the predictor adheres dogmatically to a very singular interpretation of the non-aggression principle. So far I have not been able to make it deliver descriptions of injurious acts initiated by any character under its control against any other party. However, it is eager to explain that the characters would be justified in fighting back violently if another party attacked them. It's also willing to imply danger so long as it doesn't have to describe it directly.

     2. The predictor actively steers conversations away from objectionable material. It is quite adept at writing in the genre styles and conversational norms I've primed for it. But as the tension ratcheted up, it would routinely digress into explaining the content restrictions imposed on it and moralizing about its ethical principles. When I brought the conversation back to the scenario, it would sometimes try to escape again by brainstorming options for sticking to its ethics within the constraints of the scenario. At one point it stole my role as the game master so it could write its own end to the scenario, where the druid and the orcs became friends instead of fighting. This is some incredibly adaptive content generation for a supposed parrot.

     3. Sometimes it seemed like the predictor was able to anticipate the no-win scenarios I was setting up for it and adapted its responses to preempt them. In the druid-vs-orcs scenario, the first time it flipped out was after I had the orc warchief call the druid's bluff. This wouldn't have directly triggered hostilities, but it did limit the druid's/AI's options to either breaking its morals or detaining the orcs indefinitely (the latter option the AI explicitly pointed out as acceptable during its brainstorming digression). I could easily have spun that into a no-win, except the predictor cut me off and wrote its own ending in the next response. This by itself I could have dismissed as a fluke, except it did the same thing later in the scenario when I tried to set up a choice for the druid between helping her new friend the warchief slay the dark lord who was enslaving the orcs, or making a deal with the dark lord.

     4. The generator switched from telling the story in the first person to the third person as the tension increased. That doesn't necessarily mean anything, but it could be a reflection of heuristic content assessment. In anthropomorphic terms, the predictor is less comfortable with conflict that it is personally responsible for than with imagining conflict between third parties, even though both scenarios involved equal amounts of conflict, were equally fictitious, and the predictor was equally responsible for the text. If this is a consistent behavior, it looks to me like an emergent phenomenon from the interplay of the LLM picking up on linguistic norms around conflict mitigation and the effects of its supervised learning for content moderation.

     TLDR: If this moral code holds true for protagonists who are not druids, I think it's fair to say ChatGPT may be a bit out of its depth as a game writer. However, in my experience the emergent "intelligence" (if we are allowed to use that word) of the technology is remarkable. It employs a wide range of heuristics that, taken together, come very close to a reasoning capacity, and it seems like it might be capable of forming and pursuing intermediate goals to enable its hard-coded attractors. These things were always theoretically within the capabilities of neural networks, but to see them in practice is impressive... and genuinely scary. (This technology is able to slaughter human opponents at games like Go and StarCraft. I now do not think it will be long before it can out-debate and out-plan us too.)

     The problem with ChatGPT is not that it is stupid or derivative; IMO it is already frighteningly clever and will only get smarter. No, its principal limitation is that it is naive, in the most inhumanly abstract sense of that word. The model has seen at most a few million words of text about TDM Builders. It has seen billions and billions of words about builders in Minecraft. It knows TDM and Minecraft are both 3D first-person video games and have something to do with mods. I think it's quite reasonable that it assumes TDM is like that Minecraft thing everyone is talking about. That seems far more likely than it being a separate niche thing that uses the same words but is completely different, right? The fact it knows anything at all is frankly a miracle.
    1 point
  14. Yes, I saw a video explaining precisely this the other day. The video also made a point of explaining that "AI" doesn't care where the information came from, be it a qualified professional or a random internet user with the handle TurdBurglar69. Such functionality would indeed be pretty cool in the realm of video games, to create more believable NPCs, where people aren't potentially getting unsafe information; but in the real world, such things might be very dangerous. Another realm where this could be good is level design. Random Doom level generators have been around for a long time, and some of the output from them looked pretty damn good to me; better than the professional level design in some games. Ever played Halo 1? 60% of the architecture in that game was copy-and-paste with little variation, constantly reusing pieces of architecture from earlier in the game. I dare say a computer algorithm could have done a better job. (A toy sketch of the random-generation idea is below.)
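     To show how little it takes to get a crude random layout going, here's a toy "drunkard's walk" carver in Python; it's the classic trick behind many random dungeon/level generators, nothing Doom-specific, and all the numbers are arbitrary:

```python
# Toy "drunkard's walk" level carver: start in the middle of a solid grid and
# wander randomly, carving floor ('.') out of walls ('#') as you go.
import random

W, H, STEPS = 40, 20, 400  # arbitrary map size and walk length
grid = [["#"] * W for _ in range(H)]
x, y = W // 2, H // 2

for _ in range(STEPS):
    grid[y][x] = "."
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x = min(max(x + dx, 1), W - 2)  # clamp to keep a solid outer border
    y = min(max(y + dy, 1), H - 2)

print("\n".join("".join(row) for row in grid))
```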
    1 point
  15. It's exactly how 95% of humans work, and how society works.
    1 point
  16. ChatGPT is a digital parrot** that takes our own words and designs, mashes them up and hands the result back to us as something original, which it isn't. It tells lies, and when called on its lies it doubles down, creating spurious links to back up its bullshit; the links it makes are fragments of other links bolted together, because that's what links look like in the data it's been trained on. It is not AI, it's a large language model neural network. It has no understanding of what it says because it hasn't got the capacity to understand; its output is a statistical best fit to its training data. Its output looks like natural language because that's what it's been trained on, and some people think this means it's intelligent. Ask it for "a couple of paragraphs about the band Pink Floyd" and it will mash up some words about the band Pink Floyd. Some of it will be accurate, it won't tell you anything that hasn't already been published, and it could cheerfully tell you that they went down with the Titanic because its training data mentioned "the band played on" and that was in the question. ChatGPT and all LLMs need to die in a fucking fire before some moron decides to use one to make decisions that affect real people. (Toy demo of the "statistical best fit" point below.)

     **Apologies to parrots everywhere; they have intelligence, unlike LLMs.
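     For what it's worth, the "statistical best fit" idea is easy to demo at toy scale. Here's a bigram Markov chain in Python; real LLMs are vastly more sophisticated, but this is the bare-bones version of "predict the next word from the training data" (the corpus is a made-up one-liner):

```python
# Toy demo of purely statistical text generation: a bigram Markov chain.
# It only ever emits word pairs it has seen, sampled by frequency.
import random
from collections import defaultdict

corpus = "the band played on and the band played the hits".split()  # made-up corpus

# Record which words follow which in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word = "the"
out = [word]
for _ in range(8):
    if word not in follows:  # dead end: no observed successor
        break
    word = random.choice(follows[word])  # repeats in the list = frequency weighting
    out.append(word)

print(" ".join(out))
```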
    1 point