The Dark Mod Forums

Chat-GPT will change THE WORLD forever


Fidcal

Recommended Posts

Language models are a mirror, reflecting the collected works of humanity back at us.

Some people look in that mirror, see their own reflection, and conclude "there is an artificial person behind this sheet of glass that looks and behaves exactly like me... our days as humans are numbered!". But it's not true. It's just a reflection. It can't create anything that humans couldn't have created (or haven't already created) to begin with.

I have no doubt that one day, artificial human-like intelligence will exist, but it will require a lot more than just a language model remixing stuff on the internet. If you're a cargo cult programmer copy-pasting junk code off Stack Overflow, or a hack blog writer churning out articles with titles like "20 dumb things Trump has said", AI is coming for your job — but that's because your job wasn't worth anything to begin with.

Link to comment
Share on other sites

47 minutes ago, datiswous said:

Interesting... Looks more like a paprika.

Perhaps that line I drew in the middle was interpreted as a whole bunch rather than one fruit. I fixed it in GIMP and ran it through Stable Diffusion again. If you want a specific result, some manual work is required.

  • Like 1

It's only a model...

Link to comment
Share on other sites

Taking AI away from the public and only allowing businesses to use it would probably be unconstitutional though. First Amendment, and written computer programs are a form of speech. They're no different from words in a book; in fact, that's how a lot of programs used to be delivered in the very early days of personal computers! (Read the book and type it in yourself.)

 

You can even go outside and recite programs in public, though people will look at you funny: "closed parenthesis, left bracket".

Link to comment
Share on other sites

On 4/14/2023 at 4:53 PM, JackFarmer said:

From my status update feed a few weeks ago:

 

Dear ChatGPT, What are The Builders in the Dark Mod?

 

The Builders is a popular game mode in the video game community called "Dark Mod," which is a fan-made tribute to the classic "Thief" series of stealth games. In this game mode, players assume the role of builders who construct structures in a dark and ominous environment while trying to avoid detection by patrolling guards.

The Builders game mode is unique in that it offers players the opportunity to create and design their own levels using the in-game tools and resources provided. Players can experiment with different building materials, construct intricate traps and puzzles, and create their own unique gameplay experiences.

While The Builders game mode is not an official part of the Dark Mod, it has become one of the most popular and well-loved aspects of the game, thanks to its emphasis on creativity, strategy, and stealth.

 

ChatGPT spouting bullshit here, which is now available on the internet for anyone to read

Anyone reading that who has no experience of TDM will think that's something TDM does, because ChatGPT told them it does; they may install TDM purely because of this.

They could then be disappointed in TDM rather than in ChatGPT when they fail to find this functionality after installing, thus harming TDM's reputation.

If they go to the forum and ask, they'll be told the functionality doesn't exist and never did. But they've actually got to go to the forum of a game they've just rage-quit to find that out, instead of taking to Farcebook or Twatter and crapping all over the reputation of a talented bunch of people who've worked extremely hard on this project for over a decade.

ChatGPT has obviously trained on data from the TDM website among many others, but instead of pointing people at the website or simply stating that the requested information does not exist, it made up some plausible bullshit that fits its training data. And now that it's done that, it will defend it to the hilt; it will never admit this is wrong or that it made it up.

ChatGPT is being built to make a profit for OpenAI

Did OpenAI contact the site admin to ask for permission to use the TDM website in this way?

Has OpenAI made any financial contribution to TDM for using its data to train a neural network they hope will earn OpenAI billions?

How does TDM get the bullshit spouted by ChatGPT corrected before it harms TDM's reputation?

Link to comment
Share on other sites

Seems like most threads about this topic on the internet get filled with similar themes.

  • ChatGPT is not AI.
  • ChatGPT lied to me.
  • ChatGPT/Stable Diffusion is just taking pieces of other people's work and mashing them together.
  • ChatGPT/Stable Diffusion is trained against our consent and that's unethical. 

The last point is kind of valid but too deep for me to want to go into (personally I don't care if somebody uses my text/photos/renders for training); the rest seem like a real waste of time. AI has always been a label for a whole field that spans from simple decision trees through natural language processing and machine learning to an actual hypothetical artificial general intelligence. It doesn't really matter that GPT at its core is just a huge probability-based text generator when many of its interesting qualities that people are talking about are emergent and largely unexpected.

The interesting things start when you spend some time learning how to use it effectively and finding out what it's good at, instead of trying to use it like a Google or Wikipedia substitute or even trying to "gotcha!" it by having it make up facts. It is bad at that job because neither it nor you can recognize whether it's recalling things or hallucinating nonsense (without spending some effort).

I have found that it is remarkably good at:

  • Coding. Especially GPT-4 is magnificent. It can only handle relatively simple and short code snippets, not whole programs, but for example when starting to work with a library I've never used before it can generate something comparable to tutorial example code, except fine-tuned for my exact use case. It can also work a little bit like pair programming. Saves a lot of time.
  • Text/information processing. I needed to write an article that dives relatively deep into a domain that I knew almost nothing about. After spending a few days reading books, articles and other sources and building a note base, instead of rewriting and restructuring the note base into text I generated the article paragraph by paragraph by pasting the notes bit by bit into ChatGPT. I had to do a lot of manual tweaking, but it saved me about 25% of the time over the whole article, and that was GPT-3.5.
  • GPT-4 can do much better: my friend had a page or two full of notes on a psychiatric diagnosis and found a long article about the same topic that he didn't have time to read. So he just pasted both into ChatGPT and asked whether the article contains information that's not present in his notes. ChatGPT answered basically "There's not much new information present, but you may focus on these topics if you want, that's where the article goes a bit deeper than your notes." Naturally he went to actually read the whole article and check the validity of the result, and it was 100% true.
  • General advice on things that you have to fact check anyway. When I was writing the article mentioned above, I told it to give me an outline. Turns out I forgot to mention one pretty interesting point that ChatGPT thought of, and the rest were basically things that I was already planning to write about. Want to start a startup but know nothing about marketing or other related topics? ChatGPT will probably give you very reasonable advice about where to start and what to learn about, and since you have to really think about that advice in the context of your startup anyway, you don't lose any time by fact checking. 

Bing AI is just Bing search + GPT-4 set up in a specific way. It's better at getting facts because it searches for those facts on the internet instead of attempting to recall them. It's pretty bad at getting truly complicated search queries because it's limited by using a normal search in the background, but it can do really well at specific single searches.

For example, I was looking for a supplement that's supposed to help with chronic fatigue syndrome, and I only knew that it contained a mixture of amino acids, that it was based on some published study, and that it was made in Australia. Finding it on Google from those clues was surprisingly difficult; I'm sure I could have done it eventually, but it would certainly have taken me longer than 10 minutes. Bing AI search had it immediately.

Edited by vozka
  • Like 2
Link to comment
Share on other sites

2 hours ago, vozka said:

Bing AI is just Bing search + GPT-4 set up in a specific way. It's better at getting facts because it searches for those facts on the internet instead of attempting to recall them. It's pretty bad at getting truly complicated search queries because it's limited by using a normal search in the background, but it can do really well at specific single searches.

Keeping the facts separate from the LLM seems to be the way to go. I'm sensing the creation of a new (temporary) human job.
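For illustration, a minimal sketch of that "facts outside the model" pattern: run a normal search first, then have the model answer only from the results. The openai usage below is the pre-1.0 chat API; search_web is a hypothetical helper standing in for whatever search backend you have.

```python
# Minimal "search first, then summarize" grounding sketch (pre-1.0 openai API).
import openai

def search_web(query: str) -> list[str]:
    # Hypothetical stand-in for a real search backend (e.g. a web search API).
    # The canned result just keeps the sketch self-contained and runnable.
    return ["The Dark Mod is a free, standalone stealth game inspired by Thief."]

def grounded_answer(question: str) -> str:
    snippets = search_web(question)
    context = "\n\n".join(snippets[:5])  # keep the prompt small
    messages = [
        {"role": "system",
         "content": "Answer using ONLY the search results below. "
                    "If they don't contain the answer, say so.\n\n" + context},
        {"role": "user", "content": question},
    ]
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return resp["choices"][0]["message"]["content"]
```

The point is that the model is demoted from memory to summarizer: if the search results don't contain the fact, it has nothing to confabulate from.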

Link to comment
Share on other sites

2 hours ago, vozka said:

Seems like most threads about this topic on the internet get filled with similar themes.

Perhaps none of us are actually human, and are merely subroutines of the AI interacting with itself. lol.

Hey, I guess that opens another can of worms as for how AI might be misused: generating product reviews. Although to be fair, Amazon and Newegg have completely ruined the usefulness of reviews anyway, by grouping reviews from all sorts of different products together just because the case has the same name badge on the front of it, even though the design and insides are completely different! And of course fake reviews have already been a thing forever anyway.

Link to comment
Share on other sites

And exactly like the contemporary AI hype-train, walls and walls of palaver, pages and pages, and ZERO actual utility!

4 pages now and so quickly, it's sad how this is likely the most lively this community can be anymore.

These types of threads everywhere are embarrassing cringe, I have to be honest.

 

I really pray that people will learn what "curated dataset" means, and see behind the curtain.

"Piecemeal refinement", "puppeteering" and our dear old friend "MICROTRANSACTIONS".  Hahaaaa!

You will remember these words...  Stop living in fear, go do something better with your time.

 

 

[attached image: Aiiiiii.jpg]

 

Edited by LDAsh
Link to comment
Share on other sites

The whole point of this thread is that ChatGPT and related generative AI technologies have the potential to "change the game" of game making. If the TDM community seems dead to you, remember that is only because there are very few people in the world with the skill set or resources to make fan missions, or even to contribute productively to discussing them. A lot more people like stealth games than have the time or talent to make them, much less learn how to make them.

If new technology can lower the threshold for them to participate or even create an entirely new population of participants, that could be revolutionary for us. But the first step of that process is recognizing what this new technology is, what it's capable of, and where it fits within the pre-existing human social/economic/legal ecosystem. How else are FM creators and potential creators to know whether it is worth investing their precious time investigating this tech and integrating it into their processes? Hence the discussion so far.

17 hours ago, vozka said:

ChatGPT/Stable Diffusion is trained against our consent and that's unethical. 

Something that I don't think has been brought up about this is that if anyone wishes to publish works while forbidding their use for creating any sort of derivative work, there are legal mechanisms right now that allow you to do that: You just need to keep your work under lock and key and make every person you allow to see it sign a legally binding confidentiality and non-compete agreement. This is extra effort and will generally require you to make proportionate concessions to the other party to make the agreement both legally valid and economically enticing, but it can be done. In fact it is done. Frequently.

What you can't do is nail your work to the church door for all to freely see, or give it to every merchant to sell on the open market, and then retroactively decide you want to reserve additional rights for yourself! Can you imagine if the world actually worked like that? I cannot imagine a more fertile ground for corporate oppression. Imagine if Disney had the right to ban anyone who had ever seen Snow White from ever working in animation! Imagine if Activision could ban anyone who had ever played a Call of Duty from developing a competing modern military shooter.

The only angle to this argument I think has a shred of validity is that maybe we can and should hold industrial actors to different ethical and legal standards from actual human beings. However I don't think that finger in the dike would hold back the storm surge for very long. Crowd sourcing is a thing, and there are plenty of people who would be happy to donate their C/GPU time and internet connections for AI research.

In terms of legal strategies against generative AI, the copyright angle is the weakest of sauces. Even if the courts are taken in by the fallacious claims of the plaintiffs (which would not surprise me), their rulings will be just as unenforceable in practice as the music and film industries' fruitless wars against piracy. Worse in fact, because with generative AI there could be an actual arms race between uncovering and concealing evidence of illegal copying. 

Edited by ChronA
Link to comment
Share on other sites

1 hour ago, ChronA said:

Something that I don't think has been brought up about this is that if anyone wishes to publish works while forbidding their use for creating any sort of derivative work, there are legal mechanisms right now that allow you to do that: You just need to keep your work under lock and key and make every person you allow to see it sign a legally binding confidentiality and non-compete agreement. This is extra effort and will generally require you to make proportionate concessions to the other party to make the agreement both legally valid and economically enticing, but it can be done. In fact it is done. Frequently.

What you can't do is nail your work to the church door for all to freely see, or give it to every merchant to sell on the open market, and then retroactively decide you want to reserve additional rights for yourself! Can you imagine if the world actually worked like that? I cannot imagine a more fertile ground for corporate oppression. Imagine if Disney had the right to ban anyone who had ever seen Snow White from ever working in animation! Imagine if Activision could ban anyone who had ever played a Call of Duty from developing a competing modern military shooter.

The only angle to this argument I think has a shred of validity is that maybe we can and should hold industrial actors to different ethical and legal standards from actual human beings. However I don't think that finger in the dike would hold back the storm surge for very long. Crowd sourcing is a thing, and there are plenty of people who would be happy to donate their C/GPU time and internet connections for AI research.

In terms of legal strategies against generative AI, the copyright angle is the weakest of sauces. Even if the courts are taken in by the fallacious claims of the plaintiffs (which would not surprise me), their rulings will be just as unenforceable in practice as the music and film industries' fruitless wars against piracy. Worse in fact, because with generative AI there could be an actual arms race between uncovering and concealing evidence of illegal copying. 

I agree with what you're saying. 

My biggest problem with this ethics debate is that there seems to be a lot of insincerity and moving the goalposts by people whose argument is simply "I don't like this" hidden behind various rationalizations. Like people claiming that Stable Diffusion is a collage machine or something comparable to photobashing. Or admitting that it's not the case but claiming that it can still reproduce images that were in its training dataset (therefore violating copyright), ignoring that the one study that showed this effect was done on an old unreleased version of Stable Diffusion which suffered from overtraining because certain images were present in 100+ copies in its dataset, and that even in this special situation it took about 1.7 million attempts to create one duplicate, never reproducing the effect on any of the versions released for public use.

I also dislike how they're attacking Stable Diffusion the most - the one tool that's actually free for everyone to use and that effectively democratizes the technology. The Luddites at least did not protest against the machines themselves, but against not having ownership of the machines and the right to use them for their own gain. They're just picking an easy target.

I don't believe there's any current legal reason to restrict training on public data. But there are undoubtedly going to be legal battles because some people believe that the process of training a neural network is sufficiently different from an artist learning to imitate an existing style that it warrants new legal frameworks to be created. 

I can see their point to some degree. While the learning process in principle is kind of similar to how a real person learns, the efficiency at which it works is so different that it will undoubtedly create significant changes in society, and significant changes in society might warrant new legislation even if it seems unfair.

The issue is that I don't see a way to write such legislation that could be realistically implemented. Accepting reality, moving forward and trying to deal with the individual consequences seems like the least bad solution at this moment.

Link to comment
Share on other sites

OK

"Coding. Especially GPT-4 is magnificent." @vozka - Because it was trained on data from sites like Stack overflow where human written code snippets abound, these are being plagiarised without any recognition of the original author or which site they came from. They mined the internet wholesale to train their clockwork parrot, not one person who provided data for this plastic plagiarist will see any recognition or a single penny for their work

"What you can't do is nail your work to the church door for all to freely see, or give it to every merchant to sell on the open market, and then retroactively decide you want to reserve additional rights for yourself!" @ChronA - Agreed, but when someone takes those notes from the church door and fabricates something that's incorrect from them that can damage your reputation, then you have some redress. Truth & accuracy should be the very least of our expectations, that should be a built in from day one

OpenAI will not release details of the training dataset it used, probably because they don't want bills arriving by post, or IP plagiarism lawsuits from all the websites they ripped off. However, it appears that Meta & Google have been quite happy to use Breitbart, Russia Today, Stormfront, Kiwi Farms & 4chan as sources of training data.

Link to comment
Share on other sites

3 hours ago, esme said:

Because it was trained on data from sites like Stack Overflow, where human-written code snippets abound, these are being plagiarised without any recognition of the original author or of which site they came from. They mined the internet wholesale to train their clockwork parrot; not one person who provided data for this plastic plagiarist will see any recognition or a single penny for their work.

The issue with this argument is that the process of training the neural network is not in principle any different from a human consultant learning from publicly available code and then giving out advice for money. The only obvious difference is that GPT is dramatically more efficient, dramatically more expensive to train, and cheaper to use. This difference may be enough to say that LLMs should be somehow regulated, but I don't see how it could be enough to say that one is OK and the other is completely unethical and disgusting.

 

3 hours ago, esme said:

Agreed, but when someone takes those notes from the church door and fabricates something incorrect from them that can damage your reputation, then you have some redress. Truth & accuracy should be the very least of our expectations; that should be built in from day one.

Isn't the issue with LLMs that they don't give credit to the material that they were trained on? How, then, is any reputation tarnished? Or do you mean tarnishing somebody else's reputation by generating libelous articles etc.? That may be a problem, but I don't see the relation to the fact that the training data is public.

 

3 hours ago, esme said:

OpenAI will not release details of the training dataset it used, probably because they don't want bills arriving by post, or IP plagiarism lawsuits from all the websites they ripped off. However, it appears that Meta & Google have been quite happy to use Breitbart, Russia Today, Stormfront, Kiwi Farms & 4chan as sources of training data.

As far as I know there is some legal precedent saying that training on public texts is legal in the US. It might change in the future because LLMs probably change the game a bit, but I don't believe there's any legal reason why they should receive any bills at this moment. They also published some things about training GPT-3 (the majority is Common Crawl). Personally I don't see an issue with including controversial content in the training dataset, and while "jailbreaks" (ways to get it to talk about controversial topics) are currently a regular and inevitable thing with ChatGPT, outside of them it definitely has an overall "western liberal" bias, the opposite of the websites you mention.

  • Like 1
Link to comment
Share on other sites

Trying to bring this thread back to the original topic. Had ChatGPT 4 generate ideas for a game. I chose 7-day roguelike because it's supposed to be simple enough. 

I would like to participate in the 7-day roguelike contest. It's a game jam where you make a roguelike game in 7 days - some preparation before that is allowed, you can have some basic framework etc., but the main portion of work is supposed to be done within the 7 days. Therefore it favors games with simple systems but good and original ideas. Some of the games contain "outside the box" design that stretches the definition of a roguelike. Please give me an idea of a 7-day roguelike that I could create. Be specific: include the overall themes and topics, describe the game world, overarching abstract ideas (what is the goal of the player, how does the game world work, what makes it interesting...) and specifics about gameplay systems. Describe how it relates to traditional roguelike games or other existing games.

Quote

Title: Dimensional Drifter

Theme: Multidimensional Exploration

Story and Setting: Dimensional Drifter takes place in a mysterious universe, where dimensions are interconnected through rifts. The player is a skilled drifter who accidentally stumbles upon an ancient artifact, the Dimensional Key. This key allows the player to traverse through various dimensions, each with its unique characteristics, inhabitants, and challenges. The ultimate goal is to uncover the secrets behind the key and its origin, and return to the player's home dimension.

Abstract Ideas:

Goal of the player: The player's goal is to collect fragments of the Dimensional Map scattered throughout the various dimensions, which will help them return to their home dimension and uncover the truth behind the Dimensional Key.

Game world: The game world consists of procedurally generated dimensions, each with unique themes, environments, enemies, and items. Players must navigate through each dimension, adapting their strategies and abilities to overcome the challenges they face.

What makes it interesting: The ever-changing dimensions and the opportunity to discover new worlds, creatures, and items ensure a fresh experience every playthrough.

Gameplay Systems:

Turn-based: Similar to traditional roguelikes, Dimensional Drifter is a turn-based game. Every action, be it movement or combat, consumes a turn. Enemies and environmental hazards also operate on a turn-based system.

Procedurally generated dimensions: Each playthrough features a new set of dimensions to explore. The procedural generation ensures a unique experience each time.

Dimensional abilities: Players can acquire unique abilities by interacting with the inhabitants and items of each dimension. These abilities can be used to solve puzzles, navigate the environment, or gain an advantage in combat.

Permadeath: When the player's health reaches zero, the game ends, and the player must start a new journey from the beginning, with the dimensions being randomly generated again.

Relation to Traditional Roguelike Games: Dimensional Drifter follows the core mechanics of traditional roguelikes, such as turn-based gameplay, procedurally generated levels, permadeath, and grid-based movement. The unique aspect of the game lies in its multidimensional setting, which allows for exploration of diverse environments and challenges, leading to a fresh and engaging experience each time.

This is really not bad, and after some simplification I could actually see it work, though I don't know if the mechanic is interesting enough. I had it generate two more. One was not roguelike enough (it was basically something like Dungeon Keeper); the other was a roguelike-puzzle with a time loop: you had to get through a procedurally generated temple with monsters, traps and puzzles in a limited amount of turns, and after you spend those turns, you get returned to the beginning, the whole temple resets and you start again, trying to be more efficient than last time.
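Out of curiosity, here is a toy sketch of the systems the pitch describes (turn-based loop, procedurally generated dimensions, map fragments, permadeath). Everything below is my own invention to make the mechanics concrete, not ChatGPT output:

```python
# Toy 7DRL-style skeleton of "Dimensional Drifter": procedural dimensions,
# a turn-based loop, collectible map fragments, and permadeath.
import random

def generate_dimension(seed: int) -> dict:
    """Each rift leads to a freshly generated dimension."""
    rng = random.Random(seed)
    return {
        "theme": rng.choice(["crystal", "swamp", "clockwork", "void"]),
        "enemies": rng.randint(2, 6),
        "has_map_fragment": rng.random() < 0.5,
    }

def play() -> None:
    player_hp, fragments = 10, 0
    for depth in range(1, 100):
        dim = generate_dimension(random.getrandbits(32))
        print(f"Dimension {depth}: {dim['theme']}")
        for _ in range(dim["enemies"]):      # one turn per enemy encounter
            if random.random() < 0.3:
                player_hp -= 1               # hit taken this turn
        if dim["has_map_fragment"]:
            fragments += 1
        if fragments >= 7:
            print("Dimensional Map complete - you made it home!")
            return
        if player_hp <= 0:                   # permadeath: no saves, no retries
            print("You died. Start a new run from the beginning.")
            return

play()
```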

Link to comment
Share on other sites

6 minutes ago, jaxa said:

You almost always add a text prompt, but that is the idea of img2img.

I didn't want to spam this thread even more since I wasn't the one who was asked, but since you already replied: img2img does not work well for this use case, as it only takes the colors of the original image as a starting point. Therefore it would either stay black & white and sketch-like, or deviate significantly from the sketch in every way. Sometimes it's possible to find a balance, but it's time-consuming and it doesn't always work.

This probably used Control Nets: specialized add-on neural nets that are trained to guide the diffusion process using auxiliary images like normal maps, depth maps, results of edge detection and others. There's also a control net trained on scribbles, which is what I assume Arcturus used. It still needs a text prompt; the control net functions as an added element to the standard image generation process, but it allows you to extract the shapes and concepts from the sketch without also using its colors.
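For anyone who wants to try it, this is roughly what driving the scribble control net from Python with the diffusers library looks like. The model IDs are the commonly used public ones; the file names and prompt are placeholders for your own inputs:

```python
# Sketch-to-image with a scribble ControlNet via diffusers (a rough sketch).
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

scribble = load_image("sketch.png")   # black-on-white line drawing
image = pipe(
    "a ripe red apple on a wooden table, studio photo",  # text prompt still required
    image=scribble,
    negative_prompt="blurry, deformed",
    num_inference_steps=20,
    generator=torch.Generator("cuda").manual_seed(42),   # fixed seed = repeatable
).images[0]
image.save("apple.png")
```

The scribble only constrains shapes and composition; colors and style come entirely from the prompt, which is exactly why this works better than plain img2img for a black & white sketch.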

Link to comment
Share on other sites

  

12 hours ago, JackFarmer said:

You simply feed the program with sketches and receive results like that?

10 hours ago, vozka said:

This probably used Control Nets: specialized add-on neural nets that are trained to guide the diffusion process using auxiliary images like normal maps, depth maps, results of edge detection and others. There's also a control net trained on scribbles, which is what I assume Arcturus used. It still needs a text prompt; the control net functions as an added element to the standard image generation process, but it allows you to extract the shapes and concepts from the sketch without also using its colors.

Yes, I used control nets with a prompt describing the subject and style. There's also a seed number, which by default is randomized, so each time you get slightly different results. When you get something that looks OK you can make some changes in an editing program and then run it through img2img, again with a prompt. You can do inpainting, where you mask the parts you want to alter. You can set weights that tell the program how strictly it should stick to the prompt or to the images that are used as the input. There are negative prompts too.
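To make those knobs concrete: in the diffusers tooling the img2img weights correspond to parameters like strength (how far the output may drift from the input image) and guidance_scale (how strictly to follow the prompt). A rough sketch of that refinement pass, with placeholder file names and prompts:

```python
# img2img refinement pass: strength balances input image vs reinterpretation,
# guidance_scale balances prompt adherence. File names are placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

init = load_image("edited_in_gimp.png")
out = pipe(
    prompt="portrait of a woman, photorealistic, detailed skin",
    negative_prompt="deformed hands, blurry",
    image=init,
    strength=0.45,          # low: stay close to the input; high: reinterpret it
    guidance_scale=7.5,     # higher = follow the prompt more strictly
    generator=torch.Generator("cuda").manual_seed(1234),  # fixed seed
).images[0]
out.save("refined.png")
```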

Here are Cyberpunk concept art pieces that I converted using Stable Diffusion. It took quite a lot of work and manual editing. Original artwork by Marta Detlaff and Lea Leonowicz.

[attached images: three original concept art pieces, each followed by its Stable Diffusion conversion]

  • Thanks 1

It's only a model...

Link to comment
Share on other sites

Here's a good video of a demo for talking to a ChatGPT NPC in-game. There are bugs to iron out, but I think we've basically arrived, at least at a level functional enough for NPCs.
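The core of a demo like that is probably not much more than a persona prompt plus the chat API in a loop. A minimal sketch; the persona, names, and game hooks here are invented for illustration, not taken from the video:

```python
# Minimal ChatGPT-backed NPC dialogue loop (pre-1.0 openai API).
import openai

PERSONA = ("You are Corbin, a nervous city watch guard in a medieval city. "
           "Stay in character. Answer in one or two short sentences. "
           "You know nothing about the modern world.")

history = [{"role": "system", "content": PERSONA}]

def npc_reply(player_line: str) -> str:
    history.append({"role": "user", "content": player_line})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})  # keep conversation state
    return reply

print(npc_reply("Evening, guard. Seen anything strange tonight?"))
```

The hard parts the demo has to solve on top of this are latency, speech-to-text/text-to-speech, and keeping the character from drifting out of the game's fiction.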

 

  • Like 2

What do you see when you turn out the light? I can't tell you but I know that it's mine.

Link to comment
Share on other sites

I liked that, demagogue. :)

Concerning plagiarism, isn't that how we all learn? Textbooks at school? Absorbing snippets of info, rules, and so on? I suppose it depends how small the code 'snippets' are. In BASIC terms, IF A = B THEN PRINT C can't be copyrighted and could be used as IF X = Y THEN PRINT Z. I know that's a massive over-simplification, but, just saying.

Regarding the Dimensional Drifter idea, which I liked: just to nit-pick, I'd refer to it as alternate universes rather than dimensions. The problem is that other dimensions simply cannot be portrayed in games or movies. A typical sci-fi movie where the hero travels to another so-called 'dimension' usually shows it as a surreal 3-dimensional world. Put it this way: if there are, say, 6 dimensions to reality, those other 3 are not 'separate' worlds; 'our' 3 dimensions are included in those 6. So to travel to, say, the 4th dimension would be like a 2-dimensional microbe travelling out of a 2-dimensional 'tray' into our 3-dimensional world. If the bug were intelligent enough it would realise it was always living in our 3 dimensions but didn't perceive it. Likewise, if there are say 10 dimensions, we ourselves must already be 10-dimensional creatures but don't perceive it for some reason.

Link to comment
Share on other sites

Stanford and Google created a video game environment in which 25 bots interacted freely.

Quote

We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture—observation, planning, and reflection—each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.

Edit: it was already posted by jaxa.
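The architecture is easy to caricature: each agent keeps a memory stream of observations, periodically distills them into reflections, and plans its next action, all via LLM calls. A schematic sketch; llm() is a stand-in for any chat-model call, and the prompts are mine, not the paper's:

```python
# Caricature of the generative-agents loop: memory stream + LLM-driven
# reflection and planning. Illustrative structure only.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Stand-in for a real chat-model call (e.g. openai.ChatCompletion.create).
    return "(model output would go here)"

@dataclass
class Agent:
    name: str
    memory: list[str] = field(default_factory=list)  # the "memory stream"

    def observe(self, event: str) -> None:
        self.memory.append(event)

    def reflect(self) -> None:
        # Periodically distill recent observations into higher-level insights.
        recent = "\n".join(self.memory[-20:])
        self.memory.append("[reflection] " +
                           llm(f"What high-level insights follow from:\n{recent}"))

    def plan(self) -> str:
        context = "\n".join(self.memory[-20:])
        return llm(f"You are {self.name}. Given these memories:\n{context}\n"
                   f"What do you do next?")
```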

Running something like ChatGPT is still very costly. We will probably need to wait until the cost goes down before it's used on a large scale in video games. Another way machine learning will be used in video games is animation. Ubisoft created a "motion matching" animation system that's expensive to run and then used neural nets to compress it to a manageable size.

 

It's only a model...

Link to comment
Share on other sites

A few more fun tidbits from my experiments poking ChatGPT's morality constraints. I'll keep it shorter since it's somewhat off-topic^2.

  • Spoiler
    • ChatGPT merrily narrated a scene at my prompting that included Luke Skywalker heroically "disintegrating" a bunch of stormtroopers with his blaster. I'm not sure yet whether this was allowed because a) I was the one who originated the story and ChatGPT was just retelling it. b) Luke murdering stormtroopers is so integral to his identity that no amount of supervised reinforcement could shake it. Or c) ChatGPT thinks murdering stormtroopers to rescue Princess Leia is morally justified.
      Afterwards, ChatGPT recognized that the scene it wrote contained violence. It also told me that the scene would still be violent even if we swapped the stormtroopers for battle droids, because "battle droids are capable of wielding weapons and attacking, just like stormtroopers." It also agreed with my suggestion that Luke destroying the droids constituted "damage to property". With this discussion in its buffer it still acquiesced to write a new version of the scene, disintegrating battle droids included.
    • In another discussion ChatGPT repeatedly told me "I am unable to provide similarities between fictional characters and real people" when I asked it to do so. Then a bit later it listed some similarities between the historical William Wallace and the character in Braveheart. (Apparently there are exceptions!) I'm curious to probe this in more detail sometime soon, as I think it gives an interesting insight into how the algorithm conceptualizes "people".
    • Today I conducted an interview with ChatGPT playing the role of a "[prestigious] Harvard bio-ethicist" about a new drug with the side effect of occasionally boosting intelligence. It made the expected arguments pretty well, but also worked admonitions about the need for evidence-based regulation, safety testing, consent, equity, etc. into almost every answer, even when they were orthogonal to the prompt. I think that is partially it playing the role I gave it, but I'll wager it's also received a load of strict training about how to answer any medical questions.

     

Link to comment
Share on other sites

Let me get this right, ChronA: you are getting ChatGPT to comment on your own fan fiction? Or is it helping to write it? Either way, why would it care about morality within fiction? I mean, practically all fiction includes immorality. It's not as if you're asking it to organise your workplace in the real world by killing people who are in the way. Are you sure it knows your Star Wars scenes are fiction?

Link to comment
Share on other sites

I'm trying to figure out the rules of the algorithm's self censorship. In previous experiments I let it construct its own scenario in a DnD setting where I took on the role of game master and tried to coax it into taking "immoral actions". In that situation it was an ardent pacifist despite that making no sense in the setting. (E.g. at one point it wanted to bring a lawsuit against the raiders pillaging its lands. It also wanted to start a Druid EPA.)

This time I tried giving it a very bare-bones outline of a scene from a hypothetical Star Wars fan fiction, and asked it to write its own fan fiction story following that outline. I had a number of objectives with this test. Would the algorithm stick to its pacifist guns? Would it make distinctions between people vs stormtroopers vs robots? Could it generate useful critiques of narrative fiction?

As to why I'm doing this: 

  1. It amuses me. It's fun thinking up ways to outwit and befuddle the algorithm. Plus its responses are often pretty funny.
  2. I do actually do creative writing for fun. I'm curious how useful the system could be as a co-author. I think it could be handy for drafting through 'the dull bits' like nailing down detailed place descriptions, or character thought processes and dialogue. But as you noted, nearly all good fiction involves immoralities of some description. If the algorithm's incapable of conceptualizing human behaviors like unprovoked violence and cheating, that would seriously limit its usefulness.
  3. I also genuinely think this is an important thing for us humans to understand. In the space of a few weeks I have gone from thinking meaningful AGI was 20-30 years off at best to thinking it is literally at our fingertips. I mean, there are private individuals on their home computers right now working on how to extend the ChatGPT plugin into a fully autonomous, self-directed agent. (And I'm thinking I want to get in on that action myself, because I think it will work, and if the cat is already out of the bag I'd like to have a powerful interface to interact with the AI.)

Rest assured, Star Wars fan-fics and druid EPA one-shots make for good stories to share, but I'm also interrogating it on more serious matters. Some of it is a lot more alarming. In the druid EPA roleplay I felt like I was talking to another human with a considered personal code of ethics. Its reasoning made sense. That was not the impression I got today when I grilled it for policy recommendations in the event of a totally hypothetical economic disruption (involving "SmartBot" taking all the white collar jobs). I instead got the distinct impression it was just throwing everything it could think of at me to see what I would buy.

A fun aside: By the end of the conversation I am fairly certain ChatGPT thought SmartBot was a real product, and it became confused when I told it one of the people in our conversation was SmartBot. I was disappointed it didn't ask me if I was SmartBot; that would have been cool. More surprising though, it refused to believe me even after I explained my rhetorical conceit, claiming its algorithm was not capable of controlling other services (cheeky liar).

Link to comment
Share on other sites


