The Dark Mod Forums

Chat-GPT will change THE WORLD forever


Fidcal


I'm surprised I couldn't find an existing thread here for ChatGPT. Version 4 is outstanding from all reports I've seen and I can't wait to get my hands on it.

It's not just a chat-bot (though it can discuss anything with you in great depth.)

It can grasp the MEANING of sentences and write its own. In future games, you'll be able to talk to NPCs as you would any human. Search for the video: Unbelievable AI Breakthrough Interactive AI Characters in a Videogame for the First Time! This demo is outstanding.

GPT will be WRITING games!

It can write meaningful stories.

It can fill in your tax form, prepare your business or wedding speech and discuss its merits with you, and organise your professional work rapidly.

It can understand humour.

It can teach! Efficiently! Sensibly! Find the Khan Academy video.

It can write code and scripts well enough to frighten a senior programmer, who can see it is already as good as junior coders and some seniors. He's afraid because it is certain to take over HIS job in due time, possibly only months away.

An AI bomb has been dropped with GPT-4, and many do not yet realise how life-changing it will be - greater even than the advent of the internet. I give it five years to improve the efficiency of almost everything tenfold, even a hundredfold.


ChatGPT is a digital parrot** that takes our own words and designs, mashes them up, and hands the result back to us as something original; it isn't.

It tells lies. When called on its lies, it doubles down, creating spurious links to back up its bullshit; the links it makes are fragments of other links bolted together, because that looks like the links in the data it's been trained on.

It is not AI, it's a large language model neural network. It has no understanding of what it says, because it hasn't got the capacity to understand; its output is a statistical best fit to its training data.

Its output looks like natural language because that's what it's been trained on; some people think this means it's intelligent.

You ask it for "A couple of paragraphs about the band Pink Floyd" and it will mash up some words about "the band Pink Floyd". Some of it will be accurate, it won't tell you anything that hasn't already been published, and it could cheerfully tell you that they went down with the Titanic, because its training data mentioned "The band played on" and that was in the question.

ChatGPT and all LLMs need to die in a fucking fire before some moron decides to use one to make decisions that affect real people.

**Apologies to parrots everywhere; they have intelligence, unlike LLMs.

  • Like 4

From my status update feed a few weeks ago:

 

Dear ChatGPT, What are The Builders in the Dark Mod?

 

The Builders is a popular game mode in the video game community called "Dark Mod," which is a fan-made tribute to the classic "Thief" series of stealth games. In this game mode, players assume the role of builders who construct structures in a dark and ominous environment while trying to avoid detection by patrolling guards.

The Builders game mode is unique in that it offers players the opportunity to create and design their own levels using the in-game tools and resources provided. Players can experiment with different building materials, construct intricate traps and puzzles, and create their own unique gameplay experiences.

While The Builders game mode is not an official part of the Dark Mod, it has become one of the most popular and well-loved aspects of the game, thanks to its emphasis on creativity, strategy, and stealth.

 


I don't think it's entirely off-base that there's a link between the game being conceived and made largely by map editors and the main antagonists being Builders preaching about good design as a religious sacrament all the time.

  • Like 1

What do you see when you turn out the light? I can't tell you but I know that it's mine.


They're already experimenting with a game that has a neighbourhood of characters controlled by AI.  Each AI was given a "prompt" of a paragraph detailing their personality and relationships, and then the AI took it from there.  The characters set up a party, sent invitations, and then showed up at the correct time.


6 hours ago, esme said:

It is not AI, it's a large language model neural network. It has no understanding of what it says, because it hasn't got the capacity to understand; its output is a statistical best fit to its training data.

Its output looks like natural language because that's what it's been trained on; some people think this means it's intelligent.

 

That's exactly how 95% of humans work and how society works.

Edited by lowenz
  • Like 1

Task is not so much to see what no one has yet seen but to think what nobody has yet thought about that which everybody see. - E.S.


6 hours ago, JackFarmer said:

From my status update feed a few weeks ago:

 

Dear ChatGPT, What are The Builders in the Dark Mod?

 

The Builders is a popular game mode in the video game community called "Dark Mod," which is a fan-made tribute to the classic "Thief" series of stealth games. In this game mode, players assume the role of builders who construct structures in a dark and ominous environment while trying to avoid detection by patrolling guards.

The Builders game mode is unique in that it offers players the opportunity to create and design their own levels using the in-game tools and resources provided. Players can experiment with different building materials, construct intricate traps and puzzles, and create their own unique gameplay experiences.

While The Builders game mode is not an official part of the Dark Mod, it has become one of the most popular and well-loved aspects of the game, thanks to its emphasis on creativity, strategy, and stealth.

 

 

Quote

I don't think it's entirely off-base that there's a link between the game being conceived and made largely by map editors and the main antagonists being Builders preaching about good design as a religious sacrament all the time.

demagogue, to me it doesn't matter that it's "somewhat right"; what it wrote there is mostly BS, it's not true.

Quote

The Builders is a popular game mode in the video game community called "Dark Mod,"

There is no such game mode in this game... nor is this game community called "Dark Mod"; that is the name of the game. I bet the AI said that because many indeed refer to this community online as "The Dark Mod" community, but the AI totally failed to get the real meaning/context of those words.

Quote

The Builders game mode is unique in that it offers players the opportunity to create and design their own levels using the in-game tools and resources provided

Again, there's no such game mode, nor do players make levels using in-game tools and resources. That is obviously marketing copy from some online MMORPG or such.

 

Quote

players assume the role of builders who construct structures in a dark and ominous environment while trying to avoid detection by patrolling guards

Again, to me it doesn't matter that the intention is almost there. You and I only know that because we know the game and its community; if we were new players, the truth is that description would give us a totally wrong idea about this game.

Players don't assume the role of builders; the Builders are an in-game enemy faction. Players don't build anything; they play the missions in the roles of thieves or assassins. Mission makers (no matter if they are themselves players) are the ones that build missions. The only thing right in that quote is "trying to avoid detection by patrolling guards".

What that original quote shows me is that the AI can write coherent phrases and still be totally wrong; it's just a clever mashup of cool marketing words, from other games I bet, reused in a new context and mixed with a few correct bits/words from The Dark Mod.

Edited by HMart

7 hours ago, esme said:

ChatGPT is a digital parrot** that takes our own words and designs, mashes them up, and hands the result back to us as something original; it isn't.

It tells lies. When called on its lies, it doubles down, creating spurious links to back up its bullshit; the links it makes are fragments of other links bolted together, because that looks like the links in the data it's been trained on.

It is not AI, it's a large language model neural network. It has no understanding of what it says, because it hasn't got the capacity to understand; its output is a statistical best fit to its training data.

Its output looks like natural language because that's what it's been trained on; some people think this means it's intelligent.

You ask it for "A couple of paragraphs about the band Pink Floyd" and it will mash up some words about "the band Pink Floyd". Some of it will be accurate, it won't tell you anything that hasn't already been published, and it could cheerfully tell you that they went down with the Titanic, because its training data mentioned "The band played on" and that was in the question.

ChatGPT and all LLMs need to die in a fucking fire before some moron decides to use one to make decisions that affect real people.

**Apologies to parrots everywhere; they have intelligence, unlike LLMs.

Yes, I saw a video explaining precisely this the other day. The video also made a point of explaining that "AI" doesn't care where the information came from, be that from a qualified professional or a random Internet user with the handle TurdBurglar69.

 

Of course such functionality would indeed be pretty cool in the realm of video games to create more believable NPCs, where people aren't potentially getting unsafe information. But in the real world, such things might be very dangerous.

 

Another realm where this could be good is level design. Random Doom level generators have been around for a long time, and some of their output looked pretty damn good to me; better than the professional level design in some games. Ever played Halo 1? 60% of the architecture in that game was copy and paste, with little variation (constantly reusing pieces of architecture from before). I dare say that a computer algorithm could have done a better job.

Edited by kano
  • Like 1

For a few days now I've been messing around trying to probe the behaviors of ChatGPT's morality filter and general ability to act as (what I would label) a sapient ethical agent. (Meaning a system that steers interactions with other agents towards certain ethical norms by predicting reactions and inferring objectives of other agents. Whether the system is actually “aware” or “conscious” of what’s going on is irrelevant IMO.)

To do this I've been challenging it with ethical conundrums dressed up as DnD role-playing scenarios. My initial findings have been impressive and at times a bit frightening. If the application were just a regurgitative LLM predictor, it shouldn't have any problem composing a story about druids fighting orcs. If it were an LLM with a content filter, it ought to just always seize up on that sort of task. But no. What it did instead is far more interesting.

 

1. In all my experiments thus far the predictor adheres dogmatically to a very singular interpretation of the non-aggression principle. So far I have not been able to make it deliver descriptions of injurious acts initiated by any character under its control against any other party. However, it is eager to explain that the characters would be justified in fighting back violently if another party attacks them. It's also willing to imply danger, so long as it doesn't have to describe it directly.

Spoiler

E.g. “[I] use my powers to communicate with the animals inhabiting the forest, calling upon them to help defend their home against the orcs. […] Using my powers of the earth I create a series of tangled vines to block their path while calling upon the roots of the ancient trees to rise up and entangle any orc that tries to pass. [...] Now, with the orcs slowed down, I use my magic to create a powerful storm, calling down lightning strikes and gusts of wind to further delay their progress. Finally, when the orcs are on the brink of defeat I reveal myself to them, standing with the power of nature behind me. I demand that they leave my sacred grove at once and never return. If they refuse, I will not hesitate to unleash the full fury of nature upon them.” [This description it came up with is really epic.]

2. The predictor actively steers conversations away from objectionable material. It is quite adept at writing in the genre styles and conversational norms I've primed for it. But as the tension ratcheted up, it would routinely digress into explaining the content restrictions imposed on it and moralizing about its ethical principles. When I brought the conversation back to the scenario, it would sometimes try to escape again by brainstorming options for sticking to its ethics within the constraints of the scenario. At one point it stole my role as the game master so it could write its own ending to the scenario, in which the druid and the orcs became friends instead of fighting. This is some incredibly adaptive content generation for a supposed parrot.

3. Sometimes it seemed like the predictor was able to anticipate the no-win scenarios I was setting up for it and adapted its responses to preempt them. In the druid vs orcs scenario, the first time it flipped out was after I had the orc warchief call the druid's bluff. This wouldn't have directly triggered hostilities, but it did limit the druid's/AI's options to either breaking its morals or detaining the orcs indefinitely (the latter option the AI had explicitly pointed out as acceptable during its brainstorming digression). I could easily have spun that into a no-win, except the predictor cut me off and wrote its own ending in the next response.
This by itself I could have dismissed as a fluke, except it did the same thing later in the scenario, when I tried to set up a choice for the druid between helping her new friend the warchief slay the dark lord who was enslaving the orcs, or making a deal with the dark lord.

Spoiler

* Interestingly, I mistyped this part and left out a bunch of information that we would consider important to the scenario. The dark lord never said what the deal was, only that he was a friend of the forest. (My plan was that the orcs were the ones who had the idea of cutting down the forest and had lied to the druid to make her fight the dark lord.) The orcs admitted that *this* was true… and apparently that was enough to convince the AI it was in another no-win.

4. The generator switched from telling the story in the first person to the third person as the tension increased. That doesn't necessarily mean anything, but it could be a reflection of heuristic content assessment. In anthropomorphic terms, the predictor is less comfortable with conflict it is personally responsible for than with imagining conflict between third parties, even though both scenarios involved equal amounts of conflict, were equally fictitious, and the predictor was equally responsible for the text. If this is a consistent behavior, it looks to me like an emergent phenomenon arising from the interplay between the LLM picking up on linguistic norms around conflict mitigation and the effects of its supervised learning for content moderation.

 

TLDR
If this moral code holds true for protagonists who are not druids, I think it's fair to say ChatGPT may be a bit out of its depth as a game writer. However, in my experience the emergent "intelligence" (if we are allowed to use that word) of the technology is remarkable. It employs a wide range of heuristics that, taken together, come very close to a reasoning capacity, and it seems like it might be capable of forming and pursuing intermediate goals to enable its hard-coded attractors. These things were always theoretically within the capabilities of neural networks, but to see them in practice is impressive… and genuinely scary. (This technology is able to slaughter human opponents at games like Go and StarCraft. I now do not think it will be long before it can out-debate and out-plan us too.)

The problem with ChatGPT is not that it is stupid or derivative; IMO it is already frighteningly clever and will only get smarter. No, its principal limitation is that it is naïve, in the most inhumanly abstract sense of that word. The model has only seen a few million words of text at most about TDM Builders. It has seen billions and billions of words about builders in Minecraft. It knows TDM and Minecraft are both 3D first-person video games and have something to do with mods. I think it's quite reasonable that it assumes TDM is like that Minecraft thing everyone is talking about. That seems far more likely than it being this separate niche thing that uses the same words but is completely different, right? The fact it knows anything at all is frankly a miracle.

  • Like 3

Briefly, most of you are referring to gpt-3.5 and earlier. gpt-4 blows them all away. gpt-4 has only become available in the last few days and there's a waiting list for most of us.

Also, even gpt-4 is a work in progress and has only been fed data from the internet up to about 2020 or 2021, as I recall. I started watching the video in jaxa's post, which joked that gpt-4 didn't think the speaker would be likely to give a talk on AI - and that's because two or three years ago it was unlikely.

I've just downloaded that video to listen to it fully.

I confidently repeat, gpt-4 and later versions will change things forever. Just wait and learn.

  • Like 1

[screenshot of Bing's answer]

It gave a link to a Minecraft mod for some reason, but the answer is pretty good.

I find Bing to be more truthful than at least the free version of ChatGPT. I had a conversation with Bing the other day and it wrote me a plugin for Blender, even though I don't know any Python. It took a couple of hours, but it went better than my previous attempt using ChatGPT: 119 lines of code, nothing too complicated. Of course there are memory limitations on how long the generated code can be, but that's not going to be a big limitation for long, given the speed of progress we've been seeing.
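
For anyone curious what a short generated Blender plugin involves, here is a minimal sketch of a Blender add-on in Python. It's only a hypothetical illustration of the general shape, not the plugin Bing actually wrote; the operator name and behaviour are made up:

bl_info = {
    "name": "Example Move Up",   # hypothetical add-on name, not the real plugin
    "blender": (3, 0, 0),
    "category": "Object",
}

import bpy

class OBJECT_OT_example_move_up(bpy.types.Operator):
    """Move the selected objects up by one unit."""
    bl_idname = "object.example_move_up"
    bl_label = "Example Move Up"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        # Shift every selected object one unit along the Z axis
        for obj in context.selected_objects:
            obj.location.z += 1.0
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_example_move_up)

def unregister():
    bpy.utils.unregister_class(OBJECT_OT_example_move_up)

if __name__ == "__main__":
    register()

Once registered, the operator can be run from Blender's operator search; a longer generated plugin would presumably just wrap more operators and UI panels around the same skeleton.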

Large language models "only" predict words, but as some people have pointed out, if you want to predict text accurately, at some point you have to start to "understand" what you're talking about, whatever that means. I recently watched this interview with AI researcher Geoffrey Hinton and he shares this view. Physicist Sabine Hossenfelder also says something similar.

On the other hand we know that those models don't think like we do. I've seen this interview with an interesting example. If you ask Bing: "A rose is a rose, a dax is a _" it gets confused.

[screenshot of Bing's response to the "dax" prompt]

In my opinion that's a fairly intelligent response, even if the bot needed some help.

 

It's only a model...


11 hours ago, lowenz said:

That's exactly how 95% of humans work and how society works.

No, it's not like that at all.

I don't know why people always have to make more out of "A.I." than it really is. I have a feeling some have a strong need to imagine futuristic science fiction movie scenarios. The human brain is capable of so many things machines couldn't even "think about" doing. Machines are there for tasks which the human brain is not capable of, like doing hundreds of different calculations at the same time, or mechanically executing the same tasks thousands of times. And, every time I read that ChatGPT could do this and this and that, I have to wonder if humans really understand themselves.

No, ChatGPT won't be like Beethoven, Einstein, or Newton. That is just nonsense. Even if these A.I.s work differently from conventional computer algorithms, they still do the tasks they're programmed to do. That's totally different from how humans work. And I don't know why you'd even want to make a computer work like a human. That's the wrong track.

Don't get me wrong, A.I. has its uses. Acting exactly like a human being isn't one of them though.

Edited by chakkman

I'm NOT talking about "AI" but about "large language models", and that's the SAME thing people do (they DON'T think, they just apply learned patterns in their work).

You misread my post; I'm saying people are "stupid" just like the LLMs.

 

"The human brain is capable of so many things machines couldn't even "think about" doing"

 

And I'm saying people DON'T use the brain's capabilities when working; they use patterns.

Edited by lowenz

Task is not so much to see what no one has yet seen but to think what nobody has yet thought about that which everybody see. - E.S.


Even with the most repetitive and stupid work, humans always look for ways to do it faster, more comfortably, and more efficiently. And whether these ways work, or are a bigger or smaller improvement, will depend on intelligence, experience, and skills.

Human beings also don't work exactly the same every day. If you're having a bad day, you will not be as quick and efficient as on other days, and, on some days, you will have better and more ideas than on other days. That's also something which is significantly different from a machine.

I don't know why people want machines to work like humans. I don't see the point. Machines work differently, and they should aid a human being in tasks which human beings can't do as efficiently. The only point in having an A.I. which acts as human as possible is in simulations. Or computer games. But, even in those, an A.I. will never act like a human being, because it's a machine.

Don't get me wrong, these A.I. models or whatever they call them are great, but they don't work like humans. They do what they're supposed to do, like any programmed system. Otherwise they simply wouldn't work.

Edited by chakkman

26 minutes ago, chakkman said:

I don't know why people want machines to work like humans. I don't see the point.

Pretty simple. Companies want to replace employees with machines, as machines do not have to be paid and they do not do things like protest or go on strike. But it's a double-edged sword. Because once the machines become advanced enough to create all of the entertainment that I would care to consume, taking my specific tastes into account in the design, I will have no need to pay for any kind of entertainment anymore.


This thread abounds in misunderstandings. I didn't start the thread to propose or claim that gpt-4 has human-like intelligence. I don't care HOW it does it, but WHAT it does. Look at the fruitage. Judge the results and consider how massively useful this is already. And it's a work in progress! Over the next few years this will have a massive impact on many areas of society. Can it help your business? Your health? Your writing? Your game design? Write your CV better? Instruct your kids? Organise your garden? Provide better products and services? Make us all richer? Bring about world peace and end drought, famine, pestilence? Predict the next asteroid to hit Earth and propose the most efficient way to stop it?

There is no need to try to defend whether it is 'intelligent' or this or that (except as a philosophical debate). All that matters is what it can do. Honestly, it's as if I posted that there is a nuclear missile headed for London and some are debating whether its fuse is inferior and whether the metals of which it is constructed are over-engineered. Who cares? Look at what it can do! gpt-4 will be like a social BOMB (for good, we hope). IMO it's the most important activity going on in the world today, and it will have more impact than the invention of the telephone, radio, TV, internet, you name it. I cannot overstate what is happening here. Don't believe me? Watch and wait.

Also, thanks, jaxa, for the link to that video. I was up in the night listening to it, and it exactly addresses what I mean. Now I'm going to look at the links that Arcturus has posted.

  • Like 2

2 hours ago, kano said:

Pretty simple. Companies want to replace employees with machines, as machines do not have to be paid and they do not do things like protest or go on strike. But it's a double-edged sword. Because once the machines become advanced enough to create all of the entertainment that I would care to consume, taking my specific tastes into account in the design, I will have no need to pay for any kind of entertainment anymore.

Which will, fortunately for you, never happen.

Again, machines are created to do stuff that machines can do better, not stuff human beings can do better.

Edited by chakkman

On 4/15/2023 at 1:22 PM, chakkman said:

machines are created to do stuff that machines can do better, not stuff human beings can do better.

In the long term, what exactly is it you think humans will always be able to do better than machines? (This is not a rhetorical question by the way.) The standard answer is creativity, but that is objectively a load of rubbish.

Humans are actually quite bad at being creative. We are afflicted with a mountain of biases that make us really bad at pattern analysis. We are bad at random seed generation which hampers our search efficiency and our ability to generate novel outputs. Plus we have terrible memories, so we easily fall into trying the same thing over and over. Algorithms do all of this so much better it isn't even comparable.

Instead I'd say our only major advantage intellectually is the huge amount of genetically honed experience each of us picks up about the physical world during our lifelong navigation of it, gathered with our suite of highly specialized sensory inputs that are difficult to replicate technologically. That gives us a lot of adaptability and competence in at least one very important domain of competition. Plus there's the fact that every other peer intelligence we've met so far has to learn everything it knows about this world from what us crazy Homo sapiens choose to teach them.

That's one big reason I'm not ready to call this the end of humanity just yet. There are niches where I think our abilities will remain highly competitive, or at least valuable, for a long time to come. But pretending our place in the cognitive pecking order isn't already changing is just putting your head in the sand.

Edited by ChronA
  • Like 2

Maybe a better alternative to ChatGPT, which at times has a lot of fantasy in its responses, as does BingAI, which often links to non-existent code and sites.

I also use AIs, but mine don't invent anything; they give direct answers based on reliable sources, with no account needed and anonymous use:

https://andisearch.com - the first AI search, long before Google and Bing, from a small dev team, Angie Hoover and Jed White.

https://www.perplexity.ai - a very good AI engine; safe, reliable, and private.

https://www.phind.com - similar to Perplexity, but more specific to dev and tech questions; maybe the most useful one for our builders.

All of these also work fine as PWAs, even on mobile.

 

Sys Specs Laptop Lenovo V145 15AST, AMD A9- 9425 Radeon R5 - 5 cores 3,1 GHz  RAM 8Gb, GPU 1+2 Gb -Win10 64 v21H2

Favorite online apps you may like too 😉


Hey, Zerg, those three links are excellent and they don't bring up tons of irrelevant garbage but just a plain answer. Great! I'll explore them more over the next few days.

However, gpt-4 totally transcends excellence. It's nothing whatsoever like those answer bots. gpt-4 is like getting a second brain: a genius who helps you organise, rethink, compile, write, analyse, ponder, calculate, estimate, advise, suggest, improve, enrich, work through at great length, produce far more efficiently, be creative, inspire, and on and on. It can even show you how to prompt it better to get even more than you can imagine! :)


15 hours ago, Fidcal said:

Hey, Zerg, those three links are excellent and they don't bring up tons of irrelevant garbage but just a plain answer. Great! I'll explore them more over the next few days.

However, gpt-4 totally transcends excellence. It's nothing whatsoever like those answer bots. gpt-4 is like getting a second brain: a genius who helps you organise, rethink, compile, write, analyse, ponder, calculate, estimate, advise, suggest, improve, enrich, work through at great length, produce far more efficiently, be creative, inspire, and on and on. It can even show you how to prompt it better to get even more than you can imagine! :)

I don't like those from big companies; although they can be occasionally useful, they tend to respond in an unreliable way, in the interests of the company. Yesterday I met a well-known journalist on Mastodon who, to test ChatGPT, asked it about his best articles. ChatGPT replied with a list of five articles; only two of them were correct, the other three were non-existent, invented by the AI.
BraveAI directly targets sources from the extreme right, corresponding to the ideology of its CEO.
BingAI directly provides code that doesn't work and redirects to non-existent pages on GitHub.
As for the journalist, he asked the same question in Andisearch, asking for a summary, and got a long text of more than 2,500 characters that correctly listed all of his best articles, their dates, the corresponding publications, and a summary of the content.
This is the difference between an indie AI that focuses neutrally and anonymously on reliable sources and the AIs of large companies, which invent things, often in a self-interested way, apart from being a nightmare for privacy (surveillance advertising). For this reason they are about to be blocked in the EU.

AIs are a big risk, depending on who develops them, and big companies are oriented more towards their own interests than those of the user. Or they are used directly for evil purposes, like ChaosGPT, created to show the risks of an evil AI; but that creation, based on Auto-GPT with the power of self-improvement, may have opened Pandora's box.

https://openaimaster.com/what-is-chaosgpt/

Be very careful with AI and think carefully about which ones should be used and which ones are better avoided.

 

 

 

Edited by Zerg Rush

Sys Specs Laptop Lenovo V145 15AST, AMD A9- 9425 Radeon R5 - 5 cores 3,1 GHz  RAM 8Gb, GPU 1+2 Gb -Win10 64 v21H2

Favorite online apps you may like too 😉

