The Dark Mod Forums

Chat-GPT will change THE WORLD forever


Fidcal


I'm as anti-big corporate as anyone around here, but these AIs are likely to remain in the domain of big business for various reasons. I don't see a free and open source alternative springing up.

 

For one thing, feeding the AI with training data is much easier to do when you also run the biggest online services in the world. For another, it is important to keep really nasty things that one might find on the Internet from getting into the pool (is that the correct term?) of data that the AI uses. From what I understand, there are actual humans whose job it is to do exactly this. Note that when I say bad stuff, I mean the really bad stuff, not political crap or anything like that. Nobody's going to do that job for free.

 

Also, there was an open source alternative to smart speakers, but I heard recently that they got sued into the ground.

 

Back on the subject of AI though, I think until humanity has invented a machine that can automatically police itself, deciding what information to suck in from the public Internet and what to ignore based on predefined criteria, calling it actual AI is a stretch. And even if you invent that, you'll have nefarious people still trying to make it suck up and incorporate bad stuff.


@kano I think you are spot on with that assessment (unfortunately), at least for the next 10-15 years. As with pretty much everything tech, I would love it if the commons could produce a viable non-proprietary competitor. Unfortunately, the massive amount of work, data, and processing power it takes to train one of these bots simply necessitates major corporate or government backing.

That may change as the field matures and people start figuring out what makes these bots work. Human infants are able to learn to speak with far, far, far, far, FAR less language exposure than it takes even the most primitive chat bots to approximate coherent fluency. That's because a human brain is not a single undifferentiated mass of neurons. Our brains come pre-divided into function-oriented sub-modules, pre-populated with effective neural configurations for their specific tasks.

By contrast, an undifferentiated mass really is more or less how these bots start out. 99% of all that training they need is just getting them to the starting line of approximating any sort of receptive brain configuration. Once people start cracking the code, though, that whole process will become much more efficient. Assuming our current society survives, people will eventually be able to buy pre-configured bot-brain-parts to train and run on their home computers for a build-your-own AI companion experience. We are still in the early days of this technology.

Incidentally, that's why I'm skeptical of any claims that the hallucinations these bots continue to exhibit are a feature to the people building them, rather than a bug. I'm sure we will see that eventually (truth by Google...), but for right now it is hard enough just to get these bots to stop spouting racial slurs. Teaching them to doublethink on top of that is too much work... for now.


@kano

It's possible that open source efforts will always lag behind corporate products, but here are some things to consider:

You can get datasets such as LAION-5B for free. Whether or not that data has been adequately screened is another story, but all sorts of work are being swapped around for free right now. Just look at what people are doing with Stable Diffusion.

Training an LLM/AI requires more resources than running it. If the model leaks, as we have already seen in a few cases, it becomes possible to run it at home. That's not "open source" per se, but it can dodge censorship measures if those aren't baked into the model itself and instead rely on screening the user input and model output server-side.
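To illustrate, here's a minimal sketch of what running such a model at home can look like with the Hugging Face transformers library. The model path is hypothetical and assumes the weights are already sitting on disk; the point is that nothing server-side sits between the prompt and the output:

```python
# Minimal sketch: loading locally stored LLM weights with Hugging Face transformers.
# "/models/local-7b" is a hypothetical path to weights you already have on disk;
# there is no server-side filter between your prompt and the model's output.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("/models/local-7b")
model = AutoModelForCausalLM.from_pretrained("/models/local-7b", device_map="auto")

prompt = "Write a short briefing for a stealth mission set in a medieval city:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```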

Increasing the parameters and hardware needed to train a model by a factor of 10x doesn't necessarily mean the model will be "10x as good". If large models like GPT-4+ are reaching a plateau of quality, then that could allow smaller players to catch up. There has been research around reducing the number of parameters, since many of them are redundant and could be removed without affecting quality that much.
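As a toy illustration of that redundancy point (not how production LLMs are actually compressed), PyTorch's built-in pruning utilities can zero out a large fraction of a layer's weights in a couple of lines:

```python
# Toy sketch of weight redundancy: magnitude-prune half of a layer's parameters.
# Real LLM shrinking (quantization, distillation, structured pruning) is more involved.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(4096, 4096)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero the 50% smallest weights
prune.remove(layer, "weight")                            # make the pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"Fraction of zeroed weights: {sparsity:.2f}")     # ~0.50
```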

The "little guys" can pool their resources together. You can compare this to hackerspaces with expensive tools that can be used by ordinary people who pay a membership fee. Not only could a few individuals come together and make their own miniature GPU cluster, they could also rent hardware in the cloud, probably saving a lot of money by doing so. Why buy an Nvidia A100 80GB GPU when you can rent 10 of them for the length of time that you need them? Services like Amazon's Bedrock might be helpful, time will tell.

Regarding lawsuits or DMCAs, when it comes to software, you can get away with almost anything. It is trivial for power users to anonymously swap files that are hundreds or thousands of gigabytes in size. Even if we're talking about a 100 terabyte blob, that should cost only about $1000 to store on spinning rust, which is well within the means of millions of people. Doing something useful with that may be difficult, but if it's accessible, someone motivated enough will be able to use it.

It seems unlikely that we're going to get something self-aware from the current approaches. That battle will be fought a couple of decades from now, with much different hardware and more legislative red tape arising out of the current hype fest.


I've been playing with Stable Diffusion a little. Old 512x512 textures could be upsampled this way. By using the original image as input for img2img and the existing normalmap in ControlNet, it's possible to create an infinite number of variations. Or, using only the normalmap as a guide, one can create a new style while keeping the old pattern.

Original:

[image: old_small_bricks_grey.jpg]

Generated:

[image: 00011-594339208.jpg]

[image: 00008-1223843877.jpg]
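If anyone wants to try the same thing locally, here's a rough sketch using the diffusers library. The model IDs are common public checkpoints and the file names are just placeholders for your own diffuse/normalmap pair, so treat it as a starting point rather than a recipe:

```python
# Sketch: texture variations via img2img plus a normal-map ControlNet (diffusers).
# Model IDs are public checkpoints; file names are placeholders for your own textures.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-normal", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("old_small_bricks_grey.jpg").convert("RGB").resize((512, 512))
normal_map = Image.open("old_small_bricks_grey_normal.png").convert("RGB").resize((512, 512))

# Lower strength stays closer to the original diffuse map; the normalmap preserves
# the brick layout, so only the surface style changes between seeds.
result = pipe(
    prompt="weathered grey stone bricks, seamless game texture, photorealistic",
    image=init_image,
    control_image=normal_map,
    strength=0.6,
    num_inference_steps=30,
).images[0]
result.save("bricks_variation.png")
```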

 



On 4/14/2023 at 5:26 PM, Fidcal said:

I'm surprised I could find any thread here for chat-GPT. Version 4 is outstanding from all reports I saw, and I can't wait to get my hands on it.

It's not just a chat-bot (though it can discuss anything with you in great depth).

It can grasp the MEANING of sentences and write its own. In future games, you'll be able to talk to NPCs like you would any human. Search for the video: Unbelievable AI Breakthrough Interactive AI Characters in a Videogame for the First Time! This demo is outstanding.

GPT will be WRITING games!

It can write meaningful stories.

It can fill in your tax form, prepare your business/wedding speech and discuss its merits with you, and organise your professional work rapidly.

It can understand humour.

It can teach! Efficiently! Sensibly! Find the Khan Academy video.

It can write code/scripts good enough to frighten a senior programmer, who can see it is already as good as junior coders and some seniors. He's afraid because it is certain to take over HIS job in due time, possibly only months away.

An AI bomb has been dropped with GPT-4 and many do not yet realise how life-changing it will be - greater even than the advent of the internet. I give it 5 years to change the efficiency of almost everything tenfold, a hundredfold.

Can it build FMs, or at least increase development speed for builders? I am not familiar with what it can do for games in general. Usually new things are welcomed with excitement at first, but they quickly boil down to something not so special.


27 minutes ago, kin said:

Can it build FMs, or at least increase development speed for builders? I am not familiar with what it can do for games in general. Usually new things are welcomed with excitement at first, but they quickly boil down to something not so special.

Right above you we have an example of AI-created/remastered textures.

There are AI models being worked on for creating 3D models from text prompts, photographs, etc. That could be relevant to TDM, since if it can be made in Blender, it can be imported into the game. Imagine using AI to create a gigantic cathedral, or even a city.

Over a decade ago, Tels was working on Swift Mazes, a demo for procedurally generating TDM maps. I don't know how you would go about making an AI version of that concept, but anything's possible.


5 minutes ago, jaxa said:

Right above you we have an example of AI-created/remastered textures.

There are AI models being worked on for creating 3D models from text prompts, photographs, etc. That could be relevant to TDM, since if it can be made in Blender, it can be imported into the game. Imagine using AI to create a gigantic cathedral, or even a city.

Over a decade ago, Tels was working on Swift Mazes, a demo for procedurally generating TDM maps. I don't know how you would go about making an AI version of that concept, but anything's possible.

Interesting. This could eventually mean that we could have many FMs released monthly. And that's good enough for me.


7 minutes ago, kin said:

Interesting. This could eventually mean that we could have many FMs released monthly. And that's good enough for me.

I wouldn't count on it. It's within the realm of possibility, but things are not likely to speed up that much anytime soon. And a fully AI-generated TDM mission could be like an abandoned McDonald's compared to a fine restaurant.

AI is going to have a major impact on AAA studios working on open world games like Grand Theft Auto and The Elder Scrolls. They'll figure out how to leverage it quicker than everybody else, and use it to eliminate some jobs and/or massively increase productivity. The maps will be 100x larger, with 16x the detail. They can use it to add orders of magnitude more voice lines to their games, or even store all/most voice as text and have the game engine generate the sounds as needed, with dynamic options.
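The "store the lines as text, synthesize on demand" part is already easy to prototype. Here's a crude sketch with the offline pyttsx3 library; the line IDs and dialogue are made up, and a studio would swap in a neural voice model per character, but the plumbing would be the same:

```python
# Crude sketch of runtime voice synthesis from stored text lines (pyttsx3, offline).
# A real game would use a neural TTS voice per character, but the flow is identical:
# keep dialogue as text and generate audio only when the line is actually needed.
import pyttsx3

# Hypothetical dialogue table: line ID -> text.
dialogue = {
    "guard_alert": "Hey! Who's there? Show yourself!",
    "guard_idle": "Another long night on the wall...",
}

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # speaking speed in words per minute
for line_id, text in dialogue.items():
    engine.save_to_file(text, f"{line_id}.wav")  # queue synthesis to a wav file
    engine.runAndWait()                          # process the queued command
```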

About the only thing I don't see changing that much in the short term is the actual behavior of "AIs" (NPCs).



"I really perceive that vanity about which most men merely prate — the vanity of the human or temporal life. I live continually in a reverie of the future. I have no faith in human perfectibility. I think that human exertion will have no appreciable effect upon humanity. Man is now only more active — not more happy — nor more wise, than he was 6000 years ago. The result will never vary — and to suppose that it will, is to suppose that the foregone man has lived in vain — that the foregone time is but the rudiment of the future — that the myriads who have perished have not been upon equal footing with ourselves — nor are we with our posterity. I cannot agree to lose sight of man the individual, in man the mass."...

- 2 July 1844 letter to James Russell Lowell from Edgar Allan Poe.



One angle no one has presented in this discussion yet is copyright. I mean, it sucks ass and is probably unconstitutional in its current perpetual form, where someone across the world can break your stuff after taking your money. But that is beside the point; the way these AI routines operate is by creating derivative works from someone else's work. Well, it's not *quite* the same thing, because it takes little snippets from everyone and mish-mashes them together into something new.

 

Or, put a different way, if I were a skilled mapper, I'm not sure how happy I would be about sections of my work winding up in another "machine-generated" map composed of other humans' work, without my permission or proper credit. I guess this opens Pandora's box to the idea that humanity could achieve more if everyone worked together, which is something that I don't actually disagree with. However, I also don't think humanity would cease to create new things if machines began rampantly "sampling and remixing" all of our output, because creativity and sharing are just part of human nature.


26 minutes ago, kano said:

One angle no one has presented in this discussion yet is copyright. I mean, it sucks ass and is probably unconstitutional in its current perpetual form, where someone across the world can break your stuff after taking your money. But that is beside the point; the way these AI routines operate is by creating derivative works from someone else's work. Well, it's not *quite* the same thing, because it takes little snippets from everyone and mish-mashes them together into something new.

 

Or, put a different way, if I were a skilled mapper, I'm not sure how happy I would be about sections of my work winding up in another "machine-generated" map composed of other humans' work, without my permission or proper credit. I guess this opens Pandora's box to the idea that humanity could achieve more if everyone worked together, which is something that I don't actually disagree with. However, I also don't think humanity would cease to create new things if machines began rampantly "sampling and remixing" all of our output, because creativity and sharing are just part of human nature.

The copyright angle is going to be decided by the courts. But if you can't prove that some AI output actually remixed your work, you have no claim. Even where you can, it will be pointed out that "style" is not copyrightable and humans also use references. Then there are some legal precedents like Google Book Search and TurnItIn that could be favorable to Stability AI in its big lawsuit.

When you see the cobblestones Arcturus generated above, is it possible to trace any specific infringement, other than the image used as input? Doesn't seem like it would be.


It's ethically dubious that AI was trained on the works of artists without their consent. If you ask the program to generate art in the style of a particular person, that means that artist's work has been in the training database. And now it may put that person out of work. On the other hand, how can you reserve rights to some statistical properties of somebody's work, like colors or how long the brushstrokes are on average? Then again, there have been cases in the music business, like the infamous Robin Thicke vs. Marvin Gaye lawsuit, where people were sued for using a similar style even though the melody and lyrics were different.

[image: bing.PNG]

Here is a possible intro to a “Thief: The Dark Project” mission in the style of its main protagonist, Garrett:

Quote

I've heard rumors of a new job in town. A wealthy baron named Lord Bannock has recently acquired a priceless painting of a banana from a foreign land. He keeps it in his private gallery, guarded by his loyal men and a few mechanical contraptions. The painting is said to be worth a fortune, and I'm not the only one who's interested in it.

There's also a group of pagans who worship the banana as a sacred fruit. They believe that the painting belongs to them, and they're willing to do anything to get it back. They've hired a ship to sail into the harbor and launch a raid on the baron's mansion. They plan to use their magic and their wooden chicken idols to distract the guards and break into the gallery.

I don't care much for their cause, but I do care for their gold. They've offered me a generous sum to join them and help them steal the painting. They say they need someone with my skills and experience to bypass the security systems and grab the prize. It sounds like an easy job, but I have a feeling that things won't go as smoothly as they hope.

The baron is not a fool. He knows that his painting is in danger, and he's prepared for any intruders. He's hired some of the best mercenaries in the city to protect his property. He's also installed some new devices that I've never seen before. They look like metal eyes that can see in the dark and shoot fireballs at anyone who gets too close. I'll have to be careful and avoid them at all costs.

This is going to be a tricky one, but I'm not afraid of a challenge. I've done worse things for less money. Besides, I've always wanted to see what a banana looks like up close. Maybe I'll even take a bite out of it before I hand it over to the pagans. That would be a nice souvenir.

The ship is leaving soon, and I have to get ready. I've packed my trusty blackjack, my bow and arrows, my lockpicks and my flash bombs. I hope they'll be enough for this job. I don't know what awaits me inside the baron's mansion, but I'm sure it won't be boring.

I'm Garrett, and I'm a thief.

 

Bing got a little confused at the end.




On 4/18/2023 at 9:59 AM, jaxa said:

Does not look like a trustworthy source.

 

On 4/18/2023 at 9:23 PM, kano said:

......................... the way these AI routines operate, is by creating derivative works from someone else's work. ...............

Although our brain works similarly (it has to take information from somewhere to create content), this could be a definition of A.I.

And because of that, I see it quickly becoming tightly regulated.


21 minutes ago, kin said:

Although our brain works similarly (it has to take information from somewhere to create content), this could be a definition of A.I.

And because of that, I see it quickly becoming tightly regulated.

There's a lot of money on the side that wants to use AI (see Microsoft), and there are geopolitical concerns. Example: if the EU or America tightly regulates AI, China could run with it and use it to dominate certain industries.

Best case scenario: light/no/ineffective regulation

Worst case scenario: regulation takes AI away from the people, but big corporations still get to use it. There's already talk of surveillance of cloud providers to watch for AI training, and restrictions on purchasing AI hardware.

https://cyber.fsi.stanford.edu/io/news/forecasting-potential-misuses-language-models-disinformation-campaigns-and-how-reduce-risk
https://arxiv.org/abs/2301.04246

