
What do you think about the implications of image generating AIs to game development (and the world)?


vozka

12 minutes ago, Springheel said:


Yep, professional artists and graphic designers will be without a job within a few years.  Modellers and animators won't be far behind, I imagine.

One lower tier artist or graphic designer will be doing the job of multiple, and AI image models will be an important part of the toolkit alongside the usual software like Photoshop/GIMP/Illustrator. Freelance artists will be competing for a smaller piece of a smaller pie.

 

Gaming could be hit hard. What might counteract that is the increasing demand for larger and more intricate open-world games. 16 times the detail!!!!!1 The Elder Scrolls 6 and Grand Theft Auto 6 will be massive.


2 hours ago, jaxa said:

This is off-topic, but you just reminded me of a great memory.

I was always into underground music culture, mostly metal as a teenager, but later also electronic music. A few months after I moved to a different city for university I discovered a tiny underground (literally) private bar, basically a rave speakeasy. No license, entrance with a camera, located in what used to be a wine cellar and later a PC gaming arcade (which left very strong air conditioning behind, so the weed could only be smelled outside, not inside). Each weekday it was open from about 6-7 pm to about 4 am, and almost every time there were one or more DJs playing more or less underground electronic music, for free, just for fun. Sometimes doing live broadcasts over the internet, before that became a common thing.

I still remember the first time I came there, walked down a narrow set of stairs and entered the dark main room with a couple tables with benches and about 10 different CRT monitors placed in random places (including under the benches), all playing Electric Sheep animations. At the time it felt like something from The Matrix. 

The place is still kind of operating many years later, but it got handed over to guys who care more about getting drunk everyday than about music or technology, so the magic is long gone. 


2 hours ago, jaxa said:

Gaming could be hit hard.

I can imagine when this tech goes into the third dimension it'll be a big sea change.

I can even imagine the tech procedurally generating the gameplay along with it in the future, so you can just generate a whole game world, characters, and gameplay. I don't know if the tech for storytelling will be up to scratch on the large scale, since it's lagging so far behind, but on the small scale I could see it being convincing enough.

-----

What's been striking me most is that there's just such a flood of it. I have my folders of AI art... I actually started them like 10 years ago, but I only ever collected a few things, more for the gimmick of it than for them actually looking good. But now I've already got masses of really interesting works in them and find masses more every day.

But I think there's a point where there's just too much. That's what I think is going to be the big issue on the social and economic level. There's so much that one can only take so much in, and what does that do for anyone or anything else that wants to squeeze into that space?


What do you see when you turn out the light? I can't tell you but I know that it's mine.


34 minutes ago, demagogue said:

But I think there's a point where there's just too much. That's what I think is going to be the big issue on the social and economic level. There's so much that one can only take so much in, and what does that do for anyone or anything else that wants to squeeze into that space.

Speaking of which, we could use better storage technology. Consumer holographic drives with hundreds of terabytes, please.


I swear the development of AI, sprinkled with some over-hyped assumptions about the progress of tech, seems to suggest that within maybe 10-20 years from now it'll be like:

---

Me: Man I'm tired of these boring-ass games. OI! COMPUTER! Create me a stealth game with <insert gameplay requirements, desired art style, perspective, and other parameters>.

Computer: Running AI algorithms, generating engine, assets and levels. Game will be built in 5 hours.

Me: I'm gonna go into the refrigeration unit for a while, I really miss when we still had a season called "winter". Let me know when it's done!

A word of warning, Agent Denton. This was a simulated experience; real LAMs will not be so forgiving.


Imagine the algorithms that generate whole worlds and creatures in No Man's Sky extended to include ever larger libraries of assets to build your complete game. Clearly the vastness of such an undertaking would require massive CPU and storage. You could start some new GTA or Far Cry or Assassin's Creed game off with custom level parameters such as human diversity level (or choose orcs, dwarves, knights, droids, or the entire cast of characters from a movie), time period/era, weapon types, and scenery settings: medieval, cyberpunk, neon, rustic, etc. It could really start customizing the game so that it's completely different depending on your starting parameters.


To be clear, the AI algorithms that will be creating 3D models, environments, etc. don't necessarily have anything to do with the text-to-image algorithms that are all the rage right now. But they are already being worked on, and will leverage the improvements in GPUs and ML accelerators. (Some of these GPUs are seeing doubling/quadrupling of compute in a generation.)

I think I've seen some stuff related to 3D modeling and animation on the Two Minute Papers YouTube channel but I need some time to find them.

https://spectrum.ieee.org/mlperf-rankings-2022


10 hours ago, jaxa said:

Speaking of which, we could use better storage technology. Consumer holographic drives with hundreds of terabytes, please.

Nah, just make sure that the AI image generation process is deterministic (the actual AI should already be). Then you only need to store the AI, maybe a seed value for the deterministic pseudo-random number generator used to get more variation, and your image description text. That way, one gigabyte of disk space gives you exabytes of AI image storage capacity.
Storing images like that also means that there finally isn't a distinction between vector and bitmap images anymore. Both would be generated by an AI and are therefore sort of infinitely scalable (though ultimately the detail contained in the resulting image is still limited by the size of the AI, of course).

The only catch is that you can't store non-AI-generated images that way - yet (surely there could also be an AI that takes an image and generates a detailed description from it, but some human-visible details would likely be lost in the process).
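The idea can be sketched in a few lines. Everything here is a stand-in: a hash function plays the role of the deterministic image generator, since the point is only that identical (prompt, seed) inputs always reproduce identical output bytes.

```python
import hashlib

def generate(prompt: str, seed: int, size: int = 64) -> bytes:
    """Stand-in for a deterministic image generator: the same
    (prompt, seed) pair always yields the same bytes."""
    data = b""
    counter = 0
    while len(data) < size:
        data += hashlib.sha256(f"{prompt}|{seed}|{counter}".encode()).digest()
        counter += 1
    return data[:size]

# "Store" an image as a tiny (prompt, seed) record instead of pixels.
record = {"prompt": "a foggy medieval rooftop at night", "seed": 1337}

# Later, "load" it by regenerating deterministically.
img_a = generate(record["prompt"], record["seed"])
img_b = generate(record["prompt"], record["seed"])
assert img_a == img_b  # bit-identical on every regeneration
```

The record is a few dozen bytes regardless of how large or detailed the generated image is, which is where the "exabytes from a gigabyte" framing comes from.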


From what I've been reading in the AI literature, the next step is rapidly integrating new information into the network weights, with some tricks to avoid the so-called catastrophic forgetting problem (more in the Grossbergian vein of neural nets, to get a little technical, and more in the way human memory actually works). Humans don't save visual images verbatim either; they're procedurally recreated in memory on demand.

Basically what it means is you can get your local AI to look at a work of art or music, and it'll recalibrate its weights so it can "recall" it later based on some lightweight cue like "that painting I uploaded on (date & time)". The user wouldn't even have to know it's not a direct save but something embedded in the AI's weights.

-------

I remember in high school and college reading the claptrap from some futurists about the coming Singularity and laughing it off, not over whether we were actually approaching it, but because the very idea itself felt too outlandish to buy.

But now I'm starting to get a sense of what it might actually be like to reach that point, when AIs have integrated knowledge way beyond what humans can follow, and humans, having direct and instant access to it, just take it for granted like it was an extension of themselves. Or something like that. I'm still not sure how it may play out, if at all.


1 hour ago, Oktokolo said:

The only catch is, that you can't store non-AI-generated images that way - yet (surely there also could be an AI that takes an image and generates a detailed description from it, but some human-visible details would likely be lost in the process).

This is already possible to a degree. It takes two things: first you use the language model of the AI to "interrogate" the image and find what text description would lead to generating something close to it. Then you basically run the diffusion algorithm, which normally generates images from noise, in reverse, to gradually generate noise from an image. This gives you the seed and description, and the result is often very close to the original image.
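The reverse-diffusion idea can be illustrated with a toy deterministic process (purely illustrative: a real implementation inverts a DDIM sampler step by step, not this linear stand-in). Because each step is deterministic, it can be algebraically undone to recover the exact starting noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed, deterministic schedule standing in for the denoising model.
# Assumption: a real DDIM sampler is deterministic, so each of its
# steps can likewise be run in reverse.
alphas = np.linspace(0.99, 0.90, 10)

def forward(noise):
    """Deterministic 'generation': refine noise toward an image."""
    x = noise
    for a in alphas:
        x = a * x + (1 - a)  # stand-in for one denoising step
    return x

def invert(image):
    """Run the same steps backwards: recover the starting noise."""
    x = image
    for a in reversed(alphas):
        x = (x - (1 - a)) / a  # algebraic inverse of each step
    return x

noise = rng.standard_normal(4)
image = forward(noise)
recovered = invert(image)
assert np.allclose(recovered, noise)  # exact round trip
```

In the real system, the recovered noise plus the interrogated text prompt together act as a compact "address" from which the image can be regenerated.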

I was playing with it yesterday because it then allows you to change some details without changing the whole image, and that allows you to take the portraits of your women friends and transform them into being old and unattractive, which I found all of them universally appreciate and find kind and funny.

The early implementation in Stable Diffusion is just a test and it's quite imperfect (it produces noisy images with overblown colours among other things), plus it's limited by the quality of normal SD image generation, but as a proof of concept it obviously works and will probably get better soon.

 

1 hour ago, Oktokolo said:

Storing images like that also means, that there finally isn't a distinction between vector and bitmap images anymore. Both would be generated by an AI and are therefore sorta infinitely scalable (but ultimately the contained detail in the resulting image keeps being limited by the size of the AI of course).

However, I don't see this happening any time soon. The AIs work with bitmaps and are trained on a specific image size (with Stable Diffusion it's 512 x 512 px at the moment), and generating anything significantly smaller or larger completely breaks the process. Various AI upsampling algorithms exist, but they never work as well as straight up generating the image in the resolution that the neural net was optimized for. And I don't know of any practical solutions to this yet.


On 9/9/2022 at 5:08 AM, OrbWeaver said:

When humans create art, they do so by a process of generating ideas based on existing art styles they have seen or been taught, along with various sources of "inspiration" from their everyday life or past experiences.

"There is no magic, there is no "soul"."

Picasso's Guernica.  Anything by Klee.

I agree with chakkman, and I don't think he's making an argument for the existence of a "soul" or "magic", but rather for natural causes.  Humans are living creatures and products of eons of evolution, and with that come biological imperatives: to live, to propagate, and to protect and provide for one's offspring and one's community.  Math logic is a wonderful invention, but almost by definition it is mindless.  It's a tool.  A magnificent tool.

A few days ago I replayed Sotha's Glenham Tower.  I played it years ago and forgot all the details - just a memory that it was good, bookmarked in my skull as an FM that had "it".  My replay was just as immersive as the first playthrough, even though my memory prompted me to anticipate as I played along.  I'm unsure if I used any of the ammo.  Anyway, I rate this FM as being close to perfect.  I tend to be generous in my reviews and there are several FMs that I rate that high.  I think it shows a skill that's beyond my own abilities.  But what is that skill?  How can I know what it is, since it's beyond me?  I can deconstruct the map and recreate the gameplay along different lines, throwing in glitz like volumetric lights and so on, and come up with something unspeakable.  By proceeding that way, no matter what kind of glitz I throw in, what kind of diversions and filler, what kind of joy all this copy/pasting gives me, I'm not matching the "skill" that went into making Glenham.


12 hours ago, geegee said:

But what is that skill?  How can I know what it is, since it's beyond me?  I can deconstruct the map and recreate the gameplay along different lines, throwing in glitz like volumetric lights and so on, and come up with something unspeakable.  By proceeding that way, no matter what kind of glitz I throw in, what kind of diversions and filler, what kind of joy all this copy/pasting gives me, I'm not matching the "skill" that went into making Glenham. 

Artificial "intelligence" isn't there yet. But "what is that skill?" - how exactly our senses, thinking, intuition and creativity work - is exactly what AI research, at its core, is about. There might be an AI that your descendants can feed their favorite missions, and it will give them a new mission that is completely different but "feels" the same and "bears the mark" of the author of the original missions. Maybe that AI will have to be created by a "general" AI, because there just isn't enough original training material for training such an AI with any method known today, and you can't just substitute stealth immersive sim missions with maps from other game genres (I would even go so far as to say you can't really use Dishonored missions as training material for TDM missions, despite both being stealth immersive sims).

AI will get there though - if we or our descendants don't nuke each other into oblivion first.


13 hours ago, geegee said:

Humans are living creatures and products of eons of evolution, and with that come biological imperatives: to live, to propagate, and to protect and provide for one's offspring and one's community.

Exactly. We are biological "machines" following the "programming" of millions of years of biological evolution, along with several thousand years of cultural evolution.

There is no reason to assume that a biological machine is fundamentally capable of doing something an electronic machine can't, unless you cling to the philosophy of vitalism which says "Biological organisms are Just Different in ways which are impossible to describe or understand." Which is more or less identical to the belief in a metaphysical soul, just with slightly different language.

13 hours ago, geegee said:

 Math logic is a wonderful invention but almost by definition it is mindless.

So are the neurons which comprise our brains. They are balls of water and other substances which communicate with one another in a primitive, well-defined way. Nobody has ever been able to look at a neuron and say "That is the neuron which gives rise to consciousness and artistic appreciation". No neuron has a mind of its own. But together they somehow comprise a human mind.

13 hours ago, geegee said:

But what is that skill?  How can I know what it is, since it's beyond me?

And that's the problem with all these vitalistic and mysterian theories of consciousness. They rely on the logical fallacy that says "If I can't understand how this happens, it must be fundamentally non-understandable". But such an argument is clearly nonsense. There are thousands of things (e.g. in advanced physics or mathematics) that I don't understand, but other people do — and that's just looking at the present day, not all of the things that future generations will understand better than any of us.

It would require an extraordinary arrogance to assume that because we can't understand how a fully-conscious machine could be built today, then it must necessarily be impossible even after hundreds or thousands of years of technological advancement.


3 hours ago, Oktokolo said:

how exactly our senses, thinking, intuition and creativity work - is exactly what AI research, at its core, is about.

I don't think so.  It fits the desired conclusion while leaping right over any distracting possibility that the conclusion might not be right. What you describe is an emulator.  You look at it from a consumer perspective - as in a Turing test situation, where the machine passes the test when, in a short blind run, an "evaluator" can't tell the difference between human and machine.  We have bots right now that can pass such a "test" as "evaluated" by millions of internet users. That doesn't make the bots capable of thinking, intuition, creativity, or even anything in the same ballpark.  Most bots seem to be run by assholes.

So, who runs an authoritative test of that sort?  Beyond an "I think therefore I am" declaration, individual humans can't even prove that other consciousnesses exist.  That's the nature of subjectivity, awareness, and ultimately of thinking, intuition, and creativity.  Who decided that thinking, intuition and creativity (as emulated by a machine) should imitate e.g. Glenham Tower or some other human-inspired art, perhaps pumping out thousands of similar FMs that all have (according to your desired conclusion, stated at the start) the same mark of inspiration and execution?  What's the motivation of the machine?

I don't go along with the other conclusion OrbWeaver asserts as fact: that connections of neurons in the human brain are somehow similar to the hard electric connections of silicon chips flipping on/off, and hence that the brain, and subsequently thinking, intuition and so on, are likewise the same, or will be when the hardware and software are ramped up. I don't think so.  On the other hand, I'm not so dubious that bioengineering of the kind producing (thinking) Blade Runner replicants will be possible soon enough.  I think that's a different concept, though.


32 minutes ago, geegee said:

I don't think so.

Well, some people actually doing research in that field are. And that is what will ultimately lead to further improvements in the field.
There surely will be a working model of a whole human brain, including its peripherals (the "body"), in the future.

The idea of how a transistor could work is a hundred years old by now. We can run The Dark Mod on a personal computer smaller than a cubic meter. And there is still plenty of time to research the inner workings of the human brain before our sun dies (although I would guess it will take hundreds of years - not billions)...


Just in terms of the raw tech, human brains still have ~8 orders of magnitude more nodes or multiplications (10K vs 1Q synapses, I think it was), even assuming you could equate nodes between biological and artificial neural nets, which is misleading, but it still handwaves at the distance artificial neural nets lag behind in pure information terms. We're still in earthworm territory, and we need high-end graphics cards to crunch even that much.
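The back-of-envelope math behind that gap, using the round numbers the post gestures at (both figures are rough assumptions: synapse-count estimates vary widely, and equating one synapse with one network weight is itself a big simplification):

```python
import math

# Rough, assumed figures -- not measurements.
brain_synapses = 1e15   # ~1 quadrillion synapses in a human brain
net_weights = 1e7       # an "earthworm-scale" artificial net

gap = math.log10(brain_synapses / net_weights)
print(f"~{gap:.0f} orders of magnitude")  # ~8
```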

That's not a mysterian argument, but it has a similar punchline. When your model lags that far behind, it may as well be magic what the brain can do in comparison.

It's interesting that you can do (what seem to us like) really high level things, like this procedural art, chess engines, and probably music and other forms soon, with an earthworm-sized brain; but really (what seem to us like) simple things, like ordering lunch at a fast food joint or other open-ended things, run into the AI-complete problem, i.e., you can't do even the most simple operation unless you have full human-level cognition and world knowledge.


5 hours ago, Oktokolo said:

Well, some people actually doing research in that field are. And that is what will ultimately lead to further improvements in the field.
There surely will be a working model of a whole human brain, including its peripherals (the "body"), in the future.

The idea of how a transistor could work is a hundred years old by now. We can run The Dark Mod on a personal computer smaller than a cubic meter. And there is still plenty of time to research the inner workings of the human brain before our sun dies (although I would guess it will take hundreds of years - not billions)...

It's funny you mention volume. The brain-mimicking neuromorphic chips are likely going to be easier to scale up into an improved 3D architecture than traditional CPUs, because the neuron spiking models use on the order of milliwatts instead of hundreds of watts. Less heat, massively parallel, easier to scale up.

At some point, planar chips are going to run out of steam and most high performance computing will move to 2.5D and 3D designs. Neuromorphic chips could be constructed in a way that makes a device similar in size to the human brain (~1.2 liters), or as large as can possibly be made at fabs. If the result costs millions of dollars, big companies will pay for it if it works. See the Cerebras Wafer Scale Engine.

There are orders of magnitude of additional performance left to pursue even after the apparent death of Moore's law. But the consumer hardware and dumb image models we have today already have artists spooked.


1 hour ago, demagogue said:

It's interesting that you can do (what seem to us like) really high level things, like this procedural art, chess engines, and probably music and other forms soon, with an earthworm sized brain; but really (what seem to us like) simple things, like ordering lunch at a fast food  joint or other open ended things, run into the AI Complete problem, i.e., you can't even do the most simple operation unless you have full human-level cognition and world knowledge.

Simple things for AI are the things that we already know how to implement. When someone discovers a new design that allows us to model a currently impossible task, that task suddenly becomes "simple" for AI.
Of course, if we wait till hardware with the capacity of the human brain becomes available, we probably could just train it for as long as we train a human child, and it would become a general AI even with our current level of knowledge.

But for each task in which AI has already achieved superhuman levels, the brains of the best humans at that task are obviously those same ~8 orders of magnitude less neuron-efficient than the earthworm-sized AI beating them. So there seems to be a shitton of optimization potential in modeling not the human brain, but the resulting abilities.

What really holds AI back isn't the hardware anymore. It is the lack of knowledge about good AI designs, training methods, and how to debug these beasts. Also, a lot of the human brain's mass is probably just really lossy memory. We already have pretty good non-neuron-based replacements for that. So even for recreating the brain, the actual distance could be a few orders of magnitude less than it seems...


An article about using Stable Diffusion for image compression - storing the data necessary for the neural network to recreate the image in a way that is more efficient than standard image compression algorithms like JPEG or WebP: https://matthias-buehlmann.medium.com/stable-diffusion-based-image-compresssion-6f1f0a399202

 

It's just a proof of concept, since the quality of the model is not there yet (and the speed would be impractical); it can't do images with several faces very well, for example. But it's very interesting because it has completely different tradeoffs than normal lossy compression algorithms: it's not blocky, it doesn't produce blur or color bleed, and in fact quite often it keeps the overall character of the original image almost perfectly, down to the grain of a photograph. But it changes the content of the image, which standard compression does not do.
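The size arithmetic behind the approach can be sketched with stand-ins. A real pipeline uses Stable Diffusion's VAE to map a 512x512x3 image into a 64x64x4 latent before quantizing; a naive downsample fakes that mapping here, since only the byte counts matter for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)  # "photo"

# "Encode": fake the 512x512x3 -> 64x64x4 latent mapping with an
# 8x spatial downsample, then quantize to one byte per value.
latent = image[::8, ::8, :].mean(axis=2, keepdims=True)
latent = np.repeat(latent, 4, axis=2).astype(np.uint8)

print(image.nbytes // latent.nbytes)  # 48x fewer bytes than raw pixels
```

The decoder side is where the model earns its keep: a real VAE plus diffusion denoising reconstructs plausible detail from the latent, which is why the artifacts look so different from JPEG's block-based ones.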


More intelligent video codecs are definitely on their way. Current codecs have difficulty with things like white noise, gravel, water splashes and the like, because of the rapidly changing high-frequency content which does not compress well under DCT and Fourier-based algorithms. But a human doesn't care if this detail is accurate at the pixel level, as long as the texture appears realistic.

A future codec might encode this more efficiently by looking at the higher-level patterns, and representing something more like "a frame full of flowing water using scale S and colours C1, C2 and C3" which the decoder can use to recreate the detail, even if it doesn't match the actual pixels in the source footage.
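A toy sketch of that encode/decode split (all names here are illustrative, not from any real codec): the encoder ships a few statistics instead of pixels, and the decoder synthesizes a statistically similar, not pixel-accurate, texture.

```python
import numpy as np

def encode_region(pixels):
    """Summarize a noisy region as a handful of statistics."""
    return {
        "kind": "noise_texture",
        "mean": pixels.mean(axis=(0, 1)),  # average colour per channel
        "std": pixels.std(),               # roughness / grain scale
        "shape": pixels.shape,
    }

def decode_region(params, seed=0):
    """Resynthesize a texture with matching statistics."""
    rng = np.random.default_rng(seed)
    out = rng.normal(0.0, params["std"], params["shape"])
    return out + params["mean"]

rng = np.random.default_rng(1)
water = rng.normal(0.5, 0.1, (64, 64, 3))  # noisy source region
params = encode_region(water)              # a few floats, not 12288 pixels
recon = decode_region(params)

# Not pixel-accurate, but statistically close to the source:
print(abs(recon.mean() - water.mean()) < 0.01)
```

The individual pixels of `recon` differ from `water`, which is exactly the tradeoff described above: the texture looks right even though no pixel matches the source footage.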

