The Dark Mod Forums

AI with more surroundings based hearing


V-Man339

After sneaking past generators and thunderstorms where the AI could hear my footsteps just as well as in a quiet indoor environment, I think it's safe to say more attention should be given to AI whose hearing gets worse or better depending on the noise generated by the environment.

 

Splinter Cell Chaos Theory is a great example of a game that had this down beautifully. I can provide examples if need be, both of Dark Mod levels that could use this touch and of levels in other games that handle it wonderfully.

I like to record difficult stealth games, and right now you wonderful people are the only ones delivering on that front.

Click here for the crappy channel where that happens.


This was something we had planned to do from the start, but it proved very difficult to create a generic system that works for all maps.


There is a spot in the code that says something like, "Put environmental noise impact here." But there are no clues as to how to represent that noise.

 

At this point, AFAIK, it's not possible to retrofit environmental noise so existing maps can take advantage of it. Something new would need to be added to maps to identify noisy areas. And machinery that can be turned off needs to be dealt with as well.

 

And the problem becomes especially nasty if the noise is thunder, which is stochastic.

 

I wasn't in the original design discussions, but I imagine these types of considerations helped to torpedo the idea at the time.


Is it possible to tell if an AI is inside the volume of a speaker? If so, maybe the hearing acuity of the AI could be multiplied by a factor that includes the volume setting of the speaker? This would require "only" modifying the AI and would use spawnargs that are already in the game.


The only sounds an AI can hear are "propagated sounds". These are defined by a list. The AI can't hear anything not on the list.

 

The code works like this:

 

1 - If the AI is beyond the radius of the sound, quit.

 

2 - Else propagate the sound from the speaker through the surrounding visportals, obeying occlusion settings (open/closed doors, mapper-defined occlusions) until you either reach the AI or run out of volume. If the latter, quit.

 

3 - Now that you've reached the AI, is there enough volume left for it to "hear" the sound? If not, quit.

 

4 - The AI will react to the sound depending on what it is.
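As a rough illustration, the four steps might be sketched like this (a toy model: the struct, the function names, the units, and the linear loss function are my own assumptions for illustration, not the actual TDM code):

```cpp
#include <cassert>

// Illustrative sketch of steps 1-4 above. All names and the simple
// linear loss model are assumptions, not the actual TDM source.
struct Sound {
    float volume;   // emitted volume, in arbitrary loudness units
    float radius;   // maximum distance the sound can travel
};

// Stand-in for step 2: the volume lost between speaker and AI.
// A real implementation walks the visportal graph, applying the
// occlusion of each closed door or mapper-defined occluder on the way.
float propagationLoss(float distance, float occlusionLoss) {
    return distance * 0.1f + occlusionLoss;   // toy loss model
}

// Steps 1-3: returns true if enough volume reaches the AI for it to
// "hear" the sound; step 4 (the reaction) happens elsewhere.
bool aiHears(const Sound& s, float distance, float occlusionLoss,
             float hearingThreshold) {
    if (distance > s.radius)                   // step 1: out of range
        return false;
    float v = s.volume - propagationLoss(distance, occlusionLoss);
    return v >= hearingThreshold;              // step 3: enough left?
}
```

The real code propagates through visportals in step 2 rather than using a single distance; the sketch only shows where each early-quit check sits.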

 

 

What isn't in the code (environmental sounds):

 

1 - An "area" method (simple) ...

 

If the AI is in a noisy area, reduce the volume of a propagated sound when it reaches the AI, by a value contained in the InfoLocation entity for that area. "Area" is defined as a game location (boiler room, waterfall, etc.) Mappers would be responsible for defining the boundaries of the noisy area (w/o having to consider what the speakers in the area are playing).

 

The method would also have to manage the state where the noise disappears (e.g. a generator is shut off).

 

Existing maps can't rely on this method, because their locations (if they have any at all) are already defined, and there's no noise value on the InfoLocation entities.
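A minimal sketch of how such an area value could mask sounds, assuming a hypothetical noise value stored on the InfoLocation entity (none of these names exist in TDM):

```cpp
#include <cassert>

// Sketch of the "area" method: the mapper tags a location with an
// ambient noise value (a hypothetical spawnarg on the InfoLocation
// entity), and any propagated sound reaching an AI standing in that
// location is reduced by it. Purely illustrative, not TDM code.
struct LocationNoise {
    float noiseLevel;   // 0 when quiet, or after machinery is shut off
};

float effectiveVolumeAtAI(float propagatedVolume, const LocationNoise& loc) {
    float v = propagatedVolume - loc.noiseLevel;
    return v > 0.0f ? v : 0.0f;   // a loud area can fully mask the sound
}
```

Handling the generator-off case then just means setting the location's noise value back to zero.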

 

2 - A "propagated sound" method (complex) ...

 

Create a list of noisy sounds. A generator would be on the list; crickets or a crying baby wouldn't. When a propagated sound makes it to an AI, and is loud enough for him to hear it, seek out all currently playing noisy sounds whose radius reaches the AI. Run steps 1-4 above using each found noisy sound to see how much of it reaches the AI. If the propagated sound is louder at the AI than all of the noisy sounds, let the AI process the propagated sound.

 

A sound turning off is easily managed with this method.

 

This method can only consider the volume setting of the sound; it can't know anything about the volume of the actual sound (i.e. is the sound file itself loud or soft).
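A sketch of the comparison at the end of this method, assuming the per-sound propagation from steps 1-4 has already produced a volume at the AI for each noisy sound (all names are illustrative, not actual TDM code):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Sketch of the "propagated sound" method: each currently playing
// noisy sound (generator: yes; crickets: no) is propagated to the AI,
// and the loudest result becomes the masking level. The player's
// sound is only processed if it beats that level at the AI's position.
bool shouldProcessSound(float playerVolumeAtAI,
                        const std::vector<float>& noisyVolumesAtAI) {
    float masking = 0.0f;
    for (float v : noisyVolumesAtAI)
        masking = std::max(masking, v);
    return playerVolumeAtAI > masking;
}
```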

 

3 - Some unknown method ...

 

Create a different method after thinking about it for a while and discussing it in a thread.


If I understand that correctly, my suggestion would require a couple of preceding steps. First: check whether the AI is inside the radius of any speaker and, if so, what volume that speaker has (this, however, has the same problem as the second method you suggested). If the AI is not inside the radius of a speaker, proceed as before. If it is, adjust its hearing acuity according to the volume of the sound (the AI would not need to hear the sound, as the adjustment is based only on the AI's location, without the need for a location_info). This, in turn, (if I understand correctly) influences step 3 in the code by modifying the amount of sound volume required to "hear" the sound.

This method would not require introducing new location_infos and would thus apply more easily to older maps; it would also not require defining new propagating sounds or comparing the volumes of each propagated sound (which in my layman's opinion sounds quite resource-intensive).

 

What I am not sure about is: what happens internally when the sound stops? Is the speaker removed, is its (geometric or sound) volume reduced to 0, or is there some other way to discern a muted speaker from an unmuted one?


First: Check if the AI is inside the radius of any speaker

 

 

There are at least three reasons why this wouldn't work, beyond the fact that there's currently no way to tell if an AI is inside the radius of a speaker.

 

1. An AI can be in the radius of a speaker that he can't hear (he might be inside the radius of a speaker on a floor above him, but there is no path for the sound to reach him).

2. Sounds can play intermittently without the speaker radius changing.

3. Not all speakers should affect AI hearing...a speaker that plays narration, for example.


Ok, then scrap this idea. I have no idea about coding and thus have to make suggestions based on what I think might work.

 

From the two suggestions grayman made, I would prefer the second one, as it is more precise and might also work for older missions.


What I am not sure about is: what happens internally when the sound stops? Is the speaker removed, is its (geometric or sound) volume reduced to 0, or is there some other way to discern a muted speaker from an unmuted one?

 

The speaker isn't removed, unless the mapper has explicitly removed it via other objects or scripts.

 

It's possible to query a speaker to see what its current volume is. However, see below.

 

 

There are at least three reasons why this wouldn't work, beyond the fact that there's currently no way to tell if an AI is inside the radius of a speaker.

 

1. An AI can be in the radius of a speaker that he can't hear (he might be inside the radius of a speaker on a floor above him, but there is no path for the sound to reach him).

2. Sounds can play intermittently without the speaker radius changing.

3. Not all speakers should affect AI hearing...a speaker that plays narration, for example.

 

Right. If we don't use the location entity method (simple), then we need to use the propagated sound method (complex). We can't use a "within radius" method because of all the architecture that might be between the speaker and the AI. The speaker sound would need to be propagated, starting with its current volume and working through the visportals toward the AI, to see what we're left with when we reach the AI.

 

It would be interesting to understand how the cited games simulate environmental sound masking. They might have been designed from the start to handle it. Whether we could backfit that into TDM is a different matter, though.


I think most of the time environmental masking is done simply through scripts for linear story sequences, like the sniping during the bombing scenes in Call of Duty: World at War.

Constructing an entire mechanism based on this is an Eldorado waterfall of work. 99% of people will be happy if EAX can work here.



Could it be something as simple as distinguishing sounds louder than the player's footsteps from sounds that aren't? If the AI is inside an area where the environment is putting out a sound louder than the player's footsteps, the AI won't respond to those footsteps at all.


This would be the second method grayman stated. You need the engine to know which sounds have which volume (and how much the volume is reduced by the time it reaches the AI) and how loud any sounds are that the player makes.

 

Regarding the location based setting: Shouldn't it be possible to create a script that reduces the AI's hearing acuity on entering a location and returns it to its original value upon leaving? When this is handled with a script, the script itself can be turned on/off together with the sound.


Regarding the location based setting: Shouldn't it be possible to create a script that reduces the AI's hearing acuity on entering a location and returns it to its original value upon leaving? When this is handled with a script, the script itself can be turned on/off together with the sound.

Do we have call_on_* location scripts that work for AI, not just the player? (If we do, I might be able to avoid this problem with triggers...)

 

To handle cases like turning a machine on or off, you'd additionally need the script to modify the settings of AI already inside the location in question. (I don't know whether there's a more elegant way of doing that by script than iterating through entities to find every AI and checking its location. Edit: maybe the entry script could store their names in some entity's spawnargs - in this case it doesn't much matter if they get killed or knocked out and never removed - and the exit script could remove them.) To make it really surprise-proof for mappers, you'd need to handle cases like AI being spawned in or teleported.
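A sketch of that bookkeeping idea: names stored on entry, removed on exit, with a machine toggle touching only the recorded occupants. This is a hypothetical design, not an existing TDM script API:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <string>

// Hypothetical bookkeeping for a noisy area: the entry script records
// each AI's name, the exit script removes it, and toggling the machine
// only needs to update the recorded occupants. Storing names (rather
// than live references) means a killed or KO'd AI that never "leaves"
// does no real harm, as the thread suggests.
struct NoisyArea {
    std::set<std::string> occupants;
    bool machineOn = true;

    void onEnter(const std::string& aiName) { occupants.insert(aiName); }
    void onExit(const std::string& aiName)  { occupants.erase(aiName); }

    // Flip the machine and report how many AI need their hearing updated.
    std::size_t toggleMachine() {
        machineOn = !machineOn;
        return occupants.size();
    }
};
```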

Edited by VanishedOne



You are right, the call_on_entry/exit script calls are limited to the player. But at least the entering/exiting part could maybe be handled with a trigger_multiple that can be turned active/inactive together with the sound. However, if we start with something like that, it will be another workaround to simulate the effect rather than a general solution...


Do we have call_on_* location scripts that work for AI, not just the player? (If we do I might be able to avoid this problem with triggers...)

 

You can't call scripts on AI from the location entities, but the scriptobject used by the AI can check which location they are in and can access the location entity. So it is possible to change the AI's hearing acuity, hearing threshold, or other settings via the location entities by modifying the AI script.
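In outline, the idea might look like this (written in C++ only for illustration; the real change would live in the AI scriptobject, and the acuity factor would be a new, hypothetical per-location spawnarg, not an existing one):

```cpp
#include <cassert>

// Sketch of the scriptobject approach: the AI's own update code looks
// up the location it is in and scales its hearing acuity by a factor
// read from that location entity. All names are hypothetical.
struct LocationEntity {
    float acuityFactor;   // 1.0 = normal hearing, < 1.0 = noisy area
};

float adjustedAcuity(float baseAcuity, const LocationEntity& loc) {
    return baseAcuity * loc.acuityFactor;
}
```

Because the AI re-reads the factor from its current location, turning a machine off would just mean resetting that location's factor to 1.0.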


FM's: Builder Roads, Old Habits, Old Habits Rebuild

Mapping and Scripting: Apples and Peaches

Sculptris Models and Tutorials: Obsttortes Models

My wiki articles: Obstipedia

Texture Blending in DR: DR ASE Blend Exporter


You are right, the call_on_entry/exit script calls are limited to the player. But at least the entering/exiting part could maybe be handled with a trigger_multiple that can be turned active/inactive together with the sound. However, if we start with something like that, it will be another workaround to simulate the effect rather than a general solution...

Also, this will not work on old FMs, unless that's intended so as not to disturb their balance or something...


I think it is impossible (at least without tons of work) to find a way that would work on older missions. So, I think it is better to think of this feature as an addition for future missions.

Plus, it may break existing missions where mappers have added loud machines or similar for the purpose of atmosphere, not thinking of the possibility that they would affect the AI's hearing. I mean, "break" is probably a strong word, but it does affect gameplay.

 

A location-based system is a pretty straightforward approach, especially for city missions and the like, where you would expect outdoor areas to be loud and indoor areas to be quiet, except maybe for those few rooms that contain loud engines or similar.

 

Btw.: on a per-speaker basis I was thinking about using stim/response in the past, so this would be doable, too, without the need to apply changes to the code.


