grayman Posted September 4, 2018

I'm working on this issue, as previously discussed in this thread, and I'd like to get input from mappers. At the moment, it works like this: a new entity, func_listener, gets placed in the map where you want to hear the sounds around it. Activate the entity and it turns on, and you hear sounds as if the player's ear is where the entity is. Activate it again, and it turns off. If one listener is active and another gets activated, sound switches from the first to the second. When finished with all listeners (if there's a string of them), activating the last one returns sound to the player's ear.

The hierarchy of sound is currently:
1. leaning against a door
2. listener entity
3. player's ear

Anything else that's important to people, without making this thing over-complex? Thanks.
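The toggle and priority behaviour described above can be sketched roughly like this (Python, purely illustrative; all names are hypothetical, not the actual TDM engine code):

```python
# Illustrative sketch of the listener hierarchy and toggling described above.
# Names are hypothetical; the real engine code is C++ and differs.

PLAYER_EAR, LISTENER_ENTITY, DOOR_LEAN = "player_ear", "listener", "door_lean"

class ListenerState:
    def __init__(self):
        self.leaning_against_door = False
        self.active_listener = None   # at most one func_listener active at a time

    def activate(self, listener_name):
        """Activating a listener toggles it; activating a second one while
        another is active switches the sound to the new one."""
        if self.active_listener == listener_name:
            self.active_listener = None   # toggled off: back to the player's ear
        else:
            self.active_listener = listener_name

    def current_source(self):
        # Hierarchy: door lean > active func_listener > player's ear.
        if self.leaning_against_door:
            return DOOR_LEAN
        if self.active_listener is not None:
            return LISTENER_ENTITY
        return PLAYER_EAR
```

For example, activating `listener_2` while `listener_1` is active switches straight to `listener_2`, and activating it again returns sound to the player's ear.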
Dragofer Posted September 4, 2018

How about a combined camera-and-ear entity? It's already possible to achieve by binding the func_listener to the camera, but both are often needed in cutscenes.

FM: One Step Too Far | FM: Down by the Riverside | FM: Perilous Refuge Co-FM: The Painter's Wife | Co-FM: Written in Stone Dragofer's Stuff | Dragofer's Scripting | A to Z Scripting Guide | Dark Ambient Music & Sound Repository
peter_spy Posted September 4, 2018

By the way, you can solve the problem with sound and a moving camera. The problem is that the player is moved in relation to the camera when you trigger the cutscene. To solve that, you need to teleport the player to the camera's starting point. This way the camera receives sound correctly.

Misc. assets for TDM
grayman Posted September 4, 2018

Yes, that's what's done in the SAtC FM and the cutscene wiki pages. The new entity removes having to teleport the player, so you don't have to deal with the player being seen by AI during the cutscene.
Obsttorte Posted September 4, 2018

This is great. One question, though: it reads like it would not be possible to let the player hear the sounds around the player entity and around the func_listener entity simultaneously; is that right? If so, could this be added? And how about being able to link a func_listener to a speaker, so that the speaker emits the sound audible from the listener's location? Just asking as you are already at it.

FM's: Builder Roads, Old Habits, Old Habits Rebuild Mapping and Scripting: Apples and Peaches Sculptris Models and Tutorials: Obsttortes Models My wiki articles: Obstipedia Texture Blending in DR: DR ASE Blend Exporter
grayman Posted September 5, 2018

Mixing sounds: this would prolly require a lot of work in the sound code to capture what two locations hear and then blend them together. I haven't researched it, but that's my gut feeling. I'm not sure I want to (a) do the research and (b) make the changes. Perhaps someone else would be willing to tackle that.

Linking listener to speaker: not sure what is meant by this. Placing a listener at a location will pick up all the sounds around it. It isn't necessary for a listener to be bound to a speaker.

Allowing a camera to have hearing: I think of these things like a movie set: the camera records the video and the microphone records the sound. I don't think there's an advantage to extending the camera function to include listener abilities, since you can simply place the listener next to the camera and achieve the same effect. If the camera is moved along a path, a listener can be made to move along the same path.
peter_spy Posted September 5, 2018

IMO it would be easier if the listener was integrated into the camera entity by default, and if camera movement was decoupled from player movement. The camera setup is complicated enough as it is; you have to stack up at least 3 entities now to make it work. Otherwise you still need to teleport the player to the right place when the cutscene ends (otherwise he ends up in the void, or where the camera stops, if you teleported him to the camera's starting location when starting the cutscene). As for AI detecting the player during a cutscene, can't you just set notarget to 1 in the script for the time needed for the camera to move?
Obsttorte Posted September 5, 2018

Quoting grayman:
"1. Mixing sounds: This would prolly require a lot of work in the sound code to capture what two locations hear and then blend them together. I haven't researched it, but that's my gut feeling. I'm not sure I want to a - do the research and b - make the changes. Perhaps someone else would be willing to tackle that. 2. Linking listener to speaker: Not sure what is meant by this. Placing a listener at a location will pick up all the sounds around it. It isn't necessary for a listener to be bound to a speaker."

Yeah, I've thought so. Well, it was just an idea. Although I am not sure whether it is necessary to blend the sounds. I didn't mean to bind the listener and the speaker, but to have a listener at one location transferring the sound to a speaker placed in another location. I am not thinking of cutscenes here, but more of the player being able to overhear the sound of a location he is not in via a device, while still hearing everything in his surroundings normally. Similar to how a camera GUI allows the player to oversee what goes on elsewhere while still seeing his normal surroundings. This partially goes hand in hand with the first proposal, as the player would hear both what he normally hears and the sound of a different area as well.
grayman Posted September 5, 2018

I'm going to see if this works: if you want a camera to also act like a microphone, you need to tell the camera (or the entity the camera gets its view from) to turn on its microphone. Switching to a camera's view doesn't automatically get you the sound at the camera. Being able to toggle the camera mic on and off allows the option of hearing or not hearing the sound at the camera.

That sound right?
peter_spy Posted September 5, 2018

Yup, if this was a spawnarg for the current camera entity, that would be great.
grayman Posted September 5, 2018

Oops, I might have spoken too soon. This won't work if the cutscene setup is a camera that's some distance from the center of the action. The mic needs to be at the center of the action, which means it has to be a separate entity from the camera.

Which takes me back to using a separate entity as the Listener. If the mapper wants the Listener to be at the camera, or to follow the camera if it moves, then the mapper would bind the Listener to the same mover that the camera is bound to, or bind the Listener to the camera itself. Using a separate Listener entity still seems to be the solution that covers the most situations.

Continue discussing!
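For instance, binding a listener to a camera might look like this in a .map file ("bind" is the standard idTech4 spawnarg; the entity and key names here are illustrative, so check the actual entityDef):

```
// Hypothetical .map fragment: a func_listener bound to a camera entity
// so the listener follows the camera along its path.
{
"classname" "func_listener"
"name" "listener_1"
"bind" "camera_1"
}
```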
RPGista Posted September 5, 2018

Definitely would be better to have separate entities. You can be doing a panoramic shot while the microphone stays firmly centered on the action/conversation.

To allow for the "echo chamber" effect mentioned, maybe we could have the microphone capture sounds and send them to a speaker as well, not only to the player. This way you can walk close to that vent or "phone tube" end and overhear, the closer you get, whatever is being echoed there in real time (while keeping your normal hearing of your surroundings; you wouldn't need to "use" the vent or pipe in order to listen to it, just approach it). This would in theory allow cries for help (alerts) to be carried from one part of the map to another, and to activate whatever AI happens to be around the end of the tube that's close enough to hear it. For this to work during gameplay, however, you would need to add some kind of filter arg to the speaker, as it would have to sound distorted and muffled; it shouldn't sound crystal clear.
grayman Posted September 6, 2018

Regarding hearing what's around you PLUS what's arriving from a remote Listener: the current design allows one Listener at a time. Either:
1. the player's ear, OR
2. a location on the other side of a door the player is leaning into, OR
3. a remote location defined by a new Listener entity.

Each sound the player can hear goes through these steps:
1. Create a sound emitter.
2. Using the sound's data (volume, min/max distance), send out a waveform to see if that waveform reaches THE listener (whichever is the current listener from the list above).
3. Do that for each sound being emitted in a frame.
4. Combine the results for each emitter into an audible sound played on the hardware channel(s).

To allow the player to hear sounds around him while also hearing sounds from a remote Listener, two emitters would need to be created for each emitting sound. One is given the location of the player's ear and the other is given the location of the Listener. Then each emitter would follow steps 1 through 4 above to determine its contribution to what the player hears.

So having two listening locations active in the same frame (whereas now we have only one) will require sending out 2 waveforms for each sound. This may or may not have an impact on performance; that would have to be determined by testing. I think the amount of code that needs to be added is very small, but the impact of running that extra code could affect performance.

Is there a really strong desire to hear what's going on around the player AND simultaneously hear what's going on at a remote location?
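The per-sound steps and the proposed two-listener mix can be sketched like this (Python, purely illustrative; the linear distance falloff and all names are assumptions, not the engine's actual propagation model):

```python
import math

def contribution(emitter_pos, listener_pos, volume, max_distance):
    """Step 2: does the waveform reach this listener, and at what level?
    Linear falloff is an assumption; the engine's model differs."""
    d = math.dist(emitter_pos, listener_pos)
    if d >= max_distance:
        return 0.0
    return volume * (1.0 - d / max_distance)

def mix_frame(emitters, listener_positions):
    """Steps 3-4, extended to several listeners: each emitting sound sends
    one waveform per listener, and all contributions are summed into the
    single audible mix for the frame."""
    total = 0.0
    for (pos, volume, max_distance) in emitters:
        for lp in listener_positions:
            total += contribution(pos, lp, volume, max_distance)
    return total
```

With one listener position the inner loop runs once per sound; adding a second (remote) listener doubles the waveform count per sound, which is the potential performance cost discussed above.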
Obsttorte Posted September 7, 2018

I would say yes. This would allow the feature to be used for more than just removing the necessity to teleport the player for cutscenes. You wrote yourself that the amount of code that needs to be added is probably very small, so it should be worth the effort.
Destined Posted September 7, 2018

I think it could be very useful. You could introduce vents through which the player can gather additional information (conversations, footsteps, etc.), you could introduce speaking tubes like on ships (where the captain speaks to the machine room; I don't know the word, though), you could introduce security cameras with additional sounds, and many more things. As I am writing this, I notice that it is mostly ambience, and I am not sure if it is worth the possible performance hit, but I am quite sure that people would find further uses for it.
grayman Posted September 7, 2018

I've committed the new idListener entity to SVN, along with new Windows binaries.

To test it, add a func_listener to your map. Activate it to let the player hear what it hears. Activate it again to turn it off. You can activate a series of these things, and the sound will move from one to the next. To return to the normal "player's ear" sounds, just activate the final listener in the sequence once more.

None of this includes the player hearing what's around him while he hears the remote listeners. That is a more complex problem, and one I'm looking into now.
RPGista Posted September 7, 2018

That's awesome. Makes me want to play with cutscenes.
Obsttorte Posted September 7, 2018

Nice. Thanks for the effort.
grayman Posted September 8, 2018

Significant progress on the problem of letting the player hear sounds around him while listening to a remote Listener: all the changes I've put in have gotten me to the point where I now can't hear a farkin' thing. Nichts. Nada. Zilch.
Dragofer Posted September 8, 2018

Excellent to hear of this progress. Maybe you could even enhance our current door eavesdropping by letting one ear hear what's around the player and the other ear what's on the other side of the door? The door-crossing sounds may need some muffling though, both for realism and to give the player an additional clue to figure out which side of the door a sound is coming from, i.e. an approaching guard. Maybe reduce sound_loss to one third?
grayman Posted September 8, 2018

Yes, eavesdropping is considered to be a Listener. The goal of eavesdropping is to make sounds on the other side of a door louder; they skip the door's occlusion. If I get it right, these sounds will reach the shifted ear (loud) AND the normal ear (occluded), and will be mixed into a single sound.
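A naive sketch of that two-path mix (illustrative only; the dB-to-gain conversion is standard, but treating the two ear paths this way is an assumption about the design, not the engine's actual occlusion code):

```python
def eavesdrop_mix(source_volume, door_sound_loss_db):
    """A sound on the far side of the door reaches two listener positions:
    the shifted ear at the door, which skips the door's occlusion, and the
    normal ear, which is attenuated by the door's sound_loss (in dB).
    Both contributions would then be combined into one audible sound."""
    shifted_ear = source_volume                                  # no occlusion
    normal_ear = source_volume * 10 ** (-door_sound_loss_db / 20.0)
    return shifted_ear, normal_ear
```

Dragofer's suggestion of reducing sound_loss to one third would simply shrink `door_sound_loss_db` for the eavesdropped path, making the occluded contribution louder.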
peter_spy Posted September 8, 2018

By the way, making two sound sources would create mixing and volume problems, so it's probably better to have just one source available.
grayman Posted September 8, 2018

There is only one source. It travels in one waveform for each listener at the moment, each waveform contributing to the mixed sound, as do all sounds that are playing in a given frame. An improvement would be a single waveform seeking multiple listeners, instead of one listener per waveform. That's a bit more complex, because the current design makes lots of assumptions based on there being only one listener (i.e. when to stop traveling).
grayman Posted September 9, 2018

Can now play sounds on the far side of a door while leaning, and still hear sounds on the near side of the door. Can now play sounds from an activated remote Listener entity, and still hear sounds around the player.

TODO: create an option to turn off the sounds around the player. This is useful in cutscenes where you want to hear what a microphone is picking up in the scene, but you don't want to hear anything from around the spot where the player is standing while the cutscene is playing. This feature would mean you no longer have to teleport the player and hide him in the scene just to hear what's going on.

Next up is transferring a sound from a remote Listener to a speaker, as if they share the same origin. Think of this as having a microphone in a remote location that's picking up sounds, and a speaker in a monitoring room that plays the sounds the microphone is hearing. The ultimate goal of the waveform is the player's ear, and this is like a "tunnel" that the waveform passes through w/o occlusion, regardless of the distance, visportals, or closed doors between the mic and the speaker.

After completing this "tunnel" and the TODO, I think I'm done with these changes. Is there anything I missed?
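The "tunnel" described above can be sketched as follows (illustrative; the names and the fractional-gain model are assumptions, not the actual implementation):

```python
def tunneled_volume(source_to_mic_gain, speaker_to_ear_gain):
    """A waveform picked up by the remote mic passes to the linked speaker
    without occlusion, regardless of distance, visportals, or closed doors
    between them; only the source-to-mic and speaker-to-ear legs attenuate.
    Gains are fractions in [0, 1]; purely illustrative."""
    mic_to_speaker_gain = 1.0   # the "tunnel": lossless by definition
    return source_to_mic_gain * mic_to_speaker_gain * speaker_to_ear_gain
```

So a sound that reaches the mic at 80% volume and whose speaker is heard at 50% would arrive at the player's ear at 40%, no matter how far apart the mic and speaker rooms are.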
grayman Posted September 9, 2018

Stretch Goal: allow EFX reverb on remote sounds the player can hear, either from Listeners or from leaning against a door. For example, if the player is in a wooden room, but leans into a door to a stone room, the reverb on the sounds on the far side of the door should be for a stone room, not the wooden room the player is standing in. Check the EFX code to see if it uses the player origin or the player's ear.