I am trying to implement Resonance Audio via Wwise for a game in Unity, but I have a problem I cannot solve for the life of me.
The problem is that the zones where I can hear the room reverb don't correspond to the room boundaries I've set. I've uploaded a video to illustrate the problem. Each room in the game has a WwiseResonanceAudioRoom component attached that fits the room, with no change in scale. But in each of the three rooms, I can only hear the room reverb in some specific part of the room. Reverb from some more distant objects can be heard from parts of the room where reverb from other objects can't. I've added a debugging box to the game view which outputs the exact data that the ResonanceAudio.cs script is sending to Wwise. In Wwise, all of the reverbed sounds are sent to a "RoomEffectsMasterBus" with the Resonance Audio mixer plugin on it, as well as to an ambisonic aux bus (which I turned the volume down on for the video). The regular (non-reverb) sound works fine, and I'm using Wwise Rooms to separate the room sounds.
How do I get Resonance Audio to output the room reverb throughout the entire room? I haven't touched the Resonance Audio scripts (other than adding the debug text output). Any help would be greatly appreciated!
I'm using:
- Wwise 2019.2.0.7216
- Resonance Audio SDK 2019.2.1.91 from the Wwise Launcher
- Unity 2019.2.17f1
- The ResonanceAudioForWWise 1.2.1 Unity-Wwise wrapper scripts
Here's the video showing the issue:
https://youtu.be/0Y7GXG69IZ0
[Screenshot: Room 2 boundaries]
[Screenshot: Room 1 boundaries]
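For reference, here is roughly the kind of bounds check I'd expect to hold everywhere inside a room (a throwaway sketch with my own names; I'm copying the room size by hand from the WwiseResonanceAudioRoom component in the Inspector rather than reading it from the script):

```csharp
using UnityEngine;

// Throwaway debug helper (not part of the Resonance/Wwise scripts):
// logs whether the listener is inside the axis-aligned bounds of a room.
public class RoomBoundsDebug : MonoBehaviour
{
    public Transform listener;              // the audio listener / main camera
    public Transform room;                  // GameObject with WwiseResonanceAudioRoom
    public Vector3 roomSize = Vector3.one;  // copied by hand from the room component

    void Update()
    {
        // Rooms in my scene are unrotated and unscaled, so a simple AABB check is enough.
        var bounds = new Bounds(room.position, roomSize);
        Debug.Log($"Inside room bounds: {bounds.Contains(listener.position)}");
    }
}
```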
I am using a prefab with an audio source attached to it, and the source is used only when you click on the prefab to play a short click sound. There is a scene where I use this prefab ~50 times.
There is no problem at all, it works great, but I was just wondering: is it bad practice to have so many prefab instances, each one using its own audio source?
Thank you.
It depends on the use case, but in most cases you can't really avoid it (using more than one audio source). If you look at the Inspector of the Audio Source component, you see a field for referencing the audio clip. So even if you have 50 Audio Source components, there is still only one audio file involved (assuming you only want to play this single sound). The point of having multiple audio sources is to get a "physically realistic" feeling: as in real life, if you are out of range of an audio source you won't hear it.
For example, if you have a game with around 50 enemies in the current scene, it's more or less necessary to attach an Audio Source component to each of them, because you only want to hear the enemies that are within range.
If you have just one central Audio Source, it has to play everything, and in most cases that creates more work than benefit. A static game like a card game can work very well with this approach, though, with only one GameObject holding an Audio Source component. If you have more than one sound effect, you then have to change the referenced AudioClip programmatically every time you want to play a sound that isn't the currently assigned one, as in the sketch below.
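For example, the central-source variant boils down to something like this (a minimal sketch; the class and method names are just illustrative):

```csharp
using UnityEngine;

// One shared Audio Source for the whole (mostly static) game; the referenced
// AudioClip is swapped programmatically before each play.
public class CentralAudioPlayer : MonoBehaviour
{
    public AudioSource source;   // the single Audio Source in the scene

    public void Play(AudioClip clip)
    {
        if (source.clip != clip)
            source.clip = clip;  // change the referenced clip when needed
        source.Play();           // note: this cuts off whatever was playing before
    }
}
```

If you need short sounds to overlap instead of cutting each other off, AudioSource.PlayOneShot(clip) does that without touching source.clip at all.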
So basically it's not really bad practice, because in most cases it is more or less expected that you have more than one audio source.
I'm wondering what other solutions people are using to control volume and activation of various Resonance Audio Sources in Unity.
I'm working on a mobile VR experience in Unity with interactable elements (audio loops until the player triggers the next step) and linear movement of the player between different-sounding spaces in one scene.
I've resorted to using the Animation features of the Timeline to turn Audio Sources on and off and to set volumes, since Snapshots and other audio controls are ignored by the Resonance mixer.
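For context, the per-source control I'm keying in the Timeline is really just the following (a rough scripted sketch, my own names):

```csharp
using System.Collections;
using UnityEngine;

// Scripted equivalent of what I'm keying in the Timeline: enable/disable a
// source and fade its volume directly, since mixer snapshots don't seem to be
// honoured by the Resonance mixer.
public class ResonanceSourceFader : MonoBehaviour
{
    public AudioSource source;   // the (Resonance-spatialized) source to control

    public void Activate(float targetVolume, float fadeSeconds)
    {
        source.enabled = true;
        if (!source.isPlaying) source.Play();   // restart the loop if needed
        StopAllCoroutines();
        StartCoroutine(Fade(targetVolume, fadeSeconds));
    }

    public void Deactivate(float fadeSeconds)
    {
        StopAllCoroutines();
        StartCoroutine(Fade(0f, fadeSeconds, disableWhenDone: true));
    }

    IEnumerator Fade(float target, float seconds, bool disableWhenDone = false)
    {
        float start = source.volume;
        for (float t = 0f; t < seconds; t += Time.deltaTime)
        {
            source.volume = Mathf.Lerp(start, target, t / seconds);
            yield return null;
        }
        source.volume = target;
        if (disableWhenDone) source.enabled = false;
    }
}
```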
I'd love to hear how other Resonance users are controlling their audio!
Thanks,
Anna
I've learned how to capture screenshots from a Unity game and am happily getting HD ones. With my still-poor knowledge of animation, though, I can't work out how many screenshots, and at what interval, I need to get from Unity in order to later render a reasonable-quality MPEG video in Blender, say for a 3-minute clip.
Any link to a tutorial is very welcome. I did find ready-made extensions for capturing in-game video from Unity, but I want to DIY to learn something!
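For context, the capture loop I have in mind is roughly this (my own sketch; 30 fps is just my guess at a reasonable rate, which for a 3-minute clip works out to 3 × 60 × 30 = 5400 frames that Blender can import as an image sequence):

```csharp
using UnityEngine;

// Fixed-rate frame capture: Time.captureFramerate makes the game advance as if
// it ran at exactly `frameRate`, so the saved frames are evenly spaced even
// when writing PNGs is slow.
public class FrameCapture : MonoBehaviour
{
    public int frameRate = 30;   // capture rate (frames per second of the final video)
    int frameIndex;

    void Start()
    {
        Time.captureFramerate = frameRate;
    }

    void Update()
    {
        ScreenCapture.CaptureScreenshot($"frame_{frameIndex:D5}.png");
        frameIndex++;
    }
}
```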
Can someone please give me some insight into what the pipeline looks like for implementing 360 video and sound in VR? I know a thing or two about video editing, sound and Unity3D, but I would like to know more about how to do all of these things for VR. Let's say I want to shoot a 360 video and then put it in VR, but I also want to incorporate the captured sound. I would also like to have some interactive spots on it.
Edit: If I want to put interactive spots on it, does that mean I need separate 360 cameras shooting from the spots where I want the interaction to happen, or will a single video shot with one camera allow for that?
Thank you
First you have to choose a target platform, e.g. iOS, Android, etc. Then you have to find a video player which supports 360 video, like AVPROMEDIAPLAYER from the Unity Asset Store.
For interactive spots in the video, you have to build some kind of local database, e.g. an XML file storing the position of each trigger and the time at which it should perform its activity. Hope this helps you.
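For example, the XML "database" can be as small as a list of time-stamped spots that you deserialize at startup (a rough sketch; the fields and names are just illustrative, not from any particular plugin):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Each hotspot stores when it is active in the 360 video, where it sits on
// the sphere, and what to do when the user interacts with it.
public class Hotspot
{
    public float startTime;     // video time (seconds) at which the spot appears
    public float endTime;       // video time at which it disappears
    public float yawDegrees;    // horizontal angle on the 360 sphere
    public float pitchDegrees;  // vertical angle on the 360 sphere
    public string action;       // what to trigger when the user gazes at / clicks it
}

[XmlRoot("hotspots")]
public class HotspotList
{
    [XmlElement("hotspot")]
    public List<Hotspot> items = new List<Hotspot>();

    public static HotspotList Load(string path)
    {
        var serializer = new XmlSerializer(typeof(HotspotList));
        using (var stream = File.OpenRead(path))
            return (HotspotList)serializer.Deserialize(stream);
    }
}
```

At runtime you would compare the video player's current time against each hotspot's startTime/endTime to decide whether it is currently interactable.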
I am going to build an FPS video game. While developing my game, this question came to mind: every video game developer spends a great deal of time and effort making their game's environment more realistic and life-like. So my question is,
Can we use HD or 4K real-world videos as our game's environment? (Like what we see on Google Street View, but with higher quality.)
If we can, how would we program the game engine to do it?
Thank you very much..!
The simple answer to this is NO.
Of course, you can extract textures from the video by capturing frames from it, but that's it. Once you have captured the texture, you still need a way to make a 3D model/mesh you can apply the texture to.
Now, there have been many companies working on video-to-3D-model converters. That technology exists, but it is mostly used for film work. Even then, the 3D models generated from a video are not accurate, and they are not meant to be used in a game, because they end up with so many polygons that they will easily choke your game engine.
Also, doing this in real time is another story. You would need to continuously read a frame from the video, extract a texture from it, generate a mesh for the HQ texture, and clean up/reduce/reconstruct the mesh so that your game engine won't crash or drop frames. You would then have to generate UVs for the mesh so that the extracted image can be applied to it.
Finally, each one of these steps is CPU intensive. Doing them all in series, in real time, will likely make your game unplayable. I have also made this sound easier than it is. What you can do with the video is use it as a reference for modelling your 3D environment in a 3D application. That's it.
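That said, the first step alone (pulling a frame out of a video as a texture) is easy to sketch with Unity's built-in VideoPlayer; everything after it is the hard part (the names below are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Video;

// Grabs the VideoPlayer's current frame into a Texture2D. The VideoPlayer is
// expected to have its Render Mode set to Render Texture with a target assigned.
public class VideoFrameGrabber : MonoBehaviour
{
    public VideoPlayer player;

    public Texture2D GrabCurrentFrame()
    {
        RenderTexture target = player.targetTexture;

        // Copy the frame out of the RenderTexture on the CPU side.
        var previous = RenderTexture.active;
        RenderTexture.active = target;

        var frame = new Texture2D(target.width, target.height, TextureFormat.RGB24, false);
        frame.ReadPixels(new Rect(0, 0, target.width, target.height), 0, 0);
        frame.Apply();

        RenderTexture.active = previous;
        return frame;
    }
}
```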