I'm wondering what other solutions people are using to control volume and activation of various Resonance Audio Sources in Unity.
I'm working on a mobile VR experience in Unity with interactable elements (audio loops until the player triggers the next step) and linear movement of the player between different-sounding spaces in a single scene.
I've resorted to using the Animation features on the Timeline to turn Audio Sources on and off and to set volumes, since Snapshots and other audio controls are ignored by the Resonance Mixer.
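Since the mixer-level controls are ignored, per-source volume is what I end up animating. The script equivalent would be something like this minimal sketch (FadeTo is a hypothetical helper of mine, not part of Resonance):

```csharp
using System.Collections;
using UnityEngine;

// Fades a (Resonance-spatialized) AudioSource's volume over time;
// the same effect as animating the volume field on the Timeline.
public class SourceFader : MonoBehaviour
{
    public AudioSource source;

    public IEnumerator FadeTo(float target, float duration)
    {
        float start = source.volume;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            source.volume = Mathf.Lerp(start, target, t / duration);
            yield return null;
        }
        source.volume = target;
        if (target == 0f) source.enabled = false; // switch the source off once silent
    }
}
```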
I'd love to hear how other Resonance users are controlling their audio!
Thanks,
Anna
I am trying to implement Resonance Audio via Wwise for a game in Unity, but I have a problem I cannot solve for the life of me.
The problem is that the zones where I can hear the room reverb don't correspond to the configured room boundaries. I've uploaded a video to illustrate the problem. Each room in the game has a WwiseResonanceAudioRoom component attached to it, with no change in scale, that fits the room. But in each of the three rooms, I can only hear the room reverb in some specific part of the room. Reverb for some more distant objects can be heard from parts of the room where reverb for others can't. I've added a debugging box to the game view which outputs the exact data that the ResonanceAudio.cs script is sending to Wwise. In Wwise, all of the reverbed sounds are sent to a "RoomEffectsMasterBus" with the Resonance Audio mixer plugin on it, as well as to an ambisonic aux bus (whose volume I turned down for the video). The regular sound works well, and I'm using Wwise Rooms to separate the room sounds.
How the hell do I make Resonance Audio output room reverbs in the entire room? I haven't touched the Resonance Audio scripts (other than now adding the debug text output). Any help would be greatly appreciated!
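For what it's worth, the debugging box is just an overlay comparing the listener's position against the room bounds. A minimal sketch of that check (the BoxCollider stand-in for the room volume and the field names are my own illustrative assumptions, not the actual ResonanceAudio.cs internals):

```csharp
using UnityEngine;

// Debug overlay: shows whether the listener is inside the room's
// world-space bounds, i.e. where room reverb should be audible.
public class RoomReverbDebug : MonoBehaviour
{
    public Transform listener;      // usually the AudioListener's transform
    public BoxCollider roomBounds;  // sized to match the WwiseResonanceAudioRoom

    void OnGUI()
    {
        bool inside = roomBounds.bounds.Contains(listener.position);
        GUI.Label(new Rect(10, 10, 500, 40),
            $"Listener: {listener.position}  inside room: {inside}");
    }
}
```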
I'm using
- Wwise 2019.2.0.7216
- Resonance Audio SDK 2019.2.1.91 from Wwise Launcher
- Unity 2019.2.17f1
- The ResonanceAudioForWWise 1.2.1 Unity-Wwise implementation wrapper scripts for Unity
Here's the video showing the issue:
https://youtu.be/0Y7GXG69IZ0
Screenshots of the Room 2 and Room 1 boundaries.
I am using a prefab with an audio source attached to it; the source is used only when you click on the prefab, to play a short click sound. There is a scene in which I use this prefab ~50 times.
There is no problem at all, it works great, but I was just wondering: is it bad practice to have so many prefab instances, each one using its own audio source?
Thank you.
It depends on the use case, but in most cases you can't really avoid using more than one audio source. If you look at the inspector of the Audio Source component, you see a field for referencing the audio clip. So even if you have 50 Audio Source components, there is still only one audio file (in the case that you only want to play this single sound). The intention of this approach with multiple audio sources is to get a "physically realistic" feeling: as in real life, if you are out of an audio source's range, you won't hear it.
For example, if you have a game with around 50 enemies in the current scene, it's more or less necessary to attach an Audio Source component to each of them, because you want to hear only the enemies that are within your range.
If you have just one central Audio Source, it has to play everything, and in most cases that gives you more work than benefit. But a static game like a card game can work very well with this approach, so that you have only one GameObject holding an Audio Source component. If you have more than one sound effect, you have to change the referenced AudioClip programmatically every time you want to play a sound other than the currently selected one.
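A minimal sketch of that central-source pattern (the component and clip names are illustrative; assign them in the Inspector):

```csharp
using UnityEngine;

// One central AudioSource shared by the whole (static) scene;
// each effect is played as a one-shot on the same source.
public class CentralAudio : MonoBehaviour
{
    public AudioSource source;  // the single shared Audio Source
    public AudioClip click;
    public AudioClip cardFlip;

    public void PlayClick()    { source.PlayOneShot(click); }
    public void PlayCardFlip() { source.PlayOneShot(cardFlip); }
}
```

Using PlayOneShot here even sidesteps reassigning source.clip, and it lets short effects overlap on the one source.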
So basically, it's not really bad practice, because in most cases it is more or less intended that you have more than one audio source.
The problem is that I have a character controller for the player, with a camera, and that camera has an Audio Listener.
But I also have another camera, the Main Camera, which also has an Audio Listener.
The Main Camera is using a Cinemachine Brain and virtual cameras.
If I disable the Audio Listener on the Main Camera, the character in my cutscene will walk to a door, but when the door opens there will be no sound of it opening.
And if I disable the player controller camera's Audio Listener, then when I move my player through a door, there will be no sound as the player enters it.
And I need both to work: while the cutscene character is walking through a door as it opens, the player can still walk around.
Screenshot of the player controller camera and the audio listener:
And this is the Main Camera Audio Listener screenshot:
So now, when running the game, the character medea_m_arrebola walks by animation through a door, and there is a sound of the door opening and closing.
This is part of a cutscene that works in the background; I mean the cutscene camera is not enabled yet, but you can hear the audio.
Later I will switch between the cameras to show parts of the cutscene.
But the FPSController (Player) is also active, and the player can move around; when he walks through a door, though, the door will open with no sound.
And if I enable both Audio Listeners, I get a warning message in the editor console saying that more than one audio listener is enabled, etc.
This sounds like a design issue to me. Unity can only handle one AudioListener at a time. You basically have to construct your cutscene system to work with what Unity offers, or find some kind of workaround to fit your specific case.
You could try to enable/disable your AudioListeners on the fly, or maybe use AudioSources around your player dedicated to directional audio input while in a cutscene (like a surround-sound setup made of empty objects). That way you could simulate two AudioListeners. The best case would be to rework your system to use one AudioListener for both inputs.
Maybe try a workaround first, but if it does not work 100% as intended, do the rework. It's worth it in the long run.
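A minimal sketch of the enable/disable workaround (the component name is illustrative; assign both listeners in the Inspector and call the matching method whenever you switch cameras):

```csharp
using UnityEngine;

// Keeps exactly one AudioListener active at a time,
// avoiding Unity's multiple-listener warning.
public class ListenerSwitcher : MonoBehaviour
{
    public AudioListener playerListener;    // on the FPS controller camera
    public AudioListener cutsceneListener;  // on the Cinemachine Main Camera

    public void UsePlayerListener()
    {
        cutsceneListener.enabled = false;
        playerListener.enabled = true;
    }

    public void UseCutsceneListener()
    {
        playerListener.enabled = false;
        cutsceneListener.enabled = true;
    }
}
```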
I have a 2D video and I would like to move it into 360. I'm aware of the differences and so on, but I would like to have a 360 video of something like a cinema room where, on the main screen, that 2D video is displayed.
Is there any suggestion, automatic tool for this, or anything else that could be useful? I'm open to using Unity3D, Blender, or any video-editing software.
The usual approach is to use a computer-generated world or a 360 image for the cinema room rather than a 360 video, and to display your 'flat' 2D video on a 'screen' or wall in that generated room.
You basically render the video onto a texture that you have set up in Unity. This is supported as standard by Google VR for Unity (https://developers.google.com/vr/develop/unity/video-overview):
Streaming Video Support
The GVR SDK for Unity includes an additional GVR video plugin that supports streaming flat and 360° videos in both mono and stereo formats by using the ExoPlayer library to handle decoding and rendering of video, audio, and related streams. ExoPlayer provides all the standard playback framework for both videos embedded in your application and streaming videos, including adaptive streaming support, such as DASH and HLS.
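Even without GVR, Unity's built-in UnityEngine.Video.VideoPlayer can do the flat-video-on-a-screen part. A minimal sketch (the clip and screen references are assumptions you'd assign in the Inspector):

```csharp
using UnityEngine;
using UnityEngine.Video;

// Plays a flat 2D video on the material of the 'screen' object
// (e.g. a quad placed at the front of the cinema room).
public class CinemaScreen : MonoBehaviour
{
    public VideoClip clip;   // the 2D video to show
    public Renderer screen;  // the quad acting as the cinema screen

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.isLooping = true;
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = screen;
        player.targetMaterialProperty = "_MainTex";
        player.Play();
    }
}
```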
For example the Netflix version, VR Theatre, looks like this - the video plays on the 'screen' in front of the viewer:
If you look on the unity asset store you can find complete home or cinema 360 'images' that you can use in Unity also - for example (at the time of writing): https://assetstore.unity.com/packages/templates/vr-home-cinema-66863
I've learned to capture screenshots from a Unity game and I'm happily getting HD ones. With my still-poor knowledge of animation, I can't work out how many screenshots, and at what interval, to capture from Unity in order to render a reasonable-quality MPEG video later in Blender, say for a 3-minute clip.
Any link to a tutorial is very welcome. I did find ready-made extensions for in-game video capture in Unity, but I want to DIY to learn something!
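For scale: at a typical 30 fps, a 3-minute clip needs 3 × 60 × 30 = 5400 screenshots, one per 1/30 s of game time. A minimal capture sketch using Unity's Time.captureFramerate, which locks the simulation step so every rendered frame is exactly 1/30 s apart no matter how long capturing takes (the output folder is a hypothetical example and must already exist):

```csharp
using UnityEngine;

// Captures one numbered screenshot per frame at a fixed 30 fps timestep,
// ready to be imported into Blender as an image sequence at the same rate.
public class FrameCapture : MonoBehaviour
{
    public int frameRate = 30;  // target video frame rate
    int frameIndex;

    void Start()
    {
        // Game time now advances exactly 1/30 s per rendered frame.
        Time.captureFramerate = frameRate;
    }

    void Update()
    {
        ScreenCapture.CaptureScreenshot($"Captures/frame_{frameIndex:D5}.png");
        frameIndex++;
    }
}
```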