Wwise, Resonance Audio, and Unity Integration: Configuring the Wwise Resonance Audio Plugin

I tried to get a response on GitHub, but with no activity on the issue there, I will ask here.
I have been following the documentation, and I am stuck: after importing the WwiseResonanceAudioRoom mixer effect onto the bus in Wwise, I do not see anything in its properties. I am not sure if I am supposed to. Right after that part, the documentation says, "Note: If room properties are not configured, the room effects bus outputs silence." I wondered whether this was the case, and yes, it outputs silence. I even swapped in a different effect to see if the bus would pass audio, and it does, just not with the Room effect, so at least I know my routing is correct.
So now this leads up to my actual question: how do you configure the plugin? I know there is some documentation, but there is not one tutorial or step-by-step guide for us non-code-savvy audio folk. I have spent the better part of my week trying to figure this out because, frankly, for the time being this is the only audio spatialization plugin that features audio occlusion, obstruction, and propagation within Wwise.
Any help is appreciated,
Thank you.

I had Room Effects with Resonance Audio working in another project last year, under its former name, GVR. There are no properties on the Room Effect itself. These effect settings and properties reside in the Unity Resonance prefabs.
I presume you've followed the tutorial on the Room Effect here:
https://developers.google.com/resonance-audio/develop/wwise/getting-started
Then what you need to do is add the Room Effect assets to your Unity project. The assets are found in the Resonance Audio zip package, next to the authoring and SDK files. Unzip the Unity content into your project, add a Room Effect to your scene, and you should be able to see the properties in the Inspector of the room object.

Figured it out, thanks to Egil Sandfeld, here: https://github.com/resonance-audio/resonance-audio-wwise-sdk/issues/2#issuecomment-367225550
To elaborate: I already had the SDKs implemented, but I went ahead and replaced them anyway, and it worked!


FPS Test Issue with Onvif Test Tool v21.12 rev.7225

When I test with v21.12 rev.7225,
Test items with issue:
RTSS-1-1-46-v21.12 VIDEO ENCODER CONFIGURATION - JPEG RESOLUTION
RTSS-1-1-48-v21.12 VIDEO ENCODER CONFIGURATION - H.264 RESOLUTION
Test camera:
AXIS P1375 with f/w 10.8.1, or 10.9.4 (It claims to pass v21.12)
IndigoVision BX620-4MP with f/w 2.2.0
Error Message:
"Only 0 frames captured (0.0 FPS)"
Note that we've tried increasing the Operation Delay, yet the test items still fail.
What confuses us:
We've tested numerous cameras with both OTT v21.06 and OTT v21.12.
With OTT v21.06, we passed both test items without issues.
Yet with OTT v21.12, the failure rate is around 80%, having tested several times with each camera.
All setups are exactly the same, except the OTT version.
It also seems odd to us that tests with OTT v21.12 do not fail every time.
We'd like to know if there have been any changes to the test procedure or criteria for these two test items that would help us solve the issue.
I also encountered the same problem and found that in the new version, the UDP port used to obtain the stream is fixed.
So nice
We had the same problem and were looking into it:
https://github.com/onvif/testspecs/discussions/21
As the link above shows, this is an ONVIF Test Tool bug.
Maybe it will be fixed in v22.06! Hope this helps!
And check the ONVIF_Device_Test_Tool_Errata v3.50 file!
You can get that file at onvif.org
Good news:
v22.06 already fixes this problem, as documented in ONVIF_Device_Test_Tool_Errata v3.51.pdf,
Erratum number: 60
Sharing with everyone.
BR,
You need to remember two things about ONVIF conformance (neither of which is related to programming):
Each ONVIF compliant camera published on https://www.onvif.org/conformant-products/ has an Interface Guide that explains if special settings are required before starting the test.
To be ONVIF conformant, cameras must have one firmware version tested with the test tool that was valid at the time of certification. There is no requirement to re-certify the firmware once newer versions of the test tool are released. Therefore it may happen that cameras pass with v21.06 and not with v21.12.

HoloLens 2 Research Mode with Unreal - How?

I'm new here, so feel free to give tips where needed. I am running into trouble using the Unreal engine combined with the HoloLens 2.
I would like to access the special black/white cameras of the HoloLens, for tracking purposes. These are normally not accessible. However, they can be activated by using the “perceptionSensorsExperimental” capability. This should be possible, since it also works with Unity: https://github.com/doughtmw/HoloLensForCV-Unity
I have tried to add the capability in the Unreal project settings: in "Config\HoloLens\HoloLensEngine.ini" I added "+RescapCapabilityList=perceptionSensorsExperimental". The project still builds as expected, but I noticed that it doesn't matter what I add here; even something random like "+abcd=efgh" doesn't break the build.
However, if I add “+CapabilityList=perceptionSensorsExperimental”, I get “Packaging (HoloLens): ERROR: The 'Name' attribute is invalid - The value 'perceptionSensorsExperimental' is invalid according to its datatype 'http://schemas.microsoft.com/appx/manifest/types:ST_Capability_Foundation' - The Enumeration constraint failed.”. I conclude: 1.) I’m making the changes in the right file. 2.) The right scheme needs to be configured in order for “+RescapCapabilityList=perceptionSensorsExperimental” to work as expected.
My question is how do I add the right schema to my Unreal project? (like in the Unity example referenced above, which uses “http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities”), I cannot find any example and I cannot find any proper place to put it. Not in the settings, not in the xml/ini files. Clearly, I am missing something.
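For reference, the Unity project linked above ends up declaring the capability in the generated Package.appxmanifest roughly like this (the exact attributes and placement are my reconstruction of that example, not Unreal output; getting Unreal to emit the equivalent is precisely the open question):

```xml
<Package
  xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
  xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
  IgnorableNamespaces="rescap">
  <!-- ... other manifest elements ... -->
  <Capabilities>
    <!-- restricted capability: needs the rescap namespace declared above -->
    <rescap:Capability Name="perceptionSensorsExperimental" />
  </Capabilities>
</Package>
```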
Any thoughts are much appreciated!
Update: we released the HoloLens-ResearchMode-Unreal plugin.

Why LibGDX sound pitch doesn't work with GWT?

I use an instance of the Sound class. To get the desired pitch, I use this method:
public long loop (float volume, float pitch, float pan);
It works as expected in the desktop build, but with GWT the pitch has no effect.
My gwtVersion is 2.8.2, gdxVersion is 1.9.10 and I use de.richsource.gradle.plugins:gwt-gradle-plugin:0.6.
I have been stuck on this problem for a couple of days now and would be very thankful for any input.
"Why LibGDX sound pitch doesn't work with GWT?"
Because the official libGDX GWT backend uses SoundManager for playing sounds within the browser, and SoundManager does not support pitching.
To get it working, you must switch to another solution: the WebAudio API. Luckily for you, others have already implemented it for libGDX, but it has not been merged into the repository for unknown reasons. See PR 5659.
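To illustrate the difference, here is a minimal sketch of what a libGDX-style loop(volume, pitch, pan) call maps to in raw WebAudio (the function name is my own; the actual backend in the PR differs). The pitch parameter becomes the source's playbackRate, which is exactly the knob SoundManager never exposed:

```javascript
// Sketch: loop a decoded AudioBuffer with volume, pitch, and pan via WebAudio.
// `ctx` is an AudioContext, `buffer` an AudioBuffer (e.g. from decodeAudioData).
function loopWithPitch(ctx, buffer, volume, pitch, pan) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;
  source.playbackRate.value = pitch; // 1.0 = normal speed; this is what SoundManager lacks
  const gain = ctx.createGain();
  gain.gain.value = volume;          // 0.0 .. 1.0
  const panner = ctx.createStereoPanner();
  panner.pan.value = pan;            // -1 (left) .. 1 (right)
  // connect() returns its argument, so the graph can be chained
  source.connect(gain).connect(panner).connect(ctx.destination);
  source.start();
  return source;
}
```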
As mentioned in the PR, you can switch to my forked GWT backend to get it to work. Simply change your build.gradle file:
implementation 'com.github.MrStahlfelge.gdx-backends:gdx-backend-gwt:1.910.0'
You also need to declare the JitPack repo if you don't have it already.
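Assuming a standard multi-project libGDX Gradle layout (the project names here are the usual defaults, not taken from your setup), the two changes together would look something like:

```groovy
// root build.gradle -- declare JitPack so the forked backend resolves
allprojects {
    repositories {
        mavenCentral()
        maven { url 'https://jitpack.io' }
    }
}

project(':html') {
    dependencies {
        // replaces the stock gdx-backend-gwt dependency
        implementation 'com.github.MrStahlfelge.gdx-backends:gdx-backend-gwt:1.910.0'
    }
}
```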

Can both image targets and object targets be added to a single database in Unity Vuforia?

I am developing an Android app where I have to train my app to recognize two images and four objects. I created a single database on the Vuforia developer site, where I added all the image and object targets, and created the Unity package. Now neither the images nor the objects are being recognized.
Probably the problem is the same for objects and images.
I think you should share some more info about what you're doing, as well as some meaningful code implementing it.
Without that, I would suggest:
verify that the database and trackables are loaded and active at runtime
if so, check in the console that the trackables are being tracked by Vuforia
if so, verify the code enabling your augmentations
Please confirm whether you have run through these steps already and what results you got. I can share some code and further tips once the issue is a little bit more specific.
Regards

Audio Metering levels with AVPlayer

Is there a way to get the audio metering levels from the AVPlayer class?
I know that AVAudioPlayer does this, but it doesn't play streamed HTTP .m3u8 URLs like AVPlayer does.
I do not know, specifically, if they privatized what you're looking for, but I do know the various high-level classes are usually just the tops of the pyramid for the lower level AV items. In any case, I do have good news...
Remember an AVPlayer uses an AVPlayerItem. The AVPlayerItem has assets (and an AVAudioMix). The AVAssets have the properties for modifying audio and video.
Unfortunately, the properties indicate they are "suggestions", not absolutes, but try this:
preferredVolume
If you reference the docs, AVPlayer->AVPlayerItem->AVAsset and voila. It might be what you're after.
Hope it's enough
There is a fork of audioStream (link) that does what you want.
I tried AVPlayer for days, but in the end it turned out it can't do this. Luckily, I found the audioStream fork. The code is four years old but still works well on iOS 7 (change the demo target from iOS 3 to iOS 7).
I've recently found this GitHub project called SCWaveformView that helped me a lot, and I hope someone else out there could benefit from it as well.
Edit:
Actually, the extension suggested by the dev in the comments is quite nice; you can find it here: the ACBAVPlayerExtension.