I use an instance of the Sound class. To get the desired pitch, I use the method:
public long loop (float volume, float pitch, float pan);
It works as expected in the desktop build, but on GWT the pitch has no effect.
My gwtVersion is 2.8.2, gdxVersion is 1.9.10, and I use de.richsource.gradle.plugins:gwt-gradle-plugin:0.6.
I have been stuck on this problem for a couple of days now and would be very thankful for any input.
"Why LibGDX sound pitch doesn't work with GWT?"
Because the official libGDX GWT backend uses SoundManager for playing sounds in the browser, and SoundManager does not support changing the pitch.
To get it working, you have to switch to another solution: the Web Audio API. Luckily, others have already implemented it for libGDX, but it has not been merged into the repository for unknown reasons. See PR 5659.
As described in the PR, you can switch to my forked GWT backend to get it to work. Simply change your build.gradle file:
implementation 'com.github.MrStahlfelge.gdx-backends:gdx-backend-gwt:1.910.0'
You also need to declare the Jitpack repository if you haven't already.
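A minimal build.gradle sketch of both pieces, assuming the default :html module name from the libGDX setup (adjust to your project layout):

allprojects {
    repositories {
        maven { url 'https://jitpack.io' }
    }
}

project(':html') {
    dependencies {
        implementation 'com.github.MrStahlfelge.gdx-backends:gdx-backend-gwt:1.910.0'
    }
}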
I'm new here, so feel free to give tips where needed. I am running into trouble using Unreal Engine combined with the HoloLens 2.
I would like to access the special black/white cameras of the HoloLens, for tracking purposes. These are normally not accessible. However, they can be activated by using the “perceptionSensorsExperimental” capability. This should be possible, since it also works with Unity: https://github.com/doughtmw/HoloLensForCV-Unity
I have tried to add the capability in the Unreal project settings: "Config\HoloLens\HoloLensEngine.ini" -> "+RescapCapabilityList=perceptionSensorsExperimental". The project still builds as expected, but I noticed that it doesn't matter what I add here; even something random like "+abcd=efgh" doesn't break the build.
However, if I add "+CapabilityList=perceptionSensorsExperimental", I get: "Packaging (HoloLens): ERROR: The 'Name' attribute is invalid - The value 'perceptionSensorsExperimental' is invalid according to its datatype 'http://schemas.microsoft.com/appx/manifest/types:ST_Capability_Foundation' - The Enumeration constraint failed." I conclude: 1) I'm making the changes in the right file, and 2) the right schema needs to be configured in order for "+RescapCapabilityList=perceptionSensorsExperimental" to work as expected.
My question is: how do I add the right schema to my Unreal project (like in the Unity example referenced above, which uses "http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities")? I cannot find any example, and I cannot find a proper place to put it: not in the settings, not in the XML/INI files. Clearly, I am missing something.
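For reference, as far as I understand, what the Unity approach ultimately produces in the generated Package.appxmanifest is roughly this (unrelated attributes omitted; the rescap prefix name is just a convention):

<Package
    xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
    xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
    IgnorableNamespaces="rescap">
  <Capabilities>
    <rescap:Capability Name="perceptionSensorsExperimental" />
  </Capabilities>
</Package>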
Any thoughts are much appreciated!
Update: we have released the HoloLens-ResearchMode-Unreal plugin.
I'm trying to implement AudioServicesPlaySystemSound(SystemSoundID(****)), and therefore I need a list of existing IDs for Apple's system sounds. Searching through various posts, I found this on GitHub. I couldn't find a fitting sound in this list for my purpose. Since this repository hasn't been updated since 2013, I'm not sure if it's up to date. I would like to know if there is a list of SystemSounds which is more up to date.
First of all, the list you have found is not published by Apple.
I do not know whether the author researched them themselves or just collected them, but that sort of activity is considered reverse engineering, which is prohibited by the developer agreement.
I couldn't find a fitting sound in this list for my purpose.
You may need to find a sound resource instead of a built-in SystemSoundID, then register it by creating a SystemSoundID for it with AudioServicesCreateSystemSoundID.
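For example, the C-level flow looks roughly like this (the file path is only a placeholder; the same calls are available from Objective-C and Swift):

#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>

// Register a short sound file and play it as a system sound.
// The path is illustrative; in a real app you would resolve it from your bundle.
static void playCustomSound(void)
{
    CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                 CFSTR("/path/to/alert.caf"),
                                                 kCFURLPOSIXPathStyle,
                                                 false);
    SystemSoundID soundID = 0;
    if (AudioServicesCreateSystemSoundID(url, &soundID) == kAudioServicesNoError)
    {
        AudioServicesPlaySystemSound(soundID);  // or AudioServicesPlayAlertSound(soundID)
        // Call AudioServicesDisposeSystemSoundID(soundID) when you no longer need the sound.
    }
    CFRelease(url);
}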
I would like to know if there is a list of SystemSounds which is more up to date.
The latest list of public SystemSoundID constants is here:
Alert Sound Identifiers
Constants:
kSystemSoundID_Vibrate: On the iPhone, use this constant with the AudioServicesPlayAlertSound function to invoke a brief vibration. On the iPod touch, does nothing.
kSystemSoundID_UserPreferredAlert: On the desktop, use this constant with the AudioServicesPlayAlertSound function to play the alert specified in the Sound preference pane.
kSystemSoundID_FlashScreen: On the desktop, use this constant with the AudioServicesPlayAlertSound function to display a flash of light on the screen.
kUserPreferredAlert: Deprecated. Use kSystemSoundID_UserPreferredAlert instead.
(Some of them are for macOS only.)
Using other SystemSoundIDs can be considered using private API.
Some comments from Apple developers in Apple's devforums:
Does this count as a private API?
It would not be appropriate to use undocumented arbitrary values in APIs, so I would recommend you not do that in your submission.
Haptic feedback for force touch?
For a fixed SystemSoundID value to be considered API, it must have a symbolic constant in the headers. Passing in other fixed values is not OK.
I'm developing a game with Unity.
I'm trying to pass all of the VRC (Virtual Reality Check) tests, especially TestResponseToRecenterRequest and TestAppShouldQuit (link).
My problem is that I have absolutely no idea how to listen for these requests.
Most forums say to use ovr_GetSessionStatus. However, that is a C++ function, not a C# one. Can you point me to a valid way to listen to the OVR session status or, at least, to handle recenter and quit requests?
Cordially
You have probably already fixed this, but I found that the ovr_GetSessionStatus flags are exposed as static properties on the OVRPlugin class (for example OVRPlugin.shouldRecenter and OVRPlugin.shouldQuit), so you can check them in an Update().
if (OVRPlugin.shouldRecenter)
{
    OVRManager.display.RecenterPose();
}
I have tried to get a response on GitHub, but with no activity on the issue there, I will ask here.
I have been following the documentation, and I am stuck: I have imported the WwiseResonanceAudioRoom mixer effect on the bus in Wwise, and I do not see anything in its properties. I am not sure if I am supposed to? Right after that part, the documentation says: "Note: If room properties are not configured, the room effects bus outputs silence." I was wondering if this was the case, and yes, it outputs silence. I even switched the effect to see if it would just pass audio, and it does, just not with the Room effect, so at least I know my routing is correct.
So this leads to my actual question: how do you configure the plugin? I know there is some documentation, but there is not one tutorial or step-by-step guide for us non-code-savvy audio folk. I have spent the better half of my week trying to figure this out because, frankly, for the time being this is the only audio spatialization plugin for Wwise that features audio occlusion, obstruction, and propagation.
Any help is appreciated,
Thank you.
I had Room Effects with Resonance Audio working in another project last year, under its former name, GVR. There are no properties on the Room Effect itself. These effect settings and properties reside in the Unity Resonance prefabs.
I presume you've followed the Room Effects tutorial here:
https://developers.google.com/resonance-audio/develop/wwise/getting-started
What you then need to do is add the Room Effects assets to your Unity project. The assets are found in the Resonance Audio zip package, next to the authoring and SDK files. Unzip the Unity package into your project, add a Room Effect to your scene, and you should be able to see the properties in the Inspector of the room object.
Figured it out thanks to Egil Sandfeld, here: https://github.com/resonance-audio/resonance-audio-wwise-sdk/issues/2#issuecomment-367225550
To elaborate: I had the SDKs implemented, but I went ahead and replaced them anyway, and it worked!
I'm trying to integrate a mechanism to calculate the BPM of songs in the iPod library (also on iPhone).
Searching the web, I found that the most widely used and reliable library for this kind of thing is SoundTouch. Does anyone have experience with this library? Is it computationally feasible to run it on iPhone hardware?
I have recently been using the code from the BPMDetect class of the SoundTouch library successfully. I initially compiled it as C++, later translated the code to C#, and lately I've been using the C++ code in an Android app through JNI. I'm not really familiar with iOS development, but I'm almost certain that what you're trying to do is possible.
The only files you should use from the soundtouch source code are the following:
C++ files
BPMDetect.cpp
FIFOSampleBuffer.cpp
PeakFinder.cpp
Header files
BPMDetect.h
FIFOSampleBuffer.h
FIFOSamplePipe.h
PeakFinder.h
soundtouch_config.h
STTypes.h
At least these are the only ones I had to use to make it work.
The BPMDetect class receives raw samples through its inputSamples() method; it can calculate a BPM value even before the whole file has been loaded into its buffer. I have found that these intermediate values differ from the one obtained once the whole file is loaded, which in my experience is more accurate.
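For illustration, a minimal sketch of that flow (decodeNextChunk() is only a placeholder for whatever decoder fills the buffer; it is not part of SoundTouch):

#include "BPMDetect.h"
#include <vector>

// Placeholder: fills 'buffer' with interleaved PCM samples, returns false at end of file.
bool decodeNextChunk(std::vector<soundtouch::SAMPLETYPE> &buffer);

// Feed decoded audio to BPMDetect chunk by chunk and read the estimate at the end.
float estimateBpm(int sampleRate, int channels)
{
    soundtouch::BPMDetect bpm(channels, sampleRate);
    std::vector<soundtouch::SAMPLETYPE> buffer;

    while (decodeNextChunk(buffer))
    {
        // inputSamples() takes the per-channel sample count, not the number of array entries.
        bpm.inputSamples(buffer.data(), (int)(buffer.size() / channels));
    }
    return bpm.getBpm();  // most accurate once the whole file has been fed
}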
Hope this helps.
EDIT:
It's too complex a process to explain in a comment, so I'm going to edit the answer.
The gist of it is that you need your Android app to consume native code. In order to do that, you need to compile the files listed above from the SoundTouch library with the Android NDK toolset.
That will leave you with native code that can process raw sound data, but you still need to get the data from the sound file, which you can do in several ways. The way I was doing it was with the FMOD library for Android; here's a nice example of that: FMOD for Android.
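In case it helps, here is a minimal Android.mk sketch for compiling those files with ndk-build (the module name and paths are illustrative; your own JNI bridge files would be added to LOCAL_SRC_FILES as well):

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE     := bpmdetect
LOCAL_SRC_FILES  := BPMDetect.cpp FIFOSampleBuffer.cpp PeakFinder.cpp
LOCAL_C_INCLUDES := $(LOCAL_PATH)
include $(BUILD_SHARED_LIBRARY)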
Supposing you declared a method like this in your C code:
#include <jni.h>

// 'sound' is assumed to be a handle/object defined elsewhere in the native code (see the gist).
JNIEXPORT void JNICALL Java_your_package_YourClassName_cPlay(JNIEnv *env, jobject thiz)
{
    sound->play();
}
In the Android app, you use your native methods in the following way:
public class Sound {
    static {
        // Load the native library built with the NDK (the library name is illustrative).
        System.loadLibrary("bpmdetect");
    }

    // Native method declaration
    private native void cPlay();

    public void play()
    {
        cPlay();
    }
}
In order to have a friendlier API to work with you can create wrappers around these function calls.
I put the native C code I was using in a gist here.
Hope this helps.