OpenAL problem - changing gain of source

I've recently been working on porting my game to be cross-platform, and decided to go with OpenAL for my cross-platform audio engine.
I have 16 "channels" (OpenAL sources) for playing up to 16 sounds concurrently. To play a sound, I switch which buffer is linked to a given source, and also set the gain, source position, and so on in order to play a sound in a given "channel" (source).
The problem is, I've noticed that my gain settings do not seem to take effect immediately. For instance, if a loud "lightning" sound plays in a given source at 0.5 gain, and a button click later plays in that same source at 0.15 gain, the click starts off FAR too loud. Then, each subsequent time it is played, the volume decreases until around the 3rd or 4th click it sounds like it's around the proper 0.15 gain.
The opposite also happens: sometimes the first button click is barely audible, and the volume ramps up over successive plays until it reaches the proper 0.15 gain.
So in short, a "source" seems to be remembering the former gain settings, even though I am resetting those before playing a new sound in the same source! Is this a bug? Or something I don't understand about OpenAL? How can I get it to "instantly" change to the new gain/position settings?
Relevant code to play a sound:
[channel is a value between 0 and 15, soundID is a valid index into the gBuffer array, stereoPosition is an integer between -255 and 255, and volume is between 0 and 255. This code is inside a wrapper function: my game used to use values between 0 and 255, so the wrapper converts them to proper OpenAL values.]
// Stop any sound currently playing in this channel ("source")
alSourceStop( gSource[channel] );
// Which sound effect ("buffer") is currently linked with this channel? (Even if not currently playing)
ALint curBuffer = 0;
alGetSourcei( gSource[channel], AL_BUFFER, &curBuffer );
// Attach a different buffer (sound effect) to the source (channel) only if it differs from the previously attached one.
// (Avoid an error code by changing it only when it's different)
if (curBuffer != gBuffer[soundID])
    alSourcei( gSource[channel], AL_BUFFER, gBuffer[soundID] );
// Loop the sound?
alSourcei( gSource[channel], AL_LOOPING, (loopType == kLoopForever) );
// For OpenAL, we do this BEFORE starting the sound effect, to avoid sudden changes a split second after it starts!
volume = (volume / 2) + 1; // Convert from 0-255 to 1-128
{
    float sourcePos[3] = { (float)stereoPosition / 50, 0.0f, 2.0f };
    // Set source position
    alSourcefv( gSource[channel], AL_POSITION, sourcePos ); // fv = float vector
    // Set source volume (gain)
    alSourcef( gSource[channel], AL_GAIN, (float)volume / 255 );
}
// Start playing the sound!
alSourcePlay( gSource[channel] );
I can post setup code too if desired, but nothing fancy there. Just calling
alSourcef( gSource[n], AL_REFERENCE_DISTANCE, 5.0f );
for each source.

We just faced the same problem when setting the ByteOffset for seeking to a position in a sample, and have now found a solution that gets it working:
Always delete and recreate the source before setting any parameters on it.
So if you want to change the gain and/or other parameters:
OpenAL.AL.DeleteSource(oldSourceID);
newSourceId = OpenAL.AL.GenSource();
OpenAL.AL.Source(newSourceId, OpenTK.Audio.OpenAL.ALSourcef.Gain, yourVolume);
OpenAL.AL.Source(newSourceId, OpenTK.Audio.OpenAL.ALSourcef.xxx, yourOtherParameter);
Hope it works for you, and that you finally have a workaround after 10 years :-)
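For the C API used in the question, the same workaround might look roughly like the sketch below. This is only a sketch of the answer's idea (delete and recreate the source before setting parameters); gSource, gBuffer, channel, soundID, volume and sourcePos are the names from the question, and the setup line mirrors the question's AL_REFERENCE_DISTANCE call.
// Sketch only: recreate the source before configuring it, per the answer above.
alDeleteSources( 1, &gSource[channel] );                      // discard the old source and its remembered state
alGenSources( 1, &gSource[channel] );                         // create a fresh source in its place
alSourcef( gSource[channel], AL_REFERENCE_DISTANCE, 5.0f );   // redo the one-time setup from the question
alSourcei( gSource[channel], AL_BUFFER, gBuffer[soundID] );   // attach the sound effect
alSourcefv( gSource[channel], AL_POSITION, sourcePos );       // position, as in the question's code
alSourcef( gSource[channel], AL_GAIN, (float)volume / 255 );  // gain now applies to a clean source
alSourcePlay( gSource[channel] );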

Related

AVAudioPlayer setVolume:x, when x > 1.0

I have been working with AVAudioPlayer on the iPad (original version) running iOS 4.3.3.
The documentation for the volume property states:
"The playback gain for the audio player, ranging from 0.0 through 1.0."
Curiously it seems to allow you to use a value > 1.0, with the expected effect (the volume is increased accordingly). This means if you are playing a quieter track, you can (for example) mix it at volume 2.0 with the line
[myPlayer setVolume:2.0];
Reading back the volume property returns 2.0 as the current value.
So my question is: is this a mistake in the documentation, or a bug we can expect to be rectified in later releases?
It does turn out to be a useful feature; however, it has the potential to push playback "over zero" (i.e. clipping), should the audio being played happen to contain samples which, when multiplied by the volume, exceed the supported bit resolution. In my app I am planning to use it to level-match playback levels after scanning the audio.
Otherwise I'd need to turn down "loud" tracks to a predetermined nominal zero value, and not turn down the "quiet" tracks as much. It makes more sense to be able to increase the volume of the quieter tracks up to actual "zero", thus giving more overall dynamic range.

AVAudioPlayer change sound tempo?

I am making a rhythm game. I need to play a sound at different tempos; in other words, I might have to call [AVAudioPlayer play], for example, 8 times in 2 seconds.
Check out the enableRate and rate properties on the AVAudioPlayer class. After you create the audio player, but before you play, set
audioPlayer.enableRate = YES;
Then, after you call play, set rate to a number above or below 1.0 to speed up or slow down the track. For music, less than 0.8 or more than 1.2 starts to sound bad, but for a few BPM up or down it will easily do the trick.
Note that play sets the rate to 1 and stop sets the rate to 0, so be sure to set the desired rate after playing.
I've used "Pitch Shifting Using the Fourier Transform" (source code):
http://www.dspdimension.com/download/

How to play a sound file using an OpenAL command from a particular time instant (time input is double or float)

I want to know if there is a command in OpenAL that can be used to play a sound file from a particular time instant (a seek time). For example, if I am using a slider and drag it, then once I release the slider, the sound file should play from that instant.
I have implemented this for iOS, but I have not found an OpenAL method that can play a file from a particular time instant.
alSourcei(sourceID, AL_BUFFER, 0);
alSourcei(sourceID, AL_BUFFER, bufferID);
// Set the pitch and gain of the source
alSourcef(sourceID, AL_PITCH, aPitch);
alSourcef(sourceID, AL_GAIN, aGain * fxVolume);
if(aLoop) {
    alSourcei(sourceID, AL_LOOPING, AL_TRUE);
} else {
    alSourcei(sourceID, AL_LOOPING, AL_FALSE);
}
// Set the source location
alSource3f(sourceID, AL_POSITION, aLocation.x, aLocation.y, 0.0f);
// Here we play the sound
alSourcePlay(sourceID);
Executing the above code always plays the sound track from the initial position.
I want to know if there is any method in OpenAL that can seek the track to a particular time instant.
You need to implement seeking. It can be quite tricky depending on how accurate you want the seeking to be (i.e. to the nearest frame, or between frame boundaries).
How you implement this depends on how your sound data is represented. I've previously implemented it for streaming Ogg Vorbis data on OpenAL; for that, I used ov_seek on a stream opened with ov_open_callbacks.
There's a tutorial for OpenAL streaming on devmaster which I found useful.
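To make that flush-then-seek flow concrete, here is a minimal sketch along the lines this answer describes. It assumes a source fed by queued buffers and an OggVorbis_File opened with ov_open_callbacks; stream_seek and the variable names are illustrative, and the refill step is only indicated in comments.
#include <AL/al.h>
#include <vorbis/vorbisfile.h>

// Illustrative sketch: seek a streamed Ogg Vorbis source to 'seconds'.
static void stream_seek(ALuint source, OggVorbis_File *vf, double seconds)
{
    // Stop playback and discard whatever is still queued;
    // it was decoded from the old position.
    alSourceStop(source);
    ALint queued = 0;
    alGetSourcei(source, AL_BUFFERS_QUEUED, &queued);
    while (queued-- > 0) {
        ALuint buf;
        alSourceUnqueueBuffers(source, 1, &buf);
        // (return 'buf' to your buffer pool so it can be refilled below)
    }

    // Move the decoder to the requested time.
    ov_time_seek(vf, seconds);

    // ...then decode a few chunks with ov_read(), upload them with
    // alBufferData(), queue them with alSourceQueueBuffers(), and call
    // alSourcePlay(source) to resume from the new position.
}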

iPhone/iPad sound playback with a setCurrentTime-like function. Non-AVAudioPlayer

I've recently been trying to incorporate an intensive sound management class, where sound playback precision is a must.
What I'm looking for is the option to load a sound, set the playback starting position (the playhead), play for a certain time, pause the sound, move the playhead to a new position, and resume playback again (with dynamic intervals).
I've tried using AVAudioPlayer for that, but it seems it's just too slow. The performance is not what you'd expect; it lags when calling pause and setCurrentTime:.
It's the easiest library to use, and the only one with a documented setCurrentTime: method.
I come here asking for your help: a recommendation for a decent open-source sound engine that can handle playhead movement (interval setting) with low latency, or a reference stating that OpenAL or Audio Unit tools can handle setting the playback position.
Thank you in advance,
~ Natanavra.
It would be worth your time to check out the OpenAL Programmer's Guide that comes with the SDK. It's got all sorts of goodies!
From that:
Under source: Each source generated by alGenSources has properties which can be set or retrieved.
The alSource[f, 3f, fv, i] and alGetSource[f, 3f, fv, i] families of functions can be used to set or retrieve the following source properties:
...
AL_SEC_OFFSET (f, fv, i, iv): the playback position, expressed in seconds
AL_SAMPLE_OFFSET (f, fv, i, iv): the playback position, expressed in samples
AL_BYTE_OFFSET (f, fv, i, iv): the playback position, expressed in bytes
So you can get the playback position in seconds and divide by the track length (60 seconds in this example) to get a normalized position:
float pos = 0;
alGetSourcef( sourceID, AL_SEC_OFFSET, &pos );
float normalizedPos = pos / 60.0f; // assumes a 60-second track
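Since these offsets can be set as well as read, the load / play / pause / move-the-playhead / resume cycle the question asks about might look roughly like this for a sound attached to the source as a single (non-streamed) buffer. This is a sketch only; sourceID is assumed to be a valid source with a buffer already attached.
#include <AL/al.h>

// Sketch: move the "playhead" of a fully loaded (non-streamed) source.
void playFrom(ALuint sourceID, float seconds)
{
    alSourcef(sourceID, AL_SEC_OFFSET, seconds);    // set the playhead
    alSourcePlay(sourceID);                         // play from that point
}

void pauseAndJump(ALuint sourceID, float newSeconds)
{
    alSourcePause(sourceID);                        // pause, keeping the source's state
    alSourcef(sourceID, AL_SEC_OFFSET, newSeconds); // move the playhead
    alSourcePlay(sourceID);                         // resume at the new position
}

float currentPosition(ALuint sourceID)
{
    float pos = 0.0f;
    alGetSourcef(sourceID, AL_SEC_OFFSET, &pos);    // playback position in seconds
    return pos;
}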
OpenAL definitely has the capability to play back and pause sound however you like. Remember, OpenAL is often used in games because it delivers low-latency, on-demand sound playback, and it gives you a lot more control over the sound than the AVAudioPlayer class.
Hope this helps
Do reply
Pk

Audio Recording iPhone - values of AudioStreamBasicDescription

These are the values I pass in; it's the only combination of values I have got working.
dataFormat.mSampleRate = 44100;
dataFormat.mFormatID = kAudioFormatLinearPCM;
dataFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsBigEndian;
dataFormat.mBytesPerPacket = 4;
dataFormat.mFramesPerPacket = 1;
dataFormat.mBytesPerFrame = 4;
dataFormat.mChannelsPerFrame = 2;
dataFormat.mBitsPerChannel = 16;
status = AudioQueueNewInput( &dataFormat, AudioInputCallback, self, NULL, NULL, 0, &queue );
status = AudioFileCreateWithURL( fileUrl, kAudioFileCAFType, &dataFormat, kAudioFileFlags_EraseFile, &audioFile );
The recording works, but there is a lot of noise during the recording and on playback. Could it have anything to do with this code?
I can see two possible errors. First, as @invalidname pointed out, recording in stereo probably isn't going to work on a mono device such as the iPhone. Well, it might work, but if it does, you're just going to get back a dual-mono stereo stream anyway, so why bother? You might as well configure your stream to work in mono and spare yourself the CPU overhead.
The second problem is probably the source of your sound distortion. Your stream description format flags should be:
kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked
Also, don't forget to set the mReserved field to 0. While the value of this field is probably being ignored, it doesn't hurt to explicitly set it to 0 just to make sure.
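Putting those suggestions together, a mono, 16-bit description might look like the sketch below (44.1 kHz is carried over from the question; the derived sizes follow bytesPerFrame = channels * bitsPerChannel / 8).
#include <AudioToolbox/AudioToolbox.h>

// Sketch only: mono, 16-bit, packed, native-endian linear PCM,
// as suggested in the answer above.
static AudioStreamBasicDescription MonoRecordingFormat(void)
{
    AudioStreamBasicDescription fmt = {0};  // zero-initializing also sets mReserved to 0
    fmt.mSampleRate       = 44100;          // carried over from the question
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                            kAudioFormatFlagsNativeEndian |
                            kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;              // mono, to match the built-in mic
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;              // 1 channel * 16 bits / 8
    fmt.mFramesPerPacket  = 1;              // uncompressed PCM: 1 frame per packet
    fmt.mBytesPerPacket   = 2;              // mBytesPerFrame * mFramesPerPacket
    return fmt;
}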
Edit: Another more general tip for debugging audio on the iPhone -- if you are getting distortion, clipping, or other weird effects, grab the data payload from your phone and look at the recording in a wave editor. Being able to zoom down and look at the individual samples will give you a lot of clues about what's going wrong.
To do this, you need to open up the "Organizer" window, click on your phone, and then expand the little arrow next to your application (in the same place where you would normally uninstall it). Now you will see a little downward pointing arrow, and if you click it, Xcode will copy the data payload from your app to somewhere on your hard drive. If you are dumping your recordings to disk, you'll find the files extracted here.
What's your input device? The mic on the provided earbuds or the phone's built-in mic or what? Or are you recording into the Simulator?
Aside from the noise, does everything else sound right: speed, pitch, etc.?
It probably isn't causing any problems, but you're specifying two-channel input, while your input device is probably mono.
One last thought: is this a first-generation iPhone? I think there's a weird issue with that model where 8 kHz input gets upconverted to 44.1 kHz.