FMOD runs out of channels, FMOD_CHANNEL_FREE seems not to work - iPhone

I am initializing FMOD with 32 channels and playing short samples (1 second) with the following code:
result = system->init(32, FMOD_INIT_NORMAL , NULL);
// here I load the sounds //
result = system->playSound(FMOD_CHANNEL_FREE, grid[_sound], false, &channel);
It works as intended, with overlapping sounds, but I've now realized that once 32 samples have been played (not at the same time), only one sound can be played at a time. It looks like FMOD_CHANNEL_FREE behaves like an incremental counter: when it hits 32 it stays there, stopping the last sound while it's still playing in order to play the new one.
Do I have to remove sounds when they have stopped playing? How? I feel like I am missing something basic.
Thanks!
Marc

I had the same problem. Turns out that I forgot to call system->update() every frame. Once I put that in, it worked fine.

It sounds like the channels are still playing (but silent); can you check Channel::isPlaying and see if they are still going?
Perhaps post some more of your code if that doesn't help.
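If it helps to diagnose, here is a small sketch (assuming the FMOD Ex C++ API and the already initialized system and channel pointers from the question) that asks FMOD how many channels it still considers playing, and whether the last channel handle is still going:
int playing = 0;
if (system->getChannelsPlaying(&playing) == FMOD_OK)
    printf("channels currently playing: %d\n", playing);
bool stillGoing = false;
if (channel && channel->isPlaying(&stillGoing) == FMOD_OK)
    printf("last channel still playing: %s\n", stillGoing ? "yes" : "no");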

Can you verify that you are initializing the FMOD system with more than one max channel?
Try using the following code to init your FMOD system:
system->init(32, FMOD_INIT_NORMAL, 0);
Or maybe you forgot to call
system->update();
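For reference, here is a minimal sketch of the loop both answers are describing, assuming the FMOD Ex C++ API and that playback happens inside your app's per-frame loop (gameIsRunning and userTappedPad are placeholders for your own app logic):
result = system->init(32, FMOD_INIT_NORMAL, NULL);
while (gameIsRunning)   // your app/game loop
{
    if (userTappedPad)
        result = system->playSound(FMOD_CHANNEL_FREE, grid[_sound], false, &channel);
    // As the answers above note, FMOD does its per-frame housekeeping here;
    // without this call, channels whose sounds have finished are never
    // reclaimed, and after 32 plays each new sound steals a channel.
    system->update();
}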

Gapless playback in pyglet

I understood this page to mean that queuing in pyglet provides a gapless transition between audio tracks. But when I test it out, there is a noticeable gap. Has anyone here worked with gapless audio in pyglet?
Example:
player = pyglet.media.Player()
source1 = pyglet.media.load([file1]) # adding streaming=False doesn't fix the issue
source2 = pyglet.media.load([file2])
player.queue(source1)
player.queue(source2)
player.play()
player.seek([time]) # to avoid having to wait until the end of the track. removing this doesn't fix the gap issue
pyglet.app.run()
I would suggest that you cache url1 and url2 locally first if they're external sources, then use Player().time to identify when you're about to reach the end of a track and call player.next_source.
Or, if they're local files and you don't want to solve the problem programmatically, you could chop up the audio files in something like Audacity to make them seamless on start/stop.
You could also experiment with having multiple players and layering them on top of each other. But if you're only interested in audio playback, there are other alternatives.
It turns out that there were 2 problems.
The first one: I should have used
source_group = pyglet.media.SourceGroup()
source_group.add(source1)
source_group.add(source2)
player.queue(source_group)
The second one: mp3 files are apparently slightly padded at the beginning and at the end, so that is where the gap is coming from. However, this does not seem to be an issue with any other file type.

SWIFT - Is it possible to save audio from AVAudioEngine, or from AudioPlayerNode? If yes, how?

I've been looking around the Swift documentation for a way to save the audio output from AVAudioEngine, but I couldn't find any useful tips.
Any suggestion?
Solution
I found a way around thanks to matt's answer.
Here is some sample code showing how to save audio after passing it through an AVAudioEngine (I think that technically it's before).
newAudio = AVAudioFile(forWriting: newAudio.url, settings: nil, error: NSErrorPointer())
// Your new file, the one you want to save the changed audio to, ready to be buffered with the new data...
var audioPlayerNode = AVAudioPlayerNode() // or your time-pitch unit, if the pitch was changed
// Now install a tap on the output bus to "record" the transformed audio into our newAudio file.
audioPlayerNode.installTapOnBus(0, bufferSize: AVAudioFrameCount(audioPlayer.duration), format: opffb) {
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) in
    if self.newAudio.length < self.audioFile.length { // lets us know when to stop saving the file, otherwise it would keep saving infinitely
        self.newAudio.writeFromBuffer(buffer, error: NSErrorPointer()) // write the buffer result into our file
    } else {
        audioPlayerNode.removeTapOnBus(0) // if we don't remove it, it will keep tapping infinitely
        println("Did you like it? Please, vote up for my question")
    }
}
Hope this helps!
One issue to solve:
Sometimes the audio coming out of your output node is shorter than the input: if you accelerate the time rate by 2, your audio will be 2 times shorter. This is the issue I'm facing for now, since my condition for saving the file is
if newAudio.length < self.audioFile.length // audioFile being the original (long) audio and newAudio being the new, changed (shorter) audio
Any help here?
Yes, it's quite easy. You simply put a tap on a node and save the buffer into a file.
Unfortunately this means you have to play through the node. I was hoping that AVAudioEngine would let me process one sound file into another directly, but apparently that's impossible - you have to play and process in real time.
Offline rendering worked for me, using the GenericOutput AudioUnit. Please check this link; I have mixed two or three audio files offline and combined them into a single file. It's not the same scenario, but it may give you some ideas: core audio offline rendering GenericOutput

Trouble when using FMOD setMusicSpeed()

I have a MID file to play, and it takes 10 s at normal speed. But when I call setMusicSpeed() to set the speed to 0.1, it still stops after 10 s. I want to know why, and how to solve it.
Hm. Maybe you forgot to call system->update(). Also, try linking with the logging version of FMOD.
I got an email from FMOD support. To get around this, you should set the length of the stream to infinite, using FMOD_CREATESOUNDEXINFO and setting the length value to (unsigned int)-1; and it works.
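For anyone hitting the same thing, here is a rough sketch of that workaround (FMOD Ex C++ API; the file name, mode flags, and speed value are placeholders of mine, only the exinfo.length trick comes from the support answer above):
FMOD::Sound *sound = NULL;
FMOD_CREATESOUNDEXINFO exinfo;
memset(&exinfo, 0, sizeof(exinfo));
exinfo.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
exinfo.length = (unsigned int)-1;   // infinite length, per FMOD support
result = system->createSound("song.mid", FMOD_SOFTWARE | FMOD_CREATESTREAM, &exinfo, &sound);
result = sound->setMusicSpeed(0.1f);   // play at 1/10th of normal speed
result = system->playSound(FMOD_CHANNEL_FREE, sound, false, &channel);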

GPUImageMovieWriter frame presentationTime

I have a GPUImageColorDodgeBlend filter with two inputs connected:
A GPUImageVideoCamera which is getting frames from the iPhone video camera.
A GPUImageMovie which is an (MP4) video file that I want to have laid over the live camera feed.
The GPUImageColorDodgeBlend is then connected to two outputs:
A GPUImageImageView to provide a live preview of the blend in action.
A GPUImageMovieWriter to write the movie to storage once a record button is pressed.
Now, before the video starts recording, everything works OK 100% of the time. The GPUImageVideo is blended over the live camera video fine, and no issues or warnings are reported.
However, when the GPUImageMovieWriter starts recording, things start to go wrong randomly. About 80-90% of the time, the GPUImageMovieWriter works perfectly, there are no errors or warnings and the output video is written correctly.
However, about 10-20% of the time (and from what I can see, this is fairly random), things seem to go wrong during the recording process (although the on-screen preview continues to work fine).
Specifically, I start getting hundreds & hundreds of Program appending pixel buffer at time: errors.
This error originates from the - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex method in GPUImageWriter.
This issue is triggered by problems with the frameTime values that are reported to this method.
From what I can see, the problem is caused by the writer sometimes receiving frames numbered by the video camera (which tend to have extremely high time values like 64616612394291 with a timescale of 1000000000). But, then sometimes the writer gets frames numbered by the GPUImageMovie which are numbered much lower (like 200200 with a timescale of 30000).
It seems that GPUImageWriter is happy as long as the frame values are increasing, but once the frame value decreases, it stops writing and just emits Program appending pixel buffer at time: errors.
I seem to be doing something fairly common, and this hasn't been reported anywhere as a bug, so my questions are (answers to any or all of these are appreciated -- they don't all need to necessarily be answered sequentially as separate questions):
Where do the frameTime values come from -- why does it seem so arbitrary whether the frameTime is numbered according to the GPUImageVideoCamera source or the GPUImageMovie source? Why does it alternate between the two -- shouldn't the frame numbering scheme be uniform across all frames?
Am I correct in thinking that this issue is caused by non-increasing frameTimes?
...if so, why does GPUImageView accept and display the frameTimes just fine on the screen 100% of the time, yet GPUImageMovieWriter requires them to be ordered?
...and if so, how can I ensure that the frameTimes that come in are valid? I tried adding if (frameTime.value < previousFrameTime.value) return; to skip any lesser-numbered frames which works -- most of the time. Unfortunately, when I set playsAtActualSpeed on the GPUImageMovie this tends to become far less effective as all the frames end up getting skipped after a certain point.
...or perhaps this is a bug, in which case I'll need to report it on GitHub -- but I'd be interested to know if there's something I've overlooked here in how the frameTimes work.
I've found a potential solution to this issue, which I've implemented as a hack for now, but could conceivably be extended to a proper solution.
I've traced the source of the timing back to GPUImageTwoInputFilter which essentially multiplexes the two input sources into a single output of frames.
In the method - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex, the filter waits until it has collected a frame from the first source (textureInput == 0) and the second, and then forwards on these frames to its targets.
The problem (the way I see it) is that the method simply uses the frameTime of whichever frame comes in second (excluding the cases of still images, for which CMTIME_IS_INDEFINITE(frameTime) == YES, which I'm not considering for now because I don't work with still images), and that frame may not always come from the same input (for whatever reason).
The relevant code which checks for both frames and sends them on for processing is as follows:
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
[super newFrameReadyAtTime:frameTime atIndex:0]; // this line has the problem
hasReceivedFirstFrame = NO;
hasReceivedSecondFrame = NO;
}
What I've done is adjust the above code to [super newFrameReadyAtTime:firstFrameTime atIndex:0] so that it always uses the frameTime from the first input and totally ignores the frameTime from the second input. So far, it's all working fine like this. (I'd still be interested to hear why it's written this way, given that GPUImageMovieWriter seems to insist on increasing frameTimes, which the method as written doesn't guarantee.)
Caveat: This will almost certainly break entirely if you work only with still images, in which case you will have CMTIME_IS_INDEFINITE(frameTime) == YES for your first input's frameTime.

How can I delay sound in each single speaker with FMOD?

I set up a sound with multichannel support, so now I need to delay the sound in each individual speaker. How can I do this with FMOD Ex? Is it possible to do that?
Thanks for helping me! :)
So I've found an answer to my question, by myself and with help from fmod.org. I have to use FMOD_DSP_TYPE_DELAY. With this DSP type I can set the delay for each channel, up to 10 seconds. More information can be found in the FMOD documentation.
~Update~
Some code for interested fmod users:
FMOD_System_CreateDSPByType(system, FMOD_DSP_TYPE_DELAY, &dspDelay);
FMOD_Channel_AddDSP(channel, dspDelay, 0);
FMOD_DSP_SetActive(dspDelay, true);
while (true) {
    // delay each speaker channel independently (values in milliseconds)
    FMOD_DSP_SetParameter(dspDelay, FMOD_DSP_DELAY_CH0, delayLeft);
    FMOD_DSP_SetParameter(dspDelay, FMOD_DSP_DELAY_CH1, delayRight);
    Sleep(10);
    FMOD_System_Update(system);
}