Preload sound with SoundManager 2

When using SoundManager 2, is there a way to preload a track if you know which track will be played next or previous? This would of course shorten load times when the user switches between sounds. The documentation mentions an autoLoad option and a load() method, and I am not sure which one should be used to achieve preloading.
I should also ask: when is the best time to start the preload? I was thinking during the whileplaying() callback.
Any feedback will be greatly appreciated.

Related

Is there a way to start buffering video before playing it in Flutter?

I am trying to make an app that can play multiple videos at once. The goal is to record multiple musical voices and then play them together. I am using the video_player plugin, and each video has its own player. The problem is that when I start playing, the moment each player actually starts is not reliable. So I thought that if I loaded the files before starting playback, this difference could be reduced, but I couldn't figure out how to do it, or whether it is possible at all. So, is there a way to load those files ahead of time to reduce this difference? If there is another way to achieve my goal, I would be glad to know. Thank you!
I have a similar problem where I want to control the exact moment the video starts playing.
For now, the best way seems to be the following (assuming you have a _controller constructed already):
// Load the video, then briefly start and immediately pause it so the
// first frames are decoded and a later play() starts with almost no delay.
await _controller.initialize();
await _controller.play();
await _controller.pause();
After that, the video starts playing almost immediately. For my purposes it’s good enough but if you need to control this down to the millisecond for synchronization purposes it might not cut it.
You might also want to hide the video and mute it during those first calls.

Is it possible to record the audio that comes out of the iPhone?

I am working on an app that allows the user to create a sort of dub. There is an audio file playing, and the user can tap at certain moments to insert a sound (kind of like a censor button). I'm wondering how to go about capturing the final product.
Capturing the audio directly from the iPhone seems the easiest route, as the user already hears the finished product as it is made. However, I can't find anything on how to do this. If it is not possible, are there any other suggestions?
The best way would probably be to use the AVFoundation framework for mixing, and then to buffer the audio as well as play it. This allows a high level of abstraction while guaranteeing that the played and saved audio are identical.
Apart from that: from a "how can I achieve this with minimum code" perspective, and without more information about your setup, the question is far too broad and/or opinion-based.
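One concrete way to realize that "play and capture the same signal" idea with modern AVFoundation is to build the playback graph in an AVAudioEngine and install a tap on the mixer, so that whatever the user hears is also written to a file. This is only a minimal sketch under that assumption; engine wiring and outputURL are placeholders you would supply:
import AVFoundation

let engine = AVAudioEngine()
// ... attach your player and effect nodes and connect them to engine.mainMixerNode ...
let format = engine.mainMixerNode.outputFormat(forBus: 0)
// outputURL is a hypothetical destination file URL.
let outputFile = try AVAudioFile(forWriting: outputURL, settings: format.settings)
// The tap receives the same mixed buffers that are sent to the speaker.
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    try? outputFile.write(from: buffer)
}
try engine.start()
// When recording is finished, call engine.mainMixerNode.removeTap(onBus: 0).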
You will have to work with buffers. I don't know offhand how it is done in Swift, but you can implement it in Objective-C and then bridge it out.
You can refer to these answers here on Stack Overflow (they are a bit old):
https://stackoverflow.com/a/11218339/2683201
https://stackoverflow.com/a/10101877/2683201
and a project also exists (but it is in Objective-C):
https://github.com/alexbw/novocaine
The main idea for your case would be to have two separate buffers plus your sound effect.
Then you play from buffer A (your music) and copy the played data into buffer B (the final output), unless you are playing the effect, in which case you copy the effect data into buffer B instead.
Another option is to do it offline (a sketch follows these steps):
1. Play your music (or audio) and keep a timer running, synced with the elapsed time of your "to be censored" audio.
2. Save the timestamps of when you start and stop pressing the censor button (for example).
3. Overlap buffer A with your effect within those recorded start-end timestamps.
4. Save the buffer as a file (or do whatever else you need to do with it).
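As a rough illustration of the offline overlap step in Swift (the linked answers are Objective-C, and every name here is hypothetical), you could copy the music buffer into an output buffer and substitute effect frames inside the censored range:
import AVFoundation

func mixCensored(music: AVAudioPCMBuffer, effect: AVAudioPCMBuffer,
                 censorRange: Range<Int>) -> AVAudioPCMBuffer? {
    // Both buffers are assumed to share the same non-interleaved Float32 format.
    guard let output = AVAudioPCMBuffer(pcmFormat: music.format,
                                        frameCapacity: music.frameLength),
          let musicData = music.floatChannelData,
          let effectData = effect.floatChannelData,
          let outData = output.floatChannelData else { return nil }
    output.frameLength = music.frameLength
    for channel in 0..<Int(music.format.channelCount) {
        for frame in 0..<Int(music.frameLength) {
            let effectFrame = frame - censorRange.lowerBound
            if censorRange.contains(frame) && effectFrame < Int(effect.frameLength) {
                // Inside the censored span: write the effect instead of the music.
                outData[channel][frame] = effectData[channel][effectFrame]
            } else {
                outData[channel][frame] = musicData[channel][frame]
            }
        }
    }
    return output
}
The resulting buffer can then be written out with AVAudioFile, as in step 4.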
UPDATE:
You should take a look at Apple's sample implementation of something like this:
https://developer.apple.com/library/ios/samplecode/AVAEMixerSample/Introduction/Intro.html

AVAudioPlayerNode - Get Player State?

In an iOS project I am using AVAudioPlayerNode in conjunction with AVAudioEngine and an AVAudioUnitTimePitch. Everything works peachy. However, I was wondering whether there is a way to find out the player's current state (e.g. isPlaying, isPaused), or at least the playback position.
While AVAudioPlayer at least lets you read the currentTime property, I have not yet figured out how to get that information from AVAudioPlayerNode. I tried playing around with the nodeTime(forPlayerTime:) and playerTime(forNodeTime:) methods described in the Swift documentation, but I couldn't make any progress.
Any help would be highly appreciated.
Since AVAudioPlayerNode is designed around an audio stream, it doesn't necessarily keep track of the time within a particular file. However, it does keep a running total of how long it has been playing audio overall. This timer doesn't reset with each file; to change it, you must explicitly tell the player where you want to start counting from.
So, to find how long the player has been playing, you do the following:
let sampleTime = player.lastRenderTime?.sampleTime ?? 0
let seconds = Double(sampleTime) / file.fileFormat.sampleRate
Now, in order to get the timer to reset after each file, you must explicitly reset the player's current time. To do this, use the play(at:) method (playAtTime: in Objective-C).
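Building on that, a minimal sketch of a helper (player is assumed to be your AVAudioPlayerNode) that converts the node's running render time into the player's own timeline, using the playerTime(forNodeTime:) method mentioned in the question, might look like this:
import AVFoundation

func currentTime(of player: AVAudioPlayerNode) -> TimeInterval? {
    // lastRenderTime is nil while the engine is not rendering.
    guard let nodeTime = player.lastRenderTime,
          let playerTime = player.playerTime(forNodeTime: nodeTime) else { return nil }
    // playerTime counts samples on the player's own timeline.
    return Double(playerTime.sampleTime) / playerTime.sampleRate
}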
If you would like an example, check one out here: https://github.com/danielmj/AEAudioPlayer

Playing multiple sounds simultaneously with AVAudioPlayer

Could someone help me with a function that fires multiple AVAudioPlayers at the same time? Right now I am trying to get a total of twelve AVAudioPlayers to fire at once when twelve buttons are activated, but there is a delay, and it sounds like someone running a finger down a piano instead of hitting all the keys at once.
I've looked at Audio Queue Services, and although it says it can play synchronized sounds, I can't understand how to actually implement it in code. I'm not sure how to set it all up. I'm trying to remake a Tone Grid app.
Why don't you use SoundBankPlayer? http://www.hollance.com/2011/02/soundbankplayer-using-openal-to-play-musical-instruments-in-your-ios-app/
There you have a polyphonic player, beautifully coded and ready to rock!
I used it in my first app a while back: http://itunes.apple.com/gb/app/chordwheel-pro/id406836326?mt=8
Are you calling the prepareToPlay method on all 12 AVAudioPlayer instances? From the docs: "Calling this method preloads the buffers and acquires the hardware needed for playback, to minimize delay."
See the AVAudioPlayer class reference.
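If preparing alone is not enough, AVAudioPlayer can also schedule playback against the shared device clock with play(atTime:), which is a documented way to start multiple players in sync. A minimal sketch, where players is a hypothetical [AVAudioPlayer] holding your twelve prepared instances:
import AVFoundation

// Prepare every player up front so play() does not pay the setup cost.
for player in players {
    player.prepareToPlay()
}
// Schedule all players to start at the same instant on the shared device clock.
// The small offset leaves time for every player to be scheduled before the deadline.
let startTime = players[0].deviceCurrentTime + 0.05
for player in players {
    player.play(atTime: startTime)
}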

AudioServices (Easy), AVAudioPlayer (Medium), OpenAL (Hard & Overkill?)

I need to play sounds (about 5 seconds each) throughout my iPhone application. When they're triggered, they need to play immediately.
For the moment I'm using AudioServices, and (as you probably know) the first time you play a sound it lags; every time thereafter it's perfect. Is there some code available that's clever enough to preload an AudioServices sound (by playing it silently, maybe)? I've read that adjusting the system volume programmatically will get your app rejected, so that's not an option, and AudioServices doesn't seem to be made for volume correction anyway, from what I can see.
I've looked into OpenAL, and while feasible it seems a little overkill. AVAudioPlayer seems like a somewhat better option; I'm using it for background music at present. Extending my music player to handle a 'sound board' might be my last resort.
On the topic of OpenAL, does anyone know of a place with a decent (app store friendly) OpenAL wrapper for the iPhone?
Thanks in advance.
Finch could be perfect for you. It's a tiny wrapper around OpenAL with very low latency and a simple API. See also all SO questions tagged 'Finch'.
If you use an AVAudioPlayer, you can call prepareToPlay when you initialize the object to reduce the delay between calling play and having the audio start.
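As a minimal sketch of that approach (the file name "tap.caf" is just a placeholder), you would create and prepare the player once, up front, and only call play() when the sound is triggered:
import AVFoundation

// Create and prepare the player once, e.g. when the view loads.
let url = Bundle.main.url(forResource: "tap", withExtension: "caf")!
let player = try AVAudioPlayer(contentsOf: url)
player.prepareToPlay()   // preloads buffers and acquires the audio hardware

// Later, when the sound is triggered, playback starts with minimal delay.
player.play()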