I'm trying to sync abcjs with ToneJS. abcjs plays a melody (via notation) and ToneJS plays a loop. I would like to sync these two audio sources.
As I understand it, I need to create a shared AudioContext in which they can both play, but I'm not sure how to set it up. From the Tone.js docs I can use Tone.setContext(ac). I also need an onClick (or similar) callback that runs await Tone.start() before playback will work. But when I set the audio context before calling Tone.start(), I get the error "AudioContext is suspended ...".
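For reference, here is roughly the setup I'm aiming for (a minimal sketch assuming Tone.js v14 and the abcjs 6 synth API; the element IDs and the ABC tune are placeholders):

```typescript
import * as Tone from "tone";
import abcjs from "abcjs";

// One context shared by both libraries.
const ac = new AudioContext();
Tone.setContext(ac);

document.getElementById("play")!.addEventListener("click", async () => {
  // Browsers keep the context suspended until a user gesture.
  await ac.resume();
  await Tone.start();

  // Tone.js side: a simple loop driven by the shared context's clock.
  const synth = new Tone.MembraneSynth().toDestination();
  new Tone.Loop((time) => synth.triggerAttackRelease("C2", "8n", time), "4n").start(0);
  Tone.Transport.start();

  // abcjs side: hand the same context to its synth.
  const visualObj = abcjs.renderAbc("paper", "X:1\nK:C\nCDEF GABc|")[0];
  const abcSynth = new abcjs.synth.CreateSynth();
  await abcSynth.init({ audioContext: ac, visualObj });
  await abcSynth.prime();
  abcSynth.start();
});
```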
Can someone please point me in the right direction?
I am having an issue where my audio engine stops working (or experiences dropouts) after being interrupted by a phone call, a change of audio output device, or another system audio event. I'd like to simply re-initialize my audio classes after such an event.
I am using a MultiProvider to provide the audio service throughout the app. I have a couple of models that keep track of settings/state and could restore the state and expected behavior after an interruption.
I considered doing something similar to the solution described here
However, my models and the audio service are all provided at the same level by the MultiProvider, so restarting that widget tree would restart both.
I basically want to re-create my audio service as a fresh object. Any suggestions on how to approach "restarting" a service provided by Provider?
I'm trying to make some simple Flash games, but before I start pouring megabytes of data into the game, it's better to seek wisdom from pro coders, since I'm totally new to AS3.
1st: what's the best way to make the game load faster?
Since bandwidth is severely limited for me and loading a Flash game takes some time, I'm trying to make the game start as fast as possible and load the rest of the materials during play. I've seen this in lots of Facebook games.
2nd: how do I keep RAM usage low?
Do you advise removing a loaded image/movie clip from the stage as soon as it's out of frame? Does this lower RAM usage? And if I load the removed object again, does it reuse the already-loaded copy, or does it download it from the source folder again?
3rd: what's the trick to caching the materials on the client's PC?
I've seen many Facebook games that take a while to load the first time, but the next day they load as if their data were stored on the hard disk. Do I have to do something, or does Flash Player handle it automatically?
4th: is there a way to load images/movie clips into Flash Player while playing? I mean, if the levels of the game are movie clips and the player is playing the first level, the game continues to load the next level (and the ones after), then adds them to the stage as the code demands.
Please guide me with your experience.
These tricks are not that simple, but yes, all of this is possible.
The "best" way to load the game faster is a fiction, you will anyway need at least the main game code to be loaded in order to play the game. If your levels are pretty huge, yes, you can load levels after actually loading the game. You can also load music the same way. For this, you will need a separate SWF or a set of URLs to get these from, and after your game fully loads (without sounds, levels or whatever you were capable of placing aside) you initiate an asynchronous load request (use Loader class for this) and after it completes, you'll be able to play either one sound, or one level, or total set of sounds or levels, depending on how do you organize your external asset storage.
2nd: The short answer is "reduce, reuse, recycle". Store big assets such as bitmaps (instances of BitmapData) as single objects, and use references to display numerous copies throughout the game. Also reuse objects where possible: a bullet that flew once and expired (hit something, missed, left the screen, etc.) can be told "go back right here, here are your new parameters" rather than being wasted. Other tricks along these lines are possible as well; a pooling sketch is below.
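The "reuse" part is essentially object pooling. A concept sketch in TypeScript (the same shape works in AS3; the bullet type is hypothetical):

```typescript
// A generic pool: expired objects go onto a free list instead of being
// discarded, and are re-initialized when they are needed again.
class Pool<T> {
  private free: T[] = [];
  constructor(private create: () => T, private reset: (obj: T) => void) {}

  acquire(): T {
    const obj = this.free.pop() ?? this.create(); // reuse a spare if one exists
    this.reset(obj); // "here are your new parameters"
    return obj;
  }

  release(obj: T): void {
    this.free.push(obj); // "go back right here"; nothing is wasted
  }
}

// Hypothetical bullet type, just to show usage.
interface Bullet { x: number; y: number; vx: number; vy: number }

const bullets = new Pool<Bullet>(
  () => ({ x: 0, y: 0, vx: 0, vy: 0 }),
  (b) => { b.x = 0; b.y = 0; b.vx = 4; b.vy = 0; }
);

const b = bullets.acquire(); // fire a bullet
bullets.release(b);          // bullet left the screen: recycle it
```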
3rd: This is an automatic action of Flash Player, or rather of the browser, known as the local cache. If you request something from a URL, the request is passed to the browser, which stores a local copy after downloading it for future reference. The storage is limited, and a copy can "expire" if it has been stored too long, which makes the browser re-download the URL's content. Flash Player uses the browser's URL retrieval, so local caching applies to SWFs and other kinds of data.
4th: You can turn levels into metadata. That is, you download a set of data by any means possible, parse it, create the required MovieClip (or Sprite) in code, place all the assets into it at their intended positions, and go with that. Metadata can be fairly large, and it can be hosted anywhere, like any other file you can share over the Internet and download by URL. Use the Loader class to request the URLs and retrieve the data, devise a way to store the metadata properly, and write a generator and parser to manage those levels as you design the game. A sketch of the parser side follows.
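A sketch of the parser side, again in TypeScript (the metadata fields and engine hooks are hypothetical; in AS3 the parsed metadata would drive Sprite/MovieClip construction):

```typescript
// Level metadata: plain data that a parser turns into stage objects.
interface LevelMeta {
  name: string;
  objects: { asset: string; x: number; y: number }[];
}

// Hypothetical engine hooks standing in for your asset store and stage.
const assetStore = new Map<string, object>();
function placeOnStage(sprite: object, x: number, y: number): void {
  console.log(`placed at (${x}, ${y})`, sprite);
}

async function buildLevelFrom(url: string): Promise<void> {
  const meta: LevelMeta = await (await fetch(url)).json(); // download the metadata
  for (const o of meta.objects) {
    const sprite = assetStore.get(o.asset);     // look up a preloaded asset by name
    if (sprite) placeOnStage(sprite, o.x, o.y); // position it as the metadata dictates
  }
}
```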
Hope this helps.
I have to draw a waveform for an audio file (CMK.mp3) in my application.
For this I have tried this solution.
That solution uses AVAssetReader, which takes too much time to display the waveform.
Can anyone please help? Is there another way to display the waveform quickly?
Thanks
AVAssetReader is the only way to read an AVAsset, so there is no way around that. You will want to tune the code to process it without incurring unwanted overhead. I have not tried that code yet, but I intend to use it to build a sample project to share on GitHub once I have the time, hopefully soon.
My approach to tune it will be to do the following:
Eliminate all Objective-C method calls and use C only instead
Move all the work to a secondary queue off the main queue, and use a block to call back once finished
One obstacle with rendering a waveform is that you cannot have more than one AVAssetReader running at a time, at least as of the last time I tried (that may have changed with iOS 6). A new reader cancels the other, and that interrupts playback, so you need to do your work in sequence. I do that with queues.
In an audio app that I built, the code reads each CMSampleBufferRef into a CMBufferQueueRef, which can hold multiple sample buffers (see copyNextSampleBuffer on AVAssetReader). You can configure the queue to give yourself enough time to process a waveform after an AVAssetReader finishes reading an asset, so that the current playback does not exhaust the contents of the CMBufferQueueRef before you start reading more buffers into it for the next track. That will be my approach when I attempt it.

I just have to be careful not to use too much memory by making the buffer too big, or so big that it causes issues with playback. I do not yet know how long it will take to process the waveform, so I will test it on my older iPods and an iPhone 4 before trying it on my iPhone 5 to see if they all perform well.
Be sure to stay as close to C as possible. Calls to Objective-C resources during this processing incur potential thread switching and other run-time overhead costs which are significant enough to be noticeable, and you will want to avoid that. What I may do is set up Key-Value Observing (KVO) to trigger the AVAssetReader to start the next task quickly, so that I can maintain gapless playback between tracks.
Once I start my audio experiments I will put them on GitHub. I've created a repository where I will do this work. If you are interested you can "watch" that repo so you will know when I start committing updates to it.
https://github.com/brennanMKE/Audio
Once a month the MP3 stream messes up, and the only way to tell it has messed up is by listening to it as it streams. Is there a script, program, or tool I can use to monitor the live stream at a given URL and send some kind of flag when it corrupts?
What happens is that it normally plays a song or some other music, but once a month, every month, at random, the stream corrupts and turns into random chipmunk-like trash audio. Any ideas on this? I am just getting started at this, with no idea at all.
Typically, this happens when you play a track with the wrong sample rate.
Most (all that I've seen) SHOUTcast/Icecast encoders (going straight from files) will compress to MP3 just fine, but they assume a fixed sample rate, whatever they are configured for, typically 44.1 kHz. If you drop in a 48 kHz track, or a 22.05 kHz track, it will play at the wrong speed and cause all sorts of random issues with the stream.
The problem is easy enough to verify: simply create a file with a different sample rate and test it. I suspect you will reproduce the problem. If that is the case, to my knowledge there is no way to detect it from the stream itself, since your stream isn't actually corrupt; it just sounds wrong. You will have to scan all of your files for their sample rate, and FFmpeg in a script should be able to help you with that (see the sketch below).
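A sketch of that scan, assuming Node.js with ffprobe (part of FFmpeg) on the PATH; the library path is hypothetical:

```typescript
import { execFileSync } from "node:child_process";
import { readdirSync } from "node:fs";
import { join } from "node:path";

const MUSIC_DIR = "/srv/music"; // hypothetical library location
const EXPECTED_RATE = "44100";  // whatever your encoder is configured for

for (const name of readdirSync(MUSIC_DIR).filter((f) => f.endsWith(".mp3"))) {
  // Ask ffprobe for the sample rate of the first audio stream.
  const rate = execFileSync("ffprobe", [
    "-v", "error",
    "-select_streams", "a:0",
    "-show_entries", "stream=sample_rate",
    "-of", "default=noprint_wrappers=1:nokey=1",
    join(MUSIC_DIR, name),
  ]).toString().trim();
  if (rate !== EXPECTED_RATE) console.log(`${name}: ${rate} Hz`);
}
```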
Now, if the problem actually is a corrupt MP3 stream, then you have a problem on your encoding side. I suspect simply swapping out whatever DLL or module you're using for a recent stable version of LAME will help.
To detect a genuinely corrupt MP3 stream, your encoder must have CRC protection enabled. If it does, you can read through the header of each frame to find the CRC and then run it over the audio data. If you get an error (or several frames with errors), you can trigger a warning. A header-parsing sketch follows the link below.
You can find information on the MP3 stream header here:
http://www.mp3-tech.org/programmer/frame_header.html
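A sketch of reading those header fields in TypeScript, with the bit layout per the page linked above. It extracts the sample rate and whether the frame carries a CRC; actually validating the CRC-16 over the side information is left out for brevity:

```typescript
// Sample rates indexed by the header's 2-bit rate field, keyed by version bits.
const SAMPLE_RATES: Record<number, number[]> = {
  3: [44100, 48000, 32000], // version bits 11 = MPEG1
  2: [22050, 24000, 16000], // version bits 10 = MPEG2
  0: [11025, 12000, 8000],  // version bits 00 = MPEG2.5
};

function parseFrameHeader(buf: Uint8Array, offset: number) {
  const b0 = buf[offset], b1 = buf[offset + 1], b2 = buf[offset + 2];
  if (b0 !== 0xff || (b1 & 0xe0) !== 0xe0) return null; // no 11-bit sync word
  const version = (b1 >> 3) & 0x3;
  const crcProtected = (b1 & 0x1) === 0; // protection bit 0 means a CRC follows
  const rateIndex = (b2 >> 2) & 0x3;     // index 3 is reserved
  const rates = SAMPLE_RATES[version];
  if (!rates || rateIndex === 3) return null; // reserved version or rate
  return { sampleRate: rates[rateIndex], crcProtected };
}

// Usage: scan a chunk of the stream for the first valid-looking frame header.
function firstHeader(chunk: Uint8Array) {
  for (let i = 0; i + 4 <= chunk.length; i++) {
    const h = parseFrameHeader(chunk, i);
    if (h) return h;
  }
  return null;
}
```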
I've been playing around with the SDK recently, and I had an idea to just build a personal autotuner (because I am just as awesome as T-Pain).
Intro aside, I want to attach a high-quality microphone to the headphone jack, have my audio processed in a callback, and then have it copied to the output buffer. This has several implications:
When my audio-in is being routed through the built-in microphone, I need to be able to process this input and send it once the input has stopped (this works).
When my audio-in is being routed through the microphone input on the headset jack, I want the output to be sent immediately.
Routing, however, doesn't seem to work properly when using AudioSession modes and overrides, which technically should allow you to reroute output to the iPhone speaker no matter where the input is coming from. This is documented to work, but in practice it doesn't.
Remote IO, however, is not documented at all. Anyone with experience using Remote IO audio units: can you give me a reasonable high-level overview of how to do this properly? I have been using the aurioTouch example code, but I am running into error codes like -50 and -10863, none of which are documented.
Thanks in advance.
The aurioTouch example implements RemoteIO play-through: it simply calls AudioUnitRender in the output render callback, and you could modify the samples before passing them on.

NB: this trick does not seem to work if you port the code to OS X-style Core Audio. There, 99% of the time, you need to create two AUHALs (RemoteIO-a-likes) and pass the samples between them.