Hey all, I've got a method of recording that writes the notes a user plays to an array in real time. The only problem is that there is a slight delay, and each sequence is noticeably slowed down on playback. I sped playback up by about 6 milliseconds and it sounds right, but I was wondering whether the delay would vary on other devices?
I've tested on an iPod touch 2nd gen; how would this perform on the 3rd and 4th gen, as well as on iPhones? Do I need to test on all of them and find the optimal delay for each?
Any ideas?
More Info:
I use two NSThreads instead of timers, and fill an array with blank slots where no notes should play (I use integers; -1 is a blank). While recording, a blank is appended every 0.03 seconds. Every time the user hits a note, the most recent blank is replaced by a number 0-7. For playback I use the second thread (two threads because the second one needs a shorter time interval), which runs every 0.024 seconds. The 6-millisecond difference compensates for the delay between recording and playback.
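For reference, the recording side described above looks roughly like this (a sketch; the sequence property and the method names are hypothetical):

    // Record thread: append a blank slot (-1) every 0.03 s.
    - (void)recordTick {
        [self.sequence addObject:@(-1)];
    }

    // Called when the user hits a note (0-7): overwrite the newest blank.
    - (void)userDidHitNote:(int)note {
        if (self.sequence.count > 0) {
            self.sequence[self.sequence.count - 1] = @(note);
        }
    }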
I assume that either the recording or playing of notes takes longer than the other, and thus creates the delay.
What I want to know is if the delay will be different on other devices, and how I should compensate for it.
Exact Solution
I may not have explained it fully (that's why this solution wasn't provided), but for anyone with a similar problem...
I played each beat, similar to a MIDI file, like so:

    while (playing) {
        [self playBeat];                       // do stuff to play the beat
        // the dates are created only AFTER the beat has played -- see below
        NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:xyz];
        NSDate *now = [NSDate date];
        while ([now compare:deadline] != NSOrderedDescending) {
            now = [NSDate date];               // wait until now > deadline
        }
    }
The obvious thing that I was missing was to create the two dates BEFORE playing the beat...
D'OH!
It seems more likely to me that the additional delay is caused by the playback of the note, or by other compute overhead in the second thread. Grab the wall-clock time in the second thread before playing each note, and check the time difference from the last one. You will need to reduce your following delay by any excess (likely 0.006 seconds!).
The delay will be different on different generations of the iPhone, but by adapting to it dynamically like this, you will be safe as long as the processing overhead is less than 0.03 seconds.
You should do the same thing in the first thread as well.
Getting high-resolution timestamps: there's a discussion on the Apple forums here, or this Stack Overflow question.
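Putting those pieces together, a minimal sketch of the adaptive playback loop (assuming a hypothetical playNote: method and notes array; CACurrentMediaTime() from QuartzCore is one such high-resolution clock):

    #import <QuartzCore/QuartzCore.h>

    - (void)playbackLoop {
        NSTimeInterval step = 0.03;              // nominal slot spacing
        CFTimeInterval next = CACurrentMediaTime();
        for (NSNumber *slot in self.notes) {
            next += step;                        // absolute deadline for this slot
            if (slot.intValue != -1) {
                [self playNote:slot.intValue];   // -1 marks a blank
            }
            // sleep only for whatever the playback overhead left us
            CFTimeInterval remaining = next - CACurrentMediaTime();
            if (remaining > 0) {
                [NSThread sleepForTimeInterval:remaining];
            }
        }
    }

Because each deadline is absolute, any excess time taken by playing a note automatically shortens the following wait, so the error never accumulates.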
Related
I am using a G0 with one ADC and 8 channels. Works fine. I use 4 channels. One is temperature, which is measured constantly, and I am interested in the value every 60 s. Another one is almost the opposite: it measures sound waves for a couple of minutes per day, and I need those samples at 10 kHz.
I solved this by letting all 4 channels sample at 10 kHz and having the four readings moved to memory by DMA (an array of length 4, with one measurement each). Every 60 s I take the temperature, and when I need the audio, I retrieve the audio values.
If I had two ADCs, I would start the temperature ADC for one conversion every 60 s, non-stop, and I would only start the audio ADC for the couple of minutes a day that it is needed. But with the one-ADC solution, it seems simple to let all conversions run continuously at this high speed, and that raised my question: is there any true downside to having 40,000 conversions per second, 24 hours per day? If not, the code is simple; I just have the most recent values in memory all the time. But maybe I ruin the chip? I use too much energy, I know, but there is plenty of it in this case.
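For reference, the one-ADC setup described above is roughly this (a sketch assuming the STM32 HAL; the handle name and channel order are hypothetical, and the ADC is configured for four ranked channels in continuous mode with circular DMA):

    #include "stm32g0xx_hal.h"

    #define NUM_CHANNELS 4
    static volatile uint16_t adc_samples[NUM_CHANNELS]; // DMA target: newest value per channel

    extern ADC_HandleTypeDef hadc1;  // continuous mode, 4 ranks, circular DMA

    void start_sampling(void)
    {
        // Circular DMA keeps overwriting the array, so it always holds
        // the most recent conversion for each channel.
        HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adc_samples, NUM_CHANNELS);
    }

    uint16_t read_temperature(void)  { return adc_samples[0]; } // poll every 60 s
    uint16_t read_audio_sample(void) { return adc_samples[1]; } // read when audio is needed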
You aren't going to "wear it out" by running it when you don't need to.
The main problems are wasting power and RAM.
If you have enough of both to spare, then the lesser problems are:
The wasted power will become heat, this may upset your temperature measurements (this is a very small amount though).
Having the DMA running will increase your interrupt latency and maybe also slow down the processor slightly, if it encounters bus contention (this only matters if you are close to capacity in these regards).
Having it running all the time may also have the advantage of more stable readings, since they are not perturbed by things turning on and off.
I read in the Ticker class documentation that:
"Ticker class calls its callback once per animation frame."
I am using createTicker(TickerCallback onTick) to implement a stopwatch, so I need the elapsed value passed to the TickerCallback to be extremely precise (i.e. I need that after 5 seconds, the value of elapsed is exactly 5 seconds).
Now my question is: what happens if I have a sluggish, very badly coded UI that misses a lot of frames due to bad optimization? I can think of two cases:
The time of the stopwatch gets updated not at 60 fps (because of my bad coding), but once it gets updated, the time being displayed is correct
The time displayed is wrong
Other?
Which is the case? And (most importantly) why? Also, considering the above, is it advisable to use a Ticker for a stopwatch? Thanks
To answer your question(s):
1. The time of the stopwatch gets updated not at 60 fps (because of my bad coding), but once it gets updated, the time being displayed is correct.
If the phone runs at 120 fps, does that mean it will fast-forward time? :)
Flutter aims to provide 60 frames per second (fps) performance, or 120 fps performance on devices capable of 120Hz updates. For 60fps, frames need to render approximately every 16ms. Jank occurs when the UI doesn't render smoothly.
So you may use a Ticker, and even if the animation is sluggish it will still display the right time. Say we get a delay around frame 500: that is a delay of the animation, not of the time passed. If at second 3 we hit a one-second stall, the next frame that renders simply shows the correct later time; the screen update was late, but the timer kept counting.
Also, considering the above, is it advisable to use a Ticker for a stopwatch?
It is. In the worst case you will have dropped frames and the seconds will jump, but the timer will be exact.
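The reason is that the elapsed value a ticker hands you is measured from a start timestamp on each callback, rather than accumulated frame by frame. The difference is language-agnostic; a sketch (here in Objective-C, with CACurrentMediaTime() standing in for the frame clock):

    #import <QuartzCore/QuartzCore.h>

    static CFTimeInterval start;

    // Wrong: accumulating a fixed per-frame step drifts as soon as frames drop.
    // elapsed += 1.0 / 60.0;

    // Right: measure from the start on every callback, however late it fires.
    void startStopwatch(void)      { start = CACurrentMediaTime(); }
    double elapsedSeconds(void)    { return CACurrentMediaTime() - start; }

A dropped frame only delays when elapsedSeconds() is next called; it never changes the value returned for a given moment.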
I've been working on a very specific project for iOS lately, and my research has led me to an almost-final code. I've solved all the extreme difficulties I've found until now, but on this one I don't seem to have a clue (about the reason, or about how to solve it).
I set up my audio queue (sample rate 44100, format Linear PCM, 16 bits per channel, 2 bytes per frame, 1 channel per frame...) and start recording sound with 12 audio buffers. However, there seems to be a delay after every 4 callbacks.
The situation is the following: the first 4 callbacks are called with an interval each of about 2 ms. However, between the 4th and the 5th, there is a delay of about 60ms. The same thing happens between the 8th and the 9th, the 12th and 13th and on...
There seems to be a relation between the bytes per frame and the moment of the delay. I know this because if I change to 4 bytes per frame, I start having the delay between the 8th and the 9th, then between the 16th and the 17th, the 24th and the 25th... Nonetheless, there doesn't seem to be any relation between the moment of the delay and the number of buffers.
The callback function does only two things: store the audio data (inBuffer->mAudioData) in an array my class can use, and call AudioQueueEnqueueBuffer to put the current buffer back on the queue.
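In outline, the callback is roughly this (a sketch; the buffer fields and AudioQueueEnqueueBuffer are the standard Audio Queue API, while MyRecorder and appendBytes:length: are hypothetical):

    #import <AudioToolbox/AudioToolbox.h>

    static void InputCallback(void *inUserData,
                              AudioQueueRef inAQ,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
    {
        MyRecorder *recorder = (MyRecorder *)inUserData;

        // 1. copy the captured samples out for the rest of the class to use
        [recorder appendBytes:inBuffer->mAudioData
                       length:inBuffer->mAudioDataByteSize];

        // 2. hand the buffer straight back to the queue
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }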
Did anyone go through this problem already? Does anyone know, at least, what could be the cause of it?
Thank you in advance.
The Audio Queue API seems to run on top of the RemoteIO Audio Unit API, whose real audio buffer size is probably unrelated to, and in your example larger than, whatever size your Audio Queue buffers are. So whenever a RemoteIO buffer is ready, a bunch of your smaller AQ buffers quickly get filled from it, and then you get a longer delay waiting for the next larger buffer to be filled with samples.
If you want better-controlled (more evenly spaced) buffer latency, try using the RemoteIO Audio Unit directly.
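A minimal sketch of acquiring RemoteIO directly (the component constants and calls are the standard Audio Unit API; MyInputCallback is hypothetical, and error checking is omitted):

    #import <AudioToolbox/AudioToolbox.h>

    // Hypothetical callback: fires once per hardware buffer, evenly spaced.
    static OSStatus MyInputCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber, UInt32 inNumberFrames,
                                    AudioBufferList *ioData);

    static AudioUnit CreateRemoteIOInput(void)
    {
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioUnit ioUnit;
        AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &ioUnit);

        UInt32 one = 1;  // enable input on element 1 (the microphone side)
        AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &one, sizeof(one));

        AURenderCallbackStruct cb = { MyInputCallback, NULL };
        AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
                             kAudioUnitScope_Global, 0, &cb, sizeof(cb));

        AudioUnitInitialize(ioUnit);
        AudioOutputUnitStart(ioUnit);
        return ioUnit;
    }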
What is the fastest I can run an NSTimer and still get reliable results? I've read that approaching 30ms it STARTS to become useless, so where does it "start to start becoming useless"...40ms? 50ms?
The docs say:

    the effective resolution of the time interval for a timer is limited
    to on the order of 50-100 milliseconds
Sounds like if you want to be safe, you shouldn't use timers below 0.1 sec. But why not try it in your own app and see how low you can go?
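A quick way to try it (a sketch; scheduledTimerWithTimeInterval: is the standard NSTimer API, and 0.03 s is just the interval under test):

    #import <QuartzCore/QuartzCore.h>

    @interface TimerProbe : NSObject
    - (void)start;
    @end

    @implementation TimerProbe {
        CFTimeInterval _start;
        NSUInteger _count;
    }

    - (void)start {
        _start = CACurrentMediaTime();
        [NSTimer scheduledTimerWithTimeInterval:0.03
                                         target:self
                                       selector:@selector(tick:)
                                       userInfo:nil
                                        repeats:YES];
    }

    - (void)tick:(NSTimer *)timer {
        _count += 1;
        CFTimeInterval ideal  = _count * 0.03;             // where we should be
        CFTimeInterval actual = CACurrentMediaTime() - _start;
        NSLog(@"firing %lu late by %.1f ms",
              (unsigned long)_count, (actual - ideal) * 1000.0);
    }
    @end

Once the reported lateness rivals your interval, you've found where it "starts to become useless" on that device.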
You won't find a guarantee on this. NSTimers are opportunistic by nature since they run with the event loop, and their effective finest granularity will depend on everything else going on in your app in addition to the limits of whatever the Cocoa timer dispatch mechanisms are.
What's your definition of reliable? A 16 ms error in a 1-second timer is under 2% error, but in a 30 ms timer it is over 50% error.
NSTimers will wait for whatever is happening in the current run loop to finish, and any timing errors can accumulate. E.g. if you touch the display N times, all subsequent repeating NSTimer firings may be late by the cumulative time taken by 0 to N touch handlers (plus anything else that was running at the "wrong" time), etc.
CADisplayLink timers will attempt to quantize time to the frame rate, assuming that no set of foreground tasks takes as long as a frame time.
Depends on what kind of results you are trying to accomplish. For the NSTimer class, an interval of 0.5-1.0 seconds is a good place to start for reliable results.
I have a sound that needs to be played 10 times per second. The sound is 1 second long, so it overlaps with itself about 10 times. However, as far as I understand the Finch sound library, I would need 10 different instances of the sound in place so that I can play it 10 times at almost the same time.
When I have just one instance, the sound stops and plays from the beginning on every iteration, but does not overlap with itself.
How to do that?
In Finch it depends on how many instances of the particular sound you want to play simultaneously. Pass this number to the initWithFile:rounds: initializer of the RevolverSound class and it will allocate the desired number of copies of the sample.
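A usage sketch based on that answer (initWithFile:rounds: is as described above; the path variable and the play method name are assumptions):

    // One RevolverSound holding 10 copies, so successive plays can overlap.
    RevolverSound *sound = [[RevolverSound alloc] initWithFile:path rounds:10];

    for (int i = 0; i < 10; i++) {
        [sound play];                          // fires the next internal copy
        [NSThread sleepForTimeInterval:0.1];   // 10 plays per second
    }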
Unlikely. It depends on the sound system/card and the API you're using. Usually it's fire and forget (where "fire" means: load the data stream, then tell the audio system to play the stream X times). To get it to overlap, you may need to use multiple channels. I'm not familiar enough with Finch to know how it handles that sort of thing.