I have an app that reads environmental data from a USB sensor connected to a Mac. Users are able to configure how often the app samples data and how often the app averages those samples and logs the average to a file.
I first used NSTimer, but that was wildly inaccurate, especially when the display went to sleep. I am now using a DispatchSourceTimer, but it is still losing 1 millisecond about every 21-23 seconds, which works out to roughly 1 second every 6 hours. I'd ideally like that to be under 1 second per day.
Any ideas how I can tune the timer to be a little more accurate?
func setupTimer() -> DispatchSourceTimer {
    // .strict asks the system not to coalesce or defer the timer to save power
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: nil)
    let repeatInterval = DispatchTimeInterval.seconds(samplingInterval)
    let deadline: DispatchTime = .now() + repeatInterval
    timer.schedule(deadline: deadline, repeating: repeatInterval, leeway: .nanoseconds(0))
    timer.setEventHandler(handler: self.collectPlotAndLogDatapoint)
    return timer // the caller activates/resumes the returned timer
}
func collectPlotAndLogDatapoint() {
    samplingIntervalCount += 1
    let dataPoint: Float = softwareLoggingDelegate?.getCurrentCalibratedOutput() ?? 0
    accumulatedTotal += dataPoint
    if samplingIntervalCount == loggingInterval / samplingInterval {
        let average = accumulatedTotal / Float(samplingIntervalCount)
        // Reset the accumulators synchronously so the next timer tick
        // cannot race with the logging work dispatched below.
        samplingIntervalCount = 0
        accumulatedTotal = 0
        DispatchQueue.global().async {
            self.logDataPoint(data: average)
            self.chartControls.addPointsToLineChart([Double(average)], Date().timeIntervalSince1970)
        }
    }
}
The answers (and the comments on them) to this related question suggest that sub-millisecond precision is hard to obtain in Swift:
How do I achieve very accurate timing in Swift?
Apple also publishes its own dos and don'ts for high-precision timers: https://developer.apple.com/library/archive/technotes/tn2169/_index.html
~3-4 seconds a day is pretty accurate for an environmental sensor; I'd imagine this would only become an issue (or even be noticeable) for users who want to sample at intervals much shorter than 1 second.
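One approach that tends to help here is to stop relying on the timer's repeating interval and instead compute every deadline from a single fixed start time, so per-tick scheduling error cannot compound. A minimal sketch, assuming a whole-second interval (the class and property names are illustrative, not from the original code):
import Dispatch

// Each deadline is an absolute offset from the original start time
// (start + n * interval) rather than "now() + interval", so a late
// tick does not push every subsequent tick later.
final class DriftCompensatingTimer {
    private let timer: DispatchSourceTimer
    private let start = DispatchTime.now()
    private let intervalNanos: UInt64
    private var tick: UInt64 = 1

    init(seconds: Int, queue: DispatchQueue, handler: @escaping () -> Void) {
        intervalNanos = UInt64(seconds) * 1_000_000_000
        timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
        timer.setEventHandler { [weak self] in
            guard let self = self else { return }
            handler()
            self.tick += 1
            let next = self.start + .nanoseconds(Int(self.tick * self.intervalNanos))
            self.timer.schedule(deadline: next, leeway: .nanoseconds(0))
        }
        timer.schedule(deadline: start + .nanoseconds(Int(intervalNanos)), leeway: .nanoseconds(0))
        timer.activate()
    }
}
This doesn't make any individual tick more punctual, but it stops the error from accumulating, which is what matters for the seconds-per-day figure. Actual punctuality is still subject to App Nap and timer coalescing, which the technote above discusses.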
I understand the concept of CMTime and what it does. In a nutshell, we have very tiny fractions of a second represented as floating-point numbers; when added, they accumulate an error that becomes significant as decoding/playback progresses. For example, summing 0.000001 one million times gives us 1.000000000007918. Okay, CMTime sounds like a great idea.
let d: Double = 0.000001
var n: Double = 0
for _ in 0 ..< 1_000_000 { n += d }
print(n)
// 1.000000000007918
However, when attempting to convert a random Double to and from CMTime, the error above looks like a joke compared to the difference between the original Double and its CMTime value. You can guess what that difference would look like after adding such CMTime values a million times!
import CoreMedia

print("Simple number after 1,000,000 additions and diff between random ")
print("number before/after converting to CMTime:")
print("add:", String(format: "%.20f", 1.000000000007918))
for _ in 0 ..< 10 {
    let seconds = Double.random(in: 0 ... 10)
    // Let's go with the max timescale!
    let time = CMTime(seconds: seconds, preferredTimescale: .max)
    print("dif:", String(format: "%.20f", seconds - time.seconds))
}
// Simple number after 1,000,000 additions and diff between random
// number before/after converting to CMTime:
// add: 1.00000000000791811061
// dif: 0.00000000025481305954
// dif: 0.00000000027779378797
// dif: 0.00000000000071231909
// dif: 0.00000000024774449159
// dif: 0.00000000028195579205
// dif: 0.00000000029723601358
// dif: 0.00000000029402880131
// dif: 0.00000000044737191729
// dif: 0.00000000036750824606
// dif: 0.00000000043562398133
On the other hand, yes, if any given Double could be converted to CMTime exactly, this wouldn't be an issue.
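For what it's worth, the magnitude of those diffs is exactly what CMTime's integer representation predicts: a Double is converted to a whole number of ticks at the given timescale, so the conversion error is bounded by one tick, and at a timescale of Int32.max one tick is about 4.66 × 10⁻¹⁰ s, which is precisely the ceiling of the dif values above. A quick check:
// CMTime stores value/timescale as integers, so converting a Double can
// lose at most one tick's worth of precision at the chosen timescale.
print(1.0 / Double(Int32.max)) // ≈ 4.66e-10 s, the upper bound of the diffs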
Question: I'm trying to figure out whether it makes sense to use CMTime on its own for time handling (apart from a million additions, obviously), or whether it is only useful for working with APIs that take and return values in CMTime format. To give some context, I have a video editing app with a bespoke UI (player, tracks, timelines) that deals with playback speed adjustments, track trimming and rearranging, etc. Using Double to express time values works out great: it's clean, simple and does the job. But CMTime feels like the "right" way to do it. However, seeing what happens to a Double after converting it back and forth makes me wonder whether CMTime's field of use is as narrow as encoding and decoding media.
Your intuition is correct. Using a screwdriver as a hammer may work most of the time, but it's not the best use. More importantly, you may be hitting non-obvious edge cases where it simply won't work, or where it takes more effort to hammer in the nail (such as double processing).
Secondly, what is your conversion method? Perhaps you are missing an edge case such as a varying timescale. I can't really give further guidance without a bit more information.
CMTime is already frame-accurate with AVPlayer without conversion. That's what it was made for, though make sure you set toleranceBefore and toleranceAfter to zero.
Note: I've been working with frame-accurate video/audio processing for over a decade.
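For reference, here is a minimal sketch of frame-accurate seeking with AVPlayer under those zero tolerances (the file URL and frame numbers are placeholders, not from the question):
import AVFoundation

// Zero tolerances force the player to land exactly on the requested time
// instead of snapping to a nearby keyframe.
let videoURL = URL(fileURLWithPath: "/path/to/video.mov") // placeholder
let player = AVPlayer(url: videoURL)
let target = CMTime(value: 90, timescale: 30) // frame 90 of 30 fps material
player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
Note that a CMTime built directly from a frame number and the media's timescale is exact; the rounding shown earlier only appears when you round-trip through Double, which is one more argument for staying in CMTime inside editing code.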
I'm playing around in Swift 5.1 (on the Mac) running little simulations of tennis matches. Naturally part of the simulation is randomly choosing who wins each point.
Below is the relevant part of the code where I do the parallelism.
func combine(result: MatchTally)
{
    overallTally.add(result: result)
}

DispatchQueue.concurrentPerform(iterations: cycleCount) { iterationNumber in
    var counter = MatchTally()
    for _ in 1...numberOfSimulations
    {
        let result = playMatch(between: playerOne, and: playerTwo)
        counter[result.0, result.1] += 1
    }
    combiningQueue.sync { combine(result: counter) }
}
With an appropriate simulation run count chosen, a single queue takes about 5 s. If I set the concurrent queues to 2, the simulation now takes 3.8 s per queue (i.e. it took 7.2 s in total). Doubling again to 4 queues results in 4.8 s per queue. And finally, with 6 queues (the machine is a 6-core Intel i7), things take 5.6 s per queue.
For those who need more convincing that this relates to random number generation (I'm using Double.random(in: 0...1)): I replaced the code where most of the random outcomes are generated with a fixed result (I couldn't replace the second place, as I still needed a tie-break) and adjusted the number of simulations appropriately. The outcomes were as follows:
1 queue: 5s / queue
2 queues: 2.7s / queue
4 queues: 1.9s / queue
6 queues: 1.7s / queue
So as you can see, the randomness part appears to resist running in parallel.
I've also tried drand48() and encountered the same issues. Does anybody know whether this is just the way things are?
Xcode 11.3,
Swift 5.1,
macOS 10.15.3,
Mac mini 2018,
6 core i7 (but have encountered the same thing over the years on different hardware)
For anyone interested in reproducing this themselves, here is some code I created and Alexander added to.
import Foundation

func formatTime(_ date: Date) -> String
{
    let df = DateFormatter()
    df.dateFormat = "h:mm:ss.SSS"
    return df.string(from: date)
}

func something(_ iteration: Int)
{
    var tally = 0.0
    let startTime = Date()
    print("Start #\(iteration) - \(formatTime(startTime))")
    for _ in 1...1_000_000
    {
        tally += Double.random(in: 0...100)
        // tally += 3.5
    }
    let endTime = Date()
    print("End #\(iteration) - \(formatTime(endTime)) - elapsed: \(endTime.timeIntervalSince(startTime))")
}
print("Single task performed on main thread")
something(0) // Used to get a baseline for single run
print("\nMultiple tasks performed concurrently")
DispatchQueue.concurrentPerform(iterations: 5, execute: something)
Swapping the random additive in the loop for a fixed one demonstrates how well the code scales in one scenario but not the other.
Looks like the solution is to use a less 'fashionable' generator such as drand48(). I believed I had already tested that option, but it seems I was wrong. It doesn't suffer from the same issue, so I guess the problem is inherent to arc4random(), which I believe Double.random(in:) is based upon.
The other positive is that it is about 4 times faster to return a value. So my simulations won't be cryptographically secure, but then what tennis match is?
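If you want to keep the Double.random(in:) call site, one alternative (my suggestion, not something tested in the thread) is to give each concurrent iteration its own seedable generator, so nothing contends on the shared system generator's lock. A sketch using SplitMix64, a tiny non-cryptographic PRNG:
import Foundation

// A small seedable generator; one private instance per iteration means
// no shared state and no lock contention.
struct SplitMix64: RandomNumberGenerator
{
    private var state: UInt64
    init(seed: UInt64) { state = seed }
    mutating func next() -> UInt64
    {
        state &+= 0x9E3779B97F4A7C15
        var z = state
        z = (z ^ (z >> 30)) &* 0xBF58476D1CE4E5B9
        z = (z ^ (z >> 27)) &* 0x94D049BB133111EB
        return z ^ (z >> 31)
    }
}

DispatchQueue.concurrentPerform(iterations: 6) { iteration in
    var rng = SplitMix64(seed: UInt64(iteration) &+ 1)
    var tally = 0.0
    for _ in 1...1_000_000
    {
        tally += Double.random(in: 0...100, using: &rng)
    }
    print("#\(iteration): \(tally)")
}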
I believe this question to be programming language agnostic, but for reference, I am most interested in Swift.
When we perform a Pythagoras calculation inside a method, we know that for a 32- or 64-bit Int this will be a fast operation.
func calculatePythagoras(sideX x: Int, sideY y: Int) -> Int {
    // squareRoot() is defined on Double, so convert and truncate back to Int
    return Int(Double(x * x + y * y).squareRoot())
}
We can call this method synchronously, since we consider it to be fast enough.
Of course it would be silly - assuming Int is at most 64 bits - to implement this in an asynchronous manner:
func calculatePythagorasAsync(sideX x: Int, sideY y: Int, done: @escaping (Int) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let sideZ = Int(Double(x * x + y * y).squareRoot())
        DispatchQueue.main.async {
            done(sideZ)
        }
    }
}
This just overcomplicates the code, since we can assume that regardless of how old and slow the device we are running on is, two multiplications and one square root on 64-bit integers will execute fast enough.
This is what I am interested in: what is fast enough? Let's say, for the sake of simplicity, that we constrain this discussion to one specific device D, and that the execution (wall-clock) time for calculating Pythagoras on device D is 1 microsecond.
What do you think is a reasonable threshold at which a method M should be changed to asynchronous? Imagine we call M on the main thread every time we display a table view (list view) cell (item), and we would like to keep a smooth 60 fps when scrolling. 1 microsecond is surely fast enough. Of course, 100,000 microseconds (= 0.1 seconds) is not nearly fast enough. Probably 10,000 microseconds (= 0.01 s) is not fast enough either.
Is 1,000 microseconds (= 1 millisecond = 0.001 s) fast enough?
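To put numbers on it: at 60 fps each frame has a budget of 1/60 s ≈ 16,667 µs, and that budget must also cover layout, rendering and every other cell configured during the frame, so a per-cell call should consume only a small fraction of it. A minimal measurement sketch (the iteration count is arbitrary; calculatePythagoras is the method above):
import Foundation

// Average wall-clock cost per call, compared against the 60 fps frame
// budget of roughly 16_667 µs.
let iterations = 10_000
let start = DispatchTime.now()
for i in 0 ..< iterations {
    _ = calculatePythagoras(sideX: i, sideY: i + 1)
}
let elapsed = DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds
let perCallMicros = Double(elapsed) / Double(iterations) / 1_000
print("≈ \(perCallMicros) µs per call vs ≈ 16_667 µs per frame")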
I hope you do not think this is a silly question, I am genuinely interested.
Is there any reference to some standard regarding this? Some best practice?
Expect:
When audioPlayer.play(atTime: 1) is called, a timer resets to 0 and the audio player plays at the 1-second mark.
Reality:
I tried delay = 0.000000001, 1, and 100000000, but regardless, no sound was ever played. The code was clearly executed, however (because "function was called" appeared in the console).
Why the discrepancy?
let audioPlayer = AVAudioPlayer() // assume other setup is done
audioPlayer.play(atTime: 1)
print("function was called")
According to the official API Reference (translation to Swift 3 by me):
Declaration
func play(atTime time: TimeInterval) -> Bool
Parameters
time:
The absolute audio output device time to begin playback. The value that you provide to the time parameter must be greater than the device’s current time. You can delay the start of playback by using code like this:
let playbackDelay = 3.0 // must be ≥ 0
myAudioPlayer.play(atTime: myAudioPlayer.deviceCurrentTime + playbackDelay)
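Put differently, play(atTime:) takes an absolute timestamp on the player's own deviceCurrentTime clock, not a delay, so passing 1 names a moment that has long since passed and nothing plays. A minimal sketch of the intended usage (the file path is a placeholder):
import AVFoundation

// play(atTime:) wants an absolute device-clock time, not a delay in seconds.
let soundURL = URL(fileURLWithPath: "/path/to/sound.caf") // placeholder
let player = try AVAudioPlayer(contentsOf: soundURL)
player.prepareToPlay()
let delay = 1.0
let started = player.play(atTime: player.deviceCurrentTime + delay) // 1 s from now
print("scheduled:", started)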
I have set up an NSTimer with scheduledTimerWithTimeInterval and an interval of 20 minutes. I want to find out how much time is left in the interval when the app goes into background mode. How do I find out how much time is left?
Thanks
You have access to a NSTimer's fireDate, which tells you when the timer is going to fire again.
The difference between now and the fireDate is an interval you can calculate using NSDate's timeIntervalSinceDate API.
E.g. something like:
let fireDate = yourTimer.fireDate
let nowDate = NSDate()
let remainingTimeInterval = nowDate.timeIntervalSinceDate(fireDate)
When using @Michael Dautermann's solution with the plain Swift types Timer and Date, I noticed that it gives you a negative value, e.g.:
let fireDate = yourTimer.fireDate
let nowDate = Date()
let remainingTimeInterval = nowDate.timeIntervalSince(fireDate)
// The above value is negative, e.g. when the timer's interval is 2 sec
// and you check it after 0.5 sec, this is -1.5 sec.
When you pass a negative value to the Timer.scheduledTimer(timeInterval:target:selector:userInfo:repeats:) function, it leads to a timer set to 0.1 milliseconds, as the Timer documentation mentions.
So in my case I had to use the negated result: -nowDate.timeIntervalSince(fireDate) (equivalently, fireDate.timeIntervalSince(nowDate)).
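Putting it together, a minimal self-contained sketch (the block-based scheduledTimer and the 20-minute interval are my choices to mirror the question):
import Foundation

// Remaining time until the next fire, as a positive interval:
// fireDate - now, i.e. fireDate.timeIntervalSince(Date()).
let timer = Timer.scheduledTimer(withTimeInterval: 20 * 60, repeats: true) { _ in
    print("timer fired")
}
let remaining = timer.fireDate.timeIntervalSince(Date())
print("seconds until next fire: \(remaining)") // just under 1200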