I understand the concept of CMTime and what it solves. In a nutshell, when very tiny fractions of a second are represented as floating-point numbers and added together, they accumulate an error, which becomes significant as decoding / playback progresses. For example, summing 0.000001 one million times gives 1.000000000007918. Okay, CMTime sounds like a great idea.
let d: Double = 0.000001
var n: Double = 0
for _ in 0 ..< 1_000_000 { n += d }
print(n)
// 1.000000000007918
However, when converting a random Double to and from CMTime, the error above looks like a joke compared to the difference between the original Double and its CMTime value. You can guess what that difference would look like after adding these random CMTime values a million times!
import CoreMedia
print("Simple number after 1,000,000 additions and diff between random ")
print("number before/after converting to CMTime:")
print("add:", String(format: "%.20f", 1.000000000007918))
for _ in 0 ..< 10 {
    let seconds = Double.random(in: 0 ... 10)
    // Let's go with the max timescale!
    let time = CMTime(seconds: seconds, preferredTimescale: .max)
    print("dif:", String(format: "%.20f", seconds - time.seconds))
}
// Simple number after 1,000,000 additions and diff between random
// number before/after converting to CMTime:
// add: 1.00000000000791811061
// dif: 0.00000000025481305954
// dif: 0.00000000027779378797
// dif: 0.00000000000071231909
// dif: 0.00000000024774449159
// dif: 0.00000000028195579205
// dif: 0.00000000029723601358
// dif: 0.00000000029402880131
// dif: 0.00000000044737191729
// dif: 0.00000000036750824606
// dif: 0.00000000043562398133
On the other hand, yes, if any given Double can be accurately converted to CMTime, then this wouldn't be an issue.
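And to be fair, CMTime values built from integer counts in the first place do stay exact; it's the round trip through Double that hurts. A minimal sketch (the NTSC frame duration here is just an example I picked):
import CoreMedia

// 1001/30000 s is the NTSC frame duration, stored as an exact rational,
// not as a Double approximation.
let frameDuration = CMTime(value: 1001, timescale: 30000)
var total = CMTime.zero
for _ in 0 ..< 1_000_000 { total = CMTimeAdd(total, frameDuration) }
// One rounding when reading .seconds at the end, instead of a million
// roundings along the way.
print(total.seconds) // 1_000_000 * 1001 / 30000 = 33366.666...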
Question: I'm trying to figure out whether it makes sense to use CMTime on its own for time handling (apart from a million additions, obviously), or whether it's only useful for working with APIs that take and return values in CMTime format. To give some context, I have a video editing app with a bespoke UI (player, tracks, timelines) that deals with playback speed adjustments, track trimming and rearranging, etc. Using Double to express time values works out great; it's clean, simple, and does the job. But CMTime feels like the "right" way to do it. However, seeing what happens to a Double after converting it back and forth makes me wonder whether CMTime's field of use is as narrow as encoding and decoding media.
Your intuition is correct. Using a screwdriver as a hammer may work most of the time, but it's not the best use. More importantly, you may hit non-obvious edge cases where it just won't work, or where it takes more work to hammer in the nail (such as double processing).
Secondly, what is your conversion method? Perhaps you are missing an edge case such as a varying timescale. I can't really give further guidance without a bit more information.
CMTime is already frame-accurate with AVPlayer, no conversion needed; that's what it was made for. Just make sure you set toleranceBefore and toleranceAfter to zero when seeking.
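A minimal sketch of what that looks like (the URL and target time here are made up for illustration):
import AVFoundation

// Zero tolerance forces an exact seek instead of letting AVPlayer
// snap to the nearest keyframe. The player and target are hypothetical.
let player = AVPlayer(url: URL(fileURLWithPath: "/path/to/video.mp4"))
let target = CMTime(value: 3003, timescale: 30000) // frame 3 at 29.97 fps
player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)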
Note: I've been working with frame-accurate video/audio processing for over a decade.
Related
Swift recently introduced a new Duration type that is "a representation of high precision time."
I'm using it like this:
let clock = ContinuousClock()
let duration = clock.measure {
    // Code or function call to measure here
}
print("Duration: \(duration)")
If the duration is really short, it prints out something like this:
8.2584e-05 seconds
Instead of scientific notation, I would like to always display it in seconds: 0.000082584 seconds
Does anyone know how to always keep the format in seconds?
Just like Dates, Durations support the formatted method. You can give it either the TimeFormatStyle (time) or UnitsFormatStyle (units). For your desired format, it looks like the latter is more suitable. You basically want a fractionalPart that has a very large allowed length.
Though from my experiments, it still rounds everything to nanosecond-precision, even though Duration can support higher precisions. This is perhaps because nanoseconds is the smallest supported unit in Duration.UnitsFormatStyle.Unit.
For example:
let duration: Duration = .nanoseconds(1234)
print(
    duration.formatted(.units(
        width: .wide,
        fractionalPart: .init(lengthLimits: 1...1000)
    ))
)
Output:
0.000001234 seconds
By default, this will also include hours and minutes if the duration is long enough. If you don't want that, pass allowed: [.seconds] as the first parameter:
duration.formatted(.units(
    allowed: [.seconds],
    width: .wide,
    fractionalPart: .init(lengthLimits: 1...1000)
))
I am trying to get a remainder using Swift's truncatingRemainder(dividingBy:) method.
But I am getting a non-zero remainder even when the value I am using is evenly divisible by the divisor. I have tried a number of the solutions available here, but none worked.
P.S. The values I am using are Double (I tried Float as well).
Here is my code.
let price = 0.5
let tick = 0.05
let remainder = price.truncatingRemainder(dividingBy: tick)
if remainder != 0 {
return "Price should be in multiple of tick"
}
I am getting 0.049999999999999975 as the remainder, which is clearly not the expected result.
As usual (see https://floating-point-gui.de), this is caused by the way numbers are stored in a computer.
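You can see those stored values directly by printing more digits than Swift shows by default; a quick sketch:
let price = 0.5
let tick = 0.05
// 0.5 is a power of two, so it is stored exactly;
// 0.05 is not, so the nearest representable Double is stored instead.
print(String(format: "%.25f", price)) // 0.5000000000000000000000000
print(String(format: "%.25f", tick))  // 0.0500000000000000027755576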
According to the docs, this is what we expect
let price = 0.4 // for example
let tick = 0.04 // for example
let r = price.truncatingRemainder(dividingBy: tick)
let q = (price/tick).rounded(.towardZero)
tick*q+r == price // should be true
In the case where it looks to your eye as if tick evenly divides price, everything depends on the internal storage. For example, if price is 0.4 and tick is 0.04, then r is vanishingly close to zero (as you expect) and the last statement is true.
But when price is 0.5 and tick is 0.05, there is a tiny discrepancy due to the way the numbers are stored, and we end up with this odd situation where r, instead of being vanishingly close to zero, is vanishingly close to tick! And of course the last statement is then false.
You'll just have to compensate in your code. Clearly the remainder cannot be the divisor, so if the remainder is vanishingly close to the divisor (within some epsilon), you'll just have to disregard it and call it zero.
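Something along these lines (a minimal sketch; the epsilon is an arbitrary value you'd tune to your domain):
let price = 0.5
let tick = 0.05
let epsilon = 1e-9
let remainder = price.truncatingRemainder(dividingBy: tick)
// A remainder vanishingly close to 0 *or* to the divisor counts as divisible.
let isMultiple = abs(remainder) < epsilon || abs(tick - remainder) < epsilon
print(isMultiple) // true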
You could file a bug on this but I doubt that much can be done about it.
Okay, I put in a query about this and got back that it behaves as intended, as I suspected. The reply (from Stephen Canon) was:
That's the correct behavior. 0.05 is a Double with the value 0.05000000000000000277555756156289135105907917022705078125. Dividing 0.5 by that value in exact arithmetic gives 9 with a remainder of 0.04999999999999997501998194593397784046828746795654296875, which is exactly the result you're seeing.
The only rounding error that occurs in your example is in the division price/tick, which rounds up to 10 before your .rounded(.towardZero) has a chance to take effect. We'll add an API to let you do something like price.divided(by: tick, rounding: .towardZero) at some point, which will eliminate this rounding, but the behavior of truncatingRemainder is precisely as intended.
You really want to have either a decimal type (also on the list of things to do) or to scale the problem by a power of ten so that your divisor become exact:
1> let price = 50.0
price: Double = 50
2> let tick = 5.0
tick: Double = 5
3> let r = price.truncatingRemainder(dividingBy: tick)
r: Double = 0
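Applied to the original numbers, the scaling trick looks like this (a sketch; the factor of 100 assumes prices quoted to two decimal places), and Foundation's Decimal already provides exact base-ten storage if you'd rather not scale:
import Foundation

let price = 0.5
let tick = 0.05

// Scale by a power of ten so both operands become exact in Double.
let remainder = (price * 100).truncatingRemainder(dividingBy: tick * 100)
print(remainder) // 0.0

// Foundation's Decimal stores base-ten fractions exactly.
print(Decimal(string: "0.05")!) // 0.05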
I have written a sample project in Swift to try out the relatively new Core Audio V3 API. Everything seems to work around creating a custom Audio Unit and loading it in process, but the actual audio rendering isn't going so well. I've often read that rendering code needs to be written in C or C++, but I've also heard that Swift is fast and thought I could write some minimal audio rendering code in it.
The rendering code
override var internalRenderBlock: AUInternalRenderBlock {
    get {
        return {
            (_ actionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
             _ timeStamp: UnsafePointer<AudioTimeStamp>,
             _ frameCount: AUAudioFrameCount,
             _ outputBusNumber: Int,
             _ bufferList: UnsafeMutablePointer<AudioBufferList>,
             _ renderEvent: UnsafePointer<AURenderEvent>?,
             _ pull: AudioToolbox.AURenderPullInputBlock?) -> AUAudioUnitStatus in
            let bufferList = bufferList.pointee
            let theBuffers = bufferList.mBuffers // only one (AudioBuffer) ??
            guard let theBufferData = theBuffers.mData?.assumingMemoryBound(to: Float.self) else {
                return 1 // come up with better error?
            }
            let amountFrames = Int(frameCount)
            for frame in 0...amountFrames / 2 {
                let frame = theBufferData.advanced(by: frame)
                frame.pointee = sin(self.phase)
                self.phase += 0.0001
            }
            return noErr
        }
    }
}
Sounds Bad
The resulting sound is not what I'd expect. My initial thought is that Swift is the wrong choice. Yet interestingly, AudioToolbox does provide a typealias for this AUAudioUnit's rendering property, which looks like:
public typealias AUInternalRenderBlock = (UnsafeMutablePointer<AudioUnitRenderActionFlags>, UnsafePointer<AudioTimeStamp>, AUAudioFrameCount, Int, UnsafeMutablePointer<AudioBufferList>, UnsafePointer<AURenderEvent>?, AudioToolbox.AURenderPullInputBlock?) -> AUAudioUnitStatus
This would lead me to believe that it is perhaps possible to write rendering code in Swift.
Observed problems
But still, there are a few things going wrong here (aside from my obvious lack of competency with Swift memory management).
A) Despite theBuffers reporting that its mNumberBuffers is 2, theBuffers winds up not being an array but rather of type (AudioBuffer). I don't understand the need for the parentheses. I can't find a second AudioBuffer.
B) More importantly, when I write a basic sine wave to the one AudioBuffer I can access, the resulting sound is distorted and inconsistent. Could this be Swift's fault? Is it just impossible to write any audio unit rendering code in Swift? Or have I made some assumption here that is breaking my rendering somehow?
Finally
If it is simply the case that writing this part in Swift is infeasible, then I would like some resources on interoperating Swift and C for Audio Unit render blocks. Could the property returning the closure be written in Swift, with the closure's implementation calling down into C? Or does the property have to simply return a C function whose prototype matches the closure's type?
Thanks in advance.
The rest of this project can be seen here for context.
The main reason you were hearing a distorted sound is that the phase increment of 0.0001 is too small: it takes 62832 samples to fill one period of the sine wave, which is merely 0.70 Hz (assuming a sample rate of 44100).
On top of that ultra-low-frequency sine wave, you were also hearing a tone of about 44100 / 512 = 86.1 Hz, because you were filling only half of the audio buffer (amountFrames / 2). So the sound was a near-rectangular wave at the period of your render cycle, with an amplitude slowly varying at about 0.70 Hz.
I could write a working sine wave generator unit based on your code:
override var internalRenderBlock: AUInternalRenderBlock {
    return { (_, _, frameCount, _, bufferList, _, _) in
        let srate = Float(self.bus.format.sampleRate)
        var phase = self.phase
        for buffer in UnsafeMutableAudioBufferListPointer(bufferList) {
            phase = self.phase
            assert(buffer.mNumberChannels == 1, "interleaved channel not supported")
            let frames = buffer.mData!.assumingMemoryBound(to: Float.self)
            for i in 0 ..< Int(frameCount) {
                frames[i] = sin(phase)
                phase += 2 * .pi * 440 / srate // 440 Hz
                if phase > 2 * .pi {
                    phase -= 2 * .pi // to avoid floating point inaccuracy
                }
            }
        }
        self.phase = phase
        return noErr
    }
}
Regarding observed problem A, AudioBufferList is a wrapper for a variable-length C struct, where the first field, mNumberBuffers, indicates the number of buffers (i.e. the number of non-interleaved channels), and the second field is a variable-length array:
typedef struct AudioBufferList {
    UInt32 mNumberBuffers;
    AudioBuffer mBuffers[1];
} AudioBufferList;
The user of this struct, in Objective-C or C++, is expected to allocate mNumberBuffers * sizeof(AudioBuffer) bytes, enough for storing multiple mBuffers. Since C does not perform bounds checks on arrays, users can just write mBuffers[1] or mBuffers[2] to access the second or third buffer.
Because Swift doesn't have this variable length array feature, Apple provides UnsafeMutableAudioBufferListPointer, which can be used like a Swift collection of AudioBuffers; I used this in the outer for loop above.
Finally, I tried not to access self in the innermost loop, because accessing a Swift or Objective-C object might involve unexpected lags, which is why Apple recommends writing the rendering loop in C/C++. But for simple cases like this, I would say writing it in Swift is a lot easier, and the latency is still manageable.
I need the time since the epoch in ms for an API request, so I'm looking to write a function that converts myUIDatePicker.date.timeIntervalSince1970 into milliseconds by multiplying by 1000. My question is: what should the return value be?
Right now I have
func setTimeSinceEpoch(datePicker: UIDatePicker) -> Int {
    return Int(datePicker.date.timeIntervalSince1970 * 1000)
}
Will this cause any issues? I need an integer, not a floating-point value, but will I have issues with overflow? I tested it with print statements and it seems to work, but I want to find the best way of doing this.
Looking at Apple Docs:
var NSTimeIntervalSince1970: Double { get }
There is a nice function called distantFuture. Even if you use this date in your func, the result will be smaller than Int.max.
let future = NSDate.distantFuture() // "Jan 1, 4001, 12:00 AM"
print((Int(future.timeIntervalSince1970) * 1000) < Int.max) // true
So until the year 4001 you're good to go. It will work perfectly on 64-bit systems.
Note: if your app supports the iPhone 5 (32-bit), it's going to fail on pretty much any date you use, because Int on the iPhone 5 corresponds to Int32.
Returning an Int64 is a better approach.
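A sketch of that (the same function as above, widened to Int64):
import UIKit

// Int64 keeps the full millisecond range even on 32-bit devices,
// where Int is only 32 bits wide.
func setTimeSinceEpoch(datePicker: UIDatePicker) -> Int64 {
    return Int64(datePicker.date.timeIntervalSince1970 * 1000)
}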
I have some code to convert a time value returned from QueryPerformanceCounter to a double value in milliseconds, as this is more convenient to count with.
The function looks like this:
double timeGetExactTime() {
    LARGE_INTEGER timerPerformanceCounter, timerPerformanceFrequency;
    QueryPerformanceCounter(&timerPerformanceCounter);
    if (QueryPerformanceFrequency(&timerPerformanceFrequency)) {
        return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
    }
    return 0.0;
}
The problem I'm having recently (I don't think I had this problem before, and no changes have been made to the code) is that the result is not very accurate. The result does not contain any decimals, and it is even less accurate than 1 millisecond.
When I enter the expression in the debugger, the result is as accurate as I would expect.
I understand that a double cannot hold the accuracy of a 64-bit integer, but at this time the performance counter only requires 46 bits (and a double should be able to store 52 bits without loss).
Furthermore it seems odd that the debugger would use a different format to do the division.
Here are some results I got. The program was compiled in Debug mode, with Floating Point mode in the C++ options set to the default (Precise, /fp:precise).
timerPerformanceCounter.QuadPart: 30270310439445
timerPerformanceFrequency.QuadPart: 14318180
double perfCounter = (double)timerPerformanceCounter.QuadPart;
30270310439445.000
double perfFrequency = (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
14318.179687500000
double result = perfCounter / perfFrequency;
2114117248.0000000
return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
2114117248.0000000
Result with same expression in debugger:
2114117188.0396111
Result of perfTimerCount / perfTimerFreq in debugger:
2114117234.1810646
Result of 30270310439445 / 14318180 in calculator:
2114117188.0396111796331656677036
Does anyone know why the accuracy is different in the debugger's Watch compared to the result in my program?
Update: I tried subtracting 30270310439445 from timerPerformanceCounter.QuadPart before doing the conversion and division, and it does appear to be accurate in all cases now.
Maybe the reason I'm only seeing this behavior now is that my computer's uptime is 16 days, so the value is larger than I'm used to?
So it does appear to be a division accuracy issue with large numbers, but that still doesn't explain why the division was still correct in the Watch window.
Does it use a higher-precision type than double for its results?
Adion,
If you don't mind the performance hit, cast your QuadPart numbers to decimal instead of double before performing the division. Then cast the resulting number back to double.
You are correct about the size of the numbers; it throws off the accuracy of the floating-point calculations.
For more about this than you probably ever wanted to know, see:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
http://docs.sun.com/source/806-3568/ncg_goldberg.html
Thanks, using decimal would probably be a solution too.
For now I've taken a slightly different approach, which also works well, at least as long as my program doesn't run for more than a week or so without restarting.
I just remember the performance counter of when my program started, and subtract this from the current counter before converting to double and doing the division.
I'm not sure which solution would be fastest, I guess I'd have to benchmark that first.
bool perfTimerInitialized = false;
double timerPerformanceFrequencyDbl;
LARGE_INTEGER timerPerformanceFrequency;
LARGE_INTEGER timerPerformanceCounterStart;
double timeGetExactTime()
{
    if (!perfTimerInitialized) {
        QueryPerformanceFrequency(&timerPerformanceFrequency);
        timerPerformanceFrequencyDbl = ((double)timerPerformanceFrequency.QuadPart) / 1000.0;
        QueryPerformanceCounter(&timerPerformanceCounterStart);
        perfTimerInitialized = true;
    }
    LARGE_INTEGER timerPerformanceCounter;
    if (QueryPerformanceCounter(&timerPerformanceCounter)) {
        timerPerformanceCounter.QuadPart -= timerPerformanceCounterStart.QuadPart;
        return ((double)timerPerformanceCounter.QuadPart) / timerPerformanceFrequencyDbl;
    }
    return (double)timeGetTime();
}