There are various ways to calculate whether a year is a leap year, which have been discussed in previous answers such as: Leap year calculation. Some of these are purely mathematical and therefore platform-agnostic, while others rely on calendar functions that are specific to particular programming languages (for one example in the Swift programming language, see Swift Date: How to tell if a month can have a leap day?).
What these previous questions and answers generally do not address with data is the performance implication of choosing one approach over another. Many answers suggest that calendar-based approaches may be more flexible and the most likely to be accurate in edge cases. However, it is reasonable to ask whether these potentially heavier functions perform poorly when the leap year status must be determined for many thousands or millions of candidate years. Such performance characteristics will also be language-specific and potentially platform-specific.
When calculating whether a particular year is a leap year in the Gregorian calendar (the standard Western calendar) in Swift, where both mathematical and Calendar-based approaches are available, what is the most performant option?
There are a number of ways to calculate whether a year is a leap year, at least for the Gregorian calendar: using mathematical rules based on the current definition of leap years, and using Calendar-based methods.
In the Gregorian calendar, the base definition of a leap year is a simple mathematical function of the year, so the simplest way to get the answer may not require any Date-related functionality in Swift at all. The leap year rules are:
A year is a leap year if it is divisible by 4...
Unless it is also divisible by 100, in which case it isn't a leap year,
Except when it is also divisible by 400, in which case it is a leap year after all.
The modulo operator % calculates the remainder when you divide one number by another, so when the remainder is 0 the number is evenly divisible. The rules above are stated in the order that makes the most day-to-day sense (you rarely need to worry about the last two), but for the calculation we reverse the order to build in the if/unless/except logic we need:
private func isLeapYearUsingModulo(_ targetYear: Int) -> Bool {
    if targetYear % 400 == 0 { return true }
    if targetYear % 100 == 0 { return false }
    if targetYear % 4 == 0 { return true }
    return false
}
Swift also has a built-in method for checking whether one number is a multiple of another, isMultiple(of:), which can provide the same outcome:
private func isLeapYearUsingMultipleOf(_ targetYear: Int) -> Bool {
    if targetYear.isMultiple(of: 400) { return true }
    if targetYear.isMultiple(of: 100) { return false }
    if targetYear.isMultiple(of: 4) { return true }
    return false
}
These mathematical approaches do have potential limitations: they assume the rules for leap years will not change in the future and, perhaps more importantly, they treat past years as though these rules applied even when the rules were different or not in place at all.
A calendar-based approach might therefore be better. One approach that has been identified is to count the number of days in the target year, and see if it is 366 rather than 365:
private func isLeapYearUsingDaysInYear(_ targetYear: Int) -> Bool {
    let targetYearComponents = DateComponents(calendar: Calendar.current, year: targetYear)
    let targetYearDate = Calendar.current.date(from: targetYearComponents)
    return Calendar.current.range(of: .day, in: .year, for: targetYearDate!)!.count == 366
}
Alternatively, given we know leap days only fall in February in the Gregorian calendar, we could just count the number of days in February:
private func isLeapYearUsingDaysInFebruary(_ targetYear: Int) -> Bool {
    let targetYearFebruary = Calendar.current.range(of: .day, in: .month,
        for: DateComponents(calendar: .current, year: targetYear, month: 2).date!)
    return targetYearFebruary!.count == 29
}
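For example, a quick sanity check along these lines (assuming the device's current calendar is Gregorian) confirms all four implementations agree on some well-known years:

// Hypothetical sanity check: 2000 and 2024 are leap years, 1900 and 2023 are not.
let knownYears = [2000: true, 2024: true, 1900: false, 2023: false]
for (year, expectedResult) in knownYears {
    assert(isLeapYearUsingModulo(year) == expectedResult)
    assert(isLeapYearUsingMultipleOf(year) == expectedResult)
    assert(isLeapYearUsingDaysInYear(year) == expectedResult)
    assert(isLeapYearUsingDaysInFebruary(year) == expectedResult)
}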
The question here asks what is the most performant way to determine whether a year is a leap year. It would seem reasonable to speculate that the pure mathematical approaches are likely to be more performant than methods that need to instantiate Calendar, Date and DateComponents instances. However, the best way to answer the question is with actual performance testing.
XCTest will automatically run performance tests of any code included in a self.measure block, running each measure block 10 times, averaging the results, and storing performance baselines for future regression testing.
Because we expect these functions to be fast, single calls are difficult to compare for performance testing. We can therefore embed a loop within the measure block to call each function 1 million times. Each test then runs ten iterations, making ten million calls to each function, to give us an average time each approach takes to run 1 million times:
func testA1_mathematical_usingModulo_leapYearPerformance() throws {
    self.measure {
        for _ in 1...1_000_000 {
            let targetYearInt = Int.random(in: 0...4000)
            let result: Bool = isLeapYearUsingModulo(targetYearInt)
        }
    }
}

func testA2_mathematical_usingIsMultipleOf_leapYearPerformance() throws {
    self.measure {
        for _ in 1...1_000_000 {
            let targetYearInt = Int.random(in: 0...4000)
            let result: Bool = isLeapYearUsingMultipleOf(targetYearInt)
        }
    }
}

func testB1_date_usingDaysInYear_leapYearPerformance() throws {
    self.measure {
        for _ in 1...1_000_000 {
            let targetYearInt = Int.random(in: 0...4000)
            let result: Bool = isLeapYearUsingDaysInYear(targetYearInt)
        }
    }
}

func testB2_date_usingDaysInFebruary_leapYearPerformance() throws {
    self.measure {
        for _ in 1...1_000_000 {
            let targetYearInt = Int.random(in: 0...4000)
            let result: Bool = isLeapYearUsingDaysInFebruary(targetYearInt)
        }
    }
}
The results are instructive:
Modulo was the fastest of the functions, taking on average 0.501 seconds to calculate whether 1 million integers represented leap years.
While isMultiple(of:) would seem likely to simply call modulo in its own implementation, it was found to be about 20% slower, taking on average 0.598 seconds for the same 1 million iterations.
Date-based methods were significantly slower. Counting the number of days in February took on average 10 seconds for the same 1 million runs—20 times slower than the mathematical methods. Meanwhile, counting the number of days in a year took on average 38 seconds, so was 75 times slower than the mathematical methods.
Calendar-based approaches are certainly going to be accurate, and for many applications will be the right way to proceed: they are fully informed about the complexities of calendars, and they can also be used with non-Gregorian calendars. For uncomplicated applications, all of these approaches are fast enough to be functionally interchangeable, but where performance matters the mathematical approaches clearly have a significant advantage.
There is potential for further optimisation, however. In a comment elsewhere, Anthony noted that simply checking whether a year is divisible by 4 will eliminate 75% of years as not being leap years without any further comparisons, since while not all years divisible by 4 are leap years, all leap years are divisible by 4. A more optimized algorithm would therefore be:
private func isLeapYearUsingOptimizedModulo(_ targetYear: Int) -> Bool {
    if targetYear % 4 != 0 { return false }
    if targetYear % 400 == 0 { return true }
    if targetYear % 100 == 0 { return false }
    return true
}

func testA3_mathematical_usingOptimizedModulo_leapYearPerformance() throws {
    self.measure {
        for _ in 1...1_000_000 {
            let targetYearInt = Int.random(in: 0...4000)
            let result: Bool = isLeapYearUsingOptimizedModulo(targetYearInt)
        }
    }
}
This does indeed run slightly faster, averaging 0.488 seconds for 1 million calls. However, this is not as much of a speed increase as would be expected from reducing the number of comparisons made in 75% of cases by two-thirds.
That draws attention to the cost of the component shared by all of the performance tests: generating random integers for the target year. We can test how long that portion takes in isolation:
func test00_randomInteger_portionOfPerformance() throws {
    self.measure {
        for _ in 1...1_000_000 {
            let targetYearInt = Int.random(in: 0...4000)
        }
    }
}
This test runs in 0.482 seconds on average, representing about 95% of the execution time of the mathematical performance tests.
Results vary slightly for the previous tests on re-running, but show the same pattern. More significantly, if we subtract the 0.482 seconds of random integer generation from each test, the performance differences between the mathematical and Calendar-based approaches are even more stark:
Average execution, subtracting random integer execution time:
Mathematical—optimized modulo approach: 0.006 seconds
Mathematical—modulo approach: 0.013 seconds (2.1x slower)
Mathematical—isMultipleOf approach: 0.105 seconds (17.5x slower)
Date—count days in February: 9.818 seconds (1,636x slower)
Date—count days in year: 37.518 seconds (6,253x slower)
If this approach of subtracting the time taken to calculate the random integers is valid, it suggests an optimized modulo approach is 6,253 times faster than a Calendar approach counting the days in the target year.
Here I've implemented it as a computed property in an extension on Int, so for any integer you can just ask 2024.isALeapYear and get back a Bool: true or false. You could of course put the same logic in a function elsewhere instead.
extension Int {
    var isALeapYear: Bool {
        if self % 4 != 0 { return false }
        if self % 400 == 0 { return true }
        if self % 100 == 0 { return false }
        return true
    }
}
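For example:

2024.isALeapYear    // true
2023.isALeapYear    // false
1900.isALeapYear    // false (divisible by 100 but not 400)
2000.isALeapYear    // true (divisible by 400)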
Duncan's answer above is also correct; I am posting this since it takes a different approach.
The main difference (maybe the only difference) in a leap year is that it has an extra day in February. So using a DateFormatter you can check whether a 29th of February exists in that year.
func isLeapYear(year: Int) -> Bool {
    let dateFormatter = DateFormatter()
    dateFormatter.dateFormat = "yyyy-MM-dd"
    return dateFormatter.date(from: "\(String(year))-02-29") != nil
}
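A quick check (assuming the formatter's default non-lenient parsing, which rejects impossible dates such as 2023-02-29, and a Gregorian locale calendar):

isLeapYear(year: 2024)    // true
isLeapYear(year: 2023)    // false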
I'm currently working on a project trying to determine how long different sorting algorithms take to sort different-sized arrays. To measure the time, I've decided to use XCTest in Swift Playgrounds since it can automate the process of running the algorithm multiple times and averaging the results. But I have an issue with this method because I have to test a large variety of array sizes, from 15 elements up to 1500 or so, at 5-element intervals (i.e. 15 elements, 20 elements, 25 elements...).
The only way I've been able to do this is to write a separate test function for each size and measure the performance of each. Here is a sample of what that looks like:
class insertionSortPerformanceTest: XCTestCase {
    func testMeasure10() {
        measure {
            _ = insertionSort(arrLen: 10)
        }
    }

    func testMeasure15() {
        measure {
            _ = insertionSort(arrLen: 15)
        }
    }

    func testMeasure20() {
        measure {
            _ = insertionSort(arrLen: 20)
        }
    }
}
insertionSort() works by generating an array of length arrLen and populating it with random numbers.
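(insertionSort itself isn't shown above; a minimal sketch, assuming it simply fills an array of length arrLen with random numbers and insertion-sorts it in place, might look like this:)

func insertionSort(arrLen: Int) -> [Int] {
    // Build an array of arrLen random numbers.
    var arr = (0..<arrLen).map { _ in Int.random(in: 0...10_000) }
    guard arr.count > 1 else { return arr }
    // Standard insertion sort: shift larger elements right, then insert the key.
    for i in 1..<arr.count {
        let key = arr[i]
        var j = i - 1
        while j >= 0 && arr[j] > key {
            arr[j + 1] = arr[j]
            j -= 1
        }
        arr[j + 1] = key
    }
    return arr
}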
Is there a way to automate this process somehow?
Also, is there a way to take the output in the console and save it as a string so I can parse it for the relevant information later?
I have an app that reads environmental data from a USB sensor connected to a Mac. Users are able to configure how often the app samples data and how often the app averages those samples and logs the average to a file.
I first used NSTimer but that was wildly inaccurate, especially when the display went to sleep. I am now using a DispatchSourceTimer, but it is still losing about 1 millisecond every 21-23 seconds, which works out to about 1 second every 6 hours or so. I'd ideally like that to be less than 1 second per day.
Any ideas how I can tune the timer to be a little more accurate?
func setupTimer() -> DispatchSourceTimer {
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: nil)
    let repeatInterval = DispatchTimeInterval.seconds(samplingInterval)
    let deadline: DispatchTime = .now() + repeatInterval
    timer.schedule(deadline: deadline, repeating: repeatInterval, leeway: .nanoseconds(0))
    timer.setEventHandler(handler: self.collectPlotAndLogDatapoint)
    return timer
}

func collectPlotAndLogDatapoint() {
    samplingIntervalCount += 1
    let dataPoint: Float = softwareLoggingDelegate?.getCurrentCalibratedOutput() ?? 0
    accumulatedTotal += dataPoint
    if samplingIntervalCount == loggingInterval / samplingInterval {
        let average = self.accumulatedTotal / Float(self.samplingIntervalCount)
        DispatchQueue.global().async {
            self.logDataPoint(data: average)
            self.chartControls.addPointsToLineChart([Double(average)], Date().timeIntervalSince1970)
            self.samplingIntervalCount = 0
            self.accumulatedTotal = 0
        }
    }
}
The answers (and comments in response) to this seem to suggest that sub-millisecond precision is hard to obtain in Swift:
How do I achieve very accurate timing in Swift?
Apple apparently have their own dos & don'ts for high precision timers: https://developer.apple.com/library/archive/technotes/tn2169/_index.html
~3-4 seconds a day is pretty accurate for an environmental sensor; I'd imagine this would only prove an issue (or even be noticeable) for those users who want to take samples on an interval << 1 second.
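If you do want to experiment further, one option worth trying (just a sketch, not something I've measured against your setup) is to schedule the timer against the wall clock instead of the default monotonic clock, since DispatchWallTime tracks adjustments to the system clock:

func setupWallClockTimer() -> DispatchSourceTimer {
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: nil)
    // Anchoring the deadline and the repeat interval to wall-clock time rather
    // than mach absolute time may track your logging intervals more closely.
    timer.schedule(wallDeadline: .now() + .seconds(samplingInterval),
                   repeating: .seconds(samplingInterval),
                   leeway: .nanoseconds(0))
    timer.setEventHandler(handler: self.collectPlotAndLogDatapoint)
    return timer
}

Whether that actually reduces the drift you are seeing would need testing against your sensor setup.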
I believe this question to be programming language agnostic, but for reference, I am most interested in Swift.
When we perform a Pythagoras calculation inside a method, we know that for a 32- or 64-bit Int this will be a fast operation.
func calculatePythagoras(sideX x: Int, sideY y: Int) -> Int {
    // Square root works on floating-point values, so convert to Double and back.
    return Int(Double(x * x + y * y).squareRoot())
}
We can call this method synchronously since we consider it to be fast enough.
Of course, it would be silly - assuming Int of max 64-bit size - to implement this in an asynchronous manner:
func calculatePythagorasAsync(sideX x: Int, sideY y: Int, done: @escaping (Int) -> Void) -> Void {
    DispatchQueue.global(qos: .userInitiated).async {
        let sideZ = Int(Double(x * x + y * y).squareRoot())
        DispatchQueue.main.async {
            done(sideZ)
        }
    }
}
This just overcomplicates the code, since we can assume that regardless of how old and slow the device we are running on, performing two multiplications and one square root on integers of at most 64 bits will execute fast enough.
This is what I am interested in: what is fast enough? Let's say that for the sake of simplicity we constrain this discussion to one specific device D. Let's say that the execution time (wall-clock time) for calculating Pythagoras on device D is 1 microsecond.
What do you think is a reasonable threshold at which a method M should be changed to asynchronous? Imagine we would like to call this method M every time we display a table view (list view) cell (item), calling M on the main thread, and we would like to keep a smooth 60 fps when scrolling. 1 microsecond is surely fast enough. Of course, 100,000 microseconds (= 0.1 seconds) is not nearly fast enough. Probably 10,000 microseconds (= 0.01 s) is not fast enough either.
Is 1,000 microseconds (= 1 millisecond = 0.001 s) fast enough?
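(For reference, at 60 fps each frame has a budget of about 1000 ms / 60 ≈ 16.7 ms, so a 1 millisecond call would by itself consume roughly 6% of a single frame's budget.)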
I hope you do not think this is a silly question, I am genuinely interested.
Is there any reference to some standard regarding this? Some best practice?
I'm trying to "profile" an expensive method by simply printing the system time. I've written a small method that prints the current time in seconds relative to the start time:
object Benchmark extends App {
  var starttime = 0L

  def printTime(): Unit = {
    if (starttime == 0L) {
      starttime = System.currentTimeMillis()
    }
    println((System.currentTimeMillis() - starttime) / 1000.0)
  }

  printTime()
  Thread.sleep(100)
  printTime()
}
I expect therefore that the first call to printTime prints something close to 0. But the output I get is
0.117
0.221
I don't understand why the first call already gives me ~120 milliseconds. What is the correct implementation for my purpose?
As others have mentioned, the running time of your application does not necessarily represent the elapsed wall-clock time. Several factors affect it: JVM warm-up time, JVM garbage collection, reaching the JVM's steady state needed for accurate measurement, and OS process scheduling.
For Scala-related purposes I suggest ScalaMeter, which allows you to tune all the aforementioned variables and measure the time quite accurately.