I've recently learned that NTP implements handling of leap seconds. I was really surprised, as I know that NTP stores time as just a count of seconds since January 1, 1900. Shouldn't the issue of leap seconds only be addressed when formatting the date into human-readable form, as is the case with leap years?
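For context on that representation: the NTP era begins at 1900-01-01 and the Unix epoch at 1970-01-01, exactly 2,208,988,800 seconds apart (70 years, 17 of them with leap days). A minimal sketch of the integer-seconds conversion, with an example value worked out by hand rather than taken from the NTP spec:
const NTP_TO_UNIX_OFFSET: u64 = 2_208_988_800;

// Convert the whole-seconds part of an NTP timestamp to a Unix timestamp.
fn ntp_seconds_to_unix(ntp_seconds: u64) -> u64 {
    ntp_seconds - NTP_TO_UNIX_OFFSET
}

fn main() {
    // 3_913_056_000 NTP seconds is 2024-01-01 00:00:00 UTC; prints 1704067200.
    println!("{}", ntp_seconds_to_unix(3_913_056_000));
}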
I'm quite new to Rust and the chrono library.
I checked https://docs.rs/chrono/0.4.19/chrono/struct.Duration.html#method.num_weeks, but there's no num_years() or num_months() API in chrono::Duration.
Is there a workaround for this?
chrono::Duration describes itself as an "ISO 8601 time duration with nanosecond precision", which means it represents the duration internally as a number of nanoseconds and provides convenience methods to convert it into other duration units such as days, weeks, milliseconds, etc.
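For example, the existing convenience methods look like this (these are real chrono 0.4 methods; the 400-day value is just an illustration):
use chrono::Duration;

fn main() {
    let d = Duration::days(400);
    println!("{} weeks", d.num_weeks());     // 57
    println!("{} days", d.num_days());       // 400
    println!("{} seconds", d.num_seconds()); // 34560000
    // ...but there is no d.num_months() or d.num_years()
}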
This is a little at odds with the actual ISO 8601 duration standard, which is a standard for representations and formats. The standard represents durations with the format P[n]Y[n]M[n]DT[n]H[n]M[n]S, which might give you what you want. But this is not what chrono::Duration was designed to provide.
The problem is that in order to represent a duration in months or years, more information than a number of nanoseconds is needed. A duration is the amount of intervening time in a time interval: the time between two points. The start or end time matters, because months and years are not fixed-length units: there are months of 28, 29, 30, and 31 days, and years of 365 and 366 days.
If you were to write your own algorithm to format durations in terms of years, months, days, hours, minutes, seconds, etc., you would have to know the start date. In addition, the time zone is important, because daylight saving time needs to be taken into account. You would also have to decide how to represent parts of months or years. For example, January has 31 days and February, say, 28. What would it mean to represent a duration of 1.75 months from January 1? Would that mean 31 days for January and then 0.75 × 28 days in February?
Or you could represent the duration from a start date in a cascading unit format: e.g., 5 years, 4 months and 3 days, 2 hours, 12 minutes and 3 seconds from 1 Jan 1970 12:00Z. Just like the ISO 8601 standard.
So, it's not an easy problem, and it all depends on what your requirements are. I can understand why the developers of chrono::Duration left out num_months() and num_years()!
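That said, if a rough answer is good enough for your use case, one workaround is to divide the day count by average month and year lengths. This is only a sketch: the 30.44 and 365.25 constants are assumptions of mine, not anything chrono defines:
use chrono::Duration;

// Rough conversions only: 30.44 is the average Gregorian month (365.25 / 12)
// and 365.25 the average Julian year; neither constant comes from chrono.
fn approx_months(d: Duration) -> i64 {
    (d.num_days() as f64 / 30.44) as i64
}

fn approx_years(d: Duration) -> i64 {
    (d.num_days() as f64 / 365.25) as i64
}

fn main() {
    let d = Duration::days(800);
    // Prints "~26 months, ~2 years":
    println!("~{} months, ~{} years", approx_months(d), approx_years(d));
}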
The chronoutil crate provides RelativeDuration, which is described as "extending Chrono's Duration to add months and years".
Example from docs.rs:
use chrono::NaiveDate;
use chronoutil::RelativeDuration;

// One calendar month plus one day, kept as separate components:
let one_day = RelativeDuration::days(1);
let one_month = RelativeDuration::months(1);
let delta = one_month + one_day;
// 2020-01-30 + 1 month clamps to 2020-02-29 (leap year); + 1 day gives 2020-03-01:
let start = NaiveDate::from_ymd(2020, 1, 30);
assert_eq!(start + delta, NaiveDate::from_ymd(2020, 3, 1));
Unix time is useful for measuring time, whereas other formats are more useful for telling the time.
This is because, apart from time synchronization, it just ticks forward one second at a time.
It doesn't change when our clock for telling the time has an hour change, for example.
However, there does seem to be one exception: it ignores leap seconds, meaning that when there is a leap second, it basically jumps back a second.
I'm wondering is there a similar format to Unix time that also includes leap seconds and has no special cases at all?
Never mind: Unix time has no exception for leap seconds.
I believe the explanation for unix time on Wikipedia is awful:
It is the number of seconds that have elapsed since the Unix epoch, minus leap seconds
This is incorrect, it should be:
It is the number of clock / artificial Earth seconds that have elapsed since the Unix epoch, minus leap seconds
Or in simpler terms:
It is the number of measured seconds that have elapsed since the Unix epoch
Hope this solves anyone else's confusion.
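One way to see this concretely: POSIX effectively defines the timestamp as days-since-epoch times 86,400 plus seconds-into-the-day, so an inserted leap second (23:59:60) collides with the following midnight. A minimal sketch, using values from the real 2016-12-31 leap second:
// POSIX-style timestamp: every day has exactly 86_400 seconds by definition.
fn posix_timestamp(days_since_epoch: u64, seconds_into_day: u64) -> u64 {
    days_since_epoch * 86_400 + seconds_into_day
}

fn main() {
    let days = 17_166; // days from 1970-01-01 to 2016-12-31
    // 23:59:59 on 2016-12-31:
    println!("{}", posix_timestamp(days, 86_399)); // 1483228799
    // The leap second 23:59:60 maps to the same value as the next midnight:
    println!("{}", posix_timestamp(days, 86_400)); // 1483228800
    println!("{}", posix_timestamp(days + 1, 0)); // 1483228800
}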
Consider a machine whose time is smeared during a leap second with a noon-to-noon linear smear.
I'm wondering how the system clock provides accurate Epoch time during the smear period.
Example:
A leap second is scheduled for 31 December 2016.
On the machine, the Unix timestamp at 11:59:00 on 31 December is 1483185540.
At noon the smearing starts, which means that by 1:30 pm the local system clock is already some tens of milliseconds behind TAI and UTC. The Epoch timestamp would be 1483191000 (exactly 1 hour 31 minutes later), which no longer tracks TAI/UTC exactly, since Epoch time doesn't respect leap seconds.
At midnight UTC inserts the extra second (23:59:60); the local smeared clock simply continues running at its slowed rate.
Until, at noon on 1 January, global UTC and the local clock sync up again; the local clock has now absorbed the whole leap second and, like UTC itself, ends up one more second behind TAI. (A sketch of the smear arithmetic follows below.)
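A minimal sketch of that noon-to-noon linear smear, assuming the one inserted second is spread evenly over the 86,400 real seconds of the smear window:
// `elapsed` is real seconds since the smear began (noon on 31 December).
// The return value is how far the smeared clock lags TAI at that moment.
fn smear_offset_seconds(elapsed: f64) -> f64 {
    elapsed / 86_400.0 // fraction of the leap second applied so far
}

fn main() {
    // At 1:30 pm, 5_400 seconds into the smear, the clock is 62.5 ms behind:
    println!("{} s", smear_offset_seconds(5_400.0)); // 0.0625 s
    // By the end of the smear (next noon) the full second has been absorbed:
    println!("{} s", smear_offset_seconds(86_400.0)); // 1 s
}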
How is this inaccuracy resolved? Does the local Epoch time skip a second once the system knows a leap second happened? Or how is this issue handled?
Does it depend on the implementation of the clock used to calculate the time? If so, how does GNU coreutils' date handle this?
The inaccuracy is not resolved. The Unix Time remains a count of seconds since 1970-01-01 00:00:00 UTC excluding the inserted leap seconds. This has the benefit of making the count of seconds easy to convert to {year, month, day, hour, minute, second} form.
It has the problem that subtracting two Unix Time points that straddle a leap second insertion yields a duration that is one second less than the real elapsed time.
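A minimal chrono sketch of that off-by-one (chrono's arithmetic, like Unix Time, simply ignores the inserted second):
use chrono::{TimeZone, Utc};

fn main() {
    // One minute before and one minute after the 2016-12-31 leap second:
    let before = Utc.ymd(2016, 12, 31).and_hms(23, 59, 0);
    let after = Utc.ymd(2017, 1, 1).and_hms(0, 1, 0);
    // Prints 120, although 121 real (SI) seconds elapsed, because the
    // inserted second 23:59:60 is invisible to this arithmetic:
    println!("{} s", (after - before).num_seconds());
}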
For some duration-related calculations I need to convert values measured in "months" to other formats, such as years, days, or hours.
For example, what is the proper way to measure a month in terms of days? Is it 30 days, or 30.4375 days (365.25 / 12)? And which value would be useful in which cases?
If you have any information on the casual/business use cases for such conversions, that would be helpful too.
Unfortunately, there's really no single generally valid answer to your question.
If this is for business use, first check whether there are any existing relevant standards or business practices that define what a "month" means in your business context. If yes, you should follow that definition as closely as possible, however silly or awkward it may seem.
For casual use, the simplest solution is probably to pick any widely used date manipulation library and do whatever it does. The default behavior may not be perfect, but it's probably at least close to a fairly sensible compromise between the many contradictory expectations that users of such a library may have.
OK, but what if you insist on rolling your own solution? In that case, the first choice you should make is how you want to represent date / time values. There are at least two common choices:
The first option is to store dates / times using a simple linear count of fixed time units from a given epoch, such as Julian days or Unix timestamps. This provides a simple and compact date/time representation, makes comparing timestamps and simple date/time arithmetic (like adding n seconds to a time value) easy, and ensures that any time value corresponds to a (more or less) unique and well defined point in time.
The downside, as you've noticed, is that arithmetic using "fuzzy" time units like months or years gets difficult: you can define a year as 365.25 days (or as 365.2425 days, to take into account that only 97 out of every 400 years are leap years in the Gregorian calendar) and a month as 1/12 years, but this will cause adding a year to a date-time value to also shift the time of day by (about) 6 hours, which may be unexpected.
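For instance, a quick chrono sketch of that six-hour drift (the 365-days-plus-6-hours "year" is just the 365.25-day approximation mentioned above):
use chrono::{Duration, NaiveDate};

fn main() {
    let start = NaiveDate::from_ymd(2014, 1, 1).and_hms(0, 0, 0);
    // A "year" of 365.25 days is 365 days plus 6 hours:
    let fuzzy_year = Duration::days(365) + Duration::hours(6);
    // The date advances one year, but midnight has drifted to 06:00:
    println!("{}", start + fuzzy_year); // 2015-01-01 06:00:00
}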
This approach also doesn't let you easily represent "floating" time values, like times of day without a specified date and time zone. (You can sort of deal with floating time zones by doing your time math in UTC and just pretending that it's in your local time zone, but this can cause weird stuff to happen around DST changeovers.) Conversely, it can also cause difficulties if you need to represent imprecise date/time values, such as dates without a time component.
In particular, if you choose the "natural" representation, where imprecise datetimes are represented by their starting point, so that e.g. an unspecified time of day defaults to 00:00:00.0, then anything that causes the time part to be reduced by even a fraction of a second (say, shifting to a later time zone, or subtracting a fuzzy time unit that is not an integral number of days) will flip the date part to the previous day. For example, with this representation, subtracting one year (= 365.2425 days) from January 1, 2014 will yield a date in 2012 (specifically, December 31, 2012, 18:10:48)!
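Checking that arithmetic with chrono (31,556,952 seconds is just 365.2425 × 86,400):
use chrono::{Duration, NaiveDate};

fn main() {
    let start = NaiveDate::from_ymd(2014, 1, 1).and_hms(0, 0, 0);
    // One "year" of 365.2425 days, expressed in whole seconds:
    let fuzzy_year = Duration::seconds(31_556_952);
    println!("{}", start - fuzzy_year); // 2012-12-31 18:10:48
}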
You can avoid some of these issues by representing imprecise date/time values by their midpoints instead, so that e.g. the date 2014 is treated as shorthand for July 2, 2014, 12:00:00. What you lose with this representation is the ability to build datetimes just by adding up components: with this representation, 2014 + 5 months + 3 days isn't anywhere near May 3, 2014.
Also, just when you think you've at least got simple non-fuzzy time arithmetic unambiguously sorted out, someone's going to tell you about leap seconds...
The alternative approach is to store datetime values in decomposed year / month / day / hour / minute / second / etc. format. With this representation, time intervals are also naturally stored in decomposed form: "one month + 17 days" is, in itself, a valid time interval in such a representation, and need not (and should not) be simplified further.
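A hypothetical sketch of what such a decomposed interval might look like (this type is illustrative, not from any particular library):
// Components are kept separate instead of being collapsed into one count
// of nanoseconds. (Hypothetical type, purely for illustration.)
struct DecomposedInterval {
    months: i32,
    days: i32,
    seconds: i64,
}

fn main() {
    // "one month + 17 days" stays exactly that; it has no single well-defined
    // length in seconds until it is anchored to a start date.
    let interval = DecomposedInterval { months: 1, days: 17, seconds: 0 };
    println!("{} month(s), {} day(s), {} second(s)",
             interval.months, interval.days, interval.seconds);
}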
This has a few obvious advantages:
Fuzzy unit arithmetic is (conceptually) simple: to add one year to a date, just increment the year component by one.
Imprecise date/time values can be naturally represented: for a pure date value, the time-of-day components can simply be left undefined (e.g. represented by negative values for the undefined components, or simply by having each datetime value store its own precision).
You have precise control over when and if rollover occurs: adding a year to a date in 2014 will always yield a date in 2015.
You can also support floating time values, such as times of day without a specified date, or dates of year without a specified year. Floating time zones also become supportable.
That said, there are some disadvantages, too:
Implementing date arithmetic gets more complex, since you have to deal with non-trivial carry/borrow rules. (Quick! What's the date 10,000,000 seconds after May 3, 2014?)
You'll still have ambiguities with month arithmetic: what's the date one month after January 31? And does it depend on whether it's a leap year or not? (The sketch at the end of this answer works through these, along with the 10,000,000-second quiz above.)
You can allow such a format to store "impossible" dates like "February 31", with an optional method to normalize them to, say, February 28 (or 29, for a leap year) later. This has the advantage of preserving (some) arithmetic consistency: it allows (January 31 + 1 month) + 1 month to equal March 31 as expected.
In some ways, though, this merely postpones the problem: presumably, January 31 + 24 hours should fall on February 1, but what day and month should January 31 + 1 month + 24 hours fall on? The "obvious" choice would be March 1, but whatever you choose, there will be some sequence of arithmetic operations that yields inconsistent results.
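Here is a sketch of those questions, using chrono plus the chronoutil crate mentioned earlier on this page (chronoutil happens to clamp overflowing month arithmetic to the end of the month rather than storing "February 31"):
use chrono::{Duration, NaiveDate};
use chronoutil::RelativeDuration;

fn main() {
    // The carry/borrow quiz: 10,000,000 seconds after midnight, May 3, 2014:
    let quiz = NaiveDate::from_ymd(2014, 5, 3).and_hms(0, 0, 0)
        + Duration::seconds(10_000_000);
    println!("{}", quiz); // 2014-08-26 17:46:40

    // Month arithmetic: chronoutil clamps to the end of February, so the
    // answer does depend on whether it's a leap year:
    println!("{}", NaiveDate::from_ymd(2015, 1, 31) + RelativeDuration::months(1)); // 2015-02-28
    println!("{}", NaiveDate::from_ymd(2016, 1, 31) + RelativeDuration::months(1)); // 2016-02-29
}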
Leap seconds are added to correct for variations in the Earth's rotation. However, unlike leap years, they do not occur at regular intervals, because they correct irregular, unpredictable variations rather than a systematic discrepancy like the one leap years fix.
How are leap seconds handled by systems that use unix timestamps? Are leap seconds hard-coded into updates of programming languages that implement timestamps?
Update: it looks like Unix timestamps don't deal with leap seconds.
It is handled by the time server, not by the programming languages per se. If a second is being inserted (a positive leap second), the time server reports a 61st second at the end of the day, with a value of 60; if a second is being removed (a negative leap second), the seconds of UTC go 56, 57, 58, 00, 01.
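Incidentally, the chrono crate discussed elsewhere on this page can at least represent that 61st second. A small sketch using chrono's documented convention, which encodes a leap second as a sub-second value of 1,000 ms or more attached to second 59:
use chrono::NaiveTime;

fn main() {
    // Second 59 plus 1_000 ms is chrono's encoding of the leap second:
    let leap = NaiveTime::from_hms_milli(23, 59, 59, 1_000);
    println!("{}", leap); // prints 23:59:60
}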