What do you call the number of *days* since the unix epoch? - date

I initially learned that Unix time is the number of seconds that have elapsed since 00:00:00 (UTC) on 1 January 1970. With 24 hours in a day, that means that the unix timestamp grows by 86400 every day.
Then I heard about the concept of leap seconds, and thought that might mean that on some days the unix timestamp would grow by 86401 in a day, but apparently this is not the case. From what I've read, every day is treated as if it contains exactly 86400 seconds. When a leap second occurs, the operating system will 'fudge' it in some way to make sure there are still 86400 timestamps - either by making every 'second' that day slightly longer than a real SI second, or by reporting the same integer timestamp twice in a row.
So I think that this means that every date since 1 Jan 1970 can be mapped to a unique integer which is the timestamp at 00:00:00 (UTC) that day divided by 86400. (guaranteed to be an integer with no remainder because as discussed every day has to have 86400 timestamps). Alternatively you could take any timestamp during that day and calculate floor(timestamp / 86400).
For example, today, Fri 23rd April 2021 - timestamp at 00:00:00 UTC was 1619136000.
As expected, this is a multiple of 86400, and 1619136000 / 86400 = 18740.
There have been 18740 days since the unix epoch.
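For illustration, here is the same division as a rough Java sketch (assuming a Unix timestamp in whole seconds; floor division handles any timestamp during the day):
long timestampSeconds = 1_619_136_000L;                          // 2021-04-23T00:00:00Z
long daysSinceEpoch = Math.floorDiv(timestampSeconds, 86_400L);  // floor, so any second that day gives the same result
System.out.println(daysSinceEpoch);                              // prints 18740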
So my question is:
Does this integer already have a well-known name? Is it already widely used in software for representing dates? I've not been able to find any reference online to this concept.
Is my logic here correct - is there really a unique integer for each date, and can you easily calculate it in your code as timestamp_at_midnight_utc / 86400? Or is there some subtle problem that I've overlooked?
My motivation here is that I often have to do complicated calculations involving lots of dates without any time information (I work for a vacation rentals company where each unit has its own availability calendar). I think I could make a lot of efficiency improvements in my code if I was working with integers uniquely representing a date, instead of DateTime objects, or strings like '2021-04-23'.

Yes, your logic is correct. Where I still get worried is that it requires you to do your calculations in UTC. Holiday rentals happen in a time zone, and associating a date in that time zone with the start of the day in UTC instead could get confusing soon.
And yes, the concept of a count of days since 1970-01-01 is sometimes used, though not often that I have seen.
In the Java documentation the terms “epoch day” and “epoch day count” are used, but this doesn’t make these terms a standard.
I think that the first avenue for you to consider is whether either your programming language comes with a library for counting days without the need to convert to and from seconds, or there is a trustworthy third-party library that you may use for the purpose.
This Java snippet confirms your calculation:
import java.time.LocalDate;
import java.time.Month;

// A LocalDate in Java is a date without time zone or UTC offset
LocalDate date = LocalDate.of(2021, Month.APRIL, 23);
long epochDayCount = date.toEpochDay();
System.out.println("Epoch day: " + epochDayCount);
Output agrees with the result you got:
Epoch day: 18740
Link: Epoch day count in the Java documentation.

From my experience there is no official name for "days since epoch". Some nuances that can be detected about UNIX time (and its measurement units):
1. It appears to be (relatively) officially defined as the number of seconds since the UNIX epoch.
2. The main purpose of the UNIX time mechanism (regardless of measurement unit conventions) is to define a point in time.
3. In the context of point #2, it has in practice become common for the UNIX timestamp to be returned in milliseconds.
There are several factors that can influence the measurement unit that is available to you:
design decisions by APIs, libraries and programming languages
time resolution / clock frequency of the software & hardware that you are running on - e.g. some circuits, controllers or other entities aren't able to reach millisecond resolution or they don't have enough bits available in memory to represent big numbers.
performance reasons - offering a time service at millisecond or second resolution via HTTP might prove too much for networks / server CPUs. The next best thing would be a UNIX timestamp in minutes. This value can then be cached by intermediary caches for the duration of 1 minute.
use cases - there are epochs (e.g. in astronomy) where the day is the main measurement unit.
Here are a few examples of such day-based epochs:
The Julian Day system - which has a non-integer Julian Date (JD) but an integer Julian Day Number (JDN). Its epoch is at noon on 24 November 4714 BC (proleptic Gregorian calendar).
J2000 epoch - measured via Julian Date as well. Its epoch is January 1, 2000, 11:58:55.816 UTC.
If you have a look at one method of calculating the Julian Date, dividing by 86400 is an important step. So, given that the JD system seems to be widely used in astronomy, I think it would be safe to consider this division by 86400 as valid :)
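For illustration, here is one commonly quoted relationship between a Unix timestamp and the Julian Date, as a rough Java sketch (2440587.5 is the commonly quoted JD of the Unix epoch):
long unixSeconds = 1_619_136_000L;                         // 2021-04-23T00:00:00Z
double julianDate = unixSeconds / 86_400.0 + 2_440_587.5;  // divide by 86400, then offset to the JD epoch
System.out.println(julianDate);                            // 2459327.5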

This is a more complex question than you might initially realize. You want the days since 1970 to be the same for all times during the local day, and you also don't want daylight saving time changes in the local time zone and UTC date changes to affect the output.
The solution I found was to compute the seconds since 1970 in UTC but for the current local date at midnight, not the current UTC date. Here is a Linux shell script solution:
echo $(( $(date -u -d "$(date '+%Y-%m-%d') 00:00:00" '+%s') / 24 / 60 / 60 ))
date -u forces UTC time, while the second date returns the local year-month-day. This computation actually generates an integer result, even if you use a computation that supports non-integers. Computing the seconds since 1970 in local time, or using the current UTC date (and not the local date), will not work.
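If you'd rather stay in a high-level language, the same idea - an epoch-day count for the current local date - can be sketched with java.time (this assumes a Java stack, which is not part of the shell solution above):
import java.time.LocalDate;
import java.time.ZoneId;

// Epoch days for today's *local* date, independent of the current UTC date
long localEpochDay = LocalDate.now(ZoneId.systemDefault()).toEpochDay();
System.out.println(localEpochDay);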

Related

Saving time in Postgres with or without TZ?

I have a column in the DB that is supposed to represent a deadline for ordering.
Let's assume the value is 11am EST.
I've saved it in the DB as UTC during winter, so it was saved as 4PM UTC.
The problem is, when DST starts, the conversion back to EST gives 12PM...
How can I solve this? On the DB level? If so, how would I save a constant time and convert it to the correct value? If I use timestamp with TZ, will it then be read as 5PM UTC?
Thanks.
The scenario you described is caused by converting a future time to UTC and back again, using two different UTC offsets (UTC-5 and UTC-4). In general, one should not store future times (whether recurring or single-instance) in terms of UTC, unless the original reference point is already in terms of UTC. The advice "Always use UTC" applies only for past/present timestamps. It does not apply for future values.
PostgreSQL has several different date/time data types. The "with time zone" types will convert to/from UTC implicitly. The "without time zone" types will not.
In your case, it sounds like you would like to have a time-only value that describes the deadline that applies every day. Thus, you should use a time (aka time without time zone) data type for that field. You should then also store the IANA time zone ID in a separate character field (varchar(50) should be sufficient). For US Eastern Time, you would store 'America/New_York'. That assumes the cutoff is based on the same time zone for all users. If instead the cutoff is based on each user's own time zone, then it could be different per user, and you will need to determine the user's time zone.
When evaluating whether the deadline has passed, you would take the current UTC timestamp and convert it to the local date and time in the target time zone to have the current date and time in that zone. You'd then take a copy of that and replace the time part with your deadline time. Then compare those two values to see if the deadline has passed.
You should also be thinking about when the deadline resets for the next day. Is it strictly based on the local date? If they try to order past that deadline, do you disallow it (and if so, when do you allow it again), or does it apply to the next date? Only you can answer such questions, as it will vary based on your business needs.
The manipulations I described could be done directly in PostgreSQL, using functions like AT TIME ZONE and others, but generally you are better off doing them in your application layer. Most programming platforms have functions for manipulating dates and times, and for working with time zones.
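For example, the comparison described above might look roughly like this in Java's java.time (the zone ID and the 11:00 cutoff are just illustrative values):
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

ZoneId zone = ZoneId.of("America/New_York");           // stored IANA time zone ID
LocalTime cutoff = LocalTime.of(11, 0);                 // stored deadline (time without time zone)
ZonedDateTime nowInZone = Instant.now().atZone(zone);   // current moment in the target zone
ZonedDateTime deadlineToday = nowInZone.with(cutoff);   // same local date, time replaced by the deadline
boolean deadlinePassed = nowInZone.isAfter(deadlineToday);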

Why is Firestore giving me two timestamps with 2 different timezones?

Does this have something to do with daylight saving time since March 7th is winter time whereas June 10th is summer?
Today is 27th March and daylight saving time doesn't come into effect until tomorrow here in Western Europe.
Here's a screenshot from the Firestore document.
That isn't two different time zones - it's two different UTC offsets that are observed in one time zone.
A Firestore timestamp is just an instant in time. What you're seeing are two instants in time, converted into your local time zone - at which point the one in June has an offset of UTC+2, and the one in March has an offset of UTC+1.
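You can reproduce this with any zone that observes DST; for example (a sketch, using Europe/Paris purely as an illustration):
import java.time.Instant;
import java.time.ZoneId;

ZoneId zone = ZoneId.of("Europe/Paris");  // one time zone, two offsets across the year
System.out.println(Instant.parse("2021-03-07T12:00:00Z").atZone(zone).getOffset()); // +01:00
System.out.println(Instant.parse("2021-06-10T12:00:00Z").atZone(zone).getOffset()); // +02:00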

What is the most convenient alternative to store quantity of time elapsed in a database?

Taking into account that elapsed time (for example: 1 month, 1 hour and 30 minutes, 45 minutes, 2 weeks, etc.) is a concept called a duration in the moment.js library, or the interval data type in Postgres databases.
Both are abstractions of the same concept with pros and cons for calculations from the backend / frontend.
What would be the best way to store this data:
an integer that stores the number of milliseconds, or
a field of the Postgres type interval,
so that when working with this data the duration can be interpreted, possibly in the same or in different units of time?
It is usually a good idea to use the data type that is designed for the purpose, in this case interval.
The main advantage is that an interval like 1 month or 1 year 1 hour (which does not correspond to a certain number of seconds) will behave as expected if you add it to or subtract it from a timestamp.
Storing an interval as a numeric data type will only work well if you don't need to represent intervals like the above correctly, but then you can also use an interval like 36173.034 seconds, which will work just as well.
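To see why "1 month" is not a fixed number of seconds, here is the same idea in Java's java.time (a sketch; the dates are arbitrary examples):
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

LocalDate jan = LocalDate.of(2021, 1, 1);
LocalDate feb = LocalDate.of(2021, 2, 1);
System.out.println(ChronoUnit.DAYS.between(jan, jan.plusMonths(1))); // 31
System.out.println(ChronoUnit.DAYS.between(feb, feb.plusMonths(1))); // 28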

Why is the kdb+ epoch date 2000.01.01?

I am new to kdb+ and I was wondering why the epoch date of 2000.01.01 for kdb is different to that of unix (1970.01.01).
Does the difference affect any interactions with the operating system or other languages?
KDB+ uses a different epoch because it follows a different standard.
KDB+ follows the J2000 international standard, which is based on the Julian year.
UNIX uses POSIX time, which was initially stored as a 32-bit unsigned integer. Because the time was counted in sixtieths of a second, the 32-bit integer would only cover about 829 days, so a recent date had to be chosen.
The first edition Unix Programmer's Manual, dated November 3, 1971, defines the Unix time as "the time since 00:00:00, Jan. 1, 1971, measured in sixtieths of a second".
This difference can cause issues if you don't make sure to convert to one standard before piping the chosen epoch time into applications.
Issues regarding interactions with the system or other languages can be dealt with by the fact that kdb+ can parse UNIX epoch timestamps,
from http://code.kx.com/q/ref/casting/#tok
Parsing Unix timestamps (from seconds since Unix epoch), string with
9…11 digits:
q)"P"$"10129708800"
2290.12.31D00:00:00.000000000
q)"P"$"00000000000"
1970.01.01D00:00:00.000000000
Kdb+ is available on many different operating systems; currently Windows, Linux-x86, Linux-ARM and OSX are available to download from kx, with Solaris previously available.
From the page about system time on Wikipedia we can see that various operating systems use differing epoch dates and ranges. Considering two OS that kdb+ supports we can see that they have differing epoch ranges:
OS            Epoch or range
--------------------------------------------------
Unix/Posix    1 January 1970 to 19 January 2038
Windows       1 January 1601 to AD 30,828
Using either the Linux or Windows epoch would mean that the other did not match up anyway. Further reading on that page also shows that many other languages use their own distinct epoch dates and ranges.
In short, there is no reason a language needs to use the epoch time of the OS it is running on.
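If you do need to convert between the two, the offset between the epochs is a fixed day count; for example (a sketch in Java rather than q):
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Days from the Unix epoch (1970-01-01) to the kdb+ epoch (2000-01-01)
long offsetDays = ChronoUnit.DAYS.between(LocalDate.of(1970, 1, 1), LocalDate.of(2000, 1, 1));
System.out.println(offsetDays); // 10957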

100+ dates in different timezones, calculate which 2 dates present day is between...more calculate

I am really having a heck of a time figuring out which way I should do this. Been coding objective-c for 4 months now, well trying to at least.
I have about 100+ different dates spanning 2012, down to the second, in multiple timezones. What I need is:
to grab the present time/date, see which 2 dates in my 100+ list it is between and give me time spent and time remaining.
to know which 2 dates it is between no matter what timezone a user is in.
all calculations need to take in consideration of daylight savings time. Which the dates and times of DST change is different depending on the timezone and country.
a user in Hawaii will have the same time remaining and spent as a user in England.
ability to convert all times to local user time.
have this all realtime. have the clock or timer counting down to the second.
I have tried NSDate. Then I searched this site and found NSDateFormatter. I played with that for what seems like days. Then in another search I found NSDateComponents. Do I put my 100+ dates in a multidimensional array? Do I convert everything to GMT first, or can Xcode do that for me? Or do I convert every time to seconds since 1970? I am just lost on what would be the best, most practical way of doing this.
Any help, thanks so much!!
I am not an iOS programmer, but if you could convert everything to the same time zone (MST, EST, GMT, whatever), then that would make your job far easier. Converting between time zones runs in constant time as there is nothing more involved than simple addition/subtraction.
As for DST, if you convert to MST or EST (as opposed to MDT or EDT), you inherently remove DST. What exactly do you mean by "take in consideration of DST?" I could help much more if you could provide that.