I have some documents that contain a timestamp that I cannot seem to interpret. The timestamps look like the following:
131816723980463000
or
1,30989743231727E+32
or
130989741742563000
Does anyone know what to use to interpret these date formats? I tried the datetime parsing from pandas and most of the date formats from Excel, but I cannot find anything that resembles something a human can read.
They are probably LDAP timestamps or Microsoft filetimes. Those are the same.
In that case
131816723980463000 would be Monday, September 17, 2018 3:39:58 PM GMT
130989741742563000 would be Wednesday, February 3, 2016 11:56:14 AM GMT
The middle one with the exponent is funny, though. Since it shares its first 8 digits with the last one, my guess is that it's the same kind of value, only shifted 15 decimal digits to the left and printed in scientific notation.
A Microsoft filetime is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).
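For example, here is a minimal Python sketch (Python is an assumption, though the question hints at it via pandas) that converts such a value to a readable UTC datetime:

    from datetime import datetime, timedelta, timezone

    # FILETIME / LDAP timestamps count 100-nanosecond ticks since 1601-01-01 (UTC).
    FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

    def filetime_to_datetime(ticks):
        # 10 ticks of 100 ns = 1 microsecond
        return FILETIME_EPOCH + timedelta(microseconds=ticks // 10)

    print(filetime_to_datetime(131816723980463000))  # 2018-09-17 15:39:58.046300+00:00
    print(filetime_to_datetime(130989741742563000))  # 2016-02-03 11:56:14.256300+00:00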
Links
LDAP, Active Directory & Filetime Timestamp Converter
FILETIME structure in Microsoft documentation
Related
I have downloaded a dataset called Real estate valuation data set from https://archive.ics.uci.edu/ml/datasets/Real+estate+valuation+data+set?source=post_page . But I am not able to understand in which format the transaction date is given. They have given some examples like 2013.250=2013 March, 2013.500=2013 June, etc.
I encountered this same problem and I figured that the months were represented as a fraction of a year. So, for instance, June is the 6th month of the year, hence it is 6/12 = 0.5; that's why we had June 2013 as 2013.500, etc.
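For instance, a small Python sketch (just an illustration, not part of the dataset's documentation) that splits such a value into year and month under that assumption:

    def decode_transaction_date(value):
        # e.g. 2013.250 -> (2013, 3): the fractional part is month/12
        year = int(value)
        month = round((value - year) * 12)
        return year, month

    print(decode_transaction_date(2013.250))  # (2013, 3)
    print(decode_transaction_date(2013.500))  # (2013, 6)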
I am getting an exif date value like
EXIFPhotoDate: 1506173228000 and
UploadDate: 1506485214000
but I know it is
EXIFPhotoDate 23/9/2017, 23:27 and
UploadDate 9/27/2017, 01:59
The former is when queried via REST and the latter is when queried via the table.
How can I get standard date/time from a value like this?
Looks like you have a number of milliseconds since January 01 1970 in UTC. If you remove the 000 from the end, you will have a Unix timestamp, that is, the number of seconds since January 01 1970 in UTC:
unixTimestamp = 1506173228000 / 1000
Since your question doesn't state any programming language, it's hard to give you further help.
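For instance, in Python (an assumption, since no language was given), the conversion would look like this; the 23:27 shown in the question is presumably the same instant rendered in a local timezone:

    from datetime import datetime, timezone

    # Milliseconds since the Unix epoch -> UTC datetime
    millis = 1506173228000
    dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
    print(dt)  # 2017-09-23 13:27:08+00:00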
What date format is this: -147114000000-0700. It is supposed to be 05/04/1965.
The first term looks like a unix timestamp. But then why would we need the second term?
I am using Redtail's api, but they provide negligible documentation on this. They are sending over a date looking like "/Date(-147114000000-0700)/". I have never seen this format before. Ignoring all the useless text, we get -147114000000-0700, still leaving me puzzled.
The -147114000000 value is a unix timestamp in milliseconds: it represents the number of milliseconds since unix epoch (which is 1970-01-01T00:00Z or January 1st 1970 at midnight in UTC).
As the number -147114000000 is negative, it represents a date before epoch. In this case, 1965-05-04T07:00:00Z (or May 4th 1965 at 7 AM in UTC).
-0700 is a UTC offset: it represents the difference from UTC. In this case, 7 hours behind UTC, which results in 1965-05-04T00:00-07:00 (or May 4th 1965 at midnight at the -07:00 offset). Note that an offset can be written as -07:00, -0700 or -07.
But keep in mind that this same value can represent a different date and time in each timezone. For example, in Pacific/Honolulu timezone (that uses the -10:00 offset since 1947), the same timestamp corresponds to 1965-05-03T21:00-10:00 (May 3rd 1965 at 9 PM, in offset -10:00). So the corresponding date and time will depend on what timezone you convert this to.
That being said, the purpose of including the offset is probably just to tell you which offset the date/time refers to, so that you don't convert it to a different offset (where you would get different values for the local date and time).
Just a reminder that -0700 is not a timezone; it's just an offset. A timezone is actually the set of all offsets that a region had, has and will have during its history, while the offset is just the difference from UTC (check the section TimeZone != Offset in the timezone tag description). More than one timezone can use the same offset, so you can't really say what timezone this is in.
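A minimal Python sketch (my illustration; Redtail's API isn't language-specific) that parses the full "/Date(-147114000000-0700)/" wrapper under these assumptions:

    import re
    from datetime import datetime, timedelta, timezone

    def parse_ms_json_date(value):
        # Assumes the value matches the "/Date(<millis><offset>)/" wrapper format.
        match = re.fullmatch(r"/Date\((-?\d+)([+-]\d{4})?\)/", value)
        millis, offset = match.groups()
        # Milliseconds relative to the Unix epoch (UTC); works for negative values too.
        dt = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(milliseconds=int(millis))
        if offset:
            sign = 1 if offset.startswith("+") else -1
            tz = timezone(sign * timedelta(hours=int(offset[1:3]), minutes=int(offset[3:5])))
            dt = dt.astimezone(tz)
        return dt

    print(parse_ms_json_date("/Date(-147114000000-0700)/"))  # 1965-05-04 00:00:00-07:00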
I would like a spreadsheet row to contain today's date, but updating only every other Thursday, changing at 9:30 am.
To give you an example:
Next Thursday, 21.07.16, it shall contain "21.07.16".
It shall keep showing that date until exactly 14 days later, on Thursday 04.08.16, when it should change to "04.08.16".
Also I would like this change to happen at 9:30 am.
I cannot think of a way to do this. Can you point me in the right direction?
One has to set a starting datetime somewhere in the past, such as July 7, 2016, at 9:30am.
Then find the difference between the current and the starting datetime. Truncate this difference down to a multiple of 14, and add this value to the starting datetime.
Datetimes are represented numerically in Sheets as the number of days since December 30, 1899. In this system, 2016-07-07 9:30 is about 42558.4, so the formula would be
=42558.4 + 14*floor((now()-42558.4)/14)
The output should be formatted as a date.
A less cryptic version is
=value("2016-07-07 09:30") + 14*floor((now() - value("2016-07-07 09:30"))/14)
(value follows the local convention for parsing dates, but I hope the format I used will be understood universally.)
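For comparison, here is the same calculation as a Python sketch (my illustration; the 2016-07-07 09:30 anchor is the same starting datetime as in the formula above):

    from datetime import datetime, timedelta

    anchor = datetime(2016, 7, 7, 9, 30)                        # a past Thursday at 9:30
    cycles = (datetime.now() - anchor) // timedelta(days=14)    # whole 14-day periods elapsed
    current = anchor + cycles * timedelta(days=14)              # most recent biweekly Thursday 9:30
    print(current.date())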
This structure is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601.
Reference: http://msdn.microsoft.com/en-us/library/aa915351
Why is it set "since 1601"? Why not the Unix epoch of 1970, or even 2000?
And how should I deal with compatibility for dates so far back in time?
Answering my own question.
The ANSI Date defines January 1, 1601 as day 1, and is used as the origin of COBOL integer dates. This epoch is the beginning of the previous 400-year cycle of leap years in the Gregorian calendar, which ended with the year 2000.
as you can find on Wikipedia under the Julian_day entry.
Further:
Why is the Win32 epoch January 1, 1601
Because 1/1/1601 was the start of the epoch.
Take it from Raymond Chen:
Why is the Win32 epoch January 1, 1601?
The FILETIME structure records time in the form of 100-nanosecond intervals since January 1, 1601. Why was that date chosen?
The Gregorian calendar operates on a 400-year cycle, and 1601 is the first year of the cycle that was active at the time Windows NT was being designed. In other words, it was chosen to make the math come out nicely.
I actually have the email from Dave Cutler confirming this.
Bonus Chatter
RFC 4122 UUIDs also measure time in 100-ns ticks, but they start at 10/15/1582 (as opposed to FILETIME's 1/1/1601):
Date Ticks Uuid Epoch ticks
---------------------- ------------------ ------------------
1582-10-15 -5748192000000000 0 Start of uuid epoch
1601-01-01 0 0x00146BF33E42C000 Start of Windows epoch
1899-12-30 0x014F35A9A90CC000 0x0163A19CE74F8000 Lotus 123/Excel/Access/COM zero date
1900-01-01 0x014F373BFDE04000 0x0163A32F3C230000 SQL Server zero date
1970-01-01 0x019DB1DED53E8000 0x01B21DD213814000 Unix epoch timestamp
2000-01-01 0x01BF53EB256D4000 0x01D3BFDE63B00000
2010-01-01 0x01CA8A755C6E0000 0x01DEF6689AB0C000
2020-01-01 0x01D5C03669050000 0x01EA2C29A747C000
//FILETIME eras
1972-01-21 11:43:51 PM 0x01A0000000000000 0x01B46BF33E42C000 Start of 0x01A era
1986-04-30 11:43:13 AM 0x01B0000000000000 0x01C46BF33E42C000 Start of 0x01B era
2000-08-06 11:42:36 PM 0x01C0000000000000 0x01D46BF33E42C000 Start of 0x01C era
2014-11-14 11:41:59 AM 0x01D0000000000000 0x01E46BF33E42C000 Start of 0x01D era
2029-02-20 11:41:22 PM 0x01E0000000000000 0x01F46BF33E42C000 Start of 0x01E era
2043-05-31 11:40:44 AM 0x01F0000000000000 0x02046BF33E42C000 Start of 0x01F era
//UUID eras
1968-02-11 11:43:13 AM 0x019B940CC1BD4000 0x01B0000000000000 Start of uuid 0x01B era
1982-05-20 11:42:36 PM 0x01AB940CC1BD4000 0x01C0000000000000 Start of uuid 0x01C era
1996-08-27 11:41:59 AM 0x01BB940CC1BD4000 0x01D0000000000000 Start of uuid 0x01D era
2010-12-04 11:41:22 PM 0x01CB940CC1BD4000 0x01E0000000000000 Start of uuid 0x01E era
2025-03-13 11:40:44 AM 0x01DB940CC1BD4000 0x01F0000000000000 Start of uuid 0x01F era
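For what it's worth, the first rows of the table can be reproduced with a short Python sketch (my illustration, not part of the original answer); both epochs count 100-ns ticks:

    from datetime import datetime, timezone

    TICKS_PER_SECOND = 10_000_000                    # 100-ns ticks per second
    FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)
    UUID_EPOCH = datetime(1582, 10, 15, tzinfo=timezone.utc)

    def ticks_since(epoch, moment):
        delta = moment - epoch
        return (delta.days * 86_400 + delta.seconds) * TICKS_PER_SECOND

    unix_epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    print(hex(ticks_since(FILETIME_EPOCH, unix_epoch)))  # 0x19db1ded53e8000 == 0x019DB1DED53E8000
    print(hex(ticks_since(UUID_EPOCH, unix_epoch)))      # 0x1b21dd213814000 == 0x01B21DD213814000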
Bonus Chatter
Excel uses a zero date of 12/30/1899 in order to be bug-for-bug compatible with Lotus 1-2-3. That is also why Excel considers 1900 to be a leap year (because the Lotus 1-2-3 guys thought it was), which in turn is why it's impossible to represent dates before March 1, 1900 in Excel.
Well, 1 January 1601 was the first day of the 17th century. And pendulum clocks were invented in the 17th century, allowing time to be measured to 1-second accuracy [1]. So (in theory) there might be references in extant literature from that period to timepoints measured with that accuracy.
But in reality the choice is arbitrary. There has to be an "epoch", and provided
- the epoch is far enough back that "negative time" values are rare, and
- the wrap-around time is far enough in the future to be a few generations away,
any choice will do.
But hey, if it worries you that much, send a letter to Steve Ballmer [2].
I'm inclined to believe Ian Boyd's answer, given the claimed source. And the reason therein is that it makes the math easier (for Gregorian leap year calculation). However, given how tiny that simplification is, and how weak the reasoning behind it, the choice is (IMO) essentially arbitrary. (Not that I'm saying it is wrong ...)
[1] OK ... probably not that accurate.
[2] Or Satya Nadella.
It's a pragmatic choice.
The modern Western calendar was not consistent until 1752, when Britain (and its colonies) adopted the Gregorian calendar, which had been in use in most of Catholic Europe since 1582.
This is the modern calendar, with leap-year rules designed to keep the calendar year aligned with the seasons.
So why not start from 1st January 1752? Because the basic leap-year rule ("it's a leap year if the year is divisible by four, except for century years, which are leap years only if divisible by 400") establishes a 400-year cycle, and the first full cycle started on 1st January 1601 (at least in Rome).
Leap-year and date calculations are painful enough without starting midway through a four-hundred-year cycle, so 1601 is a pretty good starting point, as long as you remember that any date before 1752 needs to be qualified by a geographic location, since British dates were 10 days out of sync with Roman dates by this time.
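For clarity, that leap-year rule in code (a tiny Python sketch, not part of the original answer):

    def is_leap(year):
        # Gregorian rule: divisible by 4, except century years,
        # which are leap years only when divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    print([y for y in (1600, 1700, 1800, 1900, 2000) if is_leap(y)])  # [1600, 2000]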
As has already been mentioned, I think the popular answer is that the Gregorian calendar operates on a 400-year cycle, and 1601 is the first year of the cycle that was active at the time Windows NT was being designed.
January 1, 1601 is the origin of COBOL integer dates.
It is also day 1 in the ANSI date format.
And if you speculate further: according to ISO 8601, which is the format this is in, dates prior to 1583 are based on the proleptic Gregorian calendar. Perhaps they just rounded up to the next century.