How to convert BSON Timestamp from Mongo changestream to a date in javascript? - mongodb

I already have an SO question and answer on how to convert a BSON Timestamp in a MongoDB aggregation, but now I have a situation where I would like to do the conversion in node.js.
So, just to repeat: my goal is to convert a "Timestamp" datatype to a JavaScript date without doing it in an aggregation. Is this possible?

If the BSON library you are using provides a Timestamp type, you can use its getTime method to get the seconds, then create a date from that (multiplying by 1000, since the JavaScript Date constructor expects milliseconds and must be invoked with new):
new Date(object.clusterTime.getTime() * 1000)
If you don't have that function available, the timestamp is a 64-bit value where the high 32-bits are the seconds since epoch, and the low 32-bits are a counter.
Bit-shift the value 32 bits to the right, or divide by 2^32 and drop the fraction, to get the seconds:
new Date(Math.floor(object.clusterTime / Math.pow(2, 32)) * 1000)
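Putting it together, a minimal node.js sketch (assuming the official mongodb driver; the database and collection names here are hypothetical):

const { MongoClient } = require('mongodb');

async function watchWithDates(uri) {
  const client = await MongoClient.connect(uri);
  const changeStream = client.db('test').collection('items').watch();
  changeStream.on('change', (event) => {
    // clusterTime is a BSON Timestamp: the high 32 bits are seconds since
    // the Unix epoch, the low 32 bits are an ordinal counter.
    const seconds = event.clusterTime.getHighBits() >>> 0; // unsigned seconds
    console.log(new Date(seconds * 1000)); // JS Date expects milliseconds
  });
}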

Related

Mongo how to convert an ISO date to a BSON timestamp value

In Mongodb, if I have an ISODate of "2021-07-29T01:57:49.075Z", how do I convert this value to the equivalent BSON timestamp value? The BSON value should be in the form of Timestamp(nnnnnnnnnn, 1). The n value is 10 digits long. I've seen a close answer using the $toLong aggregate, but the long number returned doesn't fit into the 10 digits.
Also the reverse, how to convert a BSON timestamp value to an ISODate.
This seems so simple, but I can't find any reference, and similar questions convert to a long number, not a BSON timestamp in the form described above.
Thanks for your help.
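For reference, the 10-digit value is whole seconds since the Unix epoch, so dividing the millisecond value by 1000 gets you there. A node.js sketch (assuming the bson package's Timestamp type):

const { Timestamp } = require('bson');

const isoDate = new Date('2021-07-29T01:57:49.075Z');
// $toLong yields milliseconds; the 10-digit value is seconds, so divide by 1000.
const ts = new Timestamp({ t: Math.floor(isoDate.getTime() / 1000), i: 1 });
// The reverse: seconds back to a Date (sub-second precision is lost).
const back = new Date((ts.getHighBits() >>> 0) * 1000);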

Can a 4 byte timestamp value in MongoDb ObjectId overflow?

If at some point the epoch reaches ffffffff, then the ObjectId created at that moment would be something like:
ffffffff15580625bcb65364
Then what would the ObjectId created one second later (after the Unix epoch rolls over in 32 bits) look like?
This would depend on the specific implementation, its programming language and their handling of math calculations.
It is possible that some implementations and languages would raise an error when they retrieve the number of seconds since the Unix epoch as a 64-bit integer (which is quite common today) and then try to use a value that exceeds 32 bits for ObjectId generation. If this happens, the driver will no longer be able to generate ObjectIds, and consequently may be unable to insert documents unless the application provides _id values using some other generation strategy.
In other implementations the timestamp itself may roll over to zero, at which point the ObjectId generation will succeed with a very small timestamp value.
Yet other implementations may truncate (from either most or least significant side) the timestamp to coerce it into the 32 available bits of an ObjectId.
The ObjectId value itself doesn't actually have to contain an accurate timestamp: it is required to be unique within the collection and to be "generally increasing", but MongoDB-the-database wouldn't care if ObjectId values wrapped around to zero at some point.
As the docs say, the timestamp is represented by 4 bytes:
4-byte timestamp value, representing the ObjectId’s creation, measured in seconds since the Unix epoch
Four bytes span 4,294,967,296 distinct values: from -2,147,483,648 to 2,147,483,647 when signed, or 0 to 4,294,967,295 when unsigned.
The date corresponding to 4,294,967,295 seconds since the Unix epoch is GMT: Sunday, 7 February 2106 06:28:15.
After this date, an ObjectId won't be able to store its timestamp.
So, can an ObjectId overflow? In about 85 years, every newly created ObjectId will fail because the timestamp can no longer fit in only 4 bytes.
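For illustration, the 4-byte timestamp can be read back with the driver (a node.js sketch, assuming the bson package, using the ObjectId from the question):

const { ObjectId } = require('bson');

// The leading 4 bytes (ffffffff) are the unsigned seconds since the epoch.
const id = new ObjectId('ffffffff15580625bcb65364');
console.log(id.getTimestamp()); // 2106-02-07T06:28:15.000Z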

How do I convert a Julian date stored as a double-precision value to a timestamp with at least one-minute resolution?

I exported data from an SQLite table to a CSV file. The data includes a timestamp with at least one-minute resolution: "2019-11-15 01:30:06". The data is actually stored as a Julian date, in this case 2458802.35424295. I imported the data into a double-precision field. I need to convert that number into a timestamp with time zone. I tried casting the double-precision number to text and then using to_timestamp(), but that appears to work only with integer days. I can get a timestamp, but it is always at midnight of the correct date. I tried using to_timestamp() passing in my number, but that returns an epoch (number of milliseconds since 1/1/1970).
I could take the fractional part of my Julian date value, calculate the number of milliseconds since midnight it represents, use the to_timestamp(text, text) method to get the date I need, and then add those milliseconds to that date. But that's awfully cumbersome. Isn't there a better way?
I'm using PostgreSQL 9.3.
NOTE: The simple answer to my problem, which occurred to me just before I clicked the Post button, is to export the data in the form I want, using SQLite's datetime() function to convert the number to a date string during export. But I remain curious. I would have thought there would be a standard way to do this conversion.
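For what it's worth, the underlying arithmetic is straightforward: the Unix epoch (1970-01-01 00:00 UTC) falls at Julian date 2440587.5. A JavaScript sketch of the conversion (the result is in UTC, so it may differ from a local-time string like the one above):

// The Unix epoch, 1970-01-01 00:00 UTC, is Julian date 2440587.5.
const JD_UNIX_EPOCH = 2440587.5;

function julianToDate(jd) {
  // Days since the epoch, times 86,400,000 milliseconds per day.
  return new Date(Math.round((jd - JD_UNIX_EPOCH) * 86400000));
}

console.log(julianToDate(2458802.35424295).toISOString());

The same arithmetic should carry over to PostgreSQL as to_timestamp((julian_col - 2440587.5) * 86400), though time zone handling is worth verifying.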

How to understand Bson Timestamp?

For the Mongo BSON type Timestamp, there is a constructor, BsonTimestamp(final int seconds, final int increment). How should the increment be understood? What is the design consideration?
Timestamp is an internal BSON type used by MongoDB to reflect the operation time (ts) for entries in the replication oplog.
The BSON Timestamp type is designed for the specific use case of logging ordered batches of time-based operations:
the first 32 bits (time_t) are an integer value representing seconds since the Unix epoch
the second 32 bits are an integer value (ordinal) indicating ordering within a given second
The design requirement for Timestamps is based on preserving strict ordering for oplog entries rather than time precision (eg milliseconds or microseconds). The leading time component gives a coarse granularity of seconds; appending an incrementing ordinal value ensures strict ordering of unique Timestamp values within a given second. Using an ordered sequence instead of time precision sidesteps the potential conflict of two operations occurring in the same millisecond (or microsecond).
For application use cases you should use the BSON Date type instead of a Timestamp. A BSON Date is the same size (in bits) as a Timestamp, but provides more granularity for time:
BSON Date is a 64-bit integer that represents the number of milliseconds since the Unix epoch (Jan 1, 1970). This results in a representable date range of about 290 million years into the past and future.
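To make the layout concrete, a node.js sketch (assuming the bson package; the t/i values are arbitrary):

const { Timestamp } = require('bson');

// Two operations within the same second, kept strictly ordered by the ordinal.
const first = new Timestamp({ t: 1626811972, i: 1 });
const second = new Timestamp({ t: 1626811972, i: 2 });

// For application time values, use BSON Date instead; in node.js that is
// simply a JS Date, stored as a 64-bit millisecond count.
const when = new Date();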

Nanoseconds lost coming from MongoDB ISODate Object

I'm losing the nanoseconds from the MongoDb interface for the ISODate object. All the nanoseconds are set to zero when I read them in perl.
First, my environment:
MongoDB version: 1.8.2
perl v5.12.4
MongoDB perl module version: 0.701.4
I have a Mongo DB that has rtcTime coded as an ISODate, as follows:
"rtcTime" : ISODate("2013-05-13T18:54:55.918Z")
The code to extract the rtcTime looks something like this:
my @results = $db->get_collection( 'timings' )->find()->all();
foreach my $record ( @results )
{
    print $record->{rtcTime}->nanoseconds()."\n";
}
Output is all 0's.
To fully reproduce the problem, create an ISODate object with arbitrary (non-zero) hires_epoch values in the MongoDB database. Then try to use the MongoDB / DateTime / DateTime::Format::ISO8601 modules to extract any kind of hires time data.
Q: Why can't I get my milliseconds, microseconds, or nanoseconds from the MongoDB ISODate data?
MongoDB stores documents in BSON format and its specification says:
BSON Date is a 64-bit integer that represents the number of milliseconds since the Unix epoch (Jan 1, 1970).
So, date precision is limited to milliseconds. If you really need to store nanoseconds, you shouldn't use the date type. You have to use a long and store the timestamp yourself, so that you won't lose precision.
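As an illustration of that approach in node.js (a sketch, assuming the bson package's Long type; the field name and nanosecond source are hypothetical):

const { Long } = require('bson');

// Nanoseconds since the Unix epoch as a BigInt; the trailing nanosecond
// digits here are made up, since Date.now() only has millisecond precision.
const nanosSinceEpoch = BigInt(Date.now()) * 1000000n + 123456n;
const doc = {
  rtcTimeNs: Long.fromBigInt(nanosSinceEpoch), // fits a 64-bit integer, no precision loss
};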
This is a bug in the way that the MongoDB Perl driver interacts with DateTime. When ISODate values are retrieved from the database, the driver initializes the DateTime object using its from_epoch constructor. But the nanosecond portion is not passed to the constructor.
DateTime does support passing the full value including nanoseconds, and the driver should be updated to fix that.
I've created a ticket for this bug and I will fix it. But maybe not until after the holiday weekend. :)