The data I'm receiving has timestamps down to nanoseconds (which we actually care about). Is there a way for Postgres timestamps to go to nanoseconds?
As others have pointed out, Postgres doesn't provide such a type out of the box. However, because Postgres is open source, it's relatively simple to create an extension that supports nanosecond resolution. I faced a similar issue a while ago and created the timestamp9 extension for Postgres.
It internally stores the timestamp as a bigint and defines it as the number of nanoseconds since the UNIX epoch. It provides some convenience functions around it that make it easy to view and manipulate the timestamps. If you can live with the limited time range that these timestamps can have, between the year 1970 and the year 2262, then this is a good solution.
Disclaimer: I'm the author of the extension
Nope, but you could trim timestamps to milliseconds and store the nanosecond part in a separate column.
You can create an index on both columns, add a view or function that returns the timestamp at your desired nanosecond resolution, and even create an index on that function.
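A minimal sketch of that split-column idea (the table, column, and function names here are hypothetical, not from the answer above). On recent Postgres `extract(epoch FROM ...)` returns `numeric`; on older versions it returns `double precision`, so the cast below may round at sub-microsecond precision:

```sql
-- Keep a regular timestamptz (microsecond resolution) plus the
-- leftover nanoseconds in their own column.
CREATE TABLE events (
    ts    timestamptz NOT NULL,  -- truncated to Postgres's native resolution
    nanos integer     NOT NULL   -- remaining nanoseconds, 0..999
);

CREATE INDEX events_ts_nanos_idx ON events (ts, nanos);

-- Reassemble a bigint "nanoseconds since the Unix epoch" for sorting
-- and comparison; an expression index can be built on this function too.
CREATE OR REPLACE FUNCTION event_ns(ts timestamptz, nanos integer)
RETURNS bigint AS $$
    SELECT (extract(epoch FROM $1)::numeric * 1000000000)::bigint + $2;
$$ LANGUAGE sql IMMUTABLE;
```

Ordering by `(ts, nanos)` gives the same result as ordering by the reassembled bigint, which is what makes the composite index useful.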
On the one hand, the documented resolution is 1 microsecond.
On the other hand, PostgreSQL is open source. Maybe you can hack together something to support nanoseconds.
We have a simple application in which all users are in the same timezone, so we are not interested in storing timezone information in the Mongo date object.
The reason for such an extreme step is that we have multiple microservices, managed by different developers, using a common database. Each of them has to explicitly set timezone-related options in queries, and forgetting to do so results in an invalid dataset.
Currently, per the Mongo Data Types documentation, MongoDB doesn't support storing dates without a timezone.
I'm just eager to know whether there is any alternative way to represent a date without a timezone in Mongo that would still let us take advantage of MongoDB's date-based query syntax, such as date ranges.
At the same time, it should stay convenient for DBAs to read and manage the records.
Look at this answer: https://stackoverflow.com/a/6776273/6105830
You can use one of two long representations (milliseconds, or the format yyyyMMddHHmmss). These are the only ways to avoid storing a timezone while still being able to run range queries.
Unfortunately, you lose some aggregation capabilities. But you can do something like keeping both representations and using each at the opportune time.
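A sketch of the two representations (the helper names are illustrative, not a MongoDB API). Both produce plain numbers whose numeric order matches chronological order, which is what makes range queries work without a timezone:

```javascript
// Milliseconds representation: reuse Date.UTC so the zone never varies,
// regardless of the machine the code runs on.
function toMillis(y, mo, d, h, mi, s) {
  return Date.UTC(y, mo - 1, d, h, mi, s);
}

// yyyyMMddHHmmss packed into a single number; the digit order matches
// chronological order, so $gte/$lte range queries still behave.
function toSortableNumber(y, mo, d, h, mi, s) {
  return ((((y * 100 + mo) * 100 + d) * 100 + h) * 100 + mi) * 100 + s;
}
```

The yyyyMMddHHmmss form is also readable at a glance in a shell or admin UI, which addresses the DBA-convenience point in the question.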
UPDATE:
Do not store dates as I suggested before. You will lose many MongoDB features, and it will be hard to apply the major operators to such date fields.
Newer versions of MongoDB have operators to deal with timezones, and they should be enough to work with ISO time formats. My application was using my own suggestion for storing dates; now I have to let my users select their timezone (the company has grown and we need to expand to other countries), and we are struggling to change all our models to use timestamps instead of a normalized date format.
For more, explore the documentation: https://docs.mongodb.com/manual/reference/method/Date/
You can also ask questions on the official MongoDB community forums: https://developer.mongodb.com/community/forums/
You could consider storing all dates in UTC and presenting them to users in UTC, so you avoid silent conversions by client JavaScript, the server, or MongoDB, and the confusion that comes with them.
You can do it like this: new Date(Date.UTC(2000, 01, 28))
Here's MDN link on the subject.
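Expanding the snippet above: `Date.UTC` takes a zero-based month, and the trick is to read the value back with UTC accessors (or `toISOString`) rather than local getters, so no conversion ever happens:

```javascript
// Store the calendar date pinned to midnight UTC.
const stored = new Date(Date.UTC(2000, 1, 28)); // month 1 = February

// Present it back in UTC as well, never via local getters like getMonth().
const shown = stored.toISOString().slice(0, 10); // "2000-02-28"
```

Using `stored.getDate()` instead could shift the day for users west of UTC, which is exactly the silent conversion this approach avoids.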
I have stored dates in Unix timestamp format in MongoDB, e.g. 1449060622.
Now I want to add milliseconds as well, so that records inserted within the same second can be sorted properly.
Can someone suggest whether using js `new Date()` is better, or simply `(new Date).getTime()`?
Whenever you store times in MongoDB you should really consider using the native Date type instead. Not only does it provide millisecond precision, it also unlocks a lot of features which are unavailable for plain integers, such as the date aggregation operators.
If you really don't want to use native dates for some obscure reason (I couldn't think of a good one) or don't want to convert your whole database (really, you should) and need higher precision, you might consider adding new values as floating point numbers. This ensures interoperability with the old data, because integers and floating point values can usually be converted and compared with each other easily.
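A small illustration of the millisecond point: two native Dates created within the same second still sort correctly, and the integer view is always recoverable for legacy code:

```javascript
// Two events in the same second, 123 ms apart.
const a = new Date(1449060622000);
const b = new Date(1449060622123);

// Comparison operators work directly on Date objects (via valueOf).
const sortedCorrectly = a < b;

// Equivalent integer view, if older code still needs one.
const deltaMs = b.getTime() - a.getTime(); // 123
```

With second-resolution Unix timestamps both records would compare equal, which is exactly the sorting problem the question describes.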
I have started creating a product database using timestamp without timezone. Then, realizing my error, I started using timestamp with timezone. Now I'd like to unify this to the latter.
Question: Is it possible in an existing Postgres 8.4 DB already containing data to convert all the columns of type timestamp without TZ to ones with TZ?
The best solution would be a script that does this in one execution (of course). Even a script that fixed a single column at a time would be great. The problem is that naïvely ALTERing the column fails on some existing VIEWs that use it in their output (though I fail to see why that is a problem in this case; it's just widening the output type a bit).
You want ALTER TABLE ... ALTER COLUMN ... TYPE ... USING (...) which does what you would expect. You will need to decide what timezone these times are in, and supply the suitable AT TIME ZONE expression for your USING clause.
These ALTERs will rewrite each table, so allow time for that. You may want to CLUSTER the tables afterwards.
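A sketch of one such ALTER (the table, column, and zone here are placeholders; substitute the zone your naive timestamps were actually recorded in):

```sql
-- Assumes the stored "timestamp without time zone" values really
-- represent UTC instants; pick the correct zone for your data.
ALTER TABLE my_table
    ALTER COLUMN created_at TYPE timestamptz
    USING created_at AT TIME ZONE 'UTC';
```

The `AT TIME ZONE` expression in the USING clause is what tells Postgres how to interpret the old naive values, rather than letting it guess from the session's timezone setting.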
However, you seem to think that the two types are interchangeable. They are not. That is why you need to drop and rebuild your views. Also you will want to rewrite any applications appropriately too.
If you can't see why they are different, make a good hot cup of tea or coffee, sit down, and spend an hour or so reading the time and date sections of the manuals thoroughly. Perhaps some of the Q&As here too. This is not necessarily a minor change. I'd be especially wary of any daylight-saving / summer-time shifts in whatever time zone(s) you decide apply.
I am just starting to learn about MongoDB and hoping to slowly migrate from MySQL.
In MySQL, there are two different data types: DATE ('0000-00-00') and DATETIME ('0000-00-00 00:00:00'). My MySQL schema uses the DATE type, but I am not sure how to transfer those values into MongoDB. MongoDB has a Date object, which is comparable to DATETIME. It seems most appropriate to use Date objects, but that would waste space, since hours, minutes, and seconds are not utilized. On the other hand, storing dates as strings seems wrong.
Is there a golden standard on storing dates ('0000-00-00') in MongoDB?
I'm actually in the process of converting a MongoDB database where dates are stored as proper Date() types to instead store them as strings in the form yyyy-mm-dd. Why, considering that every other answerer says that this is a horrible idea? Simply put, because of the neverending pain I've been suffering trying to work with dates in JavaScript, which has no (real) concept of timezones. I had been storing UTC dates in MongoDB, i.e. a Date() object with my desired date and the time set as midnight UTC, but it's unexpectedly complicated and error-prone to get a user-submitted date correctly converted to that from whatever timezone they happen to be in. I've been struggling to get my JavaScript "whatever local timezone to UTC" code to work (and yes, I'm aware of Sugar.js and Moment.js) and I've decided that simple strings like the good old MySQL standard yyyy-mm-dd is the way to go, and I'll parse into Date() objects as needed at runtime on the client side.
Incidentally, I'm also trying to sync this MongoDB database with a FileMaker database, which also has no concept of timezones. For me the simplicity of simply not storing time data, especially when it's meaningless like UTC midnight, helps ensure less-buggy code even if I have to parse to and from the string dates now and then.
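A sketch of the runtime parsing this answer describes (the helper name is hypothetical). Pinning the parsed date to midnight UTC means every machine agrees on the instant, whatever its local zone:

```javascript
// Parse a stored "yyyy-mm-dd" string back into a Date on the client.
function parseDateOnly(s) {
  const [y, m, d] = s.split('-').map(Number);
  return new Date(Date.UTC(y, m - 1, d)); // Date.UTC months are zero-based
}
```

Note that the split-and-construct form avoids `new Date('2000-02-28')`-style string parsing, whose timezone interpretation has varied across JavaScript engines.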
BSON (the storage format used natively by Mongo) has a dedicated date type, UTC datetime, which is a 64-bit (so, 8-byte) signed integer denoting milliseconds since the Unix epoch. There are very few valid reasons why you would use any other type for storing dates and timestamps.
If you're desperate to save a few bytes per date (again, with Mongo's padding and minimum block sizes this is only worth the trouble in very rare cases), you can store a date as a 3-byte binary blob holding an unsigned integer in YYYYMMDD format, or as a 2-byte blob denoting "days since January 1st of year X", where X must be chosen appropriately, since that only supports a date range spanning 179 years.
EDIT: As the discussion below demonstrates this is only a viable approach in very rare circumstances. Basically; use mongo's native date type ;)
If you really care about saving 4 bytes per field (in case you have many DATE fields per document), you can store dates as int32 fields in the form 20110720 (note that MySQL's DATE occupies 3 bytes, so the storage will be greater in any case). Otherwise, I'd stick to the standard datetime type.
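The int32 packing mentioned in both answers above can be sketched like this (helper names are illustrative). Numeric order of the packed value equals chronological order, so range queries on the field still work:

```javascript
// Pack a calendar date into one integer in YYYYMMDD form.
function packDate(y, m, d) {
  return y * 10000 + m * 100 + d;
}

// Recover the components from the packed integer.
function unpackDate(n) {
  return {
    y: Math.floor(n / 10000),
    m: Math.floor(n / 100) % 100,
    d: n % 100,
  };
}
```

The trade-off, as the answers note, is losing MongoDB's date aggregation operators; date arithmetic (adding a day, diffing two dates) also requires unpacking first.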
I have a datetime field in a Sybase table and need to use it in Perl for some calculations (differences, to make sure I'm picking up records that are at least n minutes apart). Now, if I simply select the datetime field, Sybase returns a human-readable value, which is no use to me. I was looking for some way to convert this into, say, an epoch-based time, which would let me manipulate it easily in Perl.
I could be wrong, but I didn't find an existing function for this in Sybase. The closest I got was datediff, which serves my purpose fine; in fact, I am using it as a workaround for now. The problem with datediff is that, because of the integer limitation, it's capped at about 68 years. I know my code probably won't be around by then, but why introduce such a time-based limit on my own?
Is there a way to get around this?
Thanks!
If you're ok with doing it in the client, the Time::ParseDate module should be able to do the work for you.
The DateTime modules have served me well when I need to handle date/time calculations.
Use the DBD::Sybase syb_date_fmt() method to control the format, then use DateTime::Format::Strptime to parse your results and generate DateTime objects. Use the DateTime epoch method to get your epoch time, or better yet do your math using the appropriate DateTime::Duration objects.
If you can't get Sybase and Perl to cooperate to give you a genuine DateTime object or something similar (like Time::Piece), then the way I usually handle such things is to try to convince the database to give me ISO standard date format (YYYY-MM-DDTHH:MI:SS) and convert from there.