Let's start by providing the database definition of the entity in question:
@Column({ name: "last_seen", type: "timestamp", nullable: true })
@Index()
public lastSeen: Date | null = null;

@CreateDateColumn({ name: "created", type: "timestamp", nullable: false })
public created!: Date;
The database is UTC but the server (in development) is in the "Asia/Tehran" time zone. As far as I know, TypeORM should store dates in UTC and convert them back to local time on retrieval.
However, the dates are now stored raw. I believe this started recently, since I don't remember having a similar issue on this project, which has been ongoing for six months now. For example, if I save 10:00+4:30 in JavaScript for the lastSeen property, it is saved incorrectly as 10:00.
On the other hand, when reading dates that the database creates in UTC automatically, as is the case for the created property, the value is read as local time instead of being converted from UTC. In these cases, the data is stored correctly in the database in UTC.
Is there an option that I am missing here in regard to date time conversion with TypeORM?
Related
Hi, I am new to Go and I am trying to insert a time.Now() value into a time.Time variable. The weird part is that I am neither getting an error nor seeing the commit processed; execution of the code simply stops when I try the insert. Can someone please help me with what value I should be using?
db:
alter table abc add column created timestamptz NULL;
struct {
    created time.Time `db:"created"`
}
The value being set before the insert is:
created = time.Now()
I expect the db to be saved with the new record
In order for any external library to be able to look at your struct fields, they need to be exported, i.e. the name must start with a capital letter.
Since your struct defines the created time with a lower-case letter, only code inside your package can see that field. This is why your database interface is setting "created" to NULL: as far as it knows, no value was ever provided for that field.
type Foo struct {
    Created time.Time `db:"created"`
}
Note: if you happen to be using GORM to interface with your database, it actually supports what you are trying to do by default, simply by naming the struct field CreatedAt: https://gorm.io/docs/conventions.html#CreatedAt
I want to backup data from CosmosDB to Storage.
I found that the DB's data differs from Storage's data when a value ends with .000Z.
Data in CosmosDB like this:
{
  "start": "2021-09-12T15:00:00.000Z",
  "end": "2022-10-30T15:00:00.000Z"
}
Data in Storage like this:
{
  "start": "2021-09-12T15:00:00Z",
  "end": "2022-10-30T15:00:00Z"
}
How can I make them the same?
.000 represents the fractional seconds in the timestamp, and Z denotes the UTC time zone in the ISO-8601 date format (for example, 00:00Z corresponds to midnight in Greenwich).
The recommended format for DateTime strings in Azure Cosmos DB is yyyy-MM-ddTHH:mm:ss.fffffffZ, which follows the ISO 8601 UTC standard; .fffffff is the seven-digit fractional-seconds part.
You can enable or disable the Detect datetime setting to have the value treated as a string instead. Also, if you choose a .json sink there are far fewer options than for a .csv sink (which, for example, lets you choose column formats where available).
Further reading: Configure Azure Cosmos DB account with periodic backup.
You can check the mapping first to see what data types are being mapped for this field.
I had one such use case where the data types were different, and the following steps resolved it:
Click on Import in Mapping section of Copy Activity
Check if all columns are correctly mapped
Extract the JSON from Copy activity for mapping
Check the data types being mapped in the JSON
If the data type for the field does not match, you will get different data.
To match the data, you will have to pass the mapping JSON dynamically, which is explained in detail in this tutorial: https://www.youtube.com/watch?v=b27gmOufge4
I'm working on a basic CRUD service. I'm trying to test that when I create/store an object and then retrieve that object from the DB, the object I get is the same. For a bit of implementation detail, I'm trying to persist a struct into a Postgres DB, then retrieve that struct and compare the two structs to ensure they are equal.
I'm hitting an issue whereby the original struct's time.Time field has a higher resolution than the one retrieved from the DB, presumably because Postgres has a lower resolution for timestamps? (I'm storing the time values as Postgres's timestamp with time zone.)
The original time.Time object: 2020-12-20 20:20:11.1699442 +0000 GMT m=+0.002995101
The time retrieved from the DB: 2020-12-20 20:20:11.169944 +0000 GMT
Is there any way around this?
My options seem to be:
truncate the original time's resolution. Issues: can't seem to find any way to do that, plus, I don't want storage implementation details leaking into my domain layer
instead compare the object IDs to ensure they're the same. Issues: this seems flimsy and doesn't assure me that everything I store from that struct is returned as it was
compare each field manually and do some conversion of the time objects so they are the same resolution. Issues: this is messy and only kicks this issue down the road
This situation can come up in a number of circumstances, any time there are multiple platforms in play, which use different precision for times.
The best way to handle such tests is to check that the delta between the two times is sufficiently small. i.e.:
var expected time.Time = /* your expected value */
var actual time.Time = /* the actual value */
if delta := expected.Sub(actual); delta < -time.Millisecond || delta > time.Millisecond {
t.Errorf("actual time is more than 1ms different than expected time")
}
In my JMeter test case, I'm selecting a timestamp with timezone field from a postgresql DB.
The thing is, when I run the test case on a fresh instance of JMeter for the first time, the value is converted to my local datetime.
Value in DB: 2019-10-23 06:20:54.086605+00
Value when using select: 2019-10-23 11:50:54.086605
But often when I run the test case again on the same JMeter instance, it is not converted.
Value in DB: 2019-10-23 06:42:15.77647+00
Value when using select: 2019-10-23 06:42:15.77647
Restarting JMeter will again produce the first behavior. I'm not able to pinpoint exactly how and when this switch happens; it occurs now and then, but restarting JMeter always resets to the first behavior.
I have tried setting the timezone value in the postgresql.conf file, as well as the user.timezone value in system.properties in the JMeter /bin directory, to UTC, to no avail.
I'm using SELECT * to select all columns from the table and storing them in variables using the Variable names field in JDBC Request.
The reason is that PostgreSQL timestamp with time zone is being mapped to java.sql.Timestamp which doesn't contain timezone information.
The only workaround I can think of is converting the aforementioned Timestamp providing the TimeZone as the parameter like:
In the JDBC Request sampler define Result Variable Name
Add JSR223 PostProcessor as a child of the request and use the below Groovy code to convert the Timestamp into the TimeZone of your choice:
vars.getObject("resultSet").get(0).get("bar").format('yyyy-MM-dd HH:mm:ss.SSSSSSZ', TimeZone.getTimeZone('UTC'))
More information: Debugging JDBC Sampler Results in JMeter
I would like to get millisecond precision in my MariaDB. After some research, I found that I needed to change the columnDefinition - so I did this in my entity:
@NotNull
@Column(name = "createdDate", columnDefinition = "DATETIME(3) NOT NULL")
@Temporal(TemporalType.TIMESTAMP)
private Timestamp createdDate;

@PrePersist
void onPersist() {
    createdDate = new Timestamp(new Date().getTime());
}
The resulting SQL to create the column is:
`createdDate` DATETIME(3) NOT NULL
Now, in the DB the value has indeed 3 decimals:
2016-09-12 16:57:44.000
... but they are always 000
What did I do wrong, or what did I forget?
Edit: I tried without Java:
CREATE TABLE `test` (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`createdDate` DATETIME(3) NOT NULL,
PRIMARY KEY (`id`)
)
COLLATE='latin1_swedish_ci'
ENGINE=InnoDB
;
And then:
INSERT INTO test (createdDate)
VALUES(current_timestamp())
Result:
2016-09-13 13:57:44.000
I had the same problem with MariaDB and date types. I tried org.joda.time.DateTime and java.time types. Both the server and the client code supported milliseconds correctly.
The problem was that I was using MySQL Connector instead of MariaDB Connector/J JDBC driver.
Background
In most situations, using MariaDB with MySQL Connector appears to work, but I would never recommend it. When I was searching for the issue, I debugged through the Hibernate and Connector code and saw many feature detections based on the server version number instead of real feature detection. The version numbering of course differs between MySQL and MariaDB, so there is a good chance that far more compatibility issues are being quietly ignored.
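If you happen to be on Spring Boot, switching drivers is just a dependency change (org.mariadb.jdbc:mariadb-java-client instead of mysql:mysql-connector-java) plus datasource settings along these lines (host, port, and database name are placeholders):

```properties
spring.datasource.url=jdbc:mariadb://localhost:3306/mydb
spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
```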
Your problem most probably comes from the fact that you mix Dates and Timestamps. Changing the createdDate type to java.sql.Timestamp should solve your issue.
Also, if your version of MySQL is prior to 5.6.4, DateTime won't let you save time fractions.
EDIT after OP's edit :
you are still mixing the Java Date type with Timestamp when you do this:
createdDate = new Timestamp(new Date().getTime());
Can you try createdDate = new Timestamp(System.currentTimeMillis()); instead?
Ideally you should use objects from a library like Joda-Time to avoid such issues, but that's beside the point of your question; just a tip :)
Ultimately, if this way of creating your Timestamp does not work, I would use the Timestamp type in DB instead of Datetime, but that's just trial and error as Datetime should work as well in your example.
Edit :
excerpt from Oracle's Date API :
Date()
Allocates a Date object and initializes it so that it represents the time at which it was allocated, measured to the nearest millisecond.
In which case using System.currentTimeMillis() shouldn't change the outcome; my bad.
To troubleshoot the problem, I'd start by creating a date via SQL (without going through Java objects) with CURRENT_TIMESTAMP, to make sure the field can indeed hold sub-second precision. If it is OK, verify the value in the Java object with a debugger; that might give you a lead. If both contain milliseconds, I'd look at the usage of the annotations or start from a working sample.
To do this using pure JPA:
@Column(name="STMP", columnDefinition = "TIMESTAMP (6)")
private Timestamp timestamp = Timestamp.from(Instant.now());