When querying a source model in a VDB, with the source database being Informix 11, the values for a date column are sometimes returned as the prior day. For example, the actual value in Informix is Oct 10, but the value shown when querying the JDV source model is Oct 9. Querying Informix directly returns the correct date. I'm using JDV 6.4.0 with JDK 1.8.0_162 (x64) on Windows 10.
Any ideas? Thanks in advance!
To elaborate on what Ramesh is saying, you need to check the client and server JVM timezones. JDV will attempt to keep date/time calendar fields consistent across the database, server, and client. If the Teiid client is in a different timezone than the server, the client will automatically adjust the UTC value for date/time values so that they match what the server would display, which is determined by the server timezone.
When a timestamp value is retrieved from the database, we assume that it has already been adjusted by the driver to account for any timezone differences. If that is not the case, there is a translator execution property called DatabaseTimeZone that will utilize the JDBC calendar-based methods to adjust the retrieved date/time values.
A common issue is a mismatch of daylight saving time rules; it's usually best to run the JDV server in a standard timezone.
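For reference, here's a minimal sketch of the calendar-based JDBC retrieval that DatabaseTimeZone relies on. The connection URL, table, column, and the America/Chicago timezone are all made-up placeholders, not values from the question:

    import java.sql.*;
    import java.util.Calendar;
    import java.util.TimeZone;

    public class DatabaseTimeZoneSketch {
        public static void main(String[] args) throws SQLException {
            // Hypothetical URL and table; substitute your own Informix details.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:informix-sqli://host:1526/mydb:INFORMIXSERVER=myserver");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT order_date FROM orders")) {

                // A Calendar pinned to the database's timezone, analogous to
                // setting DatabaseTimeZone=America/Chicago on the translator.
                Calendar dbTz = Calendar.getInstance(
                        TimeZone.getTimeZone("America/Chicago"));

                while (rs.next()) {
                    // getDate(col, cal) tells the driver to interpret the raw
                    // value in dbTz rather than the JVM default timezone.
                    java.sql.Date d = rs.getDate(1, dbTz);
                    System.out.println(d);
                }
            }
        }
    }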
Unable to store date in IST format in MongoDB
Currently dates are stored in UTC format, and on every operation I have to convert them to IST using the timezone feature provided by Mongo.
You cannot do that. MongoDB will always save the time in UTC.
MongoDB stores times in UTC by default, and converts any local time representations into this form. Applications that must operate or report on some unmodified local time value may store the time zone alongside the UTC timestamp, and compute the original local time in their application logic.
You can check the official docs for more info.
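Since the dates always come back as UTC instants, the IST conversion belongs in application code. A minimal sketch with java.time (Asia/Kolkata is the IST zone; the sample instant is made up):

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class IstConversionSketch {
        public static void main(String[] args) {
            // Pretend this Instant came back from a MongoDB date field (always UTC).
            Instant storedUtc = Instant.parse("2023-10-10T12:30:00Z");

            // Convert to IST in application logic, as the docs suggest.
            ZonedDateTime ist = storedUtc.atZone(ZoneId.of("Asia/Kolkata"));
            System.out.println(ist); // 2023-10-10T18:00+05:30[Asia/Kolkata]
        }
    }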
I'm trying to wrap my head around how the db.timezone property works on both the source and sink connectors.
For the source the docs say:
Name of the JDBC timezone used in the connector when querying with time-based criteria. Defaults to UTC.
What does this actually mean? Is that supposed to be set to the timezone of my source database? I have a db that is set to the Eastern timezone. Do I need to set this to US/Eastern? If I don't, what will it do?
On the sink side the docs say:
Name of the JDBC timezone that should be used in the connector when inserting time-based values. Defaults to UTC.
Again, what exactly does this mean? Does it use that value to convert all the timestamps in your payload to the timezone you give here?
My specific problem is that my source db is in the Eastern timezone, but my sink db is set to UTC and I can't change it. How should I define these properties?
Also to add to this (I think it's slightly unrelated): I notice on the sink side the timestamps don't have all the decimals. On both sides I have the timestamp columns set to timestamp(6), yet on the sink side the fractional seconds only ever have 3 significant digits and the remaining 3 are always zeros. Why would this be?
Have a look at the source code:
https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/source/JdbcSourceConnectorConfig.java#L805
to get a feeling on how the value you specify for the db.timezone configuration option will be used by the kafka-connect-jdbc connector.
I'd assume that for your source connector you should use
db.timezone=US/Eastern
Name of the JDBC timezone used in the connector when querying with time-based criteria. Defaults to UTC.
What does this actually mean?
The db.timezone setting comes in handy when reading/writing data from databases which don't use UTC timezone for storing the date/time columns.
Since your sink database uses the UTC timezone, there is no timezone-related setting you need to add to your JDBC sink configuration.
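To get a feel for the mechanism, here is a simplified sketch (not the connector's actual code; the table and column names are invented): the connector builds a Calendar from db.timezone and passes it to the JDBC calendar-based setters when binding its time-based criteria.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;
    import java.util.Calendar;
    import java.util.TimeZone;

    public class DbTimezoneCriteriaSketch {
        // Mimics how a JDBC source connector binds its incremental
        // "WHERE updated_at > ?" query using the configured db.timezone.
        static void bindCriteria(Connection conn, long lastOffsetMillis) throws Exception {
            Calendar dbTimezone = Calendar.getInstance(TimeZone.getTimeZone("US/Eastern"));
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM orders WHERE updated_at > ?")) {
                // The Calendar tells the driver to interpret the bound value
                // in US/Eastern rather than the JVM default timezone.
                ps.setTimestamp(1, new Timestamp(lastOffsetMillis), dbTimezone);
                ps.executeQuery();
            }
        }
    }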
I have a PostgreSQL database containing a table with several 'timestamp with timezone' fields.
I have a tool (DBSync) that I want to use to transfer the contents of this table to another server/database.
When I transfer the data to an MSSQL server, all datetime values are replaced with '1753-01-01'. When I transfer the data to a PostgreSQL database, all datetime values are replaced with '0001-01-01'.
These are the smallest possible dates for those systems.
Now I recreated the source table (including contents) in a different database on the same PostgreSQL server. The only difference: the source table is in a different database. Same server, same routing; only the ports are different.
The user is different, but I have the same rights in each database.
How can it be that the database is responsible for an apparent difference in interpretation of the data? Do PostgreSQL databases have database-specific settings that can cause such behaviour? What database settings can/should I check?
To be clear, I am not looking for another way to transfer data. I have several available. The thing that I am trying to understand is: how can it be that, when an application reads datetime info from table A in database Y on server X, it gives me the wrong date, while reading the same table from database Z on server X gives me the data as it should be?
It turns out that the cause is probably the difference in server version. One is Postgres 9 (works OK); the other is Postgres 10 (does not work OK).
They are different instances on the same machine. Somehow I missed that (blush).
By transferring I meant that I am reading records from a source database (PostgreSQL) and inserting them into a target database (MSSQL 2017).
This is done through the application; I am not sure what drivers it is using.
I will work with the people who made the application.
For those wondering: it is this application: https://dbconvert.com/mssql/postgresql/
When a solution is found I will update this answer with the found solution.
I have an ODBC connection to an Informix database from an MS Access database and want to show the time in a query as held in the native database, i.e. "dd/mm/yy hh:nn:ss.000". It seems that no matter what format I try in Access I cannot emulate this, although I can do it in MS Excel!?
Amongst others, I've tried the following, to no avail:
Format([startdatetime],"dd/mm/yy hh:nn:ss.000")
Format([startdatetime],"dd/mm/yy hh:nn:ss,SSS")
Any ideas?
Try:
Format([startdatetime],"dd/mm/yyyy hh:nn:ss AM/PM")
If you need to store Access dates to the millisecond, have a look here.
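For what it's worth, the underlying limitation is that Access Date/Time values (and VBA's Format) don't carry milliseconds, so no format string will surface them. In a language where sub-second precision survives the round trip, the pattern you're after is straightforward; a Java illustration with a made-up value standing in for [startdatetime]:

    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    public class MillisecondFormatSketch {
        public static void main(String[] args) {
            // A sample value standing in for [startdatetime].
            LocalDateTime start = LocalDateTime.of(2018, 10, 10, 14, 5, 9, 123_000_000);
            DateTimeFormatter fmt = DateTimeFormatter.ofPattern("dd/MM/yy HH:mm:ss.SSS");
            System.out.println(fmt.format(start)); // 10/10/18 14:05:09.123
        }
    }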
We have a DB2 running on z/OS and some tables use a timestamp as a Primary Key.
My opinion is that it might be possible for two transactions calling CURRENT TIMESTAMP in the same nanosecond to be returned exactly the same timestamp.
My colleague thinks that the CURRENT TIMESTAMP function on the same database is always unique.
The DB2 documentation here is not very clear.
Is there an official statement from IBM which proves one thesis or the other? I found only a statement for DB2 on UNIX, which may not be applicable to z/OS.
Thank you.
There are instances when it won't be unique. They are:
Datetime special registers are stored in an internal format. When two or more of these registers are implicitly or explicitly specified in a single SQL statement, they represent the same point in time.
If the SQL statement in which a datetime special register is used is in a user-defined function or stored procedure that is within the scope of a trigger, DB2 uses the timestamp for the triggering SQL statement to determine the special register value.
Source: http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/topic/com.ibm.db2.doc.sqlref/xfbb68.htm#xfbb68
You should use GENERATE_UNIQUE() if you want a unique timestamp. Good example here: http://www.mainframesupport.dk/tips/tip0925.html
There is no guarantee that CURRENT TIMESTAMP will return a unique value.
I have seen many examples of DB2 SQL INSERT statements in a z/OS environment failing on a duplicate key when CURRENT TIMESTAMP was used to populate a column defined as unique.
Once upon a time, CURRENT TIMESTAMP had a fine enough "granularity" that the probability of a collision was extremely small. This led to quite a few applications treating its values as unique identifiers. Processors are faster and parallelism has increased tremendously over the years, so any process that expects unique values from CURRENT TIMESTAMP today is likely to crash and burn on a very regular basis.
Your colleague is running a bit behind the times (on a couple of levels).
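To see how easily wall-clock timestamps repeat on modern hardware, here is a quick JVM-side sketch (an analogue of the effect, not DB2 itself):

    import java.sql.Timestamp;

    public class TimestampCollisionSketch {
        public static void main(String[] args) {
            // Two reads of the clock in a tight loop collide almost immediately;
            // this is the same effect that breaks CURRENT TIMESTAMP as a unique
            // key under high concurrency.
            Timestamp previous = new Timestamp(System.currentTimeMillis());
            for (int i = 0; i < 1_000_000; i++) {
                Timestamp current = new Timestamp(System.currentTimeMillis());
                if (current.equals(previous)) {
                    System.out.println("Duplicate timestamp at iteration " + i + ": " + current);
                    return;
                }
                previous = current;
            }
            System.out.println("No duplicates observed (unlikely on modern hardware).");
        }
    }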