In my JMeter test case, I'm selecting a timestamp with time zone field from a PostgreSQL DB.
The thing is, when I run the test case on a fresh instance of JMeter for the first time, the value is converted to my local datetime.
Value in DB: 2019-10-23 06:20:54.086605+00
Value when using select: 2019-10-23 11:50:54.086605
But often when I run the test case again on the same JMeter instance, it is not converted.
Value in DB: 2019-10-23 06:42:15.77647+00
Value when using select: 2019-10-23 06:42:15.77647
Restarting JMeter brings back the first behavior. I'm not able to pinpoint exactly how and when this switch happens; it occurs now and then, but a restart always resets to the first behavior.
I have tried setting the timezone value in the postgresql.conf file, as well as the user.timezone value in system.properties in the JMeter /bin directory, to UTC, to no avail.
I'm using SELECT * to select all columns from the table and storing them in variables using the Variable names field in JDBC Request.
The reason is that PostgreSQL's timestamp with time zone is mapped to java.sql.Timestamp, which doesn't carry any time-zone information.
The only workaround I can think of is converting that Timestamp explicitly, passing the desired TimeZone as a parameter:
In the JDBC Request sampler, define a Result Variable Name (the snippet below assumes resultSet).
Add a JSR223 PostProcessor as a child of the request and use the Groovy code below to convert the Timestamp into the time zone of your choice, storing it in a new JMeter variable (bar and bar_utc are placeholder names):
def ts = vars.getObject("resultSet").get(0).get("bar")
vars.put("bar_utc", ts.format('yyyy-MM-dd HH:mm:ss.SSSSSSZ', TimeZone.getTimeZone('UTC')))
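If you can adjust the query itself, another option (a sketch, not part of the original answer) is to normalize the value on the PostgreSQL side so the driver never applies a local zone; bar and my_table are again placeholder names:

SELECT bar AT TIME ZONE 'UTC' AS bar_utc  -- turns a timestamptz into a plain timestamp expressed in UTC
FROM my_table;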
More information: Debugging JDBC Sampler Results in JMeter
I am using DB2 9.7 (LUW) on a Windows server, with multiple DBs in a single instance. I just found that in one of these DBs I am unable to add a column with the DATE data type, either during table creation or when altering the table. The column being added gets changed to a timestamp instead.
Any help on this will be welcome.
Check out your Oracle compatibility setting.
Depending on that setting, DATE is interpreted as TIMESTAMP(0), as in your example.
Because the setting only takes effect for databases created after the DB2_COMPATIBILITY_VECTOR registry variable was set, databases in the same instance can show different behaviour.
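To confirm what actually happened, you can query the DB2 catalog (a sketch; the schema and table names are placeholders), and check the registry variable itself with db2set DB2_COMPATIBILITY_VECTOR:

-- If the compatibility vector rewrote DATE, TYPENAME will show TIMESTAMP here
SELECT COLNAME, TYPENAME, SCALE
FROM SYSCAT.COLUMNS
WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTABLE';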
I am currently learning MongoDB and, while exploring the documentation, I am encountering an anomaly with the ObjectId.getTimestamp() function. I executed the lines of code below in the mongo shell at my machine's time of 20:12.
> x = new ObjectId()
ObjectId("5ac4e4362b7738556969dbba")
> x.getTimestamp()
ISODate("2018-04-04T14:41:58Z")
I want to know why a different timestamp value is shown rather than the exact time when the Id was created. Is it due to some UTC setting that should be made in a Mongo configuration file?
I have a DB2 data source and an Oracle 12c target.
The Oracle side has a DB link to the DB2 defined, which works in general.
Now I have a huge table in the DB2 with a timestamp column (let's call it ROW_CHANGED) that tracks row changes. I want to retrieve rows which have changed after a particular time.
Running
SELECT * FROM lib.tbl WHERE ROW_CHANGED > '2016-08-01 10:00:00'
on the DB2 side returns exactly 1 row after ca. 90 seconds, which is fine.
Now I try the same query from Oracle via the DB link:
SELECT * FROM lib.tbl@dblink_name WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')
This runs for hours and ends up in a timeout.
I read some Oracle docs and found distributed query optimization tips, but most of them refer to joining a local to a remote table, which is not my case.
In my desperation, I tried the DRIVING_SITE hint, without effect.
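For reference, that attempt presumably looked something like this (a sketch; DRIVING_SITE asks Oracle to execute the statement at the site of the named table):

SELECT /*+ DRIVING_SITE(t) */ *
FROM lib.tbl@dblink_name t
WHERE t.ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00');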
Now I wonder when the WHERE part of the query is evaluated. Since I have to use Oracle syntax and not DB2 syntax for the query, is it possible that Oracle first copies the full table and applies the WHERE clause afterwards? I did some research but did not find anything that would help me in this direction.
ROW_CHANGED is a hidden column in the DB2 table, if that matters.
Thanks for any hint in advance.
Update
Thanks @all for the help. I'll share what did the trick for me.
First of all, I used TO_TIMESTAMP since the DB2 column is also a timestamp (not a date), and I expected this to circumvent implicit conversions.
Without the explicit conversion I ran into ORA-28534: Heterogeneous Services preprocessing error, and I have no hope of touching the DB config within a reasonable time.
The explain plan, by the way, did not reveal much: it showed a FULL hint and no conversion on the predicates. Oddly, it showed the ROW_CHANGED column as a date; I wonder why.
I tried Justin's suggestion to use a bind variable, but I got ORA-28534 again. The next thing I did was to wrap it in a PL/SQL block (it will run in a stored procedure later anyway).
declare
  v_tmstmp TIMESTAMP := '01.08.16 10:00:00';  -- string literal, implicitly converted via the session default format
begin
  INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
    SELECT SRC_PK, ROW_CHANGED
    FROM lib.tbl@dblink_name
    WHERE ROW_CHANGED > v_tmstmp;
end;
This executed in the same time as on DB2 itself. The date format is DD.MM.YY here since, unfortunately, that is the session default.
When changing the variable assignment to
v_tmstmp TIMESTAMP := TO_TIMESTAMP('01.08.16 10:00:00','DD.MM.YY HH24:MI:SS');
I got the same problem as before.
Meanwhile the DB2 operators created an index on the ROW_CHANGED column, which I had requested earlier that day. That seems to have solved the problem in general; even my original query finishes in no time now.
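For completeness, the DB2-side fix was presumably something along these lines (the index name is a placeholder):

-- An index on the filter column lets DB2 resolve the predicate without a full scan
CREATE INDEX LIB.TBL_ROW_CHANGED_IX ON LIB.TBL (ROW_CHANGED);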
If you are actually using an Oracle-specific conversion function like to_timestamp, that forces the predicate to be evaluated on the Oracle side. Oracle isn't going to know how to convert a built-in function like to_timestamp into an exactly equivalent function call in DB2.
If you used a bind variable, that would be more likely to get evaluated on the DB2 side. But that may be complicated by the data type mapping between the two databases; there may not be a perfect mapping between one engine's date and another engine's timestamp data type. If this were a numeric column, a bind variable would be almost certain to get pushed. In this case, it probably involves playing around a bit to figure out exactly what data type to use for your variable that works for your framework, Oracle, and DB2.
If using a bind variable doesn't work, you can force the predicate to be evaluated on the remote server using the dbms_hs_passthrough package. That lets you send a query verbatim to the remote server which allows you to do things like use functions defined in your DB2 database. That's a bit of overkill in this situation, hopefully, but it's nice to have the hammer as your backup if the simpler solution doesn't work quickly enough.
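A minimal sketch of the passthrough approach, assuming the remote values can be fetched as strings and converted on the Oracle side (the table, link, and column names are taken from the question; the buffer sizes and the conversion format are assumptions, so check what the gateway actually returns):

DECLARE
  c      INTEGER;
  nrows  INTEGER;
  v_pk   VARCHAR2(100);  -- assumed size; adjust to the real SRC_PK type
  v_chg  VARCHAR2(32);
BEGIN
  -- The statement is sent verbatim to DB2, so native DB2 syntax is fine here
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@dblink_name;
  DBMS_HS_PASSTHROUGH.PARSE@dblink_name(c,
    'SELECT SRC_PK, ROW_CHANGED FROM lib.tbl WHERE ROW_CHANGED > ''2016-08-01 10:00:00''');
  LOOP
    nrows := DBMS_HS_PASSTHROUGH.FETCH_ROW@dblink_name(c);
    EXIT WHEN nrows = 0;
    DBMS_HS_PASSTHROUGH.GET_VALUE@dblink_name(c, 1, v_pk);
    DBMS_HS_PASSTHROUGH.GET_VALUE@dblink_name(c, 2, v_chg);
    INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
    VALUES (v_pk, TO_TIMESTAMP(v_chg, 'YYYY-MM-DD HH24:MI:SS.FF'));
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@dblink_name(c);
END;
/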
The SymmetricDS server is configured with PostgreSQL 9.4 and the client nodes run SQLite 3. I recently had to alter a table at the server end and then send the schema to the clients with the command symadmin send-schema --engine <server> --node <node> <table>
One of the changes was the addition of a default value on a date field: update_date date DEFAULT ('now'::text)::date
Since the change was applied, I am seeing the following error message on the server side in the SymmetricDS log:
ERROR [<server>] [AcknowledgeService] [qtp1874154700-1322] The outgoing batch <node>-41837 failed. ERROR: invalid input syntax for type date: "'now'::text)::date"
Is this error showing up because SQLite 3 does not support ('now'::text)::date as a default value? In that case, how can I propagate the changes?
OR
Is it a SymmetricDS issue, in that it does not recognize ('now'::text)::date as a default value for the update_date field?
I suspect that, due to this error, all synchronization between client and server is stopping.
Any clue is appreciated.
Hope the problem is not in production.
You'll need to delete the outgoing batch with the change, or just the link between it and the ALTER TABLE.
Then use the command line to send the custom native SQL DDL statement to each node from the central node, or do it manually by connecting remotely to each node.
A batch in error will hold up all the batches behind it. You can ignore the batch in error by setting its status to "IG"; see the sketch below. However, this means none of the change captures in that batch will be applied on the target.
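For instance, against the standard SymmetricDS runtime tables (a sketch; the batch id is the one from the error message above):

-- Mark the failed batch as ignored; its captured changes will NOT reach the target
UPDATE sym_outgoing_batch
SET status = 'IG'
WHERE batch_id = 41837 AND node_id = '<node>';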
Are you sure the default value was applied correctly on SQLite? Here is an example of a table with a default date value of now:
create table forum (id int primary key, some_date date default(date('now')));
You can then send the appropriate alters or creates to your clients through the send sql feature.
References:
Open source send documentation.
Professional send documentation.
I have a field in my Postgres database using the time (without time zone) data type. I have a Microsoft Access front-end for the database, connected using psqlODBC, which reads this field as a "Date/Time" data type.
If I try to insert something into the field through the front end, I get the following error:
ODBC - insert on a linked table "table_name" failed.
ERROR: column "column_name" is of type time without time zone but expression is of type date;
I'm assuming that Access is trying to insert a timestamp instead.
Basically, my question is: is it even possible to use the time data type with Access? Or should I just be using the timestamp data type instead?
If you are manually typing data into a linked table, then no, this won't be possible at present. If you have the option of updating your table via forms or VBA, you could try this to get Access to produce only a time value:
TimeSerial(Hour(Now()), Minute(Now()), Second(Now()))
Otherwise, as you say, it's probably a good idea to change your data type to timestamp.
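If you do go that route, the change on the PostgreSQL side might look like this (a sketch; table_name and column_name are the placeholders from the error message):

-- Widen the column from time to timestamp; there is no implicit cast,
-- so a USING clause supplies the date part for existing rows
ALTER TABLE table_name
  ALTER COLUMN column_name TYPE timestamp
  USING (CURRENT_DATE + column_name);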