App hangs when executing the query in prepared statement - select

I am trying to select rows that are older than 7 days from the current date. The database is DB2 version 9.
Can you please tell me how exactly I can use the datetime in the query? The date column is of type TIMESTAMP.
I am able to run the query manually without issues. However, when I use it in a prepared statement,
the app hangs at result = pselect.executeQuery();, and we need to restart the DB2 instance to clear it.
Can you please help with what might be the issue? I do not see any exceptions at all. Other parts of the code work fine if I remove the select_query part.
try {
    String select_query = "SELECT URL_ID, URLVAL FROM URL_TAB WHERE " +
        "UPDATED_DATE < TIMESTAMP(CURRENT_DATE - 7 DAYS, '00.00.00')";
    System.out.println("select_query=" + select_query);
    conn = JDBCDataObjectFactoryManager
            .getConnection("JDBCConnectionFactory-SDE");
    pselect = conn.prepareStatement(select_query);
    System.out.println("pselect=" + pselect);
    try {
        System.out.println("inside try");
        result = pselect.executeQuery();
        System.out.println("result=" + result);
    } catch (Exception e) {
        System.out.println("inside catch");
        System.out.println("error message==============>" + e.getMessage());
    }
    if ((result != null) && (result.next())) {
        System.out.println("3 >>>>>>>>>>>>>>>>>>>>>>>>>");
        url_id = result.getInt(1);
        url = result.getString(2);
    } // end if

There are two possibilities: either the query is in a lock wait, or it runs for so long that it appears to be hung.
Check the value of the LOCKTIMEOUT database configuration parameter. By default it is -1, which means wait indefinitely; you normally want to set it to a more reasonable value, typically 30 or 60 seconds. If a lock wait is what causes your application to "hang", you will then get an exception instead, which will help you debug further.
If the issue is caused by poor query performance, you'll need to work with your DBAs to figure out the root cause and resolve it.
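As an aside, the date arithmetic does not have to live in the SQL string at all: you can compute the cutoff on the client and bind it as a parameter (UPDATED_DATE < ?). A minimal sketch of that computation, assuming only the table and column names from the question; the JDBC binding is shown as a comment since it needs a live connection:

```java
import java.sql.Timestamp;
import java.time.LocalDate;

public class CutoffExample {
    // Start of day, 7 days before the given date, as a JDBC timestamp.
    static Timestamp cutoff(LocalDate today) {
        return Timestamp.valueOf(today.minusDays(7).atStartOfDay());
    }

    public static void main(String[] args) {
        Timestamp ts = cutoff(LocalDate.of(2024, 3, 10));
        System.out.println(ts); // 2024-03-03 00:00:00.0
        // With a live connection you would then bind it:
        // pselect = conn.prepareStatement(
        //     "SELECT URL_ID, URLVAL FROM URL_TAB WHERE UPDATED_DATE < ?");
        // pselect.setTimestamp(1, ts);
    }
}
```

Binding a parameter also lets the prepared statement be reused across days without rebuilding the SQL string.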

Related

Why is SQLErrorCode zero?

In my Java program I'm using JDBC to access my PostgreSQL database, and that is running fine. However, catching SQLExceptions does not really work as expected.
My code:
...
catch (SQLException se) {
    System.out.println("\nSQLException: " + se.getMessage());
    SqlErrorCode = se.getErrorCode();
    System.out.println("\nSqlErrorCode: " + SqlErrorCode);
}
...
My query (generated by program):
SELECT osm_id, osm_type FROM (
    SELECT osm_dach_admin_boundaries_old.osm_id,
           osm_dach_admin_boundaries_old.osm_type
    FROM osm_dach_admin_boundaries_old
    FULL OUTER JOIN osm_dach_admin_boundaries
        ON (osm_dach_admin_boundaries_old.osm_id = osm_dach_admin_boundaries.osm_id
        AND osm_dach_admin_boundaries_old.osm_type = osm_dach_admin_boundaries.osm_type)
    WHERE (osm_dach_admin_boundaries_old.osm_id IS NULL
        OR osm_dach_admin_boundaries.osm_id IS NULL)
) foo
WHERE osm_id IS NOT NULL ORDER BY osm_id, osm_typeXXX;
Note the column osm_typeXXX, which I added to force an exception.
Running this query leads to
SQLException: ERROR: column "osm_typexxx" does not exist
Hint: Perhaps you meant to reference the column "foo.osm_type".
Position: 533
SqlErrorCode: 0
Closing source SQL-Connection
Closing destination SQL-Connection
Why is SqlErrorCode zero and how can I fix that?
java.sql.SQLException.getErrorCode() is documented as:
Retrieves the vendor-specific exception code for this SQLException object.
Now PostgreSQL doesn't have vendor-specific error codes; it uses the standards-conforming SQLSTATE instead.
So you should use java.sql.SQLException.getSQLState().
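The behaviour is easy to reproduce without a database, since the driver simply populates the SQLSTATE slot of the exception and leaves the vendor code at zero. A small sketch; the message and state 42703 (undefined_column) mirror the error in the question:

```java
import java.sql.SQLException;

public class SqlStateDemo {
    public static void main(String[] args) {
        // The PostgreSQL driver fills in SQLSTATE, not the vendor code,
        // so getErrorCode() stays 0 while getSQLState() is meaningful.
        SQLException se = new SQLException(
                "ERROR: column \"osm_typexxx\" does not exist",
                "42703",  // SQLSTATE for undefined_column
                0);       // vendor-specific code, unused here
        System.out.println("SQLState: " + se.getSQLState());   // 42703
        System.out.println("ErrorCode: " + se.getErrorCode()); // 0
    }
}
```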

Check if pg_prepare was already executed

Is there a way to check whether pg_prepare has already been executed, and to remove the prepared statement from the session?
It seems pg_close doesn't remove prepared statements from the session. This looks like a bug in PHP, but maybe I'm missing something, or maybe there is a workaround.
public static function readSubdomains($dcName, $filter = null) {
    // ...
    $conn = pg_pconnect($connectionString);
    // ...
    $result = pg_prepare($conn, "subdomains", "SELECT subdomain
        FROM tenants
        WHERE $where
        ORDER BY 1 ASC
    ");
    $result = pg_execute($conn, "subdomains", $params);
    // ...
    pg_close($conn);
}
Second call to readSubdomains shows a warning like this:
Warning: pg_prepare(): Query failed: ERROR: prepared statement "subdomains" already exists in inc/TestHelper.php on line 121
Always check the official manual for this sort of thing:
https://www.postgresql.org/docs/current/view-pg-prepared-statements.html
The pg_prepared_statements system view lists every prepared statement in the current session, so you can query it (e.g. SELECT name FROM pg_prepared_statements) before calling pg_prepare, and you can remove one with DEALLOCATE subdomains.
Also: if pg_close isn't dropping prepared statements, then it isn't really closing the connection. You are using pg_pconnect, which opens a persistent connection whose session state (including prepared statements) survives across requests; some connection pooling may be involved as well.

Batching large result sets using Rx

I've got an interesting question for Rx experts. I have a relational table keeping information about events. An event consists of an id, a type, and the time it happened. In my code, I need to fetch all the events within a certain, potentially wide, time range.
SELECT * FROM events WHERE event.time > :before AND event.time < :after ORDER BY time LIMIT :batch_size
To improve reliability and deal with large result sets, I query the records in batches of size :batch_size. Now I want to write a function that, given :before and :after, returns an Observable representing the result set.
Observable<Event> getEvents(long before, long after);
Internally, the function should query the database in batches. The distribution of events along the time scale is unknown. So the natural way to address batching is this:
fetch the first N records
if the result is not empty, use the last record's time as the new 'before' parameter and fetch the next N records; otherwise terminate
... and so on (the idea should be clear)
My question is:
Is there a way to express this function in terms of higher-level Observable primitives (filter/map/flatMap/scan/range etc), without using the subscribers explicitly?
So far I've failed to do this, and have come up with the following straightforward code instead:
private void observeGetRecords(long before, long after, Subscriber<? super Event> subscriber) {
    long start = before;
    while (start < after) {
        final List<Event> records;
        try {
            records = getRecordsByRange(start, after);
        } catch (Exception e) {
            subscriber.onError(e);
            return;
        }
        if (records.isEmpty()) break;
        records.forEach(subscriber::onNext);
        start = Iterables.getLast(records).getTime();
    }
    subscriber.onCompleted();
}

public Observable<Event> getRecords(final long before, final long after) {
    return Observable.create(subscriber -> observeGetRecords(before, after, subscriber));
}
Here, getRecordsByRange implements the SELECT query using DBI and returns a List. This code works fine, but it lacks the elegance of high-level Rx constructs.
NB: I know that I can return an Iterator as the result of a SELECT query in DBI. However, I don't want to do that and prefer to run multiple queries instead. This computation does not have to be atomic, so the issues of transaction isolation are not relevant.
Although I don't fully understand why you want such time-reuse, here is how I'd do it:
// Each value pushed into `start` triggers the fetch of the next window.
BehaviorSubject<Long> start = BehaviorSubject.create(0L);
start
    .subscribeOn(Schedulers.trampoline())
    .flatMap(tstart ->
        getEvents(tstart, tstart + twindow)
            // Share the page: the last event advances `start`,
            // while all events pass through via mergeWith.
            .publish(o ->
                o.takeLast(1)
                    .doOnNext(r -> start.onNext(r.time))
                    .ignoreElements()
                    .mergeWith(o)
            )
    )
    .subscribe(...)
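Whichever Rx formulation you pick, the underlying paging step is easy to get subtly wrong, so it is worth exercising in isolation. A plain-Java sketch of the loop from the question, with event times modeled as bare longs; the page method is an assumed stand-in for getRecordsByRange:

```java
import java.util.ArrayList;
import java.util.List;

public class PagedFetch {
    // Stand-in for getRecordsByRange: at most `batch` event times in
    // the open interval (start, after), in ascending order.
    static List<Long> page(List<Long> all, long start, long after, int batch) {
        List<Long> out = new ArrayList<>();
        for (long t : all) {
            if (t > start && t < after) out.add(t);
            if (out.size() == batch) break;
        }
        return out;
    }

    // The batching loop from the question: fetch a page, advance
    // `start` to the last record's time, repeat until a page is empty.
    static List<Long> fetchAll(List<Long> all, long start, long after, int batch) {
        List<Long> result = new ArrayList<>();
        while (start < after) {
            List<Long> records = page(all, start, after, batch);
            if (records.isEmpty()) break;
            result.addAll(records);
            start = records.get(records.size() - 1);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Long> events = List.of(1L, 2L, 3L, 5L, 8L, 13L);
        System.out.println(fetchAll(events, 0, 10, 2)); // [1, 2, 3, 5, 8]
    }
}
```

One caveat this makes visible: because each page starts strictly after the previous page's last time, events that share that exact timestamp are skipped; the original code has the same edge case.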

Read timed out after reading 0 bytes, waited for 30.000000 seconds in mongodb

When I am working with more than 5,000,000 records in MongoDB, a find() query fails with the error "Read timed out after reading 0 bytes, waited for 30.000000 seconds". Can anyone please help me?
In PHP you can call timeout(-1) on your cursor.
PHP Example:
$conn = new MongoClient("mongodb://primary:27017,secondary:27017", array("replicaSet" => "myReplSetName"));
$db = $conn->selectDB(DBname);
$collection = $db->selectCollection(collection);
$cursor = $collection->find();
$cursor->timeout(-1);
Node.js Example:
// Database connect options
var options = { replset: { socketOptions: { connectTimeoutMS : conf.dbTimeout }}};
// Establish Database connection
var conn = mongoose.connect(conf.dbURL, options);
In PHP you could also add the socketTimeoutMS parameter to the other connection-string options; here it is set to 90 seconds (90000 milliseconds).
$conn = new MongoClient("mongodb://primary:27017,secondary:27017", array("socketTimeoutMS" => "90000"));
Take a look at the mongodb log file and find your query in it -- how long does it take to execute? If it does take a long time, have you added indexes? Are they being used? Cut/paste the query from mongodb log file and try it from mongo shell -- and add ".explain()" at the end. It will tell you the execution plan that MongoDB is performing -- and perhaps you can attack your problem from that side. If your queries really do take longer than 30 seconds, you most likely need to address it anyway -- regardless of the driver timeout issues.
Create indexes on the queried fields with ensureIndex, then try raising the driver's default cursor timeout:
MongoCursor::$timeout = 600000;
I hit this problem when removing 1-2 million log records using the PHP driver.
Basically I had to add a timeout (I am using indexes; the db is just a bit huge):
$dtObject = new DateTime();
$dtObject->modify("- " . $interval);
$date = new MongoDate($dtObject->format("U"));
$filter = array('dtCreated' => array('$lt' => $date));
$num = $this->collection->count($filter);
if ($num > 0) {
    // remove() takes the timeout as an option; it does not return a cursor
    $this->collection->remove($filter, array('timeout' => -1));
}
return $num;
This worked for me

Firing Maximo workflow event from code

In our Maximo workflow we have a few schemas in which a work order reaches a Condition node with a check on a start date. If the current date is earlier than the start date, the work order goes to a Wait node with the "maximo.workorder.update" condition. So when the scheduled date for the WO comes, people need to go to WO Tracking and save the WO manually; only then does it continue on its way through the workflow. Otherwise the WO will sit on that Wait node till the end of time.
What I want to do is trigger this update event from a cron task every day, so that when the right date comes the WO wakes up by itself.
I inspected the source code behind the Save button in the WO Tracking application and found that, no matter what, there is an MboRemoteSet.save() method call. I assumed that you need to make some change and then call save() on the right MboSet. I also know that the DB table EVENTRESPONSE keeps track of WOs sitting on Wait nodes in the workflow.
My crontask class contains this code:
MXServer mxServer = MXServer.getMXServer();
UserInfo userInfo = mxServer.getUserInfo("maxadmin");
woSet = mxServer.getMboSet("WORKORDER", userInfo);
...
String query = "select sourceid as WORKORDERID from EVENTRESPONSE"
    + " where eventname = 'maximo.workorder.update'"
    + " and sourcetable = 'WORKORDER'";
SqlFormat sqf = new SqlFormat("workorderid IN (" + query + ")");
woSet.setWhere(sqf.format());
MboRemote wo;
Date currentDate = new Date();
for (int i = 0; (wo = woSet.getMbo(i)) != null; i++) {
    System.err.println(wo.getString("description"));
    wo.setValue("CHANGEDATE", currentDate);
}
woSet.save();
workorder.changedate is successfully updated, but the "maximo.workorder.update" event doesn't fire and the WO stays on the Wait node.
So, how should I fire maximo.workorder.update?
This response comes a year late, I understand, but it may help others.
It is possible to use an Escalation to identify all work orders whose time has come, and to use an action on the escalation to update something on the work order. This will result in Maximo saving the change, thereby triggering the Wait node of the workflow, all without any code, just configuration.
I have done something similar in the past; usually I end up flipping a YORN field that I created for this purpose.