I am trying to figure out the reasons/errors behind aborted queries; the aborted queries themselves can be found in the stl_query table. I tried stl_error for this, but found that the error context there is tied to a process ID, not to a specific query ID. Is there any way to find the reason from one of the system tables in Redshift?
Here is a nice list of possible reasons, plus where (in which table) to find the relevant info:
https://aws.amazon.com/premiumsupport/knowledge-center/redshift-query-abort/
But in my case it was not enough: it turned out that the connection was sometimes simply closed without the changes being properly committed, and there is another set of places to check for such cases:
https://aws.amazon.com/premiumsupport/knowledge-center/query-completed-with-no-updates/
The latter helped in my case, as it led me to the reason for the mysterious aborted queries.
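As a starting point, here is a minimal sketch of the kind of query that can tie the pieces together (stl_query, stl_wlm_rule_action, and stl_error are standard Redshift system tables; the join on pid and timestamps is an assumption of mine and can match unrelated errors, so treat the result as a lead rather than proof):

SELECT q.query,
       q.pid,
       q.starttime,
       q.endtime,
       TRIM(q.querytxt) AS querytxt,
       w.rule           AS wlm_rule,     -- set if a WLM query monitoring rule aborted it
       w.action         AS wlm_action,
       TRIM(e.error)    AS error_text    -- any error logged for the same process while it ran
FROM stl_query q
LEFT JOIN stl_wlm_rule_action w
       ON w.query = q.query
LEFT JOIN stl_error e
       ON e.pid = q.pid
      AND e.recordtime BETWEEN q.starttime AND q.endtime
WHERE q.aborted = 1
ORDER BY q.starttime DESC;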
Happy hunting.
After restoring the DB server from a snapshot, something strange started happening with our database. Basically, every time-consuming query seems to be duplicated, at least according to pg_stat_activity.
These lines are almost equal except for their PIDs and client addresses.
Usually I'd assume that's just a mistake by the dev team (multiple identical queries issued at the same time in code, cron misconfiguration, etc.), but one of those time-consuming selects comes from Power BI, which I believe to be quite reliable in terms of loading data.
Has anybody ever stumbled upon this problem?
It turned out that this is how pg_stat_activity shows parallel workers processing a single query. You can confirm that this is the case by checking the backend_type of these records.
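For example (a minimal sketch; backend_type is available in pg_stat_activity from PostgreSQL 10 onwards):

SELECT pid, backend_type, state, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY query, pid;
-- the extra rows show backend_type = 'parallel worker',
-- while the original session is a 'client backend'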
Hi there, I'm looking for advice from someone who is good at IBM Db2 performance.
I have a situation in which many batch tasks are massively inserting rows in the same db2 table, at the same time.
This situation looks potentially bad. I don't think Db2 is able to resolve the many requests quickly enough, causing the concurrent tasks to take longer to finish and even causing some of them to abend with a -904 or -911 SQLCODE.
What do you guys think? Should situations like these be avoided? Are there any techniques that could improve the performance of the batch tasks, keeping them from abending or running too slowly?
Thanks.
Inserting should not be a big problem; ETL workloads (e.g. with DataStage) do this all the time.
I suggest running:
ALTER TABLE <tabname> APPEND ON
This avoids the free-space search; details can be found here.
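For example (a sketch with made-up schema and table names, assuming Db2 LUW; SYSCAT.TABLES.APPEND_MODE shows whether append mode is active):

ALTER TABLE myschema.batch_target APPEND ON;

-- verify the setting (APPEND_MODE = 'Y' once append is on)
SELECT tabschema, tabname, append_mode
FROM syscat.tables
WHERE tabschema = 'MYSCHEMA' AND tabname = 'BATCH_TARGET';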
For the errors reported, the information provided is not sufficient to determine the cause.
There are several things to consider. What indexes are on the table is one.
Append mode works well to relieve last-page contention, but you could also see contention on the statement itself for the variation lock. Then you could have issues with the transaction logs if they are not fast enough.
What are the table, indexes, and statement? Maybe then we can come up with how to do it. What hardware are you using, and what I/O subsystem is being used for the transaction logs and database tablespaces?
For years, at least 8, our company has been running a daily process that has never failed. Nothing on the client side has changed, but we recently upgraded to V7R1 on the System i. The very first run of the old process now fails with a Cursor not open message reported back to the client, and that's all that's in the job log as well. I have seen Error -501, SQLSTATE 24501 on occasion.
I got both IBM and DataDirect (provider of the ODBC driver) involved. IBM stated it was a client issue; DataDirect dug through logs and found that the error occurs when requesting the next block of records from a cursor. They saw no indication that the System i had alerted the client that the cursor was closed.
In troubleshooting, I noticed that the ODBC driver has an option for WITH HOLD which by default is checked. If I uncheck it, this particular issue goes away, but it introduces another issue (infinite loops) which is even more serious.
There's no single common theme that causes these errors; the only thing I see that triggers it is doing some processing while looping through a fairly large result set. It doesn't seem to be related to timing, or to a particular table or table type. The outside loops are sometimes over large tables with many data types, sometimes over tiny tables with nothing but CHAR(10) and CHAR(8) columns.
I don't really expect an answer on here since this is a very esoteric situation, but there's always some hope.
There were other issues that IBM has already addressed by having us apply PTFs to take us to 36 for the database level. I am by no means a System i expert, just a Java programmer who has to deal with this issue that has nothing to do with Java at all.
Thanks
This is for anyone else out there who may run across a similar issue. It turns out it was a bug in the QRWTSRVR code that caused the issue. The driver opened up several connections within a single job and used the same name for cursors in at least 2 of those connections. Once one of those cursors was closed QRWTSRVR would mistakenly attempt to use the closed cursor and return the error. Here is the description from the PTF cover letter:
DESCRIPTION OF PROBLEM FIXED FOR APAR SE62670 :
A QRWTSRVR job with 2 cursors named C01 takes a MSGSQL0501
error when trying to fetch from the one that is open. The DB2
code is trying to use the cursor which is pseudo closed.
The PTF SI57756 fixed the issue. I do not know whether this PTF will be generally released, but if you find this post because of a similar issue, hopefully it will help you get it corrected.
This is how I fix DB problems on the iSeries.
Start journaling the tables on the iSeries, or change the connection to the iSeries to commit = *NONE.
For journaling, I recommend using two journals, each with its own receiver:
One journal is for tables with relatively few changes, like a table of US states or a table that gets fewer than 10 updates a month. This is so you can determine when the data was changed for an audit. Keep all the receivers for this journal online forever.
The other journal is for tables with many changes throughout the day. Delete the receivers for this journal when you can no longer afford the space they take up.
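A minimal sketch of what that two-journal setup might look like, run from SQL via the QSYS2.QCMDEXC procedure (the library, journal, receiver, and file names here are made up; running the same CL commands from a 5250 session works just as well):

-- "audit" journal: few changes, keep receivers online forever
CALL QSYS2.QCMDEXC('CRTJRNRCV JRNRCV(MYLIB/AUDRCV0001)');
CALL QSYS2.QCMDEXC('CRTJRN JRN(MYLIB/AUDJRN) JRNRCV(MYLIB/AUDRCV0001)');
CALL QSYS2.QCMDEXC('STRJRNPF FILE(MYLIB/USSTATES) JRN(MYLIB/AUDJRN)');

-- "busy" journal: many changes, receivers get deleted when space runs low
CALL QSYS2.QCMDEXC('CRTJRNRCV JRNRCV(MYLIB/BSYRCV0001)');
CALL QSYS2.QCMDEXC('CRTJRN JRN(MYLIB/BSYJRN) JRNRCV(MYLIB/BSYRCV0001)');
CALL QSYS2.QCMDEXC('STRJRNPF FILE(MYLIB/ORDERS) JRN(MYLIB/BSYJRN)');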
If journaling or commit = *NONE doesn't fix it, you'll need to look at the sysixadv table; long-running queries can wreck an ODBC connection.
SELECT SYS_TNAME, TBMEMBER, INDEX_TYPE, LASTADV, TIMESADV, ESTTIME,
REASON, "PAGESIZE", QUERYCOST, QUERYEST, TABLE_SIZE, NLSSNAME,
NLSSDBNAME, MTIUSED, MTICREATED, LASTMTIUSE, QRYMICRO, EVIVALS,
FIRSTADV, SYS_DNAME, MTISTATS, LASTMTISTA, DEPCNT FROM sysixadv
ORDER BY ESTTIME desc
Also try ordering by TIMESADV DESC.
Fix those queries; maybe create the advised index (see the sketch below).
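For example, turning one of the advised rows into an index might look like this (the library, index, and column names are purely hypothetical):

CREATE INDEX MYLIB.ORDERS_IX1
    ON MYLIB.ORDERS (CUSTOMER_ID, ORDER_DATE);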
Which ODBC driver are you using?
If you're using the IBM i Access ODBC driver, then this problem may be fixed by APAR SE61342. The driver didn't always handle the return code from the server indicating that the result set was closed. During the SQLCloseCursor function, the driver would send a close command to the server, which would return an error, since the server had already closed the cursor. Note, you don't have to be at SP11 to hit this condition; it just made it easier to hit, since I enabled pre-fetch in more cases in that fix pack. An easy test to see whether that is the problem is to disable pre-fetch for the DSN or pass PREFETCH=0 on the connection string.
If you're using the DB2 Connect driver, I can't really offer much help, sorry.
I am getting this error:
com.ibm.db2.jcc.a.SqlException: DB2 SQL Error: SQLCODE=-289, SQLSTATE=57011, SQLERRMC=XXX32KTMP, DRIVER=3.51.90
on a select statement that has a couple of dozen sub-selects.
SQL0289N usually means the current table space size is not enough for allocating new pages for new data.
I want to modify my select such that it does not use as much table space.
While modifying the select I presumably will get this error several more times until I am successful.
My questions are:
A) Does this error only affect my select?
B) Are other users of the database more likely to have a problem because I am running this select?
The context of those questions is that I want to know if I have to move my work to a different database to be reasonably sure that I am not impacting other users.
I am wary because the error description does not make clear whether the space that runs out is shared between all users or allocated only to my connection.
Note: I am NOT asking how to increase table space or what this error means. I am NOT asking for help modifying my select (hence, I did not show the select). Any answers to that effect would be off topic.
Without knowing exactly how the tablespace in question is defined and why your query needs it, it is hard to give you a definite answer.
In the best case the error affects any SQL statement, executed in any session, that requires the use of the same tablespace, especially if it is a system temporary tablespace.
In the worst case, e.g. if it is an SMS tablespace and it shares the file system with other tablespaces and log files, it might even bring the entire DB2 instance down.
Tuning your statement in a different database does not necessarily mean that it will resolve the problem in the original database.
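To see what kind of tablespace XXX32KTMP actually is and how full it is, a sketch along these lines can help (MON_GET_TABLESPACE is available on Db2 LUW 9.7 and later; -2 means all database members):

SELECT tbsp_name,
       tbsp_type,          -- SMS or DMS
       tbsp_content_type,  -- e.g. SYSTEMP for a system temporary tablespace
       tbsp_free_pages,
       tbsp_total_pages
FROM TABLE(MON_GET_TABLESPACE(NULL, -2))
WHERE tbsp_name = 'XXX32KTMP';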
Firstly, please excuse my relative inexperience with Hibernate; I've only really been using it in fairly standard cases, and certainly never in a scenario where I had to manage the primary keys (@Id) myself, which is where I believe my problem lies.
Outline: I'm bulk-loading Facebook profile information through FB's batch APIs and need to mirror this information in a local database. All of this is fine, but it runs into trouble when I attempt to do it in parallel.
Imagine a message queue processing batches of friend data in parallel and lots of the same shared Likes and References (between the friends), and that’s where my problem lies.
I run into repeated Hibernate ConstraintViolationExceptions, which are due to duplicate PK entries: one transaction tries to flush its session after determining an entity to be transient when in fact another transaction has already made the same determination and beaten the first to committing, resulting in the below:
Duplicate entry '121528734903' for key 'PRIMARY'
And the ConstraintViolationException being raised.
I've managed to just about overcome this by removing all cascading from the parent entity, performing atomic writes (one record per transaction), and essentially just catching any exceptions and ignoring them when they occur, since I'd know that another transaction had already done the job. But I'm not very happy with this solution and can't imagine it's the most efficient use of Hibernate.
I'd welcome any suggestions as to how I could improve the architecture…
Currently using : Hibernate 3.5.6 / Spring 3.1 / MySQL 5.1.30
Addendum: at the moment I'm using a Hibernate merge(), which initially checks for the existence of a row and will either merge (update) or insert depending on existence. The problem is that even with an isolation level of READ_UNCOMMITTED, sometimes the wrong determination is made, i.e. two transactions decide the same thing, and I've got an exception again.
Locking doesn't really help me either, optimistic or pessimistic, as the condition is only a problem in the initial insert case and there's no row to lock, making it very difficult to handle the concurrency...
I must be missing something, but I've done the reading. My worry is that, by not being able to leave Hibernate to manage the PKs, I'm kind of scuppered: it checks for existence too early in the session, and by the time the session is synchronised its state is invalid.
Anyone with any suggestions for me? Thanks.
Take this with a large grain of salt as I know very little about Hibernate, but it sounds like what you need to do is have the default MySQL INSERT statement replaced with an INSERT IGNORE statement. You might want to take a look at @SQLInsert in Hibernate; I believe that's where you would specify the exact insert statement that should be used. I'm sorry I can't help with the syntax, but I think you can probably find what you need by looking at the Hibernate documentation for @SQLInsert and, if necessary, the MySQL documentation for INSERT IGNORE.
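For illustration, the MySQL statement itself would be something along these lines (the table and column names are made up; the id comes from the error message above; with IGNORE, a duplicate-key row is silently skipped instead of raising an error):

INSERT IGNORE INTO likes (id, name)
VALUES (121528734903, 'Some shared Like');
-- if a row with id 121528734903 already exists, the statement
-- affects 0 rows instead of failing with a duplicate-key error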