I have a cronjob that kicks off every night; it builds up a temporary table, drops the current table on Redshift, and swaps the temporary table in for the old one. More than half of the time, this particular job gets stuck when dropping the existing table, behaving as if some pending transaction is preventing the drop from going through.
This is just one of dozens of jobs that use the exact same script to run overnight, none of which have ever had this issue; however, there are a few minor differences:
This particular job runs on a different box from all of the other production jobs, as it is currently in a testing state.
The S3 key used on this box is different from the one used on the other box.
In addition to the fact that I have never seen this on any other job, this issue has been extremely difficult to troubleshoot for the following reasons:
I have not been able to replicate this issue by running the script manually on the same box it is currently being run on; the script executes as expected, with the table drop occurring in mere seconds. The only difference I can think of here is that I'm executing the script as ubuntu, whereas the cronjob is executed as root.
I have not had any success identifying or terminating the sessions that are causing the drop to stall. I've looked high and low on Stack Overflow (this is the most applicable question with answers: redshift drop or truncate table very very slow), the Redshift docs, and elsewhere, but nothing I've found has been the answer. When I see that the job is stalled, I check the following tables on Redshift (the queries I run for these checks are sketched after this list) and usually find things in the following state:
The temporary table has been created, but the old version of the destination table is still there.
The stv_locks table shows that there are three processes running, with lock_status values of "Holding write lock," "Holding delete lock," and "Holding insert lock" respectively. The process ID associated with these is NOT the ID related to the current job.
The stv_tr_conflict table shows nothing.
The stv_recents table shows the drop with a status of Running.
The query that's supposedly creating the lock described above shows up in the svl_qlog as finished, so that seems to contradict the stv_locks table.
Using pg_terminate_backend to stop the associated process does not actually remove the session when querying stv_sessions, but DOES free up something that allows the job to finish.
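For reference, here is roughly what I run when the job is stalled (the process ID below is just a placeholder for whichever PID shows up as holding the locks):

-- See which sessions are holding locks and on which tables.
SELECT * FROM stv_locks;

-- Check for transaction conflicts and for statements still marked as running.
SELECT * FROM stv_tr_conflict;
SELECT * FROM stv_recents WHERE status = 'Running';

-- Cross-check whether the supposedly blocking query has already finished.
SELECT query, pid, starttime, endtime FROM svl_qlog ORDER BY starttime DESC LIMIT 20;

-- Kill the session that appears to hold the lock (12345 is a placeholder PID).
SELECT pg_terminate_backend(12345);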
Any help in figuring out what exactly is going on here would be greatly appreciated!
I faced the same problem; I just rebooted Redshift and then it worked normally again.
I have a remote program which keeps inserting a new row every second into a table in a PostgreSQL DB.
Sometimes the program stops inserting new rows due to a Wi-Fi problem. Is there a way I can get a notification when no new row has been added in the last 10 seconds?
Currently I run a cron job every second which keeps checking the most recent id in the table. If the most recent id does not change after 10 seconds, I create a notification.
Actually, I think the cron job is your best bet.
There is no AFTER NOTHING HAPPENS in the CREATE TRIGGER syntax ;)
The other options you have are to move the job into the database using pg_cron or a background worker. But I really think either of those two options (especially the second one) is complicating things for no gain.
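For what it's worth, the check itself can be a one-liner that your external cron job runs; something like the following (sensor_data and created_at are placeholder names for your table and timestamp column):

-- Returns true when no row has arrived in the last 10 seconds
-- (also true when the table is empty, thanks to the COALESCE).
SELECT COALESCE(max(created_at) < now() - interval '10 seconds', true) AS stale
FROM sensor_data;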
I have been working on a reporting database in DB2 for a month or so, and I have it set up to a pretty decent degree of what I want. However, I am noticing small inconsistencies that I have not been able to work out.
Less important, but still annoying:
1) Users claim it takes two login attempts to connect: the first always fails, the second succeeds. (Is there a recommendation for what to check for this?)
More importantly:
2) Whenever I want to refresh the data (which will be nightly), I have a script that drops and then recreates all of the tables. There are 66 tables, each ranging from tens of records to just under 100,000 records. The data is not massive, and it takes about 2 minutes to run all 66 tables.
The issue is that once the script says it has completed, there are usually at least 3-4 tables that did not load any data. So the table is dropped and then created, but it is empty. The log shows that the command completed successfully, and if I run the statements independently they populate just fine.
If it helps, 95% of the commands are just CAST functions.
While I am sure I am not doing it the recommended way, is there a reason why a number of my tables are not populating? Are the commands executing too fast? Should I delay the CREATE after the DROP?
(This is DB2 Express-C 11.1 on Windows 2012 R2, The source DB is remote)
Example of my SQL:
DROP TABLE TEST.TIMESHEET;
CREATE TABLE TEST.TIMESHEET AS (
    SELECT NAME00, CAST(TIMESHEET_ID AS INTEGER(34)) TIMESHEET_ID ....
    .. (for 5-50 more columns)
    FROM REMOTE_DB.TIMESHEET
) WITH DATA;
It is possible to configure DB2 to tolerate certain SQL errors in nested table expressions.
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.data.fluidquery.doc/topics/iiyfqetnint.html
When the federated server encounters an allowable error, the server allows the error and continues processing the remainder of the query rather than returning an error for the entire query. The result set that the federated server returns can be a partial or an empty result.
However, I assume that your REMOTE_DB.TIMESHEET is simply a nickname, and not a view with nested table expressions, and so any errors when pulling data from the source should be surfaced by DB2. Taking a look at the db2diag.log is likely the way to go - you might even be hitting a Db2 issue.
It might be useful to change your script to TRUNCATE and INSERT into your local tables and see if that helps avoid the issue.
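A minimal sketch of that approach for the TIMESHEET table from the question (the SELECT list is abbreviated here just like in the question, so treat this as an outline rather than a runnable script):

-- Note: in Db2 LUW, TRUNCATE must be the first statement in its unit of work.
TRUNCATE TABLE TEST.TIMESHEET IMMEDIATE;
INSERT INTO TEST.TIMESHEET
    SELECT NAME00, CAST(TIMESHEET_ID AS INTEGER(34)) TIMESHEET_ID ....
    .. (same column list as in your CREATE)
    FROM REMOTE_DB.TIMESHEET;
COMMIT;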
As you say, you are maybe not doing things the most efficient way. You could consider using cache tables to take a periodic copy of your remote data: https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.5.0/com.ibm.data.fluidquery.doc/topics/iiyvfed_tuning_cachetbls.html
As we know, the details of every job are stored in the RDBMS in the table Hsp_Job_Status. Unfortunately, this table gets truncated each time we restart services. As per a business requirement, we needed to keep a record of the BRs launched by users and their details, so we developed a workaround and created a trigger on the table such that it inserts each new row/update into a backup table. This was working fine up until now.
Recently, after a restart, the values of Job_id (i.e. the primary key) stopped appearing in order; the series restarted from an earlier number. It had been running in the 106XX range, but after the restart the numbering started from 100XX. As Hsp_Job_Status was truncated during the restart, there was no duplicate primary key issue in that table, but it did create duplicate values in the backup table, and this has caused issues with the backup table and the procedure that we use.
Usually the series continues even after the table is truncated, so maybe something went wrong during the restart. Can you please suggest what I should check and do to resolve this issue?
Thanks in advance.
Partial answer: the simple solution is to add an instance prefix to the Job_Id, and on service startup increment the active instance. The instance table can then include details from startup/shutdown events to help drive SLA metrics. Unfortunately, I don't know how you would go about implementing such a scheme, since it's been many years since I've spoken any SQL dialect.
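A rough sketch of what I mean, with hypothetical table and column names (the real DDL would depend on your RDBMS and on what your backup table already looks like):

-- Hypothetical instance table; add a new row (and bump the id) on every service restart.
CREATE TABLE job_status_instance (
    instance_id  INTEGER   NOT NULL PRIMARY KEY,
    started_at   TIMESTAMP NOT NULL
);

-- Backup table keyed on (instance_id, job_id), so a Job_id reused after a restart
-- no longer collides with rows captured before the restart.
CREATE TABLE hsp_job_status_backup (
    instance_id  INTEGER NOT NULL,
    job_id       INTEGER NOT NULL,
    -- ... remaining columns copied from Hsp_Job_Status ...
    PRIMARY KEY (instance_id, job_id)
);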
I know this question was asked here, but 1) it's relatively old and 2) it didn't help me much.
I am running into a relatively large number of deadlocks with a few operations on my database. The setup is as follows:
Tables:
Table A with foreign key into Table B.
Operations:
Insert into table A
Insert into table B
Update row in table B
Delete row in table B
Delete row in table A
Problem:
These operations can happen in essentially any order, and because I have multiple worker roles, these operations must be idempotent; however, each worker role will be working with a different primary key from table A. I am still trying to wrap my head around the concept of locks on tables. From what I understand, any delete on A will first lock table B, delete the relevant rows there, and then delete the row from A. I currently assume that is an atomic operation and there is no time to acquire additional locks between locking table B and locking table A, because I can't imagine a way to get around that.
I am currently able to catch an exception in Microsoft Visual Studio of the following format:
Transaction (Process ID xxx) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
This exception seems like it can happen on any of the above operations.
My question is: how do I know which locks/transactions are the ones causing the deadlock? Does anyone know any queries that would be useful AFTER we get the exception?
sys.event_log is the answer here.
It lives in your server's master database and should contain an entry with all of the deadlock graphs your database has hit in the last month.
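If I recall the columns correctly, something along these lines, run against master, pulls the recent deadlock entries (the deadlock graph XML is in additional_data; treat the column names as an approximation and check the sys.event_log documentation):

SELECT start_time, database_name, additional_data
FROM sys.event_log
WHERE event_type = 'deadlock'
ORDER BY start_time DESC;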
Armed with the deadlock graph, there are many tutorials on SQL Server deadlock graph debugging.
Currently, profiling tools for SQL Azure are practically non-existent.
The locking problem shouldn't differ much between the standard SQL Server and SQL Azure worlds, so I would suggest trying to repro the problem in the 'old' world using standard techniques, such as good old Profiler (see this quite useful article and this).
If that approach doesn't prove to be fruitful, a dirty solution could be to work on catch/retry logic.
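If you do go down the catch/retry road, the retry can also live on the T-SQL side; a rough sketch (the body of the transaction is a placeholder for your actual statements):

DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... your inserts/updates/deletes against tables A and B go here ...
        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries -= 1;  -- deadlock victim (error 1205): back off and retry
        ELSE
            THROW;              -- anything else, or out of retries: rethrow
    END CATCH
END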
I ran into similar issues recently.
Try using WITH (UPDLOCK) with your updates.
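For example, taking the update lock up front when you first read the row, inside the same transaction that later modifies it (TableB, TableBId and SomeColumn are placeholder names):

DECLARE @id INT = 42;  -- placeholder key value
BEGIN TRANSACTION;
    -- UPDLOCK makes the read take an update lock instead of a shared lock,
    -- so two workers cannot both read the row and then deadlock on the lock upgrade.
    SELECT SomeColumn
    FROM TableB WITH (UPDLOCK)
    WHERE TableBId = @id;

    UPDATE TableB SET SomeColumn = SomeColumn + 1 WHERE TableBId = @id;
COMMIT TRANSACTION;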
To try and find the root cause:
Start by just running a single worker role.
Then check:
Are you locking at the right level: table lock, page lock, or row lock?
Are you releasing the locks?
Is your system designed in such a way that all edits to the same row are done by the same machine?
There is a blog post on finding blocking queries here: http://blogs.msdn.com/b/sqlazure/archive/2010/08/13/10049896.aspx
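If that link ever goes stale, the underlying idea is to query the DMVs for requests that are being blocked; something like this (standard SQL Server DMVs, though availability on SQL Azure may vary):

SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;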
I created a set of partitioned tables in Postgres and started inserting a lot of rows via the master table. When the load process blew up on me, I realized I should have declared the id column BIGSERIAL (BIGINT with a sequence, behind the scenes), but I had inadvertently set it as SERIAL (INTEGER). Now that I have a couple of billion rows loaded, I am trying to ALTER the column to BIGINT. The process seems to be working, but it is taking a long time, so in reality I don't know whether it is working or hung. I'd rather not restart the entire load process again.
Any suggestions?
When you update a row to alter it in PostgreSQL, that writes out a new copy of the row and then does some cleanup later to remove the original. This means that trying to fix the problem by doing updates can take longer than just loading all the data in from scratch again: it's more disk I/O than loading a new copy, plus some extra processing time. The only situation where you'd want to do an update instead of a reload is when the original load was very inefficient, for example if a slow client program is inserting the data and it's the bottleneck of the process.
To figure out if the process is still working, see if it's using CPU when you run top (UNIX-ish systems) or the Task Manager (Windows). On Linux, "top -c" will even show you what the PostgreSQL client processes are doing. You probably just expected it to take less time than the original load, which it won't, and it's still running rather than hung up.
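You can also check from inside the database. On reasonably recent PostgreSQL versions, something like this shows whether the ALTER is still actively running or sitting in a lock wait (the filter on the query text is just a convenience):

SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE query ILIKE '%ALTER TABLE%';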
Restart it (clarifying edit: restart the entire load process again).
Altering a column value requires a new row version, and all indexes pointing to the old version must be updated to point to the new version.
Additionally, see how much of the advice on populating databases you can follow.
Correction from @archnid:
Altering the type of the column will trigger a table rewrite, so the row versioning isn't a big problem, but it will still take lots of disk space temporarily. You can usually monitor progress by looking at which files in the database directory are being appended to...
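For example, to know which file to watch (my_big_table is a placeholder name):

-- Path of the table's current heap file, relative to the data directory;
-- during the rewrite, new files appear in the same directory under a different filenode number.
SELECT pg_relation_filepath('my_big_table');

-- Total on-disk size of the table, including indexes and TOAST.
SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));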