Cannot update a large number of records in OrientDB

While updating 20,000 records through the OrientDB Java API, I get the following warning after about 12,000 records have been updated, and a new process then starts updating the records from the beginning even though the previous update process is still running:
warning: connection re-acquired transparently after xxx ms and y retries: no errors will be thrown at application level
I tried inserting the 20,000 records after increasing the timeout period, but it doesn't work.
Could you please help me stop this new process from starting?

Related

How to avoid long delay before finally getting "40001 could not serialize access due to concurrent update"

We have a Postgres 12 system running one master and two async hot-standby replica servers, and we use SERIALIZABLE transactions. All the database servers have very fast SSD storage for Postgres and 64 GB of RAM. Clients connect directly to the master server if they cannot accept delayed data for a transaction. Read-only clients that accept data up to 5 seconds old use the replica servers for querying data. Read-only clients use REPEATABLE READ transactions.
I'm aware that because we use SERIALIZABLE transactions Postgres might give us false positive matches and force us to repeat transactions. This is fine and expected.
However, the problem I'm seeing is that randomly a single line INSERT or UPDATE query stalls for a very long time. As an example, one error case was as follows (speaking directly to master to allow modifying table data):
A simple single row insert
insert into restservices (id, parent_id, ...) values ('...', '...', ...);
stalled for 74.62 seconds before finally emitting error
ERROR 40001 could not serialize access due to concurrent update
with error context
SQL statement "SELECT 1 FROM ONLY "public"."restservices" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x"
We log all queries exceeding 40 ms so I know this kind of stall is rare. Like maybe a couple of queries a day. We average around 200-400 transactions per second during normal load with 5-40 queries per transaction.
After finally getting the above error, the client code automatically released two savepoints, rolled back the transaction and disconnected from database (this cleanup took 2 ms total). It then reconnected to database 2 ms later and replayed the whole transaction from the start and finished in 66 ms including the time to connect to the database. So I think this is not about performance of the client or the master server as a whole. The expected transaction time is between 5-90 ms depending on transaction.
Is there some PostgreSQL connection or master configuration setting that I can use to make PostgreSQL return the error 40001 faster, even if it caused more transactions to be rolled back? Does anybody know if setting
set local statement_timeout='250'
within the transaction has dangerous side-effects? According to the documentation https://www.postgresql.org/docs/12/runtime-config-client.html, "Setting statement_timeout in postgresql.conf is not recommended because it would affect all sessions", but I could set the timeout only for the transactions issued by this client, which is able to automatically retry a transaction very fast.
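For reference, this is roughly how I would scope the timeout to a single transaction from the client (a minimal sketch assuming psycopg2; the DSN and column values are placeholders and the extra columns of restservices are omitted):
import psycopg2

conn = psycopg2.connect("dbname=mydb user=app")  # placeholder DSN
cur = conn.cursor()                              # psycopg2 begins a transaction implicitly

cur.execute("SET LOCAL statement_timeout = '250'")  # applies only until COMMIT/ROLLBACK
cur.execute(
    "INSERT INTO restservices (id, parent_id) VALUES (%s, %s)",  # other columns omitted
    ("new-row-id", "existing-parent-id"),                        # placeholder values
)
conn.commit()  # statement_timeout reverts to its session/default value here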
Is there anything else to try?
It looks like some other session had the parent row of the one you were trying to insert locked. PostgreSQL doesn't know what to do about that until the lock is released, so it blocks. If it failed rather than blocking, and upon failure you retried the exact same thing, the same parent row would (most likely) still be locked, it would just fail again, and you would busy-wait. Busy-waiting is not good, so blocking rather than failing is generally a good thing here. It blocks and then unblocks only to fail, but once it does fail a retry should succeed.
An obvious exception to blocking-better-than-failing is when, on retry, you can pick a different parent row, if that makes sense in your context. In that case, maybe the best thing to do is to explicitly lock the parent row with NOWAIT before attempting the insert. That way you can perhaps deal with failures in a more nuanced way.
If you must retry with the same parent_id, then I think the only real solution is to figure out who is holding the parent row lock for so long, and fix that. I don't think that setting statement_timeout would be hazardous, but it also wouldn't solve your problem, as you would probably just keep retrying until the lock on the offending row is released. (Setting it on the other session, the one holding the lock, might be helpful, depending on what that session is doing while the lock is held.)
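For illustration, a minimal sketch of that NOWAIT idea, assuming psycopg2 (the DSN, ids and error handling are placeholders; per the error context, the parent row is looked up in restservices itself):
import psycopg2

conn = psycopg2.connect("dbname=mydb user=app")          # placeholder DSN
parent_id, new_id = "existing-parent-id", "new-row-id"   # placeholder values

cur = conn.cursor()
try:
    # Take the foreign-key lock on the parent row explicitly, but fail fast
    # (SQLSTATE 55P03) instead of blocking if another session holds the lock.
    cur.execute(
        "SELECT 1 FROM restservices WHERE id = %s FOR KEY SHARE NOWAIT",
        (parent_id,),
    )
    cur.execute(
        "INSERT INTO restservices (id, parent_id) VALUES (%s, %s)",  # other columns omitted
        (new_id, parent_id),
    )
    conn.commit()
except psycopg2.Error as e:
    conn.rollback()
    if e.pgcode != '55P03':  # 55P03 = lock_not_available
        raise
    # Another session holds the parent row lock: pick a different parent,
    # back off and retry, or surface the conflict to the caller.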

How to determine number of write transactions per second in Postgres

Is there a way to measure how many write transactions are happening per second in Postgres? As I understand it, pg_stat_database.xact_commit shows the total number of committed transactions, but I want to exclude read-only queries and only see the number of commits that actually modified data.
Run
SELECT txid_current();
to get the current transaction number.
If you do that at two points in time and subtract the numbers, you know how many transactions (committed or rolled back) have occurred in the meantime.
Read-only transactions do not consume a transaction ID.
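A quick way to apply this (a sketch assuming psycopg2; the DSN and the 60-second window are arbitrary). Note that txid_current() itself assigns a transaction ID, so the two sampling calls are included in the count:
import time
import psycopg2

conn = psycopg2.connect("dbname=mydb user=app")  # placeholder DSN
conn.autocommit = True                           # each sample is its own tiny transaction
cur = conn.cursor()

cur.execute("SELECT txid_current()")
start = cur.fetchone()[0]

time.sleep(60)                                   # measurement window

cur.execute("SELECT txid_current()")
end = cur.fetchone()[0]

# Transactions that consumed an XID (i.e. wrote something) during the window,
# give or take the two sampling transactions themselves.
print(f"~{end - start} write transactions in the last 60 seconds")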
This script can be used to count the number of transaction commits performed between starting the script and killing it: https://gist.github.com/dmos62/aa754a04ff8bf36d6565d74b2dad6513
Usage looks like this:
./count_txs.sh psql postgresql://x:y@z:1234/w
ctrl-c to stop counting
^C
55
This means that 55 transaction commits have been performed between starting the script and killing it.

HiveQL counter limit exceeded error

I am running a CREATE TABLE query in HiveQL and get the following error when it is run:
Status: Failed
Counters limit exceeded: Too many counters: 2001 max=2000
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Counters limit exceeded: Too many counters: 2001 max=2000
I have attempted to set the counter limit to a greater number, i.e.
set tez.counters.max=16000;
However, it still falls over with the same error.
My query incorporates 13 left joins, but the data sets are relatively small (thousands of rows). The query did work when there were roughly 10 joins, but since I added the additional joins it has started to fail.
Any suggestions on how I can configure this to work would be greatly appreciated!
You need to find the real initial error in the log of a failed container. The error you have shown here is not the initial error: 2001 containers (including their restart attempts) failed because of some other error (which you really need to fix), then the whole job was terminated and all other containers were killed because of the failed-counters limit. Go to the job tracker, find a failed (not killed) container and read its log. The real problem is not in the limit, and changing the failed-counters limit will not help.
Divide your query into multiple steps and then run them.
As you said, your query works with 10 joins, so first create a table that holds the data from the first 10 joins, and then create another table that joins this intermediate table with the three remaining tables.
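A hedged sketch of that staged approach (every table, column and host name below is a placeholder, and PyHive is assumed only so the example is runnable; the same two CREATE TABLE statements can just as well be run from the Hive CLI):
from pyhive import hive

conn = hive.connect(host="hive-host", port=10000)  # placeholder connection details
cur = conn.cursor()

# Step 1: materialise the first group of joins into an intermediate table.
cur.execute("""
    CREATE TABLE tmp_first_joins AS
    SELECT b.id, t1.col_a, t2.col_b
    FROM base b
    LEFT JOIN t1 ON b.id = t1.id
    LEFT JOIN t2 ON b.id = t2.id
""")

# Step 2: join the remaining tables against the (smaller) intermediate table.
cur.execute("""
    CREATE TABLE final_result AS
    SELECT s.*, t3.col_c, t4.col_d
    FROM tmp_first_joins s
    LEFT JOIN t3 ON s.id = t3.id
    LEFT JOIN t4 ON s.id = t4.id
""")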
I faced the same issue when I was applying a UNION ALL statement over 100 tables, but when I started to run only 10 tables at a time it worked.
Hope This Helps!!!!

Handle locks in Redshift

I have a Python script that executes multiple SQL scripts (one after another) in Redshift. Some of the tables in these SQL scripts can be queried multiple times. For example, table t1 can be SELECTed in one script and dropped/recreated in another script. This whole process runs in one transaction. Now, sometimes I get a "deadlock detected" error and the whole transaction is rolled back. If there is a deadlock on a table, I would like to wait for the table to be released and then retry the SQL execution. For other types of errors, I would like to roll back the transaction. From the documentation, it looks like a table lock isn't released until the end of the transaction. I would like to achieve all-or-nothing data changes (which is accomplished by using a transaction) but also handle deadlocks. Any suggestion on how this can be accomplished?
I would execute all of the SQL you are referring to in one transaction with a retry loop. Below is the logic I use to handle concurrency issues and retry (pseudocode for brevity). I do not have the system wait indefinitely for the lock to be released. Instead I handle it in the application by retrying over time.
begin transaction
while not successful and count < 5
    try
        execute sql
        commit
        mark successful
    except
        if error code is '40P01' or '55P03'
            # Deadlock or lock not available
            sleep a random time (200 ms to 1 sec) * number of retries
        else if error code is '40001' or '25P02'
            # Serialization failure or "in failed sql transaction"
            rollback
            sleep a random time (200 ms to 1 sec) * number of retries
            begin transaction
        else if error message is 'There is no active transaction'
            sleep a random time (200 ms to 1 sec) * number of retries
            begin transaction
    increment count
The key components are catching every type of error, knowing which cases require a rollback, and having an exponential backoff for retries.
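If it helps, here is roughly how that loop could look in Python — a sketch only, assuming psycopg2 as the Redshift driver; the statement list, connection handling and back-off values are placeholders, and the rollback is done for every caught error because the connection is left in an aborted transaction after any failure:
import random
import time

import psycopg2

# Error conditions worth retrying: deadlock detected, lock not available,
# serialization failure, and "current transaction is aborted".
RETRYABLE_CODES = ('40P01', '55P03', '40001', '25P02')

def run_transaction_with_retry(conn, sql_statements, max_attempts=5):
    """Execute all statements in a single transaction, retrying on lock conflicts."""
    for attempt in range(max_attempts):
        cur = conn.cursor()              # psycopg2 begins a transaction implicitly
        try:
            for sql in sql_statements:
                cur.execute(sql)
            conn.commit()                # all-or-nothing: commit only if every statement succeeded
            return True
        except psycopg2.Error as e:
            conn.rollback()              # the transaction is aborted after any error
            retryable = (e.pgcode in RETRYABLE_CODES
                         or 'There is no active transaction' in str(e))
            if not retryable:
                raise
            # back off with jitter, growing with the number of retries so far
            time.sleep(random.uniform(0.2, 1.0) * (attempt + 1))
    return False
Calling run_transaction_with_retry(conn, sql_statements) then gives you the all-or-nothing behaviour plus bounded retries when a lock conflict is hit.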

Stateful quartz job starts running on node 2 before it finishes execution on node 1

We have a stateful Quartz job which is responsible for sending an update to an external system using a web service for each record in a database table (only one message has to be sent to the external system per record). If the update to the external system is successful, the record is deleted from the database.
The trigger is configured to fire every 6 seconds, and the job normally finishes execution in 1 second. Our application runs in a clustered environment, and we have the following in our quartz.properties file:
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.misfireThreshold = 60000
Since this is a stateful job, we expected it not to execute concurrently. It was working as expected almost all the time, but once in a while we see the job running concurrently, which causes some issues for us.
The timing of the executions is as below:
Node 1:
Trigger 1 start time - 14:54:12 (picks record with id 10)
Trigger 1 end time - 14:54:33 (finishes after trigger 2 and tries to delete the record which is already deleted by trigger 2)
Node 2:
Trigger 2 start time - 14:54:22 (this also picks the record with id 10)
Trigger 2 end time - 14:54:23 (finishes before trigger 1 and deletes the record in the database)
We haven't set the org.quartz.jobStore.clusterCheckinInterval property, so it should be 15000 ms as per the Quartz 1.x documentation (we use Quartz 1.6.0).
We checked the system time on the nodes and they are in sync.
Could someone please help me in understanding the reason for this issue?
And how is the job trigger frequency related to org.quartz.jobStore.clusterCheckinInterval ?
Thanks.
Have you tried using @DisallowConcurrentExecution?
http://quartz-scheduler.org/documentation/quartz-2.x/examples/Example4