I'm using EclipseLink and Spring transaction management. I want the program to insert two sets of master/detail records atomically, so that either both are inserted or neither.
If my explicit validation fails for either structure, the code throws an exception and rollbackFor takes care of it: an error thrown while processing the second record rolls back the first.
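For context, the transactional setup looks roughly like this (a minimal sketch; the class, helper, and exception names are illustrative, not my actual code):
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RecordLoader {

    @PersistenceContext
    private EntityManager em;

    // rollbackFor makes Spring roll back on this checked exception too
    @Transactional(rollbackFor = MyValidationException.class)
    public void insertBoth(MasterRecord first, MasterRecord second)
            throws MyValidationException {
        validateAndPersist(first);   // hypothetical helper: validate, then em.persist()
        validateAndPersist(second);  // an exception thrown here rolls back both
    }
}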
Where things go pear-shaped is when a data error in the second record results in an SQLException during the flush/commit process. In this case only the second record seems to be rolled back; the first is left in place.
I've tried various tweaks. The SQLException is normally fired during a find call while validating the record, when that call generates a flush(). I've tried changing the EntityManager settings to turn that automatic flush off, and although a different exception is then thrown at a different point in the program, the rollback still behaves the same way.
I've tried requesting a new transaction for the update that throws the error, but the system says it can't give me a new transaction at that point.
I've tried doing a flush just before the end of the transaction in hopes of catching a DatabaseException and converting it to the exception named in rollbackFor.
Now, I can get more picky about validation, reducing the likelihood of data errors at the SQL level, but I'd prefer to guard against missing something. I can fix the unit test that way, but there are sure to be other ways this can happen.
Oh, setting flush mode to "COMMIT" does make a difference: in that case neither record gets rolled back, even if I catch the PersistenceException and throw my rollbackFor exception instead.
Well, I stumbled across a workaround while just changing things at random: I flipped persistence.xml from JTA to RESOURCE_LOCAL. Now it seems fine.
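For anyone trying the same workaround, the change amounts to flipping the transaction-type attribute in persistence.xml, something like this (the unit name and provider line are just for illustration):
<persistence-unit name="myUnit" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <!-- data source and properties as before -->
</persistence-unit>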
Related
As far as I know, we can't use START TRANSACTION within functions, so we can't use COMMIT and ROLLBACK in functions either.
But how, then, do we ROLLBACK on some if-condition?
How, then, can we perform a sequence of statements at a specific isolation level? I mean a situation where an application wants to call an SQL (PL/pgSQL) function and that function really needs to run in a transaction with a certain isolation level. What do you do in such a case?
In which cases, then, is it really practical to run ROLLBACK? Only when we manually write a script, check something, and then ROLLBACK manually if we don't like the result? That is also the case where I see the practicality of savepoints. Still, this feels like a serious constraint.
If you want to roll back the complete transaction, RAISE an exception.
If you only want to roll back part of your work, start a new block with a BEGIN at the point to which you want to roll back and add an EXCEPTION clause to the block.
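A minimal sketch of both techniques (the table t and the messages are invented for the example):
CREATE FUNCTION rollback_demo() RETURNS void
LANGUAGE plpgsql AS
$$
BEGIN
   INSERT INTO t VALUES (1);      -- kept, unless an error escapes the function
   BEGIN                          -- inner block = implicit savepoint
      INSERT INTO t VALUES (2);
      RAISE EXCEPTION 'undo only the inner block';
   EXCEPTION
      WHEN others THEN
         NULL;                    -- only the work since the inner BEGIN is undone
   END;
   -- a RAISE EXCEPTION here, left unhandled, would roll back everything
END;
$$;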
Since the transaction is started outside the function, the isolation level already has to be set properly when you are in the function.
You can query
SELECT current_setting('transaction_isolation', TRUE);
and throw an error if the setting is not correct.
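For example (the required level here is only an example):
CREATE FUNCTION assert_serializable() RETURNS void
LANGUAGE plpgsql AS
$$
BEGIN
   IF current_setting('transaction_isolation') <> 'serializable' THEN
      RAISE EXCEPTION 'this function must run at SERIALIZABLE isolation';
   END IF;
END;
$$;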
As for when it is really practical to run ROLLBACK: that question is too general to answer exhaustively.
You roll back a transaction if you have reached a point in your processing where you want to undo everything you have done so far in the transaction.
Often, that happens implicitly rather than explicitly by throwing an error.
I am writing a scalar PL/pgSQL function that calls a C function with a side effect outside of the database. When the function is invoked, in some arbitrary SQL (trigger, select, write, etc.), I want the side effect to be committed or rolled back on the PostgreSQL unit of work (UOW) boundary. I can handle the UOW commit, but I don't know how to "catch" the database ROLLBACK and roll back the side effect. The key point is that I am writing the function but don't have control over how it is called, i.e., I cannot "force" the call to be in a block with EXCEPTION handlers. Any ideas?
For the commit, I plan to have the PL/pgSQL function INSERT into a database TABLE that has a trigger "... AFTER INSERT ... EXECUTE PROCEDURE commit_my_side_effect()". When the UOW is committed, the row is committed, the AFTER INSERT trigger fires, and presto, the side effect is committed.
The only idea I have so far is to pass txid_current() out to a background worker process. Then, on some heartbeat using SPI, check whether the txid is neither in flight nor committed; if so, it must have been rolled back. But that seems like heavy lifting.
Broadly speaking, a transaction is considered "rolled back" if it's not committed and it's no longer running; in the interests of ACID compliance, an explicit ROLLBACK needs to be functionally identical to yanking the power cord on your server, so in general, there can't be any deliberate action associated with a rollback which you might be able to hook into.
The actual removal of rolled-back data is handled by vacuuming, which works more or less like your proposed background worker: anything written by a transaction which is not running and not committed is a candidate for removal. However, there's a bit more to it than that, as a transaction containing subtransactions (SAVEPOINTs or PL/pgSQL EXCEPTION blocks) can be partially rolled back. In other words, txid_current() alone isn't enough to decide if a change was committed, and I don't know if Postgres exposes enough information about subtransaction states to let you to cater for this.
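One building block worth knowing here: PostgreSQL 10 added txid_status(), which reports the commit status of a top-level transaction ID (the ID below is a placeholder), though it indeed says nothing about subtransactions:
SELECT txid_status(1234567);   -- 'committed', 'aborted', 'in progress', or NULL if too old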
I think the only sane approach is to move the application of side-effects to an external process, and trigger it after commit, once you know what has actually been committed. Two approaches come to mind:
Have your PL/pgSQL function insert into a work queue which is polled by the external process, or
Feed changes to the process via NOTIFY (notifications are only delivered on commit, and notifications from rolled-back subtransactions are discarded)
Notifications are more lightweight and lower latency (they're delivered asynchronously, so no polling is necessary), but less robust than a table-based approach, as the notification queue is wiped out in the event of a crash or an unexpected disconnection. Of course, if you want crash safety without the downsides of polling, you can simply do both.
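A rough sketch of the combined approach (the queue table, channel, and function names are all invented):
-- assumes: CREATE TABLE side_effect_queue (payload text);
CREATE FUNCTION enqueue_side_effect(p_payload text) RETURNS void
LANGUAGE plpgsql AS
$$
BEGIN
   -- durable path: an external worker polls this table, surviving crashes
   INSERT INTO side_effect_queue (payload) VALUES (p_payload);
   -- fast path: delivered to listeners only on commit, dropped on rollback
   PERFORM pg_notify('side_effects', p_payload);
END;
$$;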
I found a psql feature called ON_ERROR_ROLLBACK, and looking at the implementation (https://github.com/postgres/postgres/blob/master/src/bin/psql/common.c), I think I can "wrap" all the SQL commands using the following pseudo-code to add a "fake" savepoint and a "fake" rollback-to-savepoint, and fire off rollback_side_effect():
side_effect_fired = false;        // set to true when the side-effect UDF is called
run("SAVEPOINT _savepoint");
run($sqlcommand);
if (txn_status == ERROR && side_effect_fired) {
    run("ROLLBACK TO _savepoint");
    rollback_side_effect();       // roll back the side effect
}
I probably need a stack of savepoint names. I will run with that!
I'm running jobs through DataStage with the DELETE then INSERT connector, and several jobs are failing with this error:
DB2_Connector: DB2 reported: SQLSTATE = 02000 Native Error Code = 100, Msg = [IBM][CLI Driver][DB2/NT64] SQL0100W No row was found for FETCH, UPDATE, or DELETE
When I run the delete statement in Data Studio directly against DB2, it gives this same error, so I know it's a DB2 error, not a DataStage error.
Is there any way to suppress the message in DataStage, or, when I run the statement in DB2, any way to keep that message from coming up? It's stopping my DataStage jobs with a fatal error instead of continuing the load.
There has got to be a way to turn off the message. I know that in SQL Server, if no rows are found, it does not raise this error; it just reports zero rows or returns no records. In DB2, though, this error comes up, and I'm not sure whether there is a way to turn it off.
First of all, you seem to be confused about precisely what an error is, and what a message is.
An error is when something goes wrong.
A message is when some piece of software is kind enough to let you know that something went wrong.
From this it follows that suppressing a message has no bearing whatsoever on the actual error. Your software is not failing because of the message, your software is failing because something is going wrong. Receiving a message about it is actually a good thing: the alternative would be your software failing without you being given any clue whatsoever as to what is going wrong.
Suppressing or otherwise ignoring errors is like hiding your head in the sand: you are still going to end up as a meal.
So, what you need to make go away is the error, not the message.
Which means that you have to figure out what you did wrong.
Luckily, you have the message giving you a hint as to what you did wrong, though you have to keep in mind that messages are sometimes misleading.
SQLSTATE 02000 is not an error; it is a warning. (And note that DB2_Connector is not saying ERROR!!!1!:, it is saying DB2 reported:.) Luckily the database driver issues warnings when it detects situations that might be indicative of errors; there is a lot of software out there that ignores such warnings (essentially hiding your head in the sand for you, how nice), but luckily DB2_Connector reports them.
What this means is that one of two things is going wrong:
Either your assumption that it is okay if no rows are found is wrong, and the fact that no rows were found is the cause of your problem, which means that you have to somehow make sure that some rows are found, or
Your assumption that it is okay if no rows are found is correct, in which case the warning reported has absolutely nothing to do with the problem at hand, so it can safely be ignored, and you have to look at the problem elsewhere.
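If you land in the second case and want the DELETE itself to stop surfacing the warning, one option, assuming your tooling accepts a DB2 SQL PL compound statement (the table and predicate here are placeholders), is a CONTINUE handler:
BEGIN
   DECLARE v_dummy INTEGER DEFAULT 0;
   -- NOT FOUND corresponds to SQLSTATE 02000 / SQL0100W
   DECLARE CONTINUE HANDLER FOR NOT FOUND
      SET v_dummy = 1;
   DELETE FROM myschema.mytable WHERE load_date = CURRENT DATE;
END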
In an application with a managed context (Play!, EclipseLink) I have a method which uses JPA.withTransaction but must not roll back. It performs external communication, XML marshalling and unmarshalling, and so on, so various exceptions may occur.
The normal behaviour of JPA.withTransaction is to rollback the current transaction on (most) Exceptions.
If such an exception is thrown after external resources have been touched, the database must keep the current step so that a continue/cleanup is possible afterwards.
I did not find a way to achieve autocommit or to disable rollback. I have read that just catching the exception would not do the trick, since the transaction is already marked for rollback.
So what is a correct way to disable rollback and commit every query as soon as possible? I do not want to disturb the rest of the application, so I would like to avoid
JPA.em().getTransaction().commit();
JPA.em().getTransaction().begin();
after every write.
How can I simply keep the written data?
In Code-First Entity Framework, is there a way to modify the transaction behavior so that it does not discard all changes, but instead keeps all the changes up to the point of failure?
For example:
foreach (var objectToSave in ArrayOfEntityObjects)
{
    MyContext.Insert(objectToSave);
}
try
{
    MyContext.SaveChanges();
}
catch (Exception x)
{
    // handling code
}
In the above example, assuming the array holds 100 objects and an error occurred on item 50, I want to keep the ones that were successful, up to the point of failure at least. Right now we execute MyContext.SaveChanges() during each iteration of the foreach loop to achieve this, but we would like the performance boost of committing to the database in one commit. (Our understanding is that EF sends all the commands for the transaction at once, over a single connection, thus using only one round trip.)
No, there is no way to do this automatically. EF commits all changes in a single transaction, which makes it an all-or-nothing operation.
If you want this behavior, then you must save changes after each and every record you add.
The reason is that EF doesn't know what constitutes a "transaction" in your data. It might be one row, or it might be several rows in several tables. EF doesn't even try to understand what your transactional requirements might be. You have to do it manually if you want it.
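For completeness, the save-per-record variant looks roughly like this (a sketch reusing the names from the question; the detach step assumes an EF6-style DbContext):
// save each entity individually so earlier successes survive a failure
foreach (var objectToSave in ArrayOfEntityObjects)
{
    MyContext.Insert(objectToSave);
    try
    {
        MyContext.SaveChanges();
    }
    catch (Exception x)
    {
        // detach the failed entity so later SaveChanges calls can proceed
        MyContext.Entry(objectToSave).State = EntityState.Detached;
        // log x and continue with the remaining items
    }
}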