How COMMIT works in Oracle (Oracle 10g)

I have a couple of statements; the pseudocode would look something like this:
insert into local_table
savepoint sp1
insert into remote_table@db_link  -- via database link
update local_table2
delete from local_table
commit
Now I am kind of confused about the insert into the remote DB. Is there any chance that the commit being applied has a different effect on the local DB than on the remote DB?
The problem is somewhat complex: the script that copies data from the local DB to the remote DB is producing duplicates. After going through an investigation, that is the only place that looks suspicious, but I am not sure. I would really appreciate it if someone could shed light on Oracle's COMMIT.

If you are asking whether the commit could potentially cause duplicate rows, no, that's not possible.
Given the way that distributed transactions take place, it is possible that the transaction would not be committed at all on the remote database (in which case it would be an in-doubt distributed transaction that the remote DBA would likely need to resolve). But if the transaction is committed successfully, it's going to be committed correctly. It's not possible that some rows would get committed and others wouldn't, or that duplicate rows that didn't exist prior to the commit would be created by the act of committing.
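If you want to check for in-doubt transactions yourself, Oracle records them in the DBA_2PC_PENDING dictionary view. A minimal sketch (this assumes you have the privileges to read DBA views):
SELECT local_tran_id, global_tran_id, state FROM dba_2pc_pending;
A row in state 'prepared' is waiting for the coordinator's decision; a DBA can resolve it manually with COMMIT FORCE '<local_tran_id>' or ROLLBACK FORCE '<local_tran_id>'.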

Related

Does a SQL statement ensure atomicity in Postgres?

I have a simple bug in my program that supports multiple users. I'm using knex to build SQL queries, and here is pseudocode that depicts the scenario:
const value = queryBuilder().readDataFromTheDatabase(); // executes first
// do some other work and derive the updated value
queryBuilder().writeValueToTheDatabase(updateValue(value));
This code is used in a sort of middleware function. And as you can see, there is a possible race condition: when multiple users access the thing at roughly the same time, one of them gets a stale value.
My solution
So, I was thinking a possible solution would be to create a single queryBuilder statement:
queryBuilder().readAndUpdateValueInTheDatabase();
So I'll probably have to use a little bit of PL/pgSQL. I was wondering if this solution would be sufficient. Will the statement be executed atomically? That is, when one request has read but not yet finished its write, does another request wait around to both read and write, or does it wait only to write but read the stale value?
I think what you are looking for here is isolation, not atomicity. You could set all transactions to the highest isolation level, serializable (which is higher than the usual default level). With that level, if data that a transaction read (and presumably relied upon) is changed, then when it tries to commit it might get a serialization failure error. I say "might", because the system could conclude the situation would be consistent with the data change having happened after the commit, in which case the commit is allowed to stand.
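For illustration, one way to make serializable the default is the default_transaction_isolation setting; this is only a sketch, and mydb is a placeholder name:
SET default_transaction_isolation TO 'serializable';  -- current session only
ALTER DATABASE mydb SET default_transaction_isolation TO 'serializable';  -- all new sessions on mydb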
To avoid a race condition with such a setup, you must run both the read and the write in the same database transaction.
There are two ways to do that:
Use the default READ COMMITTED isolation level and lock the rows when you read them:
SELECT ... FROM ... FOR NO KEY UPDATE;
That locks the rows against concurrent modifications, and the lock is held until the end of the transaction.
Use the REPEATABLE READ isolation level and don't lock anything. Then your UPDATE will receive a serialization error (SQLSTATE 40001) if somebody modified the row concurrently. In that case, you roll the transaction back and try again in a new REPEATABLE READ transaction.
The first solution is usually better if you expect conflicts frequently, while the second is better if conflicts are rare.
Note that you should keep the database transaction as short as possible in both cases to keep the risk of conflicts low.
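As a sketch of the first approach (the app_state table and its columns are made up for illustration):
BEGIN;
SELECT value FROM app_state WHERE id = 1 FOR NO KEY UPDATE;  -- the row is now locked
-- compute the new value in the application
UPDATE app_state SET value = 42 WHERE id = 1;
COMMIT;  -- the lock is released here
A concurrent transaction running the same SELECT ... FOR NO KEY UPDATE blocks until this COMMIT and then reads the freshly written value, not a stale one.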
Transactions in PostgreSQL use an optimistic concurrency model when accessing tables, while some other DBMSs do pessimistic locking (IBM Db2) or offer both models (MS SQL Server).
With the optimistic model, the data you work on is taken as a snapshot, and your modifications are made against that snapshot until the transaction ends. When the transaction finishes, the snapshot's modifications are applied to the real database (the table rows); but if some other user made a change between the moment the snapshot was taken and the commit, the change cannot be applied and the COMMIT fails, ending in a ROLLBACK.
You can raise the isolation level (REPEATABLE READ or SERIALIZABLE) to avoid the trouble.
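If you prefer to request the level per transaction rather than globally, PostgreSQL accepts it on BEGIN (a minimal sketch; as noted above, be prepared to retry on SQLSTATE 40001):
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- read, compute, write
COMMIT;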

Using two-phase commit on Postgres

Assuming that I have a table called t1 in a database db1 and another table called t2 in a database db2, and I need to insert a record into both tables, or fail as a whole.
Connected to db1, I guess I should type this:
BEGIN;
PREPARE TRANSACTION 'pepe'; -- the manual says this makes the transaction get stored on disk, so what is the purpose if I can't use it from another database?
insert into t1 (field) values ('a_value');
COMMIT PREPARED 'pepe';
Connected to db2, I guess that:
BEGIN;
PREPARE TRANSACTION 'pepe'; -- this fails (what is the meaning of the transaction name, what is it used for?)
-- It complains: ERROR: transaction identifier "pepe" is already in use
insert into t2 (field) values ('another_value');
COMMIT PREPARED 'pepe';
As you can see, I don't get how to use two-phase commit on Postgres.
TL;DR
I don't get how to perform synchronized commits across different databases within the same RDBMS.
I have read in the official Postgres documentation that an implementation of the so-called "two-phase commit" protocol is at our disposal for synchronizing work across two or more unrelated Postgres databases.
So I started looking at how people actually use it in Postgres, but I did not find any real example; at most I got to a post by someone who was experimenting with several Postgres clients connected to different databases, in order to emulate multiple processes running in parallel that do things to several DBs and should all end either gracefully (all commit) or dreadfully (all rollback).
Other sources I have looked at for examples were:
https://en.wikipedia.org/wiki/Two-phase_commit_protocol (this source explains the protocol well, but it really makes me wonder where or who my "Coordinator" is and how to send messages to the "participants"... I only have the prepare transaction <id>, commit prepared <id>, and rollback prepared <id> commands at my disposal)
Two phase commit
https://dba.stackexchange.com/questions/145656/dependent-transaction-in-separate-database-connections
https://www.endpointdev.com/blog/2010/07/distributed-transactions-and-two-phase/
https://www.citusdata.com/blog/2017/11/22/how-citus-executes-distributed-transactions/
(From a golang client-app) https://github.com/go-pg/pg/issues/490
Please, I'm really confused. I hope horse_with_no_name shows up here and enlightens me (as has happened in the past), or any other charitable soul who can help.
Thanks in advance!
Resolution (After Laurenz's Answer)
Connected to db1, these are the SQL lines to execute:
BEGIN;
-- DO THINGS TO BE DONE IN AN ALL-OR-NOTHING FASHION
-- Stop point --
PREPARE TRANSACTION 't1';
-- then either COMMIT PREPARED 't1' or ROLLBACK PREPARED 't1' (the decision requires awareness and coordination)
Meanwhile, connected to db2, this is the script to execute:
BEGIN;
-- DO THINGS TO BE DONE IN AN ALL-OR-NOTHING FASHION
-- Stop point --
PREPARE TRANSACTION 't2';
-- then either COMMIT PREPARED 't2' or ROLLBACK PREPARED 't2'
The -- Stop point -- is where a coordinator process (for example an application executing the statements, or a human behind a psql console or pgAdmin) shall stop the execution of both scripts (that is, not execute any further instruction; that is what I mean by stop).
Then, first on db1 (and then on db2, or vice versa), the coordinator process (whether human or not) must run PREPARE TRANSACTION on each connection.
If one of them fails, the coordinator must run ROLLBACK PREPARED on the databases where the transaction was already prepared and ROLLBACK on the others.
If none fails, the coordinator must run COMMIT PREPARED on all involved databases, an operation that should never fail (like exiting the house when you are already one step outside with everything properly set to leave safely).
I think you misunderstood PREPARE TRANSACTION.
That statement ends work on the transaction, that is, it should be issued after all the work is done. The idea is that PREPARE TRANSACTION does everything that could potentially fail during a commit except for the commit itself. That is to guarantee that a subsequent COMMIT PREPARED cannot fail.
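Applied to the snippet from the question, the statements would run in this order (a minimal sketch; PREPARE TRANSACTION also requires max_prepared_transactions to be set to a non-zero value on the server):
BEGIN;
insert into t1 (field) values ('a_value');
PREPARE TRANSACTION 'pepe';  -- the work is done; the transaction is now persisted on disk
COMMIT PREPARED 'pepe';      -- or ROLLBACK PREPARED 'pepe'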
The idea is that processing is as follows:
Run START TRANSACTION on all databases involved in the distributed transaction.
Do all the work. If there are errors, ROLLBACK all transactions.
Run PREPARE TRANSACTION on all databases. If that fails anywhere, run ROLLBACK PREPARED on those databases where the transaction was already prepared and ROLLBACK on the others.
Once PREPARE TRANSACTION has succeeded everywhere, run COMMIT PREPARED on all involved databases.
That way, you can guarantee “all or nothing” across several databases.
One important component here that I haven't mentioned is the distributed transaction manager. It is a piece of software that persistently memorizes where in the above algorithm processing currently is so that it can clean up or continue committing after a crash.
Without a distributed transaction manager, two-phase commit is not worth a lot, and it is actually dangerous: if transactions get stuck in the “prepared” phase but are not committed yet, they will continue to hold locks and (in the case of PostgreSQL) block autovacuum work even through server restarts, since such transactions must be persistent.
This is difficult to get right.
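To see whether an instance has transactions stuck in the prepared state, you can query the pg_prepared_xacts system view; the cleanup shown afterwards is only an example, with the gid taken from the query's output:
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;
ROLLBACK PREPARED 'pepe';  -- or COMMIT PREPARED 'pepe', if committing is known to be safe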

SQL*Plus: no rows selected, though the data was inserted without any error

I am a total newbie with this SQL*Plus and Oracle 10g thing, so please don't mind the stupid questions.
See, the problem I am facing is that whenever I fire a query on a table:
SELECT * FROM emp;
The output comes out to be "no rows selected".
I am in an utter dilemma, as the table and its schema are clearly preserved, but the data I entered is not being displayed. The same happens for all my user-generated tables: the tuples are not displayed. Is this a problem related to SQL*Plus?
Kindly help and give me proper guidance.
In Oracle, every statement that you issue is part of a transaction. Those transactions need to either be committed (in which case the changes are made permanent) or rolled back (in which case the changes are reverted) before another session can see the data. Some databases either do not support transactions (e.g. MyISAM tables in MySQL) or do not implicitly start transactions (e.g. SQL Server). The Oracle approach is generally far superior: when you inadvertently run a delete statement that is missing an important predicate, the ability to roll back the operation when it deletes many more rows than you were expecting can be a real career saver.
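For instance, a sketch of that career saver in SQL*Plus, reusing the emp table from the question:
SQL> delete from emp;           -- oops: the WHERE clause is missing
SQL> rollback;                  -- nothing was committed, so the rows come back
SQL> select count(*) from emp;  -- the data is intact again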
In Oracle, when you've run whatever statements comprise your transaction and you are confident that your changes are correct, you need to explicitly issue a commit to make those changes visible to other sessions, e.g.
SQL> insert into some_table( <<columns>> ) values( <<values>> );
SQL> insert into some_other_table( <<columns>> ) values ( <<more values>> );
SQL> commit;
If you are really, really, really sure that you prefer the behavior you might be accustomed to in other tools, you can tell SQL*Plus to autocommit your changes
SQL> set autocommit on;
SQL> <<do whatever>>
That is generally a really bad idea. The tiny benefit you get from not having to explicitly issue a commit is far outweighed by the ability to ensure that other sessions don't see data in an inconsistent state (e.g. if you're transferring money from account A to account B by issuing two update statements, you don't want someone seeing an intermediate result where both accounts have the money or neither account does) and the ability to roll back a change if it turns out to do something other than what you expected.
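A sketch of that money-transfer scenario (the accounts table and its columns are made up for illustration):
SQL> update accounts set balance = balance - 100 where account_id = 'A';
SQL> update accounts set balance = balance + 100 where account_id = 'B';
SQL> commit;  -- only now do other sessions see both updates, together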

How to rollback an update in PostgreSQL

While editing some records in my PostgreSQL database using SQL in the terminal (on Ubuntu Lucid), I made a wrong update.
Instead of -
update mytable set start_time='13:06:00' where id=123;
I typed -
update mytable set start_time='13:06:00';
So, all records now have the same start_time value.
Is there a way to undo this change? There are some 500+ records in the table, and I do not know what the start_time value of each record was.
Is it lost forever?
I'm assuming it was a transaction that's already committed? If so, that's what "commit" means, you can't go back.
Some data may be recoverable if you're lucky. Stop the database NOW.
Here's an answer I wrote on the same topic earlier. I hope it's helpful.
This might be helpful too: Recover deleted rows in postgresql.
Unless the data is absolutely critical, just restore from backups, it'll be lots easier and less painful. If you didn't have backups, consider yourself soundly thwacked.
If you catch the mistake and immediately bring down any applications using the database and take it offline, you can potentially use Point-in-Time Recovery (PITR) to replay your Write Ahead Log (WAL) files up to, but not including, the moment when the errant transaction was made. This would return the database to the state it was in prior, thus effectively 'undoing' that transaction.
As an approach for a production application database it has a number of obvious limitations, but there are circumstances in which PITR may be the best option available, especially when critical data loss has occurred. However, it is of no value if archiving was not already configured before the corruption event.
https://www.postgresql.org/docs/current/static/continuous-archiving.html
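As a rough sketch of what such a recovery could look like on PostgreSQL 12 or later (the archive path and timestamp below are placeholders; older releases use a recovery.conf file instead): restore the last base backup, then set in postgresql.conf
restore_command = 'cp /path/to/wal_archive/%f %p'
recovery_target_time = '2021-05-01 11:59:00'  -- just before the errant transaction
then create an empty recovery.signal file in the data directory and start the server; it replays WAL up to the target and pauses there.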
Similar capabilities exist with other relational database engines.

pgsql rollback function, or undoing the last step

I built a PostgreSQL DB and upload data to it using JDBC, but sometimes I upload wrong data.
So, does pgsql have any function or SQL statement that allows rolling back to the last step, without using a backup file?
Thanks,
If you run things in a transaction, you may end the transaction with either "commit" or "rollback", hence in your script you may end up with an interactive question to the user: "does this seem OK?" and then either do a commit or rollback.
Beware that a long-lasting "idle in transaction" state may be harmful to performance, so you should do a rollback (or commit) after some timeout. Of course, it would be even better to ask the user the question before doing any DB operations.
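A sketch of that interactive pattern, reusing mytable from the earlier question:
BEGIN;
UPDATE mytable SET start_time = '13:06:00' WHERE id = 123;
SELECT id, start_time FROM mytable WHERE id = 123;  -- inspect before deciding
COMMIT;       -- if it looks right
-- ROLLBACK;  -- otherwise, everything since BEGIN is discarded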
If something is already committed to the database, as far as I know it cannot be rolled back unless you have backups (perhaps it's theoretically possible based on the WALs, but probably not). Some years ago I implemented a system with a "warm standby backup server" that would always be half an hour behind the production database. I could then issue a "distress signal" and the backup server would come online with the old state of the database, i.e. if I'd made some mistake like update users set password=md5('foo' || salt) instead of update users set password=md5('foo' || salt) where username='tobixen'. If the primary database went down, the "warm standby" would roll forward and then come online. I may post additional details about this if there is any interest.
If you're expecting trouble, it's also usually possible to design the database and the scripts in such a manner that it's trivial to do a manual rollback.
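One common way to do that (a sketch, reusing the mytable example from above) is to copy the affected rows aside before a risky update:
CREATE TABLE mytable_backup AS SELECT id, start_time FROM mytable;  -- keep the old values
UPDATE mytable SET start_time = '13:06:00' WHERE id = 123;
-- if the update turns out to be wrong, restore from the copy:
UPDATE mytable m SET start_time = b.start_time FROM mytable_backup b WHERE m.id = b.id;
DROP TABLE mytable_backup;  -- once you are happy with the result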