Is there any way I can resolve this? - mysql-workbench

SELECT * FROM persons;
COMMIT;
START TRANSACTION;
SAVEPOINT SP_UPD;
UPDATE persons SET Person_name = 'Sachin_Tendulkar' WHERE id = 1;
SAVEPOINT SP_INS;
INSERT INTO persons (Person_name, Nationality) VALUES ('Cillian_Murphy', 'British');
ROLLBACK TO SP_INS;
When I run ROLLBACK TO SP_INS;, the UPDATE is applied, but the rollback to SP_INS appears to have no effect.
When I run ROLLBACK TO SP_UPD;, it executes, but the table still shows the record I inserted.
Can someone tell me what's wrong here?
I am using MySQL, by the way.
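For reference, a minimal sketch of how savepoints are meant to behave in MySQL when autocommit is disabled (this assumes the table uses InnoDB; ROLLBACK TO SAVEPOINT undoes only the statements issued after that savepoint, and nothing is permanent until COMMIT):

```sql
SET autocommit = 0;             -- make COMMIT/ROLLBACK explicit
START TRANSACTION;
SAVEPOINT SP_UPD;
UPDATE persons SET Person_name = 'Sachin_Tendulkar' WHERE id = 1;
SAVEPOINT SP_INS;
INSERT INTO persons (Person_name, Nationality) VALUES ('Cillian_Murphy', 'British');
ROLLBACK TO SP_INS;             -- undoes only the INSERT; the UPDATE remains
COMMIT;                         -- makes the surviving UPDATE permanent
```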

Related

How to avoid deadlock when delete/update the same record in the Postgres

I have a scenario when I play with Postgres.
We have one table with primary key, and there are two concurrent process, the one can update record, another process can delete record.
Now we are facing deadlock, when two processes play with update/delete the same record in the table.
I google how to avoid deadlock, someone says to use "SELECT FOR UPDATE".
Suppose there are two statements as following
update table_A set name='aaaa' where cid=1;
DELETE FROM table_A WHERE cid=1;
My question is,
(1) Do I need to add "SELECT FOR UPDATE" to both statements or just one statement in order to avoid deadlock?
(2) Could you give a complete example how to add "SELECT FOR UPDATE" ? I mean, what does it look like after you add "SELECT FOR UPDATE"? I never do it before, I want to learn how to add it.
SELECT ... FOR UPDATE locks the selected rows so that no other transaction can perform an UPDATE or a SELECT ... FOR UPDATE on them. Those transactions must wait until the transaction holding the first SELECT ... FOR UPDATE releases the lock on the rows.
If SELECT ... FOR UPDATE is the first statement in every transaction, no deadlock can occur, because no transaction can lock rows that another transaction will need later on.
So your two transactions should look like this:
BEGIN;
SELECT * FROM table_A WHERE cid = 1 FOR UPDATE;
-- some other statements
UPDATE table_A SET name = 'aaaa' WHERE cid = 1;
END;
and:
BEGIN;
SELECT * FROM table_A WHERE cid = 1 FOR UPDATE;
-- some other statements
DELETE FROM table_A WHERE cid = 1;
END;

Postgres getting duplicate key exception (DELETE AND INSERT) from inside psql function

I am trying to update a table containing a huge amount of data. On checking, I found that an upsert (update and insert) is slow compared to loading a temp table, deleting the matching rows, and then inserting from the temp table. But I am facing a duplicate-key issue with that approach.
As far as I understand, the DELETE and INSERT happen in the same transaction, so I don't understand why I get duplicate-key errors when I have already deleted the data.
This is a multithreaded scenario, but I assume each transaction works on its own set of data.
Any help is appreciated.
Sample code
CREATE OR REPLACE FUNCTION insertUsingTempTable(a_id int, s_obj int[], p bigint[])
RETURNS BOOLEAN AS $BODY$
DECLARE
    passed BOOLEAN;
BEGIN
    CREATE TEMP TABLE IF NOT EXISTS temp_lists AS SELECT * FROM acm_lists WHERE 0=1;
    TRUNCATE TABLE temp_lists;
    INSERT INTO temp_lists(a_id, s_obj_id, eff_p)
    SELECT a_id, unnest($2::int[]), unnest($3::bit(64)[]);
    IF NOT EXISTS (SELECT t.a_id, t.s_obj_id, count(1)
                   FROM temp_lists t
                   GROUP BY t.a_id, t.s_obj_id
                   HAVING count(1) > 1) THEN
        DELETE FROM acm_lists t
        WHERE EXISTS (SELECT 1 FROM temp_lists t1
                      WHERE t1.a_id = t.a_id AND t1.s_obj_id = t.s_obj_id);
        INSERT INTO acm_lists (a_id, s_obj_id, eff_p)
        SELECT t.a_id, t.s_obj_id, t.eff_p FROM temp_lists t;
        RETURN true;
    END IF;
    RETURN false;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
Uniqueness is on a_id, s_obj_id in the above code.
I would like to know why a duplicate-key exception occurs at times if I am already deleting the data before inserting. It happens only when multiple transactions are running on the table at the same time.
INSERT ... ON CONFLICT DO UPDATE resolves the issue, but there seems to be a considerable performance hit, so I don't plan to use the ON CONFLICT DO UPDATE approach.
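For reference, the ON CONFLICT variant mentioned above would look roughly like this (a sketch only, assuming a unique constraint on (a_id, s_obj_id); table and column names are taken from the question):

```sql
-- Upsert from the temp table: insert new rows, overwrite eff_p on conflict.
-- EXCLUDED refers to the row that was proposed for insertion.
INSERT INTO acm_lists (a_id, s_obj_id, eff_p)
SELECT t.a_id, t.s_obj_id, t.eff_p
FROM temp_lists t
ON CONFLICT (a_id, s_obj_id)
DO UPDATE SET eff_p = EXCLUDED.eff_p;
```

Unlike the delete-then-insert pattern, this takes the necessary row locks atomically per row, so concurrent transactions cannot insert the same key between the DELETE and the INSERT.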

Flyway fail on error in Transact-SQL migration

When using Flyway in combination with a Microsoft SQL Server, we are observing the issue described on this question.
Basically, a migration script like the following does not roll back the successful GO-delimited batches when another part fails:
BEGIN TRANSACTION
-- Create a table with two nullable columns
CREATE TABLE [dbo].[t1](
[id] [nvarchar](36) NULL,
[name] [nvarchar](36) NULL
)
-- add one row having one NULL column
INSERT INTO [dbo].[t1] VALUES(NEWID(), NULL)
-- set one column as NOT NULLABLE
-- this fails because of the previous insert
ALTER TABLE [dbo].[t1] ALTER COLUMN [name] [nvarchar](36) NOT NULL
GO
-- create a table as next action, so that we can test whether the rollback happened properly
CREATE TABLE [dbo].[t2](
[id] [nvarchar](36) NOT NULL
)
GO
COMMIT TRANSACTION
In the above example, the table t2 is being created even though the preceding ALTER TABLE statement fails.
On the linked question, the following approaches (outside of the flyway context) were suggested:
A multi-batch script should have a single error handler scope that rolls back the transaction on error, and commits at the end. In TSQL you can do this with dynamic sql
Dynamic SQL makes for hard-to-read scripts and would be very inconvenient
With SQLCMD you can use the -b option to abort the script on error
Is this available in flyway?
Or roll your own script runner
Is this maybe the case in flyway? Is there a flyway-specific configuration to enable proper failing on errors?
EDIT: alternative example
Given: simple database
BEGIN TRANSACTION
CREATE TABLE [a] (
[a_id] [nvarchar](36) NOT NULL,
[a_name] [nvarchar](100) NOT NULL
);
CREATE TABLE [b] (
[b_id] [nvarchar](36) NOT NULL,
[a_name] [nvarchar](100) NOT NULL
);
INSERT INTO [a] VALUES (NEWID(), 'name-1');
INSERT INTO [b] VALUES (NEWID(), 'name-1'), (NEWID(), 'name-2');
COMMIT TRANSACTION
Migration Script 1 (failing, without GO)
BEGIN TRANSACTION
ALTER TABLE [b] ADD [a_id] [nvarchar](36) NULL;
UPDATE [b] SET [a_id] = [a].[a_id] FROM [a] WHERE [a].[a_name] = [b].[a_name];
ALTER TABLE [b] ALTER COLUMN [a_id] [nvarchar](36) NOT NULL;
ALTER TABLE [b] DROP COLUMN [a_name];
COMMIT TRANSACTION
This results in the error message Invalid column name 'a_id'. for the UPDATE statement.
Possible solution: introduce GO between statements
Migration Script 2 (with GO: working for "happy case" but only partial rollback when there's an error)
BEGIN TRANSACTION
SET XACT_ABORT ON
GO
ALTER TABLE [b] ADD [a_id] [nvarchar](36) NULL;
GO
UPDATE [b] SET [a_id] = [a].[a_id] FROM [a] WHERE [a].[a_name] = [b].[a_name];
GO
ALTER TABLE [b] ALTER COLUMN [a_id] [nvarchar](36) NOT NULL;
GO
ALTER TABLE [b] DROP COLUMN [a_name];
GO
COMMIT TRANSACTION
This performs the desired migration as long as all values in table [b] have a matching entry in table [a].
In the given example, that's not the case. I.e. we get two errors:
expected: Cannot insert the value NULL into column 'a_id', table 'test.dbo.b'; column does not allow nulls. UPDATE fails.
unexpected: The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
Horrifyingly, the last ALTER TABLE [b] DROP COLUMN [a_name] statement was actually executed, committed, and not rolled back, i.e. one cannot fix this up afterwards, as the linking column is lost.
This behaviour is actually independent of flyway and can be reproduced directly via SSMS.
The problem is fundamental to the GO command. It's not a part of the T-SQL language. It's a construct in use within SQL Server Management Studio, sqlcmd, and Azure Data Studio. Flyway is simply passing the commands on to your SQL Server instance through the JDBC connection. It's not going to be dealing with those GO commands like the Microsoft tools do, separating them into independent batches. That's why you won't see individual rollbacks on errors, but instead see a total rollback.
The only way to get around this that I'm aware of would be to break the batches apart into individual migration scripts. Name them so the grouping is clear, e.g. V3.1.1, V3.1.2, etc., so that everything falls under the V3.1* version (or something similar). Then each individual migration will pass or fail on its own instead of all succeeding or failing together.
Edited 2020-11-02 -- I learned a lot more about this and largely rewrote the answer! So far I have been testing in SSMS; I plan to test in Flyway as well and write up a blog post. For brevity in migrations, I believe you could put the @@TRANCOUNT check / error handling into a stored procedure if you prefer; that's also on my list to test.
Ingredients in the fix
For error handling and transaction management in SQL Server, there are three things which may be of great help:
Set XACT_ABORT to ON (it is OFF by default). This setting "specifies whether SQL Server automatically rolls back the current transaction when a Transact-SQL statement raises a run-time error" (docs)
Check the @@TRANCOUNT state after each batch delimiter you send, and use it to "bail out" with RAISERROR / RETURN if needed
Try/catch/throw -- I'm using RAISERROR in these examples; Microsoft recommends you use THROW if it's available to you (it was introduced in SQL Server 2012) - docs
Working on the original sample code
Two changes:
Set XACT_ABORT ON;
Perform a check on @@TRANCOUNT after each batch delimiter is sent, to see whether the next batch should be run. The key here is that if an error has occurred, @@TRANCOUNT will be 0; if no error has occurred, it will be 1. (Note: if you explicitly open multiple "nested" transactions, you'd need to adjust the trancount checks, as it can be higher than 1.)
In this case the @@TRANCOUNT check will work even if XACT_ABORT is off, but I believe you want it on for other cases. (I need to read up more on this, but I haven't come across a downside to having it ON yet.)
BEGIN TRANSACTION;
SET XACT_ABORT ON;
GO
-- Create a table with two nullable columns
CREATE TABLE [dbo].[t1](
[id] [nvarchar](36) NULL,
[name] [nvarchar](36) NULL
)
-- add one row having one NULL column
INSERT INTO [dbo].[t1] VALUES(NEWID(), NULL)
-- set one column as NOT NULLABLE
-- this fails because of the previous insert
ALTER TABLE [dbo].[t1] ALTER COLUMN [name] [nvarchar](36) NOT NULL
GO
IF @@TRANCOUNT <> 1
BEGIN
    DECLARE @ErrorMessage AS NVARCHAR(4000);
    SET @ErrorMessage
        = N'Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
          + N'). Exactly 1 transaction should be open at this point. Rolling back any pending transactions.';
    RAISERROR(@ErrorMessage, 16, 127);
    RETURN;
END;
-- create a table as next action, so that we can test whether the rollback happened properly
CREATE TABLE [dbo].[t2](
[id] [nvarchar](36) NOT NULL
)
GO
COMMIT TRANSACTION;
Alternative example
I added a bit of code at the top to be able to reset the test database. I repeated the pattern of using XACT_ABORT ON and checking @@TRANCOUNT after each batch terminator (GO) is sent.
/* Reset database */
USE master;
GO
IF DB_ID('transactionlearning') IS NOT NULL
BEGIN
ALTER DATABASE transactionlearning
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
DROP DATABASE transactionlearning;
END;
GO
CREATE DATABASE transactionlearning;
GO
/* set up simple schema */
USE transactionlearning;
GO
BEGIN TRANSACTION;
CREATE TABLE [a]
(
[a_id] [NVARCHAR](36) NOT NULL,
[a_name] [NVARCHAR](100) NOT NULL
);
CREATE TABLE [b]
(
[b_id] [NVARCHAR](36) NOT NULL,
[a_name] [NVARCHAR](100) NOT NULL
);
INSERT INTO [a]
VALUES
(NEWID(), 'name-1');
INSERT INTO [b]
VALUES
(NEWID(), 'name-1'),
(NEWID(), 'name-2');
COMMIT TRANSACTION;
GO
/*******************************************************/
/* Test transaction error handling starts here */
/*******************************************************/
USE transactionlearning;
GO
BEGIN TRANSACTION;
SET XACT_ABORT ON;
GO
IF @@TRANCOUNT <> 1
BEGIN
    DECLARE @ErrorMessage AS NVARCHAR(4000);
    SET @ErrorMessage
        = N'Check 1: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
          + N'). Exactly 1 transaction should be open at this point. Rolling back any pending transactions.';
    RAISERROR(@ErrorMessage, 16, 127);
    RETURN;
END;
ALTER TABLE [b] ADD [a_id] [NVARCHAR](36) NULL;
GO
IF @@TRANCOUNT <> 1
BEGIN
    DECLARE @ErrorMessage AS NVARCHAR(4000);
    SET @ErrorMessage
        = N'Check 2: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
          + N'). Exactly 1 transaction should be open at this point. Rolling back any pending transactions.';
    RAISERROR(@ErrorMessage, 16, 127);
    RETURN;
END;
UPDATE [b]
SET [a_id] = [a].[a_id]
FROM [a]
WHERE [a].[a_name] = [b].[a_name];
GO
IF @@TRANCOUNT <> 1
BEGIN
    DECLARE @ErrorMessage AS NVARCHAR(4000);
    SET @ErrorMessage
        = N'Check 3: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
          + N'). Exactly 1 transaction should be open at this point. Rolling back any pending transactions.';
    RAISERROR(@ErrorMessage, 16, 127);
    RETURN;
END;
ALTER TABLE [b] ALTER COLUMN [a_id] [NVARCHAR](36) NOT NULL;
GO
IF @@TRANCOUNT <> 1
BEGIN
    DECLARE @ErrorMessage AS NVARCHAR(4000);
    SET @ErrorMessage
        = N'Check 4: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
          + N'). Exactly 1 transaction should be open at this point. Rolling back any pending transactions.';
    RAISERROR(@ErrorMessage, 16, 127);
    RETURN;
END;
ALTER TABLE [b] DROP COLUMN [a_name];
GO
COMMIT TRANSACTION;
My fave references on this topic
There is a wonderful free resource online which digs into error and transaction handling in great detail. It is written and maintained by Erland Sommarskog:
Part One – Jumpstart Error Handling
Part Two – Commands and Mechanisms
Part Three – Implementation
One common question is why XACT_ABORT is still needed/ if it is entirely replaced by TRY/CATCH. Unfortunately it is not entirely replaced, and Erland has some examples of this in his paper, this is a good place to start on that.

Mysql workbench rollback not returning the table to last commit?

I'm using the rollback command to go to the previous commit but it doesn't seem to work.
I have tried executing the query multiple time and making sure that I didn't commit again but it doesn't work. I have copied the db from another table. I've executed the command as a block and as a whole but the result remains the same.
INSERT INTO departments_dup
select * from departments;
COMMIT;
UPDATE departments_dup
SET
dept_no = 'd011',
dept_name = 'Quality Control';
select * from departments_dup;
ROLLBACK;
The table should get back to its previous state but it doesn't seem to be working.
Disable autocommit (SET autocommit = 0;) before running the statements, so that COMMIT and ROLLBACK control the transaction explicitly. It worked perfectly for me.
set autocommit = 0;
INSERT INTO departments_dup
select * from departments;
COMMIT;
UPDATE departments_dup
SET
dept_no = 'd011',
dept_name = 'Quality Control';
select * from departments_dup;
ROLLBACK;
If your database tables were created using the MyISAM storage engine, which does not support transactions, you will need to convert them to InnoDB. One way is to back up your database, find and replace 'MyISAM' with 'InnoDB' in the backup file, and then restore your database.
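A less drastic alternative to editing a backup file is to check the storage engine and convert the table in place (a sketch; the table name is taken from the question):

```sql
-- Check which engine the table uses (MyISAM does not support transactions)
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name = 'departments_dup';

-- Convert the table to InnoDB so COMMIT/ROLLBACK take effect
ALTER TABLE departments_dup ENGINE = InnoDB;
```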

tSQLt, triggers and testing

I have tried to wrap my brain around this, but can't make it work, so I present a little testcase here, and hopefully someone can explain it to me:
First a little test database:
CREATE DATABASE test;
USE test;
CREATE TABLE testA (nr INT)
GO
CREATE TRIGGER triggerTestA
ON testA
FOR INSERT AS BEGIN
SET NOCOUNT ON;
IF EXISTS (SELECT nr FROM Inserted WHERE nr > 10)
RAISERROR('Too high number!', 16, 1);
END;
And here is a tSQLt test to check the behaviour:
ALTER PROCEDURE [mytests].[test1] AS
BEGIN
    EXEC tSQLt.FakeTable @TableName = N'testA';
    EXEC tSQLt.ApplyTrigger
        @TableName = N'testA',
        @TriggerName = 'triggerTestA';
    EXEC tSQLt.ExpectException;
    INSERT INTO dbo.testA VALUES (12);
END;
This test runs OK - but the trigger doesn't do what I want, which is to prevent the user from entering values > 10. This version of the trigger does what I want:
CREATE TRIGGER triggerTestA
ON testA FOR INSERT AS BEGIN
SET NOCOUNT ON;
BEGIN TRANSACTION;
BEGIN TRY
IF EXISTS (SELECT nr FROM Inserted WHERE nr > 10)
RAISERROR('Too high number!', 16, 1);
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
THROW;
END CATCH;
END;
But now the test fails, stating (a) that there is an error (which was expected!) and (b) that there is no BEGIN TRANSACTION to match the ROLLBACK TRANSACTION. I guess this last error comes from tSQLt's surrounding transaction, and my trigger somehow interferes with it, but it is certainly not what I expect.
Could someone explain, and maybe help me do it right?
tSQLt currently runs each test inside its own transaction and reacts, as you have seen, unwelcomingly when you fiddle with that transaction.
So, to make this test work, you need to skip the rollback within the test but not outside.
I suggest this approach:
Remove all transaction handling statements from the trigger. You don't need to begin a transaction anyway as triggers are always executed inside of one.
If you find a violating row, call a procedure that does the rollback and the raiserror
Spy that procedure for your test
To test that procedure itself, you could use tSQLt.NewConnection
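A sketch of that approach (the procedure name dbo.RejectTooHighNumber is made up for illustration; in the test you would spy it with tSQLt.SpyProcedure so that it does nothing and tSQLt's transaction is left alone):

```sql
-- Hypothetical helper procedure: performs the rollback and raises the error.
CREATE PROCEDURE dbo.RejectTooHighNumber AS
BEGIN
    ROLLBACK TRANSACTION;  -- undoes the transaction the trigger runs in
    RAISERROR('Too high number!', 16, 1);
END;
GO
-- The trigger itself contains no transaction handling; it only delegates.
CREATE TRIGGER triggerTestA
ON testA FOR INSERT AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT nr FROM Inserted WHERE nr > 10)
        EXEC dbo.RejectTooHighNumber;
END;
```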