sqlworkbench-j not closing transaction connection - amazon-redshift

I am using SQL Workbench/J to query Redshift data. I am running into locked tables whenever I query a table, and it happens even for simple SELECT statements. I know this is because Workbench implicitly issues a BEGIN before every statement to guard against any changes to the data, so after every query I need to write an END TRANSACTION.
Is there any option in SQL Workbench/J to disable the BEGIN statement, or to add the END TRANSACTION statement automatically?

When you set up the Redshift connection profile, tick the "Autocommit" option.
See here for more detailed instructions, especially step 10:
https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-using-workbench.html
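If you prefer to keep autocommit turned off, you can also close the implicit transaction yourself after each statement. A minimal sketch, assuming a hypothetical table named sales:
SELECT COUNT(*) FROM sales;  -- Workbench's implicit BEGIN has opened a transaction
END;                         -- END (a synonym for COMMIT in Redshift) closes it and releases the locks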

Related

What is the difference between executing a macro and executing the instructions contained within it individually in Teradata?

It can sometimes be observed that the individual instructions within a macro execute much faster than the macro as a whole in Teradata. Is this just an illusion, or is there some logic behind it? I am a newbie to Teradata and would appreciate it if someone could explain the reasons from the basics.
A Macro is exactly the same as a MultiStatement Request (MSR).
When you EXPLAIN EXEC mymacro you will notice that all the statements within the macro are done followed by a final END TRANSACTION step.
Now if you do something like a DELETE ALL as a standalone transaction it's a fast-path delete, a kind of TRUNCATE, because the optimizer knows it's the last modification of that table and it's committed.
Then you might have an INSERT SELECT into that table, which is also fast-path, because the table was empty at the beginning of the transaction.
Now you put both in a Macro: The DELETE is not the last modification and the INSERT SELECT is not into an empty table, so both statements will be Transient Journaled. Of course this is much slower...
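As a rough illustration, with hypothetical tables tgt and src:
-- As standalone requests, each statement is its own transaction, so both get the fast path
DELETE FROM tgt ALL;
INSERT INTO tgt SELECT * FROM src;
-- Wrapped in a macro, both statements share one transaction,
-- so neither qualifies for the fast path and both are transient-journaled
CREATE MACRO reload_tgt AS (
  DELETE FROM tgt ALL;
  INSERT INTO tgt SELECT * FROM src;
);
EXEC reload_tgt;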

Recursive triggers - one trigger firing another (update statements)

I created 5 triggers in my small (2-table) database.
After I added the last one (to change INVPOS.INVSYMBOL after INVOICE.SYMBOL has been updated), these triggers started activating each other and I got a
Too many concurrent executions of the same request.
error.
Could you please look at the triggers I created and help me out?
What can I do to avoid these problems in future? Should I merge a few triggers into one?
One solution could be to check whether the interesting field(s) have changed and only run the trigger's action if really necessary (i.e. the data has changed), for example:
CREATE TRIGGER Foo FOR T
ACTIVE AFTER UPDATE POSITION 0  -- event clause assumed; adjust to the table/event in question
AS
BEGIN
  -- only execute the update statement when Fld has actually changed
  IF (NEW.Fld IS DISTINCT FROM OLD.Fld) THEN
  BEGIN
    UPDATE ...
  END
END
Another option could be to check whether the trigger has already done its work in this transaction, for example:
CREATE TRIGGER Foo FOR T
ACTIVE AFTER UPDATE POSITION 0  -- event clause assumed; adjust as needed
AS
DECLARE trgrDone VARCHAR(255);
BEGIN
  trgrDone = RDB$GET_CONTEXT('USER_TRANSACTION', 'Foo');
  IF (trgrDone IS NULL) THEN
  BEGIN
    -- the trigger hasn't been executed yet in this transaction:
    -- register the execution
    RDB$SET_CONTEXT('USER_TRANSACTION', 'Foo', 1);
    -- do the work which might cause re-entry
    UPDATE ...
  END
END
You should avoid circular references between triggers.
In general, triggers are not suitable for complex business logic; they work well for simple "if-then" business rules.
For the case you described, you would do better to implement a stored procedure in which you can prepare the data for all tables (perform data checks, calculate the necessary values, etc.) and then insert it. That leads to straightforward, fast and easy-to-maintain code.
Also, use a CHECK constraint for things like "prevent inserting 0 into AMOUNT and PRICENET", and computed fields for tasks like "calculate NETVAL".
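For example, a sketch using the column names from the question (the exact definitions depend on your schema):
-- reject zero amounts/prices instead of fixing them in a trigger
ALTER TABLE INVPOS
  ADD CONSTRAINT CHK_INVPOS_POSITIVE CHECK (AMOUNT > 0 AND PRICENET > 0);
-- let the engine derive NETVAL instead of maintaining it with a trigger
ALTER TABLE INVPOS
  ADD NETVAL COMPUTED BY (AMOUNT * PRICENET);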

Rollback DML statement in pgAdmin

In pgAdmin, if I execute an INSERT query, I don't see any way to either commit or roll back the statement I just ran (I know it auto-commits). I'm used to Oracle and SQL Developer, where I could run a statement and then roll back the last statement I ran at the press of a button. How would I achieve the same thing here?
Use transaction in the SQL window:
BEGIN;
DROP TABLE foo;
ROLLBACK; -- or COMMIT;
-- edit --
Another example:
BEGIN;
INSERT INTO foo(bar) VALUES ('baz') RETURNING bar; -- the results will be returned
SELECT * FROM other_table; -- some more result
UPDATE other_table SET var = 'bla' WHERE id = 1 RETURNING *; -- the results will be returned
-- and when you're done with all statements and have seen the results:
ROLLBACK; -- or COMMIT
I also dearly prefer the Oracle way of putting everything in a transaction automatically, to help avoid catastrophic manual mistakes.
Having auto-commit enabled by default in an enterprise product is, in my opinion, an utterly insane design choice.
Anyway, when working with Postgres you always need to remember to put
BEGIN;
at the start of manual work or SQL scripts.
As a practical habit: where you would type COMMIT; in Oracle, I use the line
END; BEGIN;
in Postgres, which does the same thing, i.e. it commits the current transaction and immediately starts a new one.
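For example (table and column names are made up):
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- where you would type COMMIT; in Oracle:
END; BEGIN;
-- ...continue working in the next transaction, then END; or ROLLBACK; when done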
When using JDBC or similar, always create connections through a helper method, e.g. getPGConnection(), that includes:
...
Connection dbConn = DriverManager.getConnection(dbUrl, dbUser, dbPassword);
dbConn.setAutoCommit(false);
...
to make sure every connection has auto-commit disabled.
If you are using pgAdmin 4, you can turn auto commit and/or auto rollback on and off.
Go to the File drop-down menu and select the Preferences option. Under the SQL Editor tab -> Options you will find the settings to turn auto commit/rollback on and off.
(Screenshot: the auto commit/rollback options)

DB2 deadlock timeout Sqlstate: 40001, reason code 68 due to update statements called from servlet using SQL

I am calling UPDATE statements one after the other from a servlet to DB2. I am getting error SQLSTATE 40001 with reason code 68, which I found is due to a deadlock timeout.
How can I resolve this issue?
Can it be resolved by setting a query timeout?
If yes, how do I use it with the update statements in the servlet, and where?
Reason code 68 already tells you this is due to a lock timeout (deadlock is reason code 2). It could be due to other users running queries at the same time that touch the same data you are accessing, or to your own multiple updates.
Begin by running db2pd -db locktest -locks show detail from a db2 command line to see where the locks are. You'll then need to run something like:
select tabschema, tabname, tableid, tbspaceid
from syscat.tables where tbspaceid = # and tableid = #
filling in the # symbols with the ID number you get from the db2pd command output.
Once you see where the locks are, here are some tips:
Deadlock frequency can sometimes be reduced by ensuring that all applications access their common data in the same order, meaning, for example, that they access (and therefore lock) rows in Table A, followed by Table B, followed by Table C, and so on.
taken from: http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.admin.trb.doc/doc/t0055074.html
recommended reading: http://www.ibm.com/developerworks/data/library/techarticle/dm-0511bond/index.html
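As a sketch of what "same order" means in practice (table names are hypothetical): if every transaction that touches both tables updates TABLE_A before TABLE_B, none of them can be holding a lock on TABLE_B while waiting for TABLE_A.
-- every application follows the same order: TABLE_A first, then TABLE_B
UPDATE table_a SET status = 'DONE' WHERE id = 42;
UPDATE table_b SET status = 'DONE' WHERE ref_id = 42;
COMMIT;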
Addendum: if your servlet or another guilty application is using SELECT statements found to be involved in the deadlock, you can try appending WITH UR to those SELECT statements, provided the accuracy of newly updated (or inserted) data isn't important.
For me, the solution was adding FOR READ ONLY WITH UR at the end of all my SELECT statements. (Apparently my SELECT statements were returning so much data that they locked the tables long enough to interfere with other SQL statements.)
See https://www.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/sqlref/src/tpc/db2z_sql_isolationclause.html
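For example, a hypothetical reporting query rewritten so it reads uncommitted data and does not wait on (or hold) row locks:
SELECT order_id, status
FROM myschema.orders
WHERE order_date = CURRENT DATE
FOR READ ONLY
WITH UR;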

Multiple prepared statements disrupt a transaction using DBD::Sybase

In my Perl script, I use DBD::Sybase (via the DBI module) to connect to SQL Server 2008. The base program below runs without problems:
use DBI;
# assign values to $host, $usr, $pwd
my $dbh = DBI->connect("dbi:Sybase:$host", $usr, $pwd);
$dbh->do("BEGIN TRAN tr1");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
$update->execute(100, 'apple');
$dbh->do("END TRAN tr1");
However, if I insert one more prepare statement right before the existing one, so that the program looks like this:
...
my $insert = $dbh->prepare("INSERT INTO mytable (name, qty) VALUES (?, ?)");
my $update = $dbh->prepare("UPDATE mytable SET qty = ? where name = ?");
...
and the rest stays exactly the same, then when I run it I get:
DBD::Sybase::db do failed: Server message number=3902 severity=16 state=1 line=1 server=xxx text=The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
So it looks like the additional prepare statement somehow disrupted the entire transaction flow. I had been running the same code via the DBD::ODBC driver against SQL Server 2005 with no problems. (But my firm upgraded to 2008 and I had to use DBD::Sybase to get around some other issues.)
Any help or suggestion on how to resolve this issue would be much appreciated. In particular, using a different database handle for the other prepare is not a desirable solution, since that would defeat the purpose of having them in a single transaction.
UPDATE: It turns out that if I execute the additional insert at least once, the program runs fine again. So it looks as though every prepared statement needs to be executed under Sybase. But that isn't a requirement with ODBC and isn't a reasonable requirement in general. Any way to get around it?
You are learning Perl and Sybase basics and drawing several incorrect conclusions.
Forget about what it does under ODBC for a moment. ODBC most probably has AUTOCOMMIT turned on, and thus you have no transaction control whatsoever. (Why anyone would use ODBC when the DBD:: supports DB-Lib and CT-Lib is beyond me, but that's a separate story.)
Re: "So looks like every prepared statement needs to be run under Sybase."
Rawheiser is correct. What exactly do you expect to achieve by preparing a batch but performing a do instead? Where else would you expect to execute the batch prepared under Sybase, other than under Sybase?
do vs. prepare/execute are quite different. prepare/execute for Sybase works just fine in millions of programs; you just have to learn what it does, not what you think it should do. prepare lets you load a batch, a block of commands terminated by GO in the normal Sybase sense. execute runs the prepared batch (supplies the GO and sends the batch to the server) and captures whatever is returned (according to whatever arrays/variables you have set).
do is immediate, a single command with no prepare: a prepare+execute combined.
Performing only single-statement do's, and only dynamic SQL, simply because that's all that you could get to work, is very limiting and quite unnecessary.
You currently have:
Prepare:
UPDATE
Execute (100)
ExecuteImmediate(Do):
COMMIT TRAN
So of course, there is no BEGIN TRAN. (The first "do" has already executed, so the BEGIN TRAN is gone.)
I think what you want (intended originally) is this. Forget the 'do':
Prepare:
BEGIN TRAN
UPDATE
COMMIT TRAN
Execute (100)
Then change it to:
BEGIN TRAN
INSERT
UPDATE
COMMIT TRAN
Execute (100)
Your $update and $insert will confuse you (you're executing a multi-statement batch, right? Not an isolated single command in the middle of a prepared batch). If you get rid of them and think in terms of $execute [whatever you have prepared in the batch], it might help you to understand the problem better.
Do not form conclusions until you have all the above working as intended.
And read up on BEGIN/COMMIT TRAN.
Last, what exactly is an "END TRAN"? I do not think the code block you have posted is real.
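For reference, the single batch that would end up being prepared in that approach looks something like this in plain T-SQL (table name and values are placeholders):
BEGIN TRAN
INSERT INTO mytable (name, qty) VALUES ('apple', 100)
UPDATE mytable SET qty = 100 WHERE name = 'apple'
COMMIT TRAN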
Don't create SQL dynamically; it is dangerous (SQL injection).
You should be able to prepare multiple inserts/updates, and your link to the DBI documentation does not say you cannot; it says some drivers may not be able to tell you much about a statement which is only prepared.
I'd post a failing example with error to the dbi-users list for comment as the DBD::Sybase maintainer hangs out there (see dbi.perl.org).
It turns out that DBI's prepare method is not entirely portable across database drivers, as noted here. For the Sybase driver, it is most likely that prepare is not working as intended. One way to tell is that after running prepare, the variable $insert->{NUM_OF_FIELDS} is undefined.
To get around the problem, do one of the following:
1) do not prepare anything; just construct the statement dynamically as a text string and run $dbh->do($stmt), or
2) call finish on all outstanding statement handles (under that database handle) before running COMMIT TRAN. I personally much prefer this way.