Inserting data manually into a table in AWS Redshift from SQL Workbench/J

I am able to connect to Redshift from SQL Workbench/J and I can create a table, but when I try to insert values into the table it throws the error below.
Since I am using a temp schema and the connection shows the schema as public, is this still an issue even though my insert statement is
Insert into tempschema.temp_staging values

Postgres (and thus Redshift, which is based on an ancient version of Postgres) has a very strict transaction concept: either all statements succeed or none do.
As soon as one statement in your transaction fails, the whole transaction must be rolled back.
So all you need to do is issue a ROLLBACK command, and you can continue. There is no need to restart SQL Workbench/J.
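A minimal sketch of that recovery inside the same session (the inserted values are made up for illustration):
-- the previous statement in this transaction failed
ROLLBACK;
-- the session is usable again; retry the statement
INSERT INTO tempschema.temp_staging VALUES (1, 'example');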
If you don't want to do that for every statement that throws an error, just enable autocommit in the connection profile:
7.3.5. Autocommit
This check box enables/disables the "auto commit" property for the connection. If autocommit is enabled, then each SQL statement is automatically committed on the DBMS. If this is disabled, any DML statement (UPDATE, INSERT, DELETE, ...) has to be committed in order to make the change permanent. Some DBMS require a commit for DDL statements (CREATE TABLE, ...) as well. Please refer to the documentation of your DBMS.
Link to manual
I am part of SQL Workbench/J support

It's just a temporarily acquired lock.
Disconnect the workbench from the data source
Restart the workbench
Reconnect to your data source.
You'll be able to resume from there.

Related

Postgres cursors behaviour

I am trying to understand a PostgreSQL behaviour.
In my application I am using a Postgres (AWS Aurora, to be precise) database through ODBC.
I am not specifying WITH HOLD on the cursors I use, but I see in the logs that most of them are 'idle in transaction', although there is no transaction, just a SELECT. The statements look like:
RELEASE _EXEC_SVP_05A0C768;SAVEPOINT _EXEC_SVP_05A0C768;declare "SQL_CUR05A0E3F8" cursor with hold for select u_selskap,u_kodetyp,u_tekst from u_hrm.u_stilltyp where u_selskap=50 and u_kodetyp='T';fetch 100 in "SQL_CUR05A0E3F8"
In pgAdmin we can see how the statements are displayed.
The idling cursors seem to keep locks on the database, so subsequent queries cannot run.
I cannot understand why these cursors are in a transaction, or which part of the chain is introducing the WITH HOLD, since the default creation mode is WITHOUT HOLD (as documented here: https://www.postgresql.org/docs/11/sql-declare.html).
Could this be a PostgreSQL ODBC driver issue?
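For context, a sketch of the difference between the two declaration modes (table taken from the log above; cursor names are made up):
-- Default (WITHOUT HOLD): the cursor only exists inside an explicit transaction
BEGIN;
DECLARE cur_plain CURSOR FOR
  SELECT u_selskap, u_kodetyp, u_tekst FROM u_hrm.u_stilltyp;
FETCH 100 FROM cur_plain;
COMMIT;  -- cur_plain is closed automatically here and its locks are released

-- WITH HOLD: the cursor survives COMMIT because its result set is materialized
BEGIN;
DECLARE cur_hold CURSOR WITH HOLD FOR
  SELECT u_selskap, u_kodetyp, u_tekst FROM u_hrm.u_stilltyp;
COMMIT;
FETCH 100 FROM cur_hold;  -- still works after the transaction has ended
CLOSE cur_hold;           -- must be closed explicitly (or at session end)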

Does dropping a database have to be done outside of any transaction?

From https://wiki.postgresql.org/wiki/Psycopg2_Tutorial
PostgreSQL cannot drop databases within a transaction; it is an all-or-nothing command. If you want to drop the database you would need to change the isolation level of the database. This is done using the following:
conn.set_isolation_level(0)
You would place the above immediately preceding the DROP DATABASE cursor execution.
Why "If you want to drop the database you would need to change the isolation level of the database"?
In particular, why do we need to change the isolation level to 0? (In psycopg2, 0 is psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT, i.e. autocommit mode, not READ_COMMITTED.)
From https://stackoverflow.com/a/51859484/156458
The operation of destroying a database is implemented in a way which prevents undoing it - therefore you cannot run it from inside a transaction, because transactions are always undoable. Also keep in mind that, unlike most other databases, PostgreSQL allows almost all DDL statements (obviously not DROP DATABASE) to be executed inside a transaction.
Actually, you cannot drop a database if anyone (including you) is currently connected to it - so it does not matter what your isolation level is; you still have to connect to another database (e.g. postgres).
"you can not run it from inside a transaction because transactions are always undoable". Then how can I drop a database not from inside a transaction?
I found my answer at https://stackoverflow.com/a/51880577/156458
I'm unfamiliar with psycopg2 so I can only provide steps to be performed.
Steps to be taken to perform DROP DATABASE from Python:
1. Connect to a different database, one which you don't want to drop.
2. Store the current isolation level in a variable.
3. Set the isolation level to 0.
4. Execute the DROP DATABASE query.
5. Set the isolation level back to the original (from step 2).
Steps to be taken to perform DROP DATABASE from psql:
1. Connect to a different database, one which you don't want to drop.
2. Execute the DROP DATABASE query.
Code in psql
\c second_db
DROP DATABASE first_db;
Remember, that there can be no live connections to the database you are trying to drop.
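If other sessions are still connected, a sketch of clearing them first (run from a session on a different database; it assumes you have permission to terminate those backends):
-- terminate any remaining sessions connected to first_db
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'first_db'
  AND pid <> pg_backend_pid();
DROP DATABASE first_db;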

Postgres returns errors on future transactions

I am currently migrating from MySQL to Postgres, using pgbouncer for my connection pool.
We select/insert/update/delete lots of data in Postgres, and it all comes from remote sources, so we try to make the data quality as good as possible before an insert, but sometimes bad data slips through.
This causes Postgres to report: current transaction is aborted, commands ignored until end of transaction block
This is fine, except that the connection through pgbouncer will report this error for every subsequent query. I get the same behaviour if I connect directly to Postgres instead of pgbouncer. I'd expect it to roll back whichever transaction caused the issue.
Is there a way to just roll back and continue working as normal? Everything I've read says to fix the query, but in this case that's not always possible.
You need to use the ROLLBACK command. This will undo everything since the last BEGIN TRANSACTION or START TRANSACTION. Note that transactions do not nest; if you've begun multiple transactions without committing, this will roll back the outermost transaction.
This will drop you into autocommit mode. You may want to issue a new BEGIN TRANSACTION command to open a new transaction.
You should also be able to ROLLBACK TO SAVEPOINT, if you have a savepoint from before the error.
(If at all possible, it is preferred to just fix the query, but depending on what you're doing, that may be prohibitively difficult.)
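A sketch of both recovery paths (the staging table and its values are hypothetical):
BEGIN;
INSERT INTO staging VALUES (1, 'good');
SAVEPOINT before_risky;
INSERT INTO staging VALUES (1, 'bad');  -- suppose this fails (e.g. duplicate key) and aborts the transaction
ROLLBACK TO SAVEPOINT before_risky;     -- clears the error state, keeps the earlier work
INSERT INTO staging VALUES (2, 'good'); -- the transaction continues normally
COMMIT;
-- without a savepoint, a plain ROLLBACK abandons the whole transaction instead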

Run ALTER DATABASE with SET READ_COMMITTED_SNAPSHOT ON

I am trying to run the SQL statement below:
ALTER DATABASE DBNAME
SET READ_COMMITTED_SNAPSHOT ON
However, when I run it, it never completes; I had to terminate it after an hour.
Is there any suggestion on how to run this without disconnecting all other users from the database?
Thanks
Completion of this command requires, for just an instant, being the only transaction open against the database. It seems to me that this almost requires putting the DB into single-user mode briefly. But maybe if you just leave the query (trying to) run overnight, at some point you'll get that magic instant.
There's a bit more on the topic here: http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
Edit: Books Online offers a bit more detail:
When you set ALLOW_SNAPSHOT_ISOLATION to a new state (from ON to OFF, or from OFF to ON), ALTER DATABASE does not return control to the caller until all existing transactions in the database are committed. If the database is already in the state specified in the ALTER DATABASE statement, control is returned to the caller immediately. If the ALTER DATABASE statement does not return quickly, use sys.dm_tran_active_snapshot_database_transactions to determine whether there are long-running transactions. If the ALTER DATABASE statement is canceled, the database remains in the state it was in when ALTER DATABASE was started. The sys.databases catalog view indicates the state of snapshot-isolation transactions in the database. If snapshot_isolation_state_desc = IN_TRANSITION_TO_ON, ALTER DATABASE ALLOW_SNAPSHOT_ISOLATION OFF will pause six seconds and retry the operation.
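If forcibly rolling back in-flight transactions is acceptable (note this does interrupt other users' open transactions, so it is not quite the zero-impact option the question asks for), T-SQL has a termination clause for exactly this situation; a minimal sketch:
ALTER DATABASE DBNAME
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- rolls back open transactions so the ALTER can proceed at once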

FDW seems to lock table on foreign server

I am trying to use a foreign table to link two PostgreSQL databases.
Everything is fine and I can retrieve all the data I want.
The only issue is that the foreign data wrapper seems to lock tables on the foreign server, which is very annoying when I unit test my code.
If I don't run any SELECT, I can initialize data and truncate both the tables on the local server and the tables on the remote server.
But once I execute one SELECT statement, the TRUNCATE command on the remote server seems to be stuck in a deadlock state.
Do you know how I can avoid this lock?
thanks
[edit]
I use this data wrapper to link the two PostgreSQL databases: http://interdbconnect.sourceforge.net/pgsql_fdw/pgsql_fdw-en.html
I use table1 of db1 as a foreign table in db2.
When I execute a SELECT query on foreign_table1 in db2, there is an AccessShareLock on table1 in db1.
The query is very simple: select * from foreign_table1
The lock is never released, so when I execute a TRUNCATE command at the end of my unit test there is a conflict, because the TRUNCATE adds an AccessExclusiveLock.
I don't know how to release the first AccessShareLock, but I would have thought the wrapper would do that automatically...
Hope this helps
AccessExclusiveLock and AccessShareLock aren't generally obtained explicitly. They're acquired automatically by certain normal statements. See the documentation on explicit locking - the lock table there lists which statements acquire which locks. It says:
ACCESS SHARE
Conflicts with the ACCESS EXCLUSIVE lock mode only.
The SELECT command acquires a lock of this mode on referenced tables. In general, any query that only reads a table and does not modify it will acquire this lock mode.
What this means is that your 1st transaction hasn't committed or rolled back (and thus released its locks) yet, so the 2nd can't TRUNCATE the table, because TRUNCATE requires ACCESS EXCLUSIVE, which conflicts with ACCESS SHARE.
Make sure the 1st transaction commits or rolls back.
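A sketch of the fix in the unit-test flow (names from the question; it assumes the FDW's remote transaction, and hence the remote lock, ends when the local transaction commits):
-- in db2 (local database): make sure the reading transaction actually ends
BEGIN;
SELECT * FROM foreign_table1;  -- acquires ACCESS SHARE on table1 in db1
COMMIT;                        -- the remote transaction ends and the lock is released

-- in db1 (remote database): the test cleanup no longer blocks
TRUNCATE table1;               -- needs ACCESS EXCLUSIVE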
BTW, is the "foreign" database actually the local database, i.e. are you using pgsql_fdw as an alternative to dblink to simulate autonomous transactions?