ERROR: current transaction is aborted, commands ignored until end of transaction block --- export data from Aqua Data Studio - PostgreSQL

I am trying to export one table from Aqua Data Studio into a CSV file. The table has approximately 4.4 million rows. When I try to use the export window function in Aqua Data Studio, I get the following error:
Error: ERROR: current transaction is aborted, commands ignored until end of transaction block
I don't understand what the problem is. I read a few articles about this error and found that it happens because of an error in a previous PostgreSQL command. I did not use any SQL commands for this export, so I don't know how to debug this. I am also unable to view the log files.

Use ROLLBACK to cancel the failed transaction. After that, you will be able to execute your current query.
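A minimal sketch of the sequence in a SQL editor (the SELECT is just a placeholder for whatever statement the export actually runs):

ROLLBACK;                 -- abort the failed transaction so new commands are accepted again
SELECT * FROM my_table;   -- placeholder: re-run the statement you actually want to execute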

You probably shouldn't be exporting millions of rows through a JDBC/ODBC connection, especially for Redshift.
For Redshift, please use the UNLOAD command documented here. You'll have to UNLOAD the data to S3 and download it from there.
For Postgres, use COPY TO as documented here.
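Hedged sketches of both approaches; the table names, the S3 bucket, and the IAM role ARN are placeholders you would need to replace:

-- Redshift: write the table out to S3, then download the resulting files from the bucket
UNLOAD ('SELECT * FROM my_schema.my_table')
TO 's3://my-bucket/exports/my_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-unload-role'
DELIMITER ',' ADDQUOTES ALLOWOVERWRITE;

-- Postgres: server-side export (the path is on the database server; superuser or pg_write_server_files is required)
COPY my_schema.my_table TO '/tmp/my_table.csv' WITH (FORMAT csv, HEADER);

-- Postgres alternative: client-side export with psql's \copy, which writes to your local machine instead
-- \copy my_schema.my_table TO 'my_table.csv' WITH (FORMAT csv, HEADER)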

Related

Postgres ALTER SYSTEM command fails when run via Hibernate

I want to make a change to the postgresql.conf settings at runtime. However, when I execute the SQL with "ALTER SYSTEM" via Hibernate, I get an error:
Transaction is marked for rollback only or has timed out
I think this has something to do with ALTER SYSTEM commands not being allowed to execute inside a transaction block, as per the documentation:
Only superusers can use ALTER SYSTEM. Also, since this command acts directly on the file system and cannot be rolled back, it is not allowed inside a transaction block or function.
I'm trying to understand whether it's possible to execute this type of command with Hibernate, and what I would need to do to make that work.
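For the server-side part, a minimal sketch of what PostgreSQL accepts (work_mem is just an example parameter); from Hibernate you would have to send this as a single statement on a connection in autocommit mode, outside any transaction:

-- succeeds when sent as its own autocommitted statement
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();              -- reload the configuration so the new value is picked up

-- fails: ALTER SYSTEM cannot run inside a transaction block
BEGIN;
ALTER SYSTEM SET work_mem = '64MB';   -- ERROR: ALTER SYSTEM cannot run inside a transaction block
ROLLBACK;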

ERROR: cannot execute SELECT in a read-only transaction when connecting to DB

When trying to connect to my Amazon PostgreSQL DB, I get the above error. With pgAdmin, I get "error saving properties".
I don't see why I would perform any write actions just by connecting to a server.
There are several reasons why you can get this error:
The PostgreSQL cluster is in recovery (or is a streaming replication standby). You can find out if that is the case by running
SELECT pg_is_in_recovery();
The parameter default_transaction_read_only is set to on. Diagnose with
SHOW default_transaction_read_only;
The current transaction has been started with
START TRANSACTION READ ONLY;
You can find out if that is the case using the undocumented parameter
SHOW transaction_read_only;
If you understand all that but still wonder why you are getting this error, even though you are not aware of attempting any data modifications, it means that the application you use to connect is trying to modify something (but pgAdmin shouldn't do that).
In that case, look into the log file to find out what statement causes the error.
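If the cause turns out to be default_transaction_read_only, a hedged sketch of how to clear it (mydb is a placeholder, and this assumes you are connected to the primary, not a standby):

SET default_transaction_read_only = off;                      -- for the current session only
ALTER DATABASE mydb SET default_transaction_read_only = off;  -- persistently, for new connections to mydb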
This was a bug which has now been fixed; the fix will be available in the next release.
https://redmine.postgresql.org/issues/3973
If you want to try then you can use Nightly build and check: https://www.postgresql.org/ftp/pgadmin/pgadmin4/snapshots/2019-02-17/

Import data fails in DB2

I'm using Data Studio to connect to a DB2 server. When I try to use the import utility in Data Studio, it succeeds with a warning, and the result shows that no records have been inserted into the database. The Import wizard generates the following SQL command:
CALL SYSPROC.ADMIN_CMD( 'IMPORT FROM "/home/xyz/backup/TRANSACTION" OF DEL MODIFIED BY coldel| delprioritychar INSERT INTO S.TRANSACTION' );
If I copy this command, paste it into a SQL script in DB2, and run it, it gives another error:
An I/O error (reason = "sqlofopn -2029060079") occurred while opening the input file.. SQLCODE=-3030, SQLSTATE=
If I use the db2 shell to execute the IMPORT part of the command (without CALL SYSPROC.ADMIN_CMD), it succeeds without any issue. What is wrong here?
When you (or Data Studio) run SYSPROC.ADMIN_CMD (which is the default method Data Studio uses for import), the action happens on the Db2 server under the account of the Db2 instance owner (for Db2-LUW).
That account (for example db2inst1) requires read access to the specified file. In your case, the Db2 instance owner did not have access to the file (and/or the path containing the file), so the exception was thrown.
You may see additional detail in the Db2-server diagnostic file (db2diag.log) for the failed action, depending on the diagnostics level that is active on the Db2-server.
ADMIN_CMD expects the input file to be on the server, because it (like any other stored procedure) runs on the server; it has no access to your local file system.
Commands you run in the Db2 command line processor execute on the client and therefore can access the file locally.
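In other words, the same wizard-generated call works once the file lives on the Db2 server and is readable by the instance owner; a sketch with a hypothetical server-side path:

-- /db2data/backup/TRANSACTION is a placeholder path on the Db2 server, readable by the instance owner (e.g. db2inst1)
CALL SYSPROC.ADMIN_CMD( 'IMPORT FROM "/db2data/backup/TRANSACTION" OF DEL MODIFIED BY coldel| delprioritychar INSERT INTO S.TRANSACTION' );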

using executable in Liquibase changesets

I am using the executeCommand tag in my Liquibase changesets, and this in turn is configured to run the SQL files in the Oracle Instant Client SQL*Plus.
When I run a Liquibase update on my changelog XML, everything works fine and the Liquibase update is successful. I can also see the changes in the table.
But when I try to make the update process fail by putting a syntax error in the SQL file referenced by the changeset, Liquibase still reports the update as successful. I expected it to throw SQL errors; the same SQL, when run separately in Toad, throws a syntax error. What should I do to get the error surfaced?
Datical has created a custom Liquibase change tag that executes SQL using the sqlplus command-line client. It was surprisingly much more complicated than you might think.
Some of the issues we had to deal with:
we had to do things to ensure that the SQL files always had certain statements in place, and never had certain other statements. This might include things like setting the schema, ensuring that the only spool commands were ones we knew about, that the script had an EXIT command, and ensuring that whenever there was a SQL error, the exit code was returned.
The sqlplus executable does not return an exit code (i.e. a non-zero exit code from the native process) in all cases, and instead will write errors to an error table in the database. The table where sqlplus writes errors is called sperrorlog, and this may be what you need to look into.
I can't really go into all the details, but just know that what you are attempting to do is neither simple nor straightforward.
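As a rough illustration of the kind of statements involved (the script name is hypothetical, and SET ERRORLOGGING ON requires SQL*Plus 11.1 or later), a wrapper script might look like:

WHENEVER SQLERROR EXIT SQL.SQLCODE   -- make sqlplus exit with a non-zero code on a SQL error
SET ERRORLOGGING ON                  -- also record failures in the SPERRORLOG table
@my_changeset.sql                    -- hypothetical script referenced by the changeset
EXIT

-- afterwards, inspect what was captured:
SELECT timestamp, script, statement, message FROM sperrorlog ORDER BY timestamp;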

Enterprise library semantic logging block. SQLDatabase sink. Out of process

I am using Enterprise library semantic logging block (out of process) and using SQL Database sink to dump all the message. After putting everything in place and doing a test run, I am getting the following error - could not find stored procedure 'dbo.WriteTraces'.
Has anybody faced a similar issue? Please suggest.
The out-of-process semantic logging assembly comes with some PowerShell scripts and .sql files. You have to edit these (to change the database name) and run them; this generates the stored procedures and the associated table for you.
I encountered this same error, but it was because we were trying to use a schema other than dbo for our logging database. Once we changed it back to dbo, that resolved the problem. We were using the out-of-process SemanticLogging-svc.exe, which, from what I can tell, assumes that dbo is the schema name.
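A quick T-SQL check (assuming SQL Server) to confirm that the stored procedure the sink looks for exists and lives in the dbo schema:

SELECT SCHEMA_NAME(schema_id) AS schema_name, name
FROM sys.procedures
WHERE name = 'WriteTraces';   -- should return one row with schema_name = 'dbo'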