Db2 error : SQL0901N, SQLSTATE=58004 - db2

Can I use ATOMIC in the parent procedure as well as in the procedure that the main procedure calls?
My procedure compiles perfectly, but sometimes when I execute it I receive the following error:
DB2 Database Error: ERROR [58004] [IBM][DB2/NT64] SQL0901N The SQL statement failed because of a non-severe system error. Subsequent SQL statements can be processed. (Reason "Sdir len bad: 1542!=1520+14".) SQLSTATE=58004
Surprisingly, when I commented out the "ATOMIC" keyword in the main procedure and ran it again, it ran perfectly. But when I ran it once more after uncommenting it, it still did not give any errors and ran perfectly.
So the error is not something I receive every time. Could someone please let me know what the issue could be and what needs to be done to resolve it? Googling did not turn up any leads on this.
Thanks,
Harveer

Found the following statement from an IBM employee on DeveloperWorks. Not sure if this helps.
In running a rebind of all packages, I get an error:
"SQL0901N The SQL statement failed because of a non-severe system error. Subsequent SQL statements can be processed. (Reason "Sdir len bad: 1171!=1160+9".) SQLSTATE=58004"
"SQLSTATE 58004: A system error (that does not necessarily preclude the successful execution of subsequent SQL statements) occurred."
How do we identify which stored procedure or function is creating this error?
SQL0901 means: call IBM. There is nothing you can do about this (only work around it, possibly).

Related

ERROR: cannot execute CREATE TABLE in a read-only transaction - Jasper reporting

I'm facing this annoying problem in Jasper. I have created a report based on a PostgreSQL function. When I view the preview, I do not have any problem with the results. However, when I publish the report and try to execute it, I get this error:
org.postgresql.util.PSQLException: ERROR: cannot execute CREATE TABLE in a read-only transaction
I've checked on the internet for a possible solution, so far this is the only thing that I have found with a similar problem:
https://community.jaspersoft.com/questions/814793/report-execution-fails-due-read-only-transaction-mode
However, adding the property to the URL does not work, or maybe I'm just not writing it the right way:
jdbc:postgresql://server:5432/data_base?defaultReadOnly="false"
In Jasper, what else can I do? I can only query the function, and requesting any change to it is a HUGE bureaucratic issue.
Jasper Studio 6.3.0
According to the documentation, the JDBC connection parameter would be readOnly=false.
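For example, reusing the host and database name from the question's URL, the documented parameter would be appended without quotes (a sketch based on the parameter name above, not a tested fix):
jdbc:postgresql://server:5432/data_base?readOnly=false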
Have you verified that you are not connecting to a streaming replication standby server?

Using JDBCBatchItemWriter for calling an Oracle Procedure gets EmptyResultDataAccessException

I am using Java-based configuration for my Spring Batch job. I am calling a stored procedure: writer.setSql("call proc (:_name)");
The data is getting inserted through the procedure. However, I am getting an EmptyResultDataAccessException.
Thanks
Note: I am skipping "Exception.class" in my step.
The issue is due to the update-count assertion in the JdbcBatchItemWriter. The procedure does not return the number of rows affected the way a SQL statement does, and the Java code throws the exception if the count of updates is 0. The solution to the problem stated above is to set assertUpdates to false: writer.setAssertUpdates(false).
However, the question still remains on the best writer to use to execute DB objects like procedure or functions and how transactions should be managed.
Refer to the source code at the URL below:
http://grepcode.com/file/repo1.maven.org/maven2/org.springframework.batch/spring-batch-infrastructure/3.0.0.RELEASE/org/springframework/batch/item/database/JdbcBatchItemWriter.java
I use Java configuration. Setting the writer to skip the 'assert updates' check does the job:
writer.setAssertUpdates(false);
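For reference, here is a minimal sketch of such a Java-config writer bean; the Person item type, its name property, and the bean name are illustrative assumptions, not taken from the original question:

import javax.sql.DataSource;
import org.springframework.batch.item.database.BeanPropertyItemSqlParameterSourceProvider;
import org.springframework.batch.item.database.JdbcBatchItemWriter;
import org.springframework.context.annotation.Bean;

@Bean
public JdbcBatchItemWriter<Person> procedureWriter(DataSource dataSource) {
    JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriter<>();
    writer.setDataSource(dataSource);
    // Named parameters in the SQL are bound from the item's bean properties.
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    writer.setSql("call proc (:name)");
    // The procedure reports no update count, so disable the rows-affected assertion.
    writer.setAssertUpdates(false);
    return writer;
}

With assertUpdates disabled the writer still runs inside the step's chunk transaction, so a failed chunk is rolled back as usual.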

Npgsql syntax error at or near "DISCARD"

I'm running a standard query against Redshift, and every other time I run it, I get:
[ConciergeClientException: We encountered a problem fulfilling your request: 42601: syntax error at or near "DISCARD"]
I'm opening and closing the connection properly, and the query looks fine. I've queried Redshift's STL_QUERY and the statement looks fine. I turned on logging, and I can't see where this DISCARD command is being sent.
Yet every other query gives me this error.
Thoughts?
Assuming you're using Npgsql 3.2.0, this looks like a duplicate of https://github.com/npgsql/npgsql/issues/1426. In a nutshell, Npgsql's pooling was changed in 3.2.0 in a way which is incompatible with Redshift.
As a workaround, specify No Reset On Close in your connection string. This will be fixed for 3.2.1.
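For example, with a typical Npgsql connection string (the server, database, and credential values below are placeholders), the workaround keyword would be added like this:
Host=my-redshift-cluster.example.com;Port=5439;Database=mydb;Username=myuser;Password=mypassword;No Reset On Close=true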

ActiveRecord find_or_initialize_by race conditions

I have a scenario where 2 db connections might both run Model.find_or_initialize_by(params) and raise an error: PG::UniqueViolation: ERROR: duplicate key value violates unique constraint
I'd like to update my code so it could gracefully recover from it. Something like:
record = nil
begin
record = Model.find_or_initialize_by(params)
rescue ActiveRecord::RecordNotUnique
record = Model.where(params).first
end
return record
The trouble is that there's not a nice/easy way to reproduce this on my local machine, so I'm not confident that my fix actually works.
So I thought I'd get a bit creative and try calling create 2 times (locally) in a row, which should then raise the PG::UniqueViolation error; then I could rescue from it and make sure everything is handled gracefully.
But I get this error: PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
I get this error even when I wrap everything in individual transaction blocks:
record = nil
Model.transaction do
record = Model.create(params)
end
begin
Model.transaction do
record = Model.create(params)
end
rescue ActiveRecord::RecordNotUnique
end
Model.transaction do
record = Model.where(params).first
end
return record
My questions:
What's the right way to gracefully handle the race condition I mentioned at the very beginning of this post?
How do I test this locally?
I imagine there's probably something simple that I'm missing here, but it's late and perhaps I'm not thinking too clearly.
I'm running postgres 9.3 and rails 4.
EDIT: Turns out that find_or_initialize_by should have been find_or_create_by, and the errors I was getting were from the actual save call that happened later on in execution. #VeryTiredWhenIWroteThis
Has this actually happened?
Model.find_or_initialize_by(params)
should never raise an ActiveRecord::RecordNotUnique error, as it is not saving anything to the database. It just instantiates a new ActiveRecord object.
However, in the second snippet you are creating records.
create (without a bang) does not throw exceptions caused by validations, but ActiveRecord::RecordNotUnique is always thrown in case of a duplicate, by both create and create!.
If you're creating records you don't need transactions at all. Postgres, being ACID compliant, guarantees that only one of the two operations succeeds, and once it responds, its changes are durable (a single-statement query against Postgres is also a transaction). So your code above is almost fine if you replace find_or_initialize_by with find_or_create_by:
begin
record = Model.find_or_create_by(params)
rescue ActiveRecord::RecordNotUnique
record = Model.where(params).first
end
You can test whether the code behaves correctly by simply trying to create the same record twice in a row. However, this will not test that ActiveRecord::RecordNotUnique is actually thrown correctly on race conditions.
It's also not the responsibility of your app to test this, and testing it is not easy. You would have to start Rails in multithreaded mode on your machine, or test against a multi-process staging Rails instance. WEBrick, for example, handles only one request at a time. You can use the Puma application server; however, on MRI there is no true concurrency (GIL). Threads only yield the GIL on blocking IO. Because talking to Postgres is IO, I'd expect some concurrent requests, but to be 100% sure, the best testing scenario would be to deploy on Passenger with multiple workers and then use JMeter to run concurrent requests against the server.

continue insert when exception is raised in postgres

Hi,
I am trying to insert a batch of records at a time. When any of the records fails to insert, I need to trap that record, log it to my failed-record maintenance table, and then continue the insert. Kindly help on how to do this.
If you are using a Spring or EJB container, there is a simple trick that works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting.
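A minimal sketch of that Spring variant; the class name follows the answer above, while the table and column names are illustrative and a JdbcTemplate is assumed for the logging insert:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LogService {

    private final JdbcTemplate jdbcTemplate;

    public LogService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // REQUIRES_NEW suspends the caller's transaction and commits this insert
    // on its own, so the log row survives a rollback of the failing batch insert.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void logWarning(String message) {
        // failed_record_log stands in for the failed-record maintenance table.
        jdbcTemplate.update("INSERT INTO failed_record_log (message) VALUES (?)", message);
    }
}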
If you are not using such a container, you'll have to simulate it using API calls: open a separate connection for the logging, begin a transaction when you enter the method, and commit it before leaving.
When not using transactions for the insert, there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.