Using JdbcBatchItemWriter to call an Oracle procedure gets EmptyResultDataAccessException - spring-batch

I am using Java-based configuration for my Spring Batch job. I am calling a stored procedure with writer.setSql("call proc (:_name)");
The data is getting inserted through the procedure. However, I am getting an EmptyResultDataAccessException.
Thanks
Note: I am skipping "Exception.class" in my step.

The issue is due to the update assertion in JdbcBatchItemWriter. The procedure does not return the number of rows affected the way a plain SQL statement does, so the writer sees an update count of 0 and throws the exception. The solution to the problem stated above is to disable the assertion with writer.setAssertUpdates(false).
However, the question still remains on the best writer to use to execute DB objects like procedure or functions and how transactions should be managed.
Refer to the source code from the url below:
http://grepcode.com/file/repo1.maven.org/maven2/org.springframework.batch/spring-batch-infrastructure/3.0.0.RELEASE/org/springframework/batch/item/database/JdbcBatchItemWriter.java

I use Java configuration. Setting the writer to skip the update assertion does the job:
writer.setAssertUpdates(false);
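
For reference, a minimal Java-config sketch of such a writer (the Person item type, procedure name and parameter name are placeholders, not taken from the original post):

@Bean
public JdbcBatchItemWriter<Person> procedureWriter(DataSource dataSource) {
    JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriter<>();
    writer.setDataSource(dataSource);
    // named parameters are resolved from the item's bean properties
    writer.setSql("call proc (:name)");
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    // a procedure call reports no update count, so disable the assertion that
    // otherwise throws EmptyResultDataAccessException when 0 rows are reported
    writer.setAssertUpdates(false);
    return writer;
}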


How to force a pipeline's status to Failed

I'm using Copy Data.
When there is some data error, I export the bad rows to a blob.
But in this case, the pipeline's status is still Succeeded. I want to set it to Failed instead. Is that possible?
When there is some data error.
It depends on what error you mentioned here.
1. If you mean a common incompatibility or mismatch error, ADF has a built-in feature named Fault tolerance in the Copy Activity, which supports the 3 scenarios below:
Incompatibility between the source data type and the sink native type.
Mismatch in the number of columns between the source and the sink.
Primary key violation when writing to SQL Server/Azure SQL Database/Azure Cosmos DB.
If you configure to log the incompatible rows, you can find the log file at this path: https://[your-blob-account].blob.core.windows.net/[path-if-configured]/[copy-activity-run-id]/[auto-generated-GUID].csv.
If you want to abort the job as soon as any error occurs, you could set it as below:
Please see this case: Fault tolerance and log the incompatible rows in Azure Blob storage
2. If you are talking about your own logic for the data error, maybe some business logic, I'm afraid that ADF can't detect that for you, though I think it's also a common requirement. However, you could follow this case (How to control data failures in Azure Data Factory Pipelines?) as a workaround. The main idea is to use a Custom Activity to divert the bad rows before the Copy Activity executes. In the Custom Activity, you could upload the bad rows into Azure Blob Storage with the .NET SDK as you want.
Update:
Since you want to log all incompatible rows and force the job to fail at the same time, I'm afraid that cannot be implemented in the Copy Activity directly.
However, I came up with an idea: you could use an If Condition activity after the Copy Activity to judge whether the output contains rowsSkipped. If so, output False; then you will know some data was skipped and you can check it in the blob storage.
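As a rough sketch of that idea (the activity name 'Copy data' is an assumption, and rowsSkipped only appears in the Copy Activity output when fault tolerance actually skipped rows, so verify the exact shape against your own run output), the If Condition expression could look something like:
@contains(string(activity('Copy data').output), 'rowsSkipped')
When it evaluates to true, you can branch to whatever handling you want.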

Spring batch instance id duplicate key error, trying to start from #1

I am copying Java code (using Spring Boot / Spring Batch) and the database from the dev server to my local desktop and running it, and I get an error.
It works fine on the dev server. Locally, Spring Batch resets the job instance id to 1 and causes a primary key error. Is there any option in Spring Batch so that it starts with the next instance id instead of 1? Please let me know.
I referred to the Stack Overflow link below; it seems related but was posted a few years back and its reference links no longer work.
Duplicate Spring Batch Job Instance
@Configuration
@EnableBatchProcessing
public class Jobclass {
    // Rest of the code, with the Job bean and steps, which works fine on the Dev server
}
Error:
com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY
constraint 'PK__BATCH_JO__4848154AFB5435C7'. Cannot insert duplicate key
in object 'dbo.BATCH_JOB_INSTANCE'. The duplicate key value is (5).
I've had the same thing happen to me when moving an anonymized production database to another system. It turns out that the anonymization tool in question (PostgreSQL Anonymizer) has a bug which strips the commands that set the next value for the exported sequences, so that was the root cause.
This would also cause the ID reported in the stacktrace to be incremented by 1 with every attempt, since the sequence was erroneously starting at 1 while a lot of previous executions were already stored in Spring Batch's tables.
When I sidestepped the issue by setting the next value myself, the problem vanished. In my case, this amounted to:
SELECT pg_catalog.setval('public.batch_job_execution_seq', 6482, true);
SELECT pg_catalog.setval('public.batch_job_seq', 6482, true);
SELECT pg_catalog.setval('public.batch_step_execution_seq', 6482, true);
Is there any option in spring batch so that it starts with next instance id
To answer your question, the "option" you are looking for is the RunIdIncrementer. It will increment a job parameter called "run.id" each time so you will have a new instance on each run.
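For illustration, a minimal sketch of wiring it into a Java-config job definition (the job and step names are made up):

@Bean
public Job myJob(JobBuilderFactory jobs, Step myStep) {
    return jobs.get("myJob")
            // adds an incrementing "run.id" job parameter, so every launch gets a new JobInstance
            .incrementer(new RunIdIncrementer())
            .start(myStep)
            .build();
}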
However, this is not how I would fix the issue (see my comment). You need to check why this duplicate key exception is happening and fix it. Check if you are launching the job with the same parameters, resulting in the same instance (and even if that happens, you should not get such an exception if the transaction isolation level of your job repository is correctly configured; I would expect a JobInstanceAlreadyCompleteException if the last execution succeeded, or a JobExecutionAlreadyRunningException if the last execution failed and another one is currently running).

spring batch passing param from ItemProcessor to next ItemReader sql

I have the following requirement: I am generating a unique id in my ItemProcessor and writing it to the database using JdbcItemWriter.
I want to pass this unique id as a query parameter in the next JdbcItemReader, so that I can select all the records from the database based on this unique id.
Currently I am using max(uniqueid) from the database. I have tried using {jobParameters['unqueid']} but it didn't work.
Please let me know how to pass a value from the ItemProcessor to the database ItemReader.
I think using step execution context might work for you here. There is the option for setting some transient data on the step execution context and having that be available to other components in the same step.
There is a previous answer here that elaborates a bit more on this and a quick google search on "spring batch step execution context" also provides quite a few q/a on the subject.
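
To make that concrete, here is a rough sketch under assumed names (MyItem, the "uniqueid" key, table and bean names are all illustrative): the processor puts the generated id into the step's ExecutionContext, an ExecutionContextPromotionListener promotes it to the job's ExecutionContext when the step ends, and the step-scoped reader in the following step late-binds it into its query:

// First step's processor: generate the id and stash it in the step execution context
// (register it on the step so the @BeforeStep callback is invoked)
public class IdGeneratingProcessor implements ItemProcessor<MyItem, MyItem> {

    private StepExecution stepExecution;

    @BeforeStep
    public void saveStepExecution(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public MyItem process(MyItem item) {
        String uniqueId = UUID.randomUUID().toString();
        item.setUniqueId(uniqueId);
        stepExecution.getExecutionContext().putString("uniqueid", uniqueId);
        return item;
    }
}

// Listener that copies "uniqueid" from the step context to the job execution context
@Bean
public ExecutionContextPromotionListener promotionListener() {
    ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
    listener.setKeys(new String[] {"uniqueid"});
    return listener;
}

// Second step's reader: late-bind the promoted value into the query
@Bean
@StepScope
public JdbcCursorItemReader<MyItem> nextStepReader(DataSource dataSource,
        @Value("#{jobExecutionContext['uniqueid']}") String uniqueId) {
    JdbcCursorItemReader<MyItem> reader = new JdbcCursorItemReader<>();
    reader.setDataSource(dataSource);
    reader.setSql("select * from my_table where unique_id = ?");
    reader.setPreparedStatementSetter(ps -> ps.setString(1, uniqueId));
    reader.setRowMapper(new BeanPropertyRowMapper<>(MyItem.class));
    return reader;
}

Note that the promotion listener has to be registered on the first step (e.g. via .listener(promotionListener()) on the step builder) for the key to be visible in the next step.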

How a Java client app. can "catch" (via JDBC) the result produced by a trigger procedure query?

I'm trying to understand how a java (client) application that communicates, through JDBC, with a pgSQL database (server) can "catch" the result produced by a query that will be fired (using a trigger) whenever a record is inserted into a table.
So, to clarify: via JDBC I install a trigger procedure prepared to execute a query whenever a record is inserted into a given database table, and this query's execution will produce an output (wrapped in a ResultSet, I suppose). My problem is that I have no idea how the client can become aware of those results, which are produced asynchronously.
I wonder if JDBC supports any "callback" mechanism able to catch the results produced by a query that is fired through a trigger procedure under the "INSERT INTO table" condition. And if there is no such "callback" mechanism, what is the best approach to achieve this result?
Thank you in advance :)
Triggers can't return a resultset.
There's no way to send such a result to the JDBC driver.
There are a few dirty hacks you can use to get results from a trigger to the client, but they're all exactly that. Things like:
DECLARE a cursor for the resultset, then send the cursor name as a NOTIFY payload, so the app can FETCH ALL FROM <cursorname>;
Create a TEMPORARY table and report the name via NOTIFY
It is more typical to append anything the trigger needs to communicate to the app to a table that exists for that purpose and have the app SELECT from it after the operation that fired the trigger ran.
In most cases if you need to do this, you're probably using a trigger where a regular function is a better fit.
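
If you do go the NOTIFY route, a rough client-side sketch with the PostgreSQL JDBC driver is below (the channel name, polling interval and connection details are assumptions); JDBC has no real callback here, so the client has to poll for notifications:

// Uses org.postgresql.PGConnection / PGNotification from the PostgreSQL JDBC driver
static void listenForTriggerResults(String url, String user, String password) throws Exception {
    try (Connection conn = DriverManager.getConnection(url, user, password);
         Statement stmt = conn.createStatement()) {

        // subscribe to the channel the trigger NOTIFYs on
        stmt.execute("LISTEN trigger_channel");
        PGConnection pgConn = conn.unwrap(PGConnection.class);

        while (true) {
            // a cheap query makes the driver pick up any pending notifications
            try (ResultSet rs = stmt.executeQuery("SELECT 1")) { }

            PGNotification[] notifications = pgConn.getNotifications();
            if (notifications != null) {
                for (PGNotification n : notifications) {
                    // the payload could carry a cursor or temp-table name, as described above
                    System.out.println("channel=" + n.getName() + " payload=" + n.getParameter());
                }
            }
            Thread.sleep(500);
        }
    }
}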

continue insert when exception is raised in postgres

Hi,
I am trying to insert a batch of records at a time. When any of the records fails to insert, I need to trap that record, log it to my failed-record maintenance table, and then the insert should continue. Kindly help on how to do this.
If you are using a Spring or EJB container there is a simple trick which works very well: provide a LogService with a logWarning(String message) method. The method must be annotated/configured with the REQUIRES_NEW transaction setting.
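A minimal sketch of such a service, assuming Spring with annotation-driven transaction management (the table and column names are made up for illustration):

@Service
public class LogService {

    private final JdbcTemplate jdbcTemplate;

    public LogService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // REQUIRES_NEW suspends the caller's transaction, so the log row is committed
    // even if the surrounding batch insert is rolled back afterwards
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void logWarning(String message) {
        jdbcTemplate.update(
                "insert into failed_record_log (message, logged_at) values (?, current_timestamp)",
                message);
    }
}

The insert loop then catches the per-record exception, calls logWarning(...) with the failed record's details, and moves on to the next record.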
If not, then you'll have to simulate it using API calls: open a separate connection for the logging, begin the transaction when you enter the method, and commit it before leaving.
When not using transactions for the insert, there is actually nothing special you need to do, as by default most databases run in autocommit mode and commit after every statement.