Example:
I have two different batches, batch-a and batch-b, running on Azure and connecting to an on-prem DB.
batch-a is deployed first and creates the metadata tables.
Let's say batch-b is deployed a few months later.
Can it use the same metadata tables that were created and used by batch-a?
Yes. Even if batch-a and batch-b are different jobs, they can share the same Spring Batch metadata tables: as long as both jobs connect to the same database, the Spring Batch framework will take care of this automatically.
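As a sketch, assuming both batch-a and batch-b are Spring Boot applications, pointing them at the same on-prem database is all that is needed for them to share one metadata schema. The connection URL, user name, and database name below are hypothetical placeholders:

```properties
# Same metadata database for batch-a and batch-b (hypothetical connection details)
spring.datasource.url=jdbc:sqlserver://onprem-host:1433;databaseName=batchdb
spring.datasource.username=batch_user
spring.datasource.password=${DB_PASSWORD}

# Let Spring Boot create the metadata tables if they do not exist yet.
# (In Spring Boot versions before 2.5 the property is spring.batch.initialize-schema.)
spring.batch.jdbc.initialize-schema=always
```

With this in place, batch-b deployed months later simply finds the existing BATCH_* tables and records its own job instances and executions alongside those of batch-a.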
See my article here: https://prateek-ashtikar512.medium.com/spring-batch-metadata-in-different-schema-c18813a0448a
Related
I need to create two data sources in Spring, one pointed at a read replica and one at the primary DB. Both databases have the same schema, since the replica is just a copy of the primary, so they use exactly the same entities. Having created two data sources, I can no longer use the @EntityScan annotation. Before, with @EntityScan on my single data source, my connections to the database worked great. With two data sources and a custom entity manager for each, I can still connect to the databases just fine, but all my SQL queries fail. It's as if the schemas are now incorrect, and I get errors saying columns don't exist when I know they do. Are there resources on how to achieve this design? I have been trying to follow this example:
https://www.baeldung.com/spring-boot-configure-multiple-datasources
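A minimal sketch of one of the two JPA configurations, along the lines of the Baeldung article above. Bean names such as `primaryDataSource` and packages such as `com.example.domain` are assumptions for illustration, not names from the question; the key point is that `packages(...)` on the builder replaces what @EntityScan used to do for the single data source:

```java
import javax.sql.DataSource;
import jakarta.persistence.EntityManagerFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.orm.jpa.EntityManagerFactoryBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
@EnableJpaRepositories(
        basePackages = "com.example.repo.primary",
        entityManagerFactoryRef = "primaryEntityManagerFactory",
        transactionManagerRef = "primaryTransactionManager")
public class PrimaryJpaConfig {

    @Bean
    @Primary
    public LocalContainerEntityManagerFactoryBean primaryEntityManagerFactory(
            EntityManagerFactoryBuilder builder,
            @Qualifier("primaryDataSource") DataSource dataSource) {
        return builder
                .dataSource(dataSource)
                // Replaces @EntityScan: both entity manager factories can
                // scan the same entity package, since the schemas are identical.
                .packages("com.example.domain")
                .build();
    }

    @Bean
    @Primary
    public PlatformTransactionManager primaryTransactionManager(
            @Qualifier("primaryEntityManagerFactory") EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}
```

The replica configuration is the mirror image without @Primary. One common cause of the "column does not exist" symptom is that hand-built entity manager factories do not inherit the `spring.jpa.properties.*` settings (dialect, default schema, naming strategy) that Boot applied to the auto-configured one, so those may need to be set explicitly on each factory.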
I have a use case where I am using Spring Batch and writing to 3 different data sources based on the job parameters. All of this works absolutely fine, but the one problem is the metadata: Spring Batch uses the default DataSource to write it. So whenever I run a job, the transactional data goes to the correct DB, but the batch metadata always goes to the default DB.
Is it possible to selectively write the metadata to the respective database based on the job parameters?
@MichaelMinella, @MahmoudBenHassine, can you please help?
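One possible approach, sketched under the assumption that the set of target databases is known up front: define one JobRepository (and matching JobLauncher) per target DataSource, and pick the launcher at launch time based on the job parameters. Bean names like `dataSourceA` are hypothetical:

```java
import javax.sql.DataSource;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.transaction.PlatformTransactionManager;

public class RepositoryPerTargetConfig {

    // A JobRepository whose metadata tables live in database A.
    @Bean
    public JobRepository jobRepositoryA(
            @Qualifier("dataSourceA") DataSource dataSourceA,
            PlatformTransactionManager transactionManager) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSourceA);
        factory.setTransactionManager(transactionManager);
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    // A launcher bound to that repository; launching through it sends the
    // metadata to database A. (SimpleJobLauncher was renamed
    // TaskExecutorJobLauncher in Spring Batch 5.)
    @Bean
    public JobLauncher jobLauncherA(
            @Qualifier("jobRepositoryA") JobRepository jobRepositoryA) throws Exception {
        SimpleJobLauncher launcher = new SimpleJobLauncher();
        launcher.setJobRepository(jobRepositoryA);
        launcher.afterPropertiesSet();
        return launcher;
    }
}
```

Repeat for B and C, then have the code that reads the job parameters select the matching launcher. The trade-off is that job restartability and queries against the metadata are then split across three schemas.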
We have a Spring Batch application which inserts data into a few tables, then selects data from a few tables based on multiple business conditions and writes it to a feed file (a flat text file). When run, the application generates an empty feed file with only headers and no data. The select query, when run separately in SQL Developer, runs for 2 hours and fetches the data (approx. 50 million records). We are using a JdbcCursorItemReader and a FlatFileItemWriter. Below are the configuration details used:
maxBatchSize=100
fileFetchSize=1000
commitInterval=10000
There are no errors or exceptions while the application runs. I wanted to know if we are missing anything here, or if any Spring Batch component is not being used properly. Any pointers in this regard would be really helpful.
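For comparison, a minimal sketch of a cursor-based reader for a result set of this size. The SQL, the `FeedRecord` type, and the bean name are placeholders, not details from the question:

```java
import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.jdbc.core.BeanPropertyRowMapper;

public class FeedReaderConfig {

    @Bean
    public JdbcCursorItemReader<FeedRecord> feedReader(DataSource dataSource) {
        JdbcCursorItemReader<FeedRecord> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(dataSource);
        // Placeholder: use exactly the query that returns rows in SQL Developer.
        reader.setSql("SELECT ... FROM feed_source WHERE ...");
        // Stream rows through the cursor instead of buffering 50M records.
        reader.setFetchSize(1000);
        reader.setRowMapper(new BeanPropertyRowMapper<>(FeedRecord.class));
        return reader;
    }
}
```

A file containing only headers with no errors usually means the reader returned zero items, so the first things to check are that any bind parameters or job-parameter substitutions in the reader's SQL are actually populated at runtime, and that the inserts performed earlier in the job are committed and visible to the connection the cursor reads from.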
My question is simple: is Spring Batch designed to allow multiple batch processing hosts to connect to the same database containing all of Spring Batch's tables, like BATCH_JOB_EXECUTION, BATCH_STEP_EXECUTION, etc.?
I'm asking because my application is organized this way now, and I am getting DB deadlocks on those tables...
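Multiple hosts sharing one metadata schema is a supported setup, and a common mitigation for deadlocks on those tables is to lower the isolation level the JobRepository uses when creating job executions. A sketch, assuming the `dataSource` and `transactionManager` variables refer to your existing beans:

```java
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;

JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setDataSource(dataSource);
factory.setTransactionManager(transactionManager);
// The default for job-execution creation is SERIALIZABLE, which guarantees
// that two hosts cannot create the same job instance concurrently but is
// aggressive about locking; READ_COMMITTED often resolves the deadlocks
// when concurrent double-launching of the same job is not a concern.
factory.setIsolationLevelForCreate("ISOLATION_READ_COMMITTED");
factory.afterPropertiesSet();
JobRepository repository = factory.getObject();
```

The trade-off is that with a lower isolation level, you become responsible for ensuring two hosts do not launch the same job instance at the same moment.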
In Spring Batch, when are the metadata tables dropped?
I see a drop SQL file at /org/springframework/batch/core/..., but I'm not sure whether something in the program (the batch job itself) triggers dropping these tables, whether they need to be dropped manually, or whether it has anything to do with Batch Admin.
I suppose they are never dropped automatically; a manual action is always required, either from an admin or from your application (as part of your application's service layer).
The metadata tables are neither created nor dropped automatically by Spring Batch itself.
You need to do it yourself once. (This can be automated if necessary, but need not be.)
Spring Boot does provide a facility that automatically creates the needed tables, but that is not part of the native Spring Batch functionality.
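Concretely, the drop scripts ship inside the spring-batch-core jar as `schema-drop-<platform>.sql` next to the create scripts, and you run one against your database yourself. The statements below are the general shape of those scripts (the exact content varies slightly by platform, e.g. MySQL uses tables instead of sequences):

```sql
-- Nothing in the framework executes this; run it manually when you want
-- to remove the metadata schema. Order matters because of foreign keys.
DROP TABLE BATCH_STEP_EXECUTION_CONTEXT;
DROP TABLE BATCH_JOB_EXECUTION_CONTEXT;
DROP TABLE BATCH_STEP_EXECUTION;
DROP TABLE BATCH_JOB_EXECUTION_PARAMS;
DROP TABLE BATCH_JOB_EXECUTION;
DROP TABLE BATCH_JOB_INSTANCE;
DROP SEQUENCE BATCH_STEP_EXECUTION_SEQ;
DROP SEQUENCE BATCH_JOB_EXECUTION_SEQ;
DROP SEQUENCE BATCH_JOB_SEQ;
```

The Spring Boot facility mentioned above is the `spring.batch.jdbc.initialize-schema` property, which only ever runs the *create* script; nothing in Boot or Batch runs the drop script for you.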