I am trying to create a Spring Batch POC with Java configuration and PostgreSQL.
I have successfully created the beans that would otherwise have been provided by the in-memory DB via @EnableBatchProcessing and @EnableAutoConfiguration.
I am not able to get the JobExplorer bean to return a list of JobExecutions for a JobInstance obtained from that same JobExplorer.
The error I am getting is "Unable to deserialize the execution context", which seems to come from the method trying to deserialize the "SHORT_CONTEXT" field of the JOB_EXECUTION_CONTEXT table.
I passed a DefaultExecutionContextSerializer to the JobExplorer factory bean, and later also a DefaultLobHandler with "wrapAsLob" set to true when I was still getting the error.
@Bean
public JobRegistry jobRegistry() {
    return new MapJobRegistry();
}

@Bean
public JobRegistryBeanPostProcessor jobRegistryBeanPostProcessor() {
    JobRegistryBeanPostProcessor jrbpp = new JobRegistryBeanPostProcessor();
    jrbpp.setJobRegistry(jobRegistry());
    return jrbpp;
}

@Bean
public JobOperator jobOperator() {
    SimpleJobOperator sjo = new SimpleJobOperator();
    sjo.setJobExplorer(jobExplorer());
    sjo.setJobLauncher(jobLauncher());
    sjo.setJobRegistry(jobRegistry());
    sjo.setJobRepository(jobRepository());
    return sjo;
}

@Bean
public JobExplorer jobExplorer() {
    JobExplorerFactoryBean jefb = new JobExplorerFactoryBean();
    jefb.setDataSource(dataSource());
    jefb.setJdbcOperations(jdbcTemplate);
    jefb.setTablePrefix("batch_");
    jefb.setSerializer(new DefaultExecutionContextSerializer());
    DefaultLobHandler lh = new DefaultLobHandler();
    lh.setWrapAsLob(true);
    jefb.setLobHandler(lh);
    JobExplorer je = null;
    try {
        je = jefb.getObject();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return je;
}
@ConfigurationProperties(prefix = "spring.datasource")
@Bean
@Primary
public DataSource dataSource() {
    return DataSourceBuilder.create().build();
}
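// Side note (my assumption, not from the original post): the @ConfigurationProperties binding
// above expects the standard spring.datasource.* entries in application.properties, for example
// (placeholder values):
//   spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
//   spring.datasource.username=myuser
//   spring.datasource.password=mypassword
//   spring.datasource.driver-class-name=org.postgresql.Driver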
@Bean
public JobRepository jobRepository() {
    JobRepositoryFactoryBean jrfb = new JobRepositoryFactoryBean();
    jrfb.setDataSource(dataSource());
    jrfb.setDatabaseType("POSTGRES");
    jrfb.setTransactionManager(new ResourcelessTransactionManager());
    jrfb.setSerializer(new DefaultExecutionContextSerializer());
    jrfb.setTablePrefix("batch_");
    JobRepository jr = null;
    try {
        jr = (JobRepository) jrfb.getObject();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return jr;
}
Below is the GET method in my REST controller where I am trying to generate a list of failed job executions.
@Autowired
JobLauncher jobLauncher;

@Autowired
JobRegistry jobRegistry;

@Autowired
JobOperator jobOperator;

@Autowired
JobExplorer jobExplorer;

@GetMapping("batch/failedJobs")
public Map<String, List<JobExecution>> getFailedJobs() {
    try {
        if (jobRegistry == null || jobOperator == null || jobExplorer == null) {
            System.out.println("job registry, operator or explorer is null");
        } else {
            Map<String, List<JobExecution>> allJobInstances = new HashMap<>();
            // Get all jobs
            jobRegistry.getJobNames().forEach(jobName -> {
                jobExplorer.getJobInstances(jobName, 1, 1000).forEach(jobInstance -> {
                    System.out.println("jobName: " + jobName + " instance: " + jobInstance);
                    List<JobExecution> executionList = jobExplorer.getJobExecutions(jobInstance); // Failing here
                    if (executionList != null) {
                        executionList.forEach(jobExecution ->
                                System.out.println("jobName: " + jobName + " instance: " + jobInstance
                                        + " jobExecution: " + jobExecution));
                        allJobInstances
                                .computeIfAbsent(jobName, k -> new ArrayList<>())
                                .addAll(executionList.stream()
                                        .filter(e -> e.getStatus() == BatchStatus.FAILED)
                                        .collect(Collectors.toList()));
                    } else {
                        System.out.println("Could not get jobExecution for jobName " + jobName
                                + " jobInstance: " + jobInstance);
                    }
                });
            });
            return allJobInstances;
        }
    } catch (Exception e) {
        System.out.println(e.getMessage());
        logger.info(e.getMessage());
    }
    return null;
}
I fixed a similar issue by changing to the Jackson2 serializer:
jefb.setSerializer(new Jackson2ExecutionContextStringSerializer());
You may try it.
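One detail worth adding (my assumption, not part of the original answer): whichever serializer you pick, configure the same one on both the JobExplorerFactoryBean and the JobRepositoryFactoryBean, otherwise execution contexts written by the repository cannot be read back by the explorer. A minimal sketch, reusing the jefb/jrfb variables from the question:
// use one serializer instance for both factory beans so reads and writes stay consistent
Jackson2ExecutionContextStringSerializer serializer = new Jackson2ExecutionContextStringSerializer();
jefb.setSerializer(serializer); // JobExplorerFactoryBean
jrfb.setSerializer(serializer); // JobRepositoryFactoryBean
Rows already stored with a different serializer may still fail to deserialize; in a POC the simplest fix is to clear the batch metadata tables after switching.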
Normally, when we define a class-level @KafkaListener with method-level @KafkaHandlers, we can define a default @KafkaHandler to handle unexpected payloads.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#class-level-kafkalistener
But, what should we do if we don't have a default method?
With version 2.6 and later, you can configure a SeekToCurrentErrorHandler to immediately send such messages to a dead letter topic, by examining the exception.
Here is a simple Spring Boot application that demonstrates the technique:
@SpringBootApplication
public class So59256214Application {

    public static void main(String[] args) {
        SpringApplication.run(So59256214Application.class, args);
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("so59256214").partitions(1).replicas(1).build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("so59256214.DLT").partitions(1).replicas(1).build();
    }

    @KafkaListener(id = "so59256214.DLT", topics = "so59256214.DLT")
    void listen(ConsumerRecord<?, ?> in) {
        System.out.println("dlt: " + in);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, Object> template) {
        return args -> {
            template.send("so59256214", 42);
            template.send("so59256214", 42.0);
            template.send("so59256214", "No handler for this");
        };
    }

    @Bean
    ErrorHandler eh(KafkaOperations<String, Object> template) {
        SeekToCurrentErrorHandler eh = new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template));
        BackOff neverRetryOrBackOff = new FixedBackOff(0L, 0);
        BackOff normalBackOff = new FixedBackOff(2000L, 3);
        eh.setBackOffFunction((rec, ex) -> {
            if (ex.getMessage().contains("No method found for class")) {
                return neverRetryOrBackOff;
            } else {
                return normalBackOff;
            }
        });
        return eh;
    }

}
@Component
@KafkaListener(id = "so59256214", topics = "so59256214")
class Listener {

    @KafkaHandler
    void integerHandler(Integer in) {
        System.out.println("int: " + in);
    }

    @KafkaHandler
    void doubleHandler(Double in) {
        System.out.println("double: " + in);
    }

}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
Result:
int: 42
double: 42.0
dlt: ConsumerRecord(topic = so59256214.DLT, ...
I have two DataSource beans, one with the @Primary annotation.
Individual Hikari pools are created for each DataSource.
I am trying to switch the HikariDataSource from pool 1 (if a connection is not available) to pool 2.
@Primary
@Bean(destroyMethod = "close", name = "dataSource")
public CustomHikariDataSource dataSource() throws SQLException {
    try {
        primaryDataSource = mainDataSource();
    } catch (Exception e) {
        primaryDataSource = secondaryDataSource();
    }
    HikariConfig config = new HikariConfig();
    config.setDataSource(primaryDataSource);
    config.setPoolName("POOL_PRIMARY");
    config.setAllowPoolSuspension(true);
    config.setIdleTimeout(10000);
    config.setMaxLifetime(30000);
    return new CustomHikariDataSource(config);
}

@Bean(destroyMethod = "close", name = "failoverDataSource")
public CustomHikariDataSource failoverDataSource() throws SQLException {
    secondaryDataSource = secondaryDataSource();
    HikariConfig config = new HikariConfig();
    config.setDataSource(secondaryDataSource);
    config.setPoolName("POOL_SECONDARY");
    config.setAllowPoolSuspension(true);
    return new CustomHikariDataSource(config);
}

private DataSource mainDataSource() {
    return dataSourceProperties().initializeDataSourceBuilder().build();
}

private DataSource secondaryDataSource() {
    return failoverDataSourceProperties().initializeDataSourceBuilder().build();
}
Where is the actual problem?
Finally, I was able to achieve it by overriding the getConnection() method from HikariDataSource:
@Override
public Connection getConnection() throws SQLException {
    if (isClosed()) {
        throw new SQLException("HikariDataSource " + this + " has been closed.");
    }
    Connection con = null;
    if (fastPathPool != null && (fastPathPool.poolState == 0 || fastPathPool.poolState == 1)) {
        try {
            fastPathPool.resumePool();
            con = fastPathPool.getConnection();
        } catch (Exception e) {
            // primary pool could not hand out a connection; fall through to the failover pool
        }
        if (con == null || con.isClosed()) {
            config = pool.config;
            fastPathPool.suspendPool();
        } else {
            return con;
        }
    }
    config.setDataSource(dataSource);
    config.setAllowPoolSuspension(true);
    config.setMinimumIdle(minIdle);
    pool = new HikariPool(config);
    HikariPool result = pool;
    if (result == null) {
        synchronized (this) {
            result = pool;
            if (result == null) {
                validate();
                System.out.println(getPoolName() + " - Starting...");
                try {
                    pool = result = new HikariPool(this);
                    this.seal();
                } catch (PoolInitializationException pie) {
                    if (pie.getCause() instanceof SQLException) {
                        throw (SQLException) pie.getCause();
                    } else {
                        throw pie;
                    }
                }
                System.out.println(getPoolName() + " - Start completed.");
            }
        }
    }
    return result.getConnection();
}
For the complete class, feel free to ping me.
Happy coding! :)
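If overriding HikariDataSource internals feels too invasive, here is an alternative sketch (not part of the original answer, and it assumes both pools are available as plain DataSource beans): a small delegating DataSource that tries the primary pool first and falls back to the secondary pool when no connection can be obtained.
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.AbstractDataSource;

// Illustrative only: class and field names are mine, not from the original code.
public class FailoverDataSource extends AbstractDataSource {

    private final DataSource primary;   // e.g. the POOL_PRIMARY HikariDataSource
    private final DataSource secondary; // e.g. the POOL_SECONDARY HikariDataSource

    public FailoverDataSource(DataSource primary, DataSource secondary) {
        this.primary = primary;
        this.secondary = secondary;
    }

    @Override
    public Connection getConnection() throws SQLException {
        try {
            return primary.getConnection();
        } catch (SQLException e) {
            // primary pool exhausted or down: fall back to the secondary pool
            return secondary.getConnection();
        }
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        try {
            return primary.getConnection(username, password);
        } catch (SQLException e) {
            return secondary.getConnection(username, password);
        }
    }
}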
In my Spring Batch project, I used a JdbcCursorItemReader to read data and process it in parallel. I can run the batch locally without any problem.
I also heard that JdbcPagingItemReader is recommended over JdbcCursorItemReader for parallel processing, as the cursor reader holds the connection for the whole read, while the paging reader can release the connection once a page has been read.
I then switched to JdbcPagingItemReader in step2, but to my surprise I got the exception below when running locally.
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 -
Connection is not available, request timed out after 300001ms.
However, it seems the above exception occurs in step1, before the paging reader in step2 is executed, and that is the only change I made. Please shed some light on why the exception is thrown and whether it is good practice to use the paging reader instead of the cursor reader in parallel processing. Your help is much appreciated!
The code snippet is pasted below:
@Bean
@StepScope
public Flow createParallelSubFlow() {
    List<Flow> subFlowList = new ArrayList<>();
    List<Stream> streamList = new ArrayList<>();
    try {
        streamList = dataSourceConfig.streamMapper()
                .getStreamListByStatus(Constants.PENDING_STATUS_CD);
    } catch (Exception e) {
        logger.error("Failed to load pending streams", e);
    }
    streamList.forEach(stream -> {
        long id = stream.getStreamId();
        String flowName = "stream" + id + "_flow";
        Flow subFlow = new FlowBuilder<Flow>(flowName)
                .start(step1(id))
                .next(step2(id))
                .end();
        subFlowList.add(subFlow);
    });
    return new FlowBuilder<Flow>("splitFlow").split(new SimpleAsyncTaskExecutor())
            .add(subFlowList.toArray(new Flow[0])).build();
}
public Step step1(long id) {
    return stepBuilderFactory.get("step1")
            .<Domain, Domain>chunk(100)
            .reader(reader1(id))
            .writer(writer1())
            .build();
}

//@StepScope
//@Bean
public Step step2(long id) {
    return stepBuilderFactory.get("step2")
            .<Domain, Domain>chunk(100)
            .reader(cursorReader2(id))
            .processor(processor2)
            .writer(writer2())
            .build();
}

public JdbcCursorItemReader<Domain> cursorReader2(Long id) {
    return new JdbcCursorItemReaderBuilder<Domain>()
            .dataSource(dataSourceConfig.dataSource())
            .name("cursorReader")
            .sql(Constants.QUERY_SQL)
            .preparedStatementSetter(new PreparedStatementSetter() {
                @Override
                public void setValues(PreparedStatement ps) throws SQLException {
                    ps.setLong(1, id);
                }
            })
            .rowMapper(new RowMapper())
            .build();
}

// Switch from cursorReader2 to pagingReader2 in step2
public JdbcPagingItemReader<Domain> pagingReader2(Long id) {
    return new JdbcPagingItemReaderBuilder<Domain>()
            .dataSource(dataSourceConfig.dataSource())
            .name("pagingReader")
            .queryProvider(queryProvider())
            .parameterValues(parameterValues(id))
            .rowMapper(new RowMapper())
            .pageSize(100)
            .build();
}

@Bean
public PagingQueryProvider queryProvider() {
    SqlPagingQueryProviderFactoryBean providerFactory = new SqlPagingQueryProviderFactoryBean();
    Map<String, Order> sortKeys = new HashMap<>(2);
    sortKeys.put("ID", Order.ASCENDING);
    providerFactory.setDataSource(dataSourceConfig.dataSource());
    providerFactory.setSelectClause("SELECT Clause");
    providerFactory.setFromClause("FROM Clause");
    providerFactory.setWhereClause("WHERE Clause");
    providerFactory.setSortKeys(sortKeys);
    PagingQueryProvider pagingQueryProvider = null;
    try {
        pagingQueryProvider = providerFactory.getObject();
    } catch (Exception e) {
        logger.error("Failed to get PagingQueryProvider", e);
        throw new RuntimeException("Failed to get PagingQueryProvider", e);
    }
    return pagingQueryProvider;
}

private Map<String, Object> parameterValues(Long id) {
    Map<String, Object> parameterValues = new HashMap<>();
    parameterValues.put("1", id);
    return parameterValues;
}
I want to write the skipped lines to a first CSV file and the result of the processor to a second file, in one step, but it does not work!
My code:
// => Step cecStep1
@Bean
public Step cecStep1(StepBuilderFactory stepBuilders) throws IOException {
    return stepBuilders.get("fileDecrypt")
            .<CSCivique, String>chunk(100)
            .reader(reader1())
            .processor(processor1FileDecrypt())
            .writer(writer1())
            .faultTolerant()
            .skip(Exception.class)
            .skipLimit(100)
            .listener(new MySkipListener(new File("skipped-lines.csv"))) // placeholder path for the skipped-lines file
            .build();
}
// ##################################### Step SkipListener ###################################################
public static class MySkipListener implements SkipListener<CSCivique, String> {

    private BufferedWriter bw = null;

    public MySkipListener(File file) throws IOException {
        bw = new BufferedWriter(new FileWriter(file, true));
        System.out.println("MySkipListener =========> :" + file);
    }

    @Override
    public void onSkipInRead(Throwable throwable) {
        if (throwable instanceof FlatFileParseException) {
            FlatFileParseException flatFileParseException = (FlatFileParseException) throwable;
            System.out.println("onSkipInRead =========> :");
            try {
                bw.write(flatFileParseException.getInput() + "; Vérifiez les colonnes !!");
                bw.newLine();
                bw.flush();
            } catch (IOException e) {
                System.err.println("Unable to write skipped line to error file");
            }
        }
    }

    @Override
    public void onSkipInWrite(String item, Throwable t) {
        System.out.println("Item " + item + " was skipped due to: " + t.getMessage());
    }

    @Override
    public void onSkipInProcess(CSCivique item, Throwable t) {
        System.out.println("Item " + item + " was skipped due to: " + t.getMessage());
    }
}
@Bean
public FlatFileItemWriter<String> writer1() {
    return new FlatFileItemWriterBuilder<String>()
            .name("greetingItemWriter")
            .resource(new FileSystemResource("target/test-outputs/greetings.csv"))
            .lineAggregator(new PassThroughLineAggregator<>())
            .build();
}
Thank you!
In your processor, you can:
- throw a skippable exception for invalid items, so that the skip listener intercepts them and writes them to the specified file
- let valid items go through to the writer, so that they are written as configured in the item writer
For example:
class MyItemProcessor implements ItemProcessor<Object, Object> {

    @Override
    public Object process(Object item) throws Exception {
        if (shouldBeSkipped(item)) {
            throw new MySkippableException();
        }
        // process item
        return item;
    }
}
Hope this helps.
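A minimal wiring sketch for the step from the question (names reuse the ones above; I am assuming MyItemProcessor is typed as ItemProcessor<CSCivique, String> so it fits the chunk, and the error-file path is only a placeholder):
@Bean
public Step cecStep1(StepBuilderFactory stepBuilders) throws IOException {
    return stepBuilders.get("fileDecrypt")
            .<CSCivique, String>chunk(100)
            .reader(reader1())
            .processor(new MyItemProcessor())
            .writer(writer1())
            .faultTolerant()
            .skip(MySkippableException.class)   // invalid items thrown by the processor are skipped
            .skip(FlatFileParseException.class) // unparseable lines are routed to the listener as well
            .skipLimit(100)
            .listener(new MySkipListener(new File("skipped-lines.csv"))) // placeholder path
            .build();
}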
I was trying to log the number of currently active connections. I am using com.zaxxer.hikari.HikariJNDIFactory as my data source factory.
final Context context = new InitialContext();
HikariConfig hikariConfig = new HikariConfig();
hikariConfig.setDataSource((DataSource) ((Context)context.lookup("java:comp/env")).lookup("jdbc/mydb"));
HikariPool hikariPool = new HikariPool(hikariConfig);
LOGGER.log(Level.INFO, "The count is ::" + hikariPool.getActiveConnections());
But it is throwing the following exception:
java.lang.RuntimeException: java.lang.NullPointerException
at com.zaxxer.hikari.util.PoolUtilities.createInstance(PoolUtilities.java:105)
at com.zaxxer.hikari.metrics.MetricsFactory.createMetricsTracker(MetricsFactory.java:34)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:131)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:99)
at com.something.servlet.HikariConnectionCount.doGet(HikariConnectionCount.java:35)
where HikariConnectionCount.java is the file I have written.
Programmatic access is documented here: https://github.com/brettwooldridge/HikariCP/wiki/MBean-(JMX)-Monitoring-and-Management
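If you can get hold of the pool as a HikariDataSource (rather than only through the JNDI factory), a less intrusive sketch uses the documented HikariPoolMXBean accessor instead of constructing a HikariPool yourself; the helper class below is only illustrative:
import java.util.logging.Logger;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.HikariPoolMXBean;

// Illustrative helper: logs pool statistics via the documented HikariPoolMXBean.
public class HikariPoolStats {

    public static void log(HikariDataSource ds, Logger logger) {
        HikariPoolMXBean pool = ds.getHikariPoolMXBean();
        logger.info("active=" + pool.getActiveConnections()
                + " idle=" + pool.getIdleConnections()
                + " total=" + pool.getTotalConnections()
                + " waiting=" + pool.getThreadsAwaitingConnection());
    }
}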
Here's a dirty recipe:
import org.springframework.beans.DirectFieldAccessor;
import com.zaxxer.hikari.HikariDataSource;
import com.zaxxer.hikari.pool.HikariPool;

public class HikariDataSourcePoolDetail {

    private final HikariDataSource dataSource;

    public HikariDataSourcePoolDetail(HikariDataSource dataSource) {
        this.dataSource = dataSource;
    }

    public HikariPool getHikariPool() {
        return (HikariPool) new DirectFieldAccessor(dataSource).getPropertyValue("pool");
    }

    public int getActive() {
        try {
            return getHikariPool().getActiveConnections();
        } catch (Exception ex) {
            return -1;
        }
    }

    public int getMax() {
        return dataSource.getMaximumPoolSize();
    }
}
Use it thus:
try {
    HikariDataSourcePoolDetail dsd = new HikariDataSourcePoolDetail((HikariDataSource) dataSource);
    log.info("HikariDataSource details: max={} active={}", dsd.getMax(), dsd.getActive());
} catch (Exception e) {
    log.error("HikariDataSourcePoolDetail failed: ", e);
}