Spring Bean Scope for StringRedisConnection - spring-data

I have the following two bean definitions for Spring Data Redis. I can't seem to find the relevant documentation to determine the scopes (singleton, request, or session) of these beans for a web app.
@Bean
public StringRedisTemplate redisTemplate() throws Exception {
    StringRedisTemplate redisTemplate = new StringRedisTemplate();
    redisTemplate.setConnectionFactory(jedisConnectionFactory());
    return redisTemplate;
}

@Bean
public StringRedisConnection stringRedisConnection() throws Exception {
    return new DefaultStringRedisConnection(redisTemplate().getConnectionFactory().getConnection());
}
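For what it's worth, @Bean definitions default to the singleton scope unless a scope is declared explicitly, so both beans above are singletons; request or session scope has to be opted into. A minimal sketch of making the connection bean request-scoped (the scope choice is illustrative, not a recommendation):

@Bean
@Scope(WebApplicationContext.SCOPE_REQUEST)
public StringRedisConnection stringRedisConnection() throws Exception {
    // One connection per HTTP request instead of one shared instance;
    // requires a web-aware ApplicationContext.
    return new DefaultStringRedisConnection(redisTemplate().getConnectionFactory().getConnection());
}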

Thanks to @Christoph Strobl's recommendation, here is the implementation I am currently using:
public List<String> testAutoComplete(String key, String query, int limitCount) {
    StringRedisSerializer serializer = new StringRedisSerializer();
    RedisZSetCommands.Range range = Range.range();
    range.gt(query);
    RedisZSetCommands.Limit limit = new RedisZSetCommands.Limit();
    limit.count(limitCount);
    return template.execute(new RedisCallback<List<String>>() {
        public List<String> doInRedis(RedisConnection connection) {
            Set<byte[]> results = connection.zRangeByLex(serializer.serialize(key), range, limit);
            List<String> resultAsString = new ArrayList<String>();
            for (byte[] result : results) {
                resultAsString.add(serializer.deserialize(result));
            }
            return resultAsString;
        }
    }, false);
}
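As a side note, the same lexicographic query can usually be expressed without dropping to the connection level. A sketch, assuming a Spring Data Redis version whose ZSetOperations exposes rangeByLex (the template then handles serialization itself):

public List<String> testAutoCompleteViaOps(String key, String query, int limitCount) {
    RedisZSetCommands.Range range = Range.range().gt(query);
    RedisZSetCommands.Limit limit = Limit.limit().count(limitCount);
    // StringRedisTemplate serializes/deserializes the members for us
    Set<String> results = template.opsForZSet().rangeByLex(key, range, limit);
    return new ArrayList<>(results);
}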

Related

How to use @BeforeStep Job Parameters in JdbcCursorItemReader for named Query

I have code like the following:
@Bean
public JdbcCursorItemReader<Map<String, Object>> itemReader() {
    return new JdbcCursorItemReader<Map<String, Object>>() {
        private JobParameters jobParameter;
        String sql = "select EMPLOYEE_ID as empId, EMPLOYEE_NAME as empName, EMPLOYEE_AGE as age from EMPLOYEE where EMPLOYEE_DEPT = :empDept and EMPLOYEE_SAL > :empSal";
        Map<String, Object> namedParameters = null;

        @PostConstruct
        public void initialize() throws Exception {
            setDataSource(dataSource);
            setSql("select 1 from dual");
            setRowMapper(new ColumnMapRowMapper());
        }

        @BeforeStep
        public void retrieveExecutionContext(StepExecution stepExecution) {
            jobParameter = stepExecution.getJobParameters();
            namedParameters = new HashMap<String, Object>() {
                {
                    put("bstd", jobParameter.getString("empDept"));
                    put("bwtn", jobParameter.getString("empSal"));
                }
            };
            jobParameter.getParameters().forEach((k, v) -> System.out.println("key = " + k + ", value = " + v));
        }

        @Override
        public void afterPropertiesSet() throws Exception {
            setSql(NamedParameterUtils.substituteNamedParameters(sql, new MapSqlParameterSource(namedParameters)));
            setPreparedStatementSetter(new ListPreparedStatementSetter(
                    Arrays.asList(NamedParameterUtils.buildValueArray(sql, namedParameters))));
            setRowMapper(new ColumnMapRowMapper());
            setDataSource(dataSource);
            super.afterPropertiesSet();
        }
    };
}
I tried calling afterPropertiesSet, but I am still seeing the exception below:
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: No value supplied for the SQL parameter 'empDept': No value registered for key 'empDept'
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:361) ~[spring-jdbc-5.3.22.jar:5.3.22]
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:485) ~[spring-jdbc-5.3.22.jar:5.3.22]
The requirement is a dynamic query, so I don't have control over the select query and the where conditions.
Thanks in advance.
You can use a SpEL expression to inject and use job parameters in your item reader bean definition as follows:
@Bean
@StepScope
public JdbcCursorItemReader<Map<String, Object>> itemReader(@Value("#{jobParameters['empDept']}") String empDept, @Value("#{jobParameters['empSal']}") String empSal) {
    JdbcCursorItemReader<Map<String, Object>> itemReader = new JdbcCursorItemReader<>();
    // use parameters 'empDept' and 'empSal' in your sql query as needed
    return itemReader;
}
Note that the item reader should be step-scoped for that to work. For more details, please refer to the documentation: Late Binding of Job and Step Attributes.
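For completeness, a sketch of what the step-scoped reader could look like with the injected values bound to the named placeholders; the query and the NamedParameterUtils/ListPreparedStatementSetter usage mirror the question's own attempt, and the dataSource field is assumed to be available as in the original:

@Bean
@StepScope
public JdbcCursorItemReader<Map<String, Object>> itemReader(
        @Value("#{jobParameters['empDept']}") String empDept,
        @Value("#{jobParameters['empSal']}") String empSal) {
    String sql = "select EMPLOYEE_ID as empId, EMPLOYEE_NAME as empName, EMPLOYEE_AGE as age "
            + "from EMPLOYEE where EMPLOYEE_DEPT = :empDept and EMPLOYEE_SAL > :empSal";
    // the map keys must match the placeholder names, which was the cause
    // of the 'No value supplied for the SQL parameter' exception above
    Map<String, Object> namedParameters = new HashMap<>();
    namedParameters.put("empDept", empDept);
    namedParameters.put("empSal", empSal);
    JdbcCursorItemReader<Map<String, Object>> reader = new JdbcCursorItemReader<>();
    reader.setDataSource(dataSource);
    // expand named parameters into positional '?' placeholders
    reader.setSql(NamedParameterUtils.substituteNamedParameters(sql, new MapSqlParameterSource(namedParameters)));
    reader.setPreparedStatementSetter(new ListPreparedStatementSetter(
            Arrays.asList(NamedParameterUtils.buildValueArray(sql, namedParameters))));
    reader.setRowMapper(new ColumnMapRowMapper());
    return reader;
}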

Spring Batch Reading AWS S3 - stepExecutionContext is null

I am trying to read files from AWS S3 using Spring Batch, but the file name becomes null in stepExecutionContext. The same code was working when I read the files from a Windows mount, but after we migrated the code to read from S3, the name became null.
@Bean
@JobScope
public CustomMultiResourcePartitioner partitioner() {
    CustomMultiResourcePartitioner partitioner = new CustomMultiResourcePartitioner();
    Set<String> filesToProcess = fileRepository.findAllFilesByFileState("NEW");
    List<Resource> resourceList = new ArrayList<>();
    for (String file : filesToProcess) {
        Resource resource = getS3Resource(file);
        resourceList.add(resource);
        log.info("resourceList size: " + resourceList.size());
    }
    if (resourceList.size() > 0) {
        resources = resourceList.stream().toArray(Resource[]::new);
        ExecutionContext executionContext = new ExecutionContext();
        executionContext.put("FILE_NAME", filesToProcess);
    } else {
        resources = new Resource[0];
    }
    partitioner.setResources(resources);
    return partitioner;
}
@Bean
@StepScope
public FlatFileItemReader<RosterInput> itemReader(@Value("#{stepExecutionContext[fileName]}") String filename) throws UnexpectedInputException, ParseException {
    FlatFileItemReader<RosterInput> reader = new FlatFileItemReader<RosterInput>();
    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setStrict(false);
    Resource resource = getS3Resource(filename);
    reader.setResource(resource);
    // line mapper wiring (tokenizer + field set mapper) was omitted in the original snippet
    return reader;
}
I was using ByteArrayResource in my getS3Resource() method, and it was causing the file name to be null. After modifying the code to use the class below, the problem is solved:
public class FileNameAwareByteArrayResource extends ByteArrayResource {
    private String fileName;

    public FileNameAwareByteArrayResource(String fileName, byte[] byteArray) {
        super(byteArray);
        this.fileName = fileName;
    }

    @Override
    public String getFilename() {
        return fileName;
    }
}
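For context, a sketch of what getS3Resource() might look like with the fix applied; the amazonS3 client, the bucketName field, and the use of the AWS SDK v1 API (plus Java 9+ readAllBytes) are assumptions, not part of the original:

private Resource getS3Resource(String fileName) {
    // hypothetical wiring: an AWS SDK v1 AmazonS3 client and a bucket name field
    try (S3Object s3Object = amazonS3.getObject(bucketName, fileName)) {
        byte[] content = s3Object.getObjectContent().readAllBytes();
        // wrap the bytes in the filename-aware resource so the partitioner
        // can put a usable file name into the step execution context
        return new FileNameAwareByteArrayResource(fileName, content);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}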

Use Chunk Listener for Indicator Pattern

I am trying to use the processor indicator pattern to make my job idempotent. I tried to use a write listener (afterWrite) to update the mongo document by setting a field PROCESSED: true. However, there were issues when there is a big number of chunks:
MongoDB item reader (10000 docs) ---chunk(1000)---> JDBC batch item writer (only 5000 are saved in the table after the step's completion)
The following code defines the step:
@Bean
public MongoItemReader<X> Reader() throws Exception {
    MongoItemReader<X> reader = new MongoItemReader<>();
    reader.setTemplate(mongoTemplate);
    reader.setCollection("MY_COLLECTION");
    reader.setTargetType(X.class);
    reader.setQuery("{PROCESSED: {$exists: false}}");
    reader.setSort(new HashMap<String, Sort.Direction>() {{
        put("_id", Sort.Direction.ASC);
    }});
    reader.afterPropertiesSet();
    return reader;
}

@Bean
public XItemProcessor x_item_processor() {
    return new XItemProcessor();
}

@Bean
public X_Item_Listener item_listener() {
    return new X_Item_Listener();
}

@Bean
public X_Step_Listener step_listener() {
    return new X_Step_Listener();
}

@Bean
public JdbcBatchItemWriter<Y> YWriter() {
    JdbcBatchItemWriter<Y> Y_Writer = new JdbcBatchItemWriter<>();
    Y_Writer.setDataSource(dataSource);
    Y_Writer.setAssertUpdates(true);
    Y_Writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<>());
    Y_Writer.setSql("INSERT INTO Y (Y1,Y2,Y3,Y4) VALUES (:y1, :y2, :y3, :y4)");
    Y_Writer.afterPropertiesSet();
    return Y_Writer;
}

@Bean
public Step XY_Step() throws Exception {
    return stepBuilderFactory.get("XY")
            .<X, Y>chunk(1000)
            .reader(Reader())
            .processor(x_item_processor())
            .writer(YWriter())
            .faultTolerant()
            .skipLimit(Integer.MAX_VALUE)
            .skip(Exception.class)
            .listener((ItemProcessListener<? super X, ? super Y>) item_listener())
            .listener(step_listener())
            .build();
}
Here is a snippet of the code used in the afterWrite listener to update the mongo documents:
@Autowired
private MongoTemplate mongoTemplate;

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void afterWrite(List<? extends Y> items) {
    BulkOperations ops = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, "MY_COLLECTION");
    for (Y item : items) {
        Update update = new Update().set("PROCESSED", true);
        ops.updateOne(new Query(Criteria.where("_id").is(item.getID())), update);
    }
    ops.execute();
}
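A likely explanation for the missing half, judging from the numbers: MongoItemReader is a paging reader, and the query {PROCESSED: {$exists: false}} shrinks while the job runs. After the first chunk of 1000 is written and flagged, the query matches only 9000 documents, yet the reader then requests page 2, skipping what is now the first 1000 of the remaining set. Each read therefore skips one page, processing every other page: exactly 5000 of 10000, which matches the observed count. One hedged sketch of a workaround is to keep the page windows stable by reading everything and filtering in the processor instead (isProcessed() and convertToY() are hypothetical helpers, not from the original code):

// reader side: no PROCESSED filter, so page windows no longer shift
reader.setQuery("{}");

// processor side: drop already-processed documents instead
public Y process(X item) {
    // returning null filters the item out of the chunk, so restarts
    // stay idempotent without perturbing the reader's paging
    return item.isProcessed() ? null : convertToY(item);
}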

roll back all inserts if an exception occurred

I am trying to persist multiple entities to the database, but I need to roll back all inserts if one of them faces an exception. How can I do that?
Here is what I did:
public class RoleCreationApplyService extends AbstractEntityProxy implements EntityProxy {

    @Inject
    @Override
    public void setEntityManager(EntityManager em) {
        super.entityManager = em;
    }

    @Resource
    UserTransaction utx;

    public Object acceptAppliedRole(String applyId, Role parentRole, SecurityContext securityContext) throws Exception {
        utx.begin();
        try {
            FilterWrapper filter = FilterWrapper.createWrapperWithFilter("id", Filter.Operator._EQUAL, applyId);
            RoleCreationApply roleCreationApply = (RoleCreationApply) getByFilter(RoleCreationApply.class, filter);

            Role appliedRole = new Role();
            appliedRole.setRoleUniqueName(roleCreationApply.getRoleName());
            appliedRole.setRoleName(roleCreationApply.getRoleName());
            appliedRole.setRoleDescription(roleCreationApply.getRoleDescription());
            appliedRole.setRoleDisplayName(roleCreationApply.getRoleDisplayName());
            appliedRole.setCreationTime(new Date());
            appliedRole.setCreatedBy(securityContext.getUserPrincipal().getName());
            Role childRole = (Role) save(appliedRole);

            parentRole.setCreationTime(new Date());
            parentRole.setCreatedBy(securityContext.getUserPrincipal().getName());
            parentRole = (Role) save(parentRole);

            RoleRelation roleRelation = new RoleRelation();
            roleRelation.setParentRole(parentRole);
            roleRelation.setChildRole(childRole);
            RoleRelation savedRoleRelation = (RoleRelation) save(roleRelation);

            PostRoleRelation postRoleRelation = new PostRoleRelation();
            postRoleRelation.setPost(roleCreationApply.getPost());
            postRoleRelation.setRoleRelation(savedRoleRelation);
            ir.tamin.framework.domain.Resource result = save(postRoleRelation);

            utx.commit();
            return result;
        } catch (Exception e) {
            utx.rollback();
            throw new Exception(e.getMessage());
        }
    }
}
And this is the save method in the AbstractEntityProxy class:
@Override
@ProxyMethod
public Resource save(Resource clientObject) throws ProxyProcessingException {
    checkRelationShips((Entity) clientObject, Method.SAVE, OneToOne.class, ManyToOne.class);
    try {
        entityManager.persist(clientObject);
    } catch (PersistenceException e) {
        throw new ResourceAlreadyExistsException(e);
    }
    return clientObject;
}
But when an exception occurs, for example a unique constraint violation, and control goes to the catch block, utx.rollback() complains that the transaction does not exist, so some entities persist anyway. I want all of them to roll back if one fails.
PS: I don't want to use plain JDBC. What is the JPA approach?
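For the JPA route, the usual approach is to let the container demarcate one transaction around the whole method instead of driving UserTransaction by hand; once the PersistenceException is thrown, the JTA transaction is typically marked rollback-only or already rolled back, which is why the manual utx.rollback() then reports that no transaction exists. A minimal self-contained sketch, assuming a container-managed (CDI/EJB) bean with JTA; EntityA and EntityB are hypothetical entities standing in for the roles and relations above:

@Transactional(rollbackOn = Exception.class) // javax.transaction.Transactional
public void saveAll(EntityA a, EntityB b) {
    // Both persists share the one container-managed transaction; if the
    // second persist throws (e.g. a unique constraint violation), the
    // container rolls the first one back as well. No begin/commit/rollback
    // calls are needed: just let the exception propagate.
    entityManager.persist(a);
    entityManager.persist(b);
}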

Save on JDBC connections by using JdbcCursorItemReader or JdbcPagingItemReader

In my Spring Batch project, I used JdbcCursorItemReader to read data in order to process it in parallel. I can run the batch locally without any problem.
I also heard that JdbcPagingItemReader is recommended over JdbcCursorItemReader for parallel processing, as the cursor reader holds the connection for a long time, while the paging reader can release the connection once each page is read.
I then switched to JdbcPagingItemReader in step2, but to my surprise I got the exception below when running locally:
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 300001ms.
However, it seems the above exception occurs in step1, before the paging reader in step2 is executed, and that is the only change made. Please shed some light on why the exception is thrown, and whether it is good practice to use the paging reader instead of the cursor reader in parallel processing. Your help is much appreciated!
The code snippet is pasted below:
@Bean
@StepScope
public Flow createParallelSubFlow() {
    List<Flow> subFlowList = new ArrayList<>();
    List<Stream> streamList;
    try {
        streamList = dataSourceConfig.streamMapper()
                .getStreamListByStatus(Constants.PENDING_STATUS_CD);
    } catch (Exception e) {
        // swallowing the exception here would leave streamList unassigned; rethrow instead
        throw new RuntimeException("Failed to load stream list", e);
    }
    streamList.forEach(stream -> {
        long id = stream.getStreamId();
        String flowName = "stream" + id + "_flow";
        Flow subFlow = new FlowBuilder<Flow>(flowName)
                .start(step1(id))
                .next(step2(id))
                .end();
        subFlowList.add(subFlow);
    });
    return new FlowBuilder<Flow>("splitFlow").split(new SimpleAsyncTaskExecutor())
            .add(subFlowList.toArray(new Flow[0])).build();
}
public Step step1(long id) {
    return stepBuilderFactory.get("step1")
            .<Domain, Domain>chunk(100)
            .reader(reader1(id))
            .writer(writer1())
            .build();
}

// @StepScope
// @Bean
public Step step2(long id) {
    return stepBuilderFactory.get("step2")
            .<Domain, Domain>chunk(100)
            .reader(cursorReader2(id))
            .processor(processor2)
            .writer(writer2())
            .build();
}

public JdbcCursorItemReader<Domain> cursorReader2(Long id) {
    return new JdbcCursorItemReaderBuilder<Domain>()
            .dataSource(dataSourceConfig.dataSource())
            .name("cursorReader")
            .sql(Constants.QUERY_SQL)
            .preparedStatementSetter(new PreparedStatementSetter() {
                @Override
                public void setValues(PreparedStatement ps) throws SQLException {
                    ps.setLong(1, id);
                }
            })
            .rowMapper(new RowMapper())
            .build();
}
// Switched from cursorReader2 to pagingReader2 in step2
public JdbcPagingItemReader<Domain> pagingReader2(Long id) {
    return new JdbcPagingItemReaderBuilder<Domain>()
            .dataSource(dataSourceConfig.dataSource())
            .name("pagingReader")
            .queryProvider(queryProvider())
            .parameterValues(parameterValues(id))
            .rowMapper(new RowMapper())
            .pageSize(100)
            .build();
}
@Bean
public PagingQueryProvider queryProvider() {
    SqlPagingQueryProviderFactoryBean providerFactory = new SqlPagingQueryProviderFactoryBean();
    Map<String, Order> sortKeys = new HashMap<>(2);
    sortKeys.put("ID", Order.ASCENDING);
    providerFactory.setDataSource(dataSourceConfig.dataSource());
    providerFactory.setSelectClause("SELECT Clause");
    providerFactory.setFromClause("FROM Clause");
    providerFactory.setWhereClause("WHERE Clause");
    providerFactory.setSortKeys(sortKeys);
    PagingQueryProvider pagingQueryProvider = null;
    try {
        pagingQueryProvider = providerFactory.getObject();
    } catch (Exception e) {
        logger.error("Failed to get PagingQueryProvider", e);
        throw new RuntimeException("Failed to get PagingQueryProvider", e);
    }
    return pagingQueryProvider;
}

private Map<String, Object> parameterValues(Long id) {
    Map<String, Object> parameterValues = new HashMap<>();
    parameterValues.put("1", id);
    return parameterValues;
}
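A plausible cause, for what it's worth: the split flow launches every stream's step1/step2 pair concurrently on a SimpleAsyncTaskExecutor, which spawns an unbounded number of threads, and each JdbcCursorItemReader pins one connection for the life of its step. HikariCP's default maximumPoolSize is 10, so with enough parallel sub-flows the pool is exhausted and later getConnection() calls time out; the timeout then surfaces in whichever step happens to ask next, which can be step1 even though only step2 was changed. Raising the pool size (for Spring Boot, spring.datasource.hikari.maximum-pool-size) or bounding the concurrency are the usual remedies. A sketch of the latter, using a fixed-size executor for the split (the pool sizes are illustrative):

@Bean
public TaskExecutor splitTaskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(4); // at most 4 sub-flows in flight, so at most
    executor.setMaxPoolSize(4);  // ~4 connections pinned by cursor readers
    executor.setThreadNamePrefix("sub-flow-");
    executor.initialize();
    return executor;
}

// then build the split with the bounded executor instead:
// new FlowBuilder<Flow>("splitFlow").split(splitTaskExecutor())...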