RepositoryItemReader - How can I use a method with List parameter - spring-batch

I have the following repository:
@Repository
public interface InterlocutorRepository extends PagingAndSortingRepository<Interlocutor, UsuarioId> {
    Page<Interlocutor> findByFlCadastroOnlineIn(List<StatusCadOnline> flCadastroOnline, Pageable pageable);
}
And I have the following reader in my Spring Batch App:
@Bean
public ItemReader<Interlocutor> reader() {
    List<StatusCadOnline> listaStatusCadOnline = new ArrayList<>();
    listaStatusCadOnline.add(StatusCadOnline.PENDENTE);
    listaStatusCadOnline.add(StatusCadOnline.ERRO);
    RepositoryItemReader<Interlocutor> reader = new RepositoryItemReader<>();
    reader.setRepository(repository);
    reader.setMethodName("findByFlCadastroOnlineIn");
    reader.setArguments(listaStatusCadOnline);
    HashMap<String, Sort.Direction> sorts = new HashMap<>();
    sorts.put("nrCpf", Sort.Direction.ASC);
    reader.setSort(sorts);
    return reader;
}
Spring Batch interprets this as a call to a method with two arguments of type StatusCadOnline rather than a single parameter of type List.
LogError:
ERROR [SimpleAsyncTaskExecutor-178] b.c.a.c.b.steps.ItemSkipPolicy.shouldSkip 12 - java.lang.NoSuchMethodException: com.sun.proxy.$Proxy84.findByFlCadastroOnlineIn(br.com.alelo.corp.core.model.enumerator.StatusCadOnline, br.com.alelo.corp.core.model.enumerator.StatusCadOnline, org.springframework.data.domain.PageRequest)
Could anyone help me with that?
Thanks

To use a Spring Data findBy*In() method with RepositoryItemReader, the reader has to receive the list as a single argument. To do that, wrap it so that what you pass to setArguments() is a List<List<?>>.
A working example for your case:
...
List<List<StatusCadOnline>> arguments = new ArrayList<>();
List<StatusCadOnline> attributeListIn = new ArrayList<>();
attributeListIn.add(StatusCadOnline.PENDENTE);
attributeListIn.add(StatusCadOnline.ERRO);
arguments.add(attributeListIn);
RepositoryItemReader<Interlocutor> reader = new RepositoryItemReader<>();
reader.setRepository(repository);
reader.setMethodName("findByFlCadastroOnlineIn");
reader.setArguments(arguments);
...
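Putting it together, the reader bean from the question would look roughly like this (a sketch; the injected repository field and the page size are assumptions):
@Bean
public ItemReader<Interlocutor> reader() {
    // Outer list: one element per repository-method argument.
    // Inner list: the actual values for the In clause.
    List<List<StatusCadOnline>> arguments = new ArrayList<>();
    List<StatusCadOnline> statusList = new ArrayList<>();
    statusList.add(StatusCadOnline.PENDENTE);
    statusList.add(StatusCadOnline.ERRO);
    arguments.add(statusList);

    RepositoryItemReader<Interlocutor> reader = new RepositoryItemReader<>();
    reader.setRepository(repository); // assumed: injected InterlocutorRepository
    reader.setMethodName("findByFlCadastroOnlineIn");
    reader.setArguments(arguments);
    reader.setPageSize(100); // assumed value; tune to your chunk size

    Map<String, Sort.Direction> sorts = new HashMap<>();
    sorts.put("nrCpf", Sort.Direction.ASC);
    reader.setSort(sorts);
    return reader;
}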

Related

Spring Batch: How to write to a CSV file from a Collection

I need to write to a CSV file from a Collection.
I created a class:
public class ItemWriterForCSVFile extends FlatFileItemWriter<Map<String, String>> {

    private LineAggregator<Map<String, String>> createLineAggregator() {
        DelimitedLineAggregator<Map<String, String>> lineAggregator = new DelimitedLineAggregator<>();
        lineAggregator.setDelimiter(",");
        BeanWrapperFieldExtractor<Map<String, String>> fieldExtractor = createFieldExtractor();
        lineAggregator.setFieldExtractor(fieldExtractor);
        return lineAggregator;
    }

    private BeanWrapperFieldExtractor<Map<String, String>> createFieldExtractor() {
        BeanWrapperFieldExtractor<Map<String, String>> extractor = new BeanWrapperFieldExtractor<>();
        extractor.setNames(fields);
        return extractor;
    }
}
Currently I have the code above, which extracts fields from an object.
I need to change it to use a Map as a dynamic structure.
I saw that PassThroughFieldExtractor can write a collection, but I haven't found a suitable example in Java.
Any help appreciated.
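One way this could look (a minimal sketch, not from the original thread; the output path and writer wiring are assumptions): PassThroughFieldExtractor hands a Map's values() straight to the aggregator, so no per-field configuration is needed.
FlatFileItemWriter<Map<String, String>> writer = new FlatFileItemWriter<>();
writer.setResource(new FileSystemResource("output.csv")); // assumed path

DelimitedLineAggregator<Map<String, String>> lineAggregator = new DelimitedLineAggregator<>();
lineAggregator.setDelimiter(",");
// Column order follows the map's iteration order,
// so a LinkedHashMap is advisable for the items.
lineAggregator.setFieldExtractor(new PassThroughFieldExtractor<Map<String, String>>());
writer.setLineAggregator(lineAggregator);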

Is using a default method in an interface a good pattern to avoid code duplication?

We have a lot of code duplication in data-holder classes that can be serialized to an XML string:
public String toXml() throws JAXBException {
    final JAXBContext context = JAXBContext.newInstance(this.getClass());
    final Marshaller marshaller = context.createMarshaller();
    marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8");
    final StringWriter stringWriter = new StringWriter();
    marshaller.marshal(this, stringWriter);
    return stringWriter.toString();
}
Why not move this code to a single interface with a default implementation? A simple implements ToXml would then be enough to share the implementation and avoid the duplicates:
public interface ToXml {

    default String toXml() throws JAXBException {
        final JAXBContext context = JAXBContext.newInstance(this.getClass());
        final Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8");
        final StringWriter stringWriter = new StringWriter();
        marshaller.marshal(this, stringWriter);
        return stringWriter.toString();
    }
}
Has anybody done this before successfully?
Other solutions?
I could also imagine using an annotation to generate this code.
Are there any ready to use solutions available?
Yes, default methods can be used in that way.
Although the headline use case for default methods is adding new functionality to existing interfaces without breaking old code, they have other uses as well. Default methods also appear in interfaces that were themselves introduced in Java 8, such as java.util.function.Predicate, so even the language designers recognized that evolving old interfaces is not their only valid use.
A disadvantage could be that the implemented interfaces are part of a class's public contract, but in your case this does not seem to be a problem.
If every class uses the exact same method, an interface won't help; what you want is a static method in a utility class.
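For illustration, a minimal sketch of that utility-class approach (the class name XmlUtils is hypothetical):
public final class XmlUtils {

    private XmlUtils() {
        // static utility class, not instantiable
    }

    // Hypothetical static helper replacing the duplicated toXml() methods.
    public static String toXml(Object bean) throws JAXBException {
        final JAXBContext context = JAXBContext.newInstance(bean.getClass());
        final Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8");
        final StringWriter stringWriter = new StringWriter();
        marshaller.marshal(bean, stringWriter);
        return stringWriter.toString();
    }
}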

How to use BeanWrapperFieldSetMapper to map a subset of fields?

I have a Spring Batch application where BeanWrapperFieldSetMapper is used to map fields using a prototype object. However, the CSV file being read (via a FlatFileItemReader) contains one (indicator) field that determines the mapping of another field. If the indicator field has a value of Y, the value of the other field should be mapped to property foo; otherwise it should be mapped to property bar.
I know that I can use a custom FieldSetMapper to do this, but then I have to code the mapping of all the other fields (of which there are quite a few). Alternatively, I could do this after reading, via an ItemProcessor, but then my domain (prototype) object would need a property representing the indicator field (which I prefer to avoid, since it is not really part of the business domain).
Is it possible to use a custom FieldSetMapper to map only these special fields and delegate the other mappings to BeanWrapperFieldSetMapper? Or is there some better way to solve this?
Here is my current attempt to use a custom FieldSetMapper and delegate to BeanWrapperFieldSetMapper:
public class DelegatedFieldSetMapper extends BeanWrapperFieldSetMapper<MyProtoClass> {

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        String indicator = fieldSet.readString("indicator");
        Properties fieldProperties = fieldSet.getProperties();
        if (indicator.equalsIgnoreCase("y")) {
            fieldProperties.put("test.foo", fieldSet.readString("value"));
        } else {
            fieldProperties.put("test.bar", fieldSet.readString("value"));
        }
        fieldProperties.remove("indicator");
        Set<Object> keys = fieldProperties.keySet();
        List<String> names = new ArrayList<String>();
        List<String> values = new ArrayList<String>();
        for (Object key : keys) {
            names.add((String) key);
            values.add(fieldProperties.getProperty((String) key));
        }
        DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
        return super.mapFieldSet(domainObjectFieldSet);
    }
}
However, a FlatFileParseException is thrown. The relevant parts of the batch config class are as follows:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Value("${file}")
    private File file;

    @Bean
    @Scope("prototype")
    public MyProtoClass myProtoClass() {
        return new MyProtoClass();
    }

    @Bean
    public ItemReader<MyProtoClass> reader(LineMapper<MyProtoClass> lineMapper) {
        FlatFileItemReader<MyProtoClass> flatFileItemReader = new FlatFileItemReader<MyProtoClass>();
        flatFileItemReader.setResource(new FileSystemResource(file));
        final int NUMBER_OF_HEADER_LINES = 1;
        flatFileItemReader.setLinesToSkip(NUMBER_OF_HEADER_LINES);
        flatFileItemReader.setLineMapper(lineMapper);
        return flatFileItemReader;
    }

    @Bean
    public LineMapper<MyProtoClass> lineMapper(LineTokenizer lineTokenizer, FieldSetMapper<MyProtoClass> fieldSetMapper) {
        DefaultLineMapper<MyProtoClass> lineMapper = new DefaultLineMapper<MyProtoClass>();
        lineMapper.setLineTokenizer(lineTokenizer);
        lineMapper.setFieldSetMapper(fieldSetMapper);
        return lineMapper;
    }

    @Bean
    public LineTokenizer lineTokenizer() {
        DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
        lineTokenizer.setNames(new String[] {"value", "test.bar", "test.foo", "indicator"});
        return lineTokenizer;
    }

    @Bean
    public FieldSetMapper<MyProtoClass> fieldSetMapper(PropertyEditor emptyStringToNullPropertyEditor) {
        BeanWrapperFieldSetMapper<MyProtoClass> fieldSetMapper = new DelegatedFieldSetMapper();
        fieldSetMapper.setPrototypeBeanName("myProtoClass");
        Map<Class<String>, PropertyEditor> customEditors = new HashMap<Class<String>, PropertyEditor>();
        customEditors.put(String.class, emptyStringToNullPropertyEditor);
        fieldSetMapper.setCustomEditors(customEditors);
        return fieldSetMapper;
    }
}
Finally, the CSV flat file looks like this:
value,bar,foo,indicator
abc,,,y
xyz,,,n
Let's say that BatchWorkObject is the class to be mapped.
Here's sample code in Spring Boot style; only your custom logic needs to be added.
FieldSetMapper<BatchWorkObject> fieldSetMapper = new BeanWrapperFieldSetMapper<BatchWorkObject>() {
    {
        this.setTargetType(BatchWorkObject.class);
    }

    @Override
    public BatchWorkObject mapFieldSet(FieldSet fs) throws BindException {
        BatchWorkObject tmp = super.mapFieldSet(fs);
        // your custom code here
        return tmp;
    }
};
The code above actually accomplishes what is desired, except for one issue that results in the FlatFileParseException. The problem is this line in the DelegatedFieldSetMapper:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
DefaultFieldSet's two-argument constructor takes the token values first and the column names second, so the arguments are swapped. To resolve, change it to:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(values.toArray(new String[values.size()]), names.toArray(new String[names.size()]));
Write your own FieldSetMapper with a set of prepared delegates inside.
The delegates are pre-built, one for each different kind of field mapping.
In your mapper, route to the correct delegate based on the indicator field (with a Classifier, for example); see the sketch below.
I can't see any other way, but this solution is quite easy and straightforward to maintain.
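A minimal sketch of that idea (names are hypothetical; the two delegates would be ordinary BeanWrapperFieldSetMapper instances configured for foo and bar respectively):
public class RoutingFieldSetMapper implements FieldSetMapper<MyProtoClass> {

    // Pre-built delegates, e.g. BeanWrapperFieldSetMapper instances
    // that map "value" to test.foo or test.bar respectively.
    private FieldSetMapper<MyProtoClass> fooDelegate;
    private FieldSetMapper<MyProtoClass> barDelegate;

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        // Route on the indicator field; a Classifier could replace
        // this conditional if more routes are needed.
        if ("y".equalsIgnoreCase(fieldSet.readString("indicator"))) {
            return fooDelegate.mapFieldSet(fieldSet);
        }
        return barDelegate.mapFieldSet(fieldSet);
    }
}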
Processing based on the input format/data can also be done with a custom ItemProcessor that either changes values on the entity populated by the ItemReader or creates a new output entity.

QueryBuilder: get parameters for Dao.queryRaw

I'm using QueryBuilder to create a raw query, but I need to fill the parameters of the raw query manually.
The properties 'from' and 'to' are filled twice: once in the where() section of the QueryBuilder, and once as parameters to the queryRaw() method.
StatementBuilder.prepareStatementString() returns the query string with "?" placeholders for substitution.
Is there any way to get these parameters directly from the QueryBuilder instance?
For example, imagine a new method in ORMLite: StatementBuilder.getPreparedStatementParameters().
QueryBuilder<AccountableItemEntity, Long> accountableItemQb = accountableItemDao.queryBuilder();
QueryBuilder<AccountingEntryEntity, Long> accountingEntryQb = accountingEntryDao.queryBuilder();
accountingEntryQb.where().eq(
        AccountingEntryEntity.ACCOUNTING_ENTRY_STATE_FIELD_NAME,
        AccountingEntryStateEnum.CREATED);
accountingEntryQb.join(accountableItemQb);

QueryBuilder<AccountingTransactionEntity, Long> accountingTransactionQb =
        accountingTransactionDao.queryBuilder();
accountingTransactionQb.selectRaw("ACCOUNTINGENTRYENTITY.TITLE, " +
        "ACCOUNTINGENTRYENTITY.ACCOUNTABLE_ITEM_ID, " +
        "SUM(ACCOUNTINGENTRYENTITY.COUNT), " +
        "SUM(ACCOUNTINGENTRYENTITY.COUNT * CONVERT(ACCOUNTINGENTRYENTITY.PRICEAMOUNT,DECIMAL(20, 2)))");
accountingTransactionQb.join(accountingEntryQb);
accountingTransactionQb.where().eq(
        AccountingTransactionEntity.ACCOUNTING_TRANSACTION_STATE_FIELD_NAME,
        AccountingTransactionStateEnum.PRINTED)
        .and().between(AccountingTransactionEntity.CREATE_TIME_FIELD_NAME, from, to);
accountingTransactionQb.groupByRaw(
        "ACCOUNTINGENTRYENTITY.ACCOUNTABLE_ITEM_ID, ACCOUNTINGENTRYENTITY.TITLE");

String query = accountingTransactionQb.prepareStatementString();
accountingTransactionQb.prepare().getStatement();

Timestamp fromTimestamp = new Timestamp(from.getTime());
Timestamp toTimestamp = new Timestamp(to.getTime());
//TODO: get parameters from accountingTransactionQb
GenericRawResults<Object[]> genericRawResults =
        accountingEntryDao.queryRaw(query, new DataType[] { DataType.STRING,
                DataType.LONG, DataType.LONG, DataType.BIG_DECIMAL },
                fromTimestamp.toString(), toTimestamp.toString());
Is there any way to get these parameters directly from the QueryBuilder instance?
Yes, there is a way. You need to subclass QueryBuilder; you can then use the protected appendStatementString(...) method. You provide the argList, which can then be used to get the list of arguments.
protected void appendStatementString(StringBuilder sb,
        List<ArgumentHolder> argList) throws SQLException {
    appendStatementStart(sb, argList);
    appendWhereStatement(sb, argList, true);
    appendStatementEnd(sb, argList);
}
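For illustration, a subclass along these lines should work (a minimal sketch; the constructor parameters match ORMLite's QueryBuilder, but verify against your version):
public class ParamExposingQueryBuilder<T, ID> extends QueryBuilder<T, ID> {

    public ParamExposingQueryBuilder(DatabaseType databaseType, TableInfo<T, ID> tableInfo, Dao<T, ID> dao) {
        super(databaseType, tableInfo, dao);
    }

    // Builds the statement and returns the arguments collected along the way.
    public List<ArgumentHolder> collectArguments() throws SQLException {
        StringBuilder sb = new StringBuilder();
        List<ArgumentHolder> argList = new ArrayList<ArgumentHolder>();
        appendStatementString(sb, argList);
        return argList;
    }
}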
For example, imagine a new method in ORMLite: StatementBuilder.getPreparedStatementParameters().
Good idea. I've made the following changes to the GitHub repo.
public StatementInfo prepareStatementInfo() throws SQLException {
    List<ArgumentHolder> argList = new ArrayList<ArgumentHolder>();
    String statement = buildStatementString(argList);
    return new StatementInfo(statement, argList);
}
...
public static class StatementInfo {
    private final String statement;
    private final List<ArgumentHolder> argList;
    ...
The feature will be in version 4.46. You can build a release from current trunk if you don't want to wait for that release.
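Usage would then look something like this (a sketch; the accessor names on StatementInfo are assumptions based on the fields shown above):
StatementInfo info = accountingTransactionQb.prepareStatementInfo();
String query = info.getStatement();              // assumed accessor
List<ArgumentHolder> args = info.getArguments(); // assumed accessor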

Can't insert new entry into deserialized AutoBean Map

When I try to insert a new entry into a deserialized Map instance, I get no exception, but the Map is not modified. The EntryPoint code below demonstrates it. Am I doing anything wrong?
public class Test2 implements EntryPoint {

    public interface SomeProxy {
        Map<String, List<Integer>> getStringKeyMap();
        void setStringKeyMap(Map<String, List<Integer>> value);
    }

    public interface BeanFactory extends AutoBeanFactory {
        BeanFactory INSTANCE = GWT.create(BeanFactory.class);
        AutoBean<SomeProxy> someProxy();
    }

    @Override
    public void onModuleLoad() {
        SomeProxy proxy = BeanFactory.INSTANCE.someProxy().as();
        proxy.setStringKeyMap(new HashMap<String, List<Integer>>());
        proxy.getStringKeyMap().put("k1", new ArrayList<Integer>());
        proxy.getStringKeyMap().put("k2", new ArrayList<Integer>());

        String payload = AutoBeanCodex.encode(AutoBeanUtils.getAutoBean(proxy)).toString();
        proxy = AutoBeanCodex.decode(BeanFactory.INSTANCE, SomeProxy.class, payload).as();

        // insert a new entry into the deserialized map
        proxy.getStringKeyMap().put("k3", new ArrayList<Integer>());
        System.out.println(proxy.getStringKeyMap().keySet()); // the keySet is [k1, k2] :-( where is k3?
    }
}
Shouldn't AutoBeanCodex.encode(AutoBeanUtils.getAutoBean(proxy)).toString() be getPayload()?
I'll check the code later, and I don't know if that is what's causing the issue, but it did stand out as different from my typical approach.
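That suggestion as code (a one-line sketch of the commenter's idea):
String payload = AutoBeanCodex.encode(AutoBeanUtils.getAutoBean(proxy)).getPayload();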
Collection classes such as java.util.Set and java.util.List are tricky because they operate in terms of Object instances. To make collections serializable, you should specify the particular type of objects they are expected to contain through normal type parameters (for example, Map<Foo,Bar> rather than just Map). If you use raw collections or maps you will get bloated code and be vulnerable to denial of service attacks.
Source: http://www.gwtproject.org/doc/latest/DevGuideServerCommunication.html#DevGuideSerializableTypes