How to set values in ItemPreparedStatementSetter for one to many mapping - spring-batch

I am trying to use JdbcBatchItemWriter for a domain object, RemittanceClaimVO. RemittanceClaimVO has a List of another domain object, ClaimVO.
public class RemittanceClaimVO {
    private long remitId;
    private List<ClaimVO> claims = new ArrayList<ClaimVO>();
    // setters and getters
}
So for each remit id there are multiple claims, and I wish to use a single batch statement to insert all rows.
With plain JDBC, I used to write this object by adding values in batches, like below:
List<ClaimVO> claims = remittanceClaimVO.getClaims();
if (claims != null && !claims.isEmpty()) {
    for (ClaimVO claim : claims) {
        int counter = 1;
        stmt.setLong(counter++, remittanceClaimVO.getRemitId());
        stmt.setLong(counter++, claim.getClaimId());
        stmt.addBatch();
    }
}
stmt.executeBatch();
I am not sure how to achieve the same in Spring Batch using ItemPreparedStatementSetter.
I have tried a similar loop in the setValues method, but the values are not getting set.
@Override
public void setValues(RemittanceClaimVO remittanceClaimVO, PreparedStatement ps) throws SQLException {
    List<ClaimVO> claims = remittanceClaimVO.getClaims();
    for (ClaimVO claim : claims) {
        int counter = 1;
        ps.setLong(counter++, remittanceClaimVO.getRemitId());
        ps.setLong(counter++, claim.getClaimId());
    }
}
This seems to be another related question.
Please suggest.
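One possible workaround, as a minimal sketch (assuming a Spring Batch 4-style custom ItemWriter is acceptable in place of JdbcBatchItemWriter; the table and column names are hypothetical): ItemPreparedStatementSetter binds parameters for a single statement execution per item, so a loop inside setValues() keeps overwriting parameters 1 and 2 rather than adding batch rows. A custom ItemWriter can flatten the one-to-many structure and issue a real batch:
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.batch.item.ItemWriter;
import org.springframework.jdbc.core.JdbcTemplate;

public class RemittanceClaimWriter implements ItemWriter<RemittanceClaimVO> {

    // hypothetical table and column names
    private static final String SQL = "INSERT INTO remit_claim (remit_id, claim_id) VALUES (?, ?)";

    private final JdbcTemplate jdbcTemplate;

    public RemittanceClaimWriter(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    public void write(List<? extends RemittanceClaimVO> items) throws Exception {
        // Flatten the one-to-many structure: one batch row per claim.
        List<Object[]> rows = new ArrayList<>();
        for (RemittanceClaimVO remit : items) {
            for (ClaimVO claim : remit.getClaims()) {
                rows.add(new Object[] { remit.getRemitId(), claim.getClaimId() });
            }
        }
        jdbcTemplate.batchUpdate(SQL, rows);
    }
}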

Related

Get return value from ExecuteSqlRaw in EF Core

I have an extremely large table that I'm trying to get the number of rows for. Using COUNT(*) is too slow, so I want to run this query using EF Core:
int count = _dbContext.Database.ExecuteSqlRaw(
    "SELECT Total_Rows = SUM(st.row_count) " +
    "FROM sys.dm_db_partition_stats st " +
    "WHERE object_name(object_id) = 'MyLargeTable' AND (index_id < 2)");
The only problem is that the return value isn't the result of the query but the number of records returned, which is just 1.
Is there a way to get the correct value here, or will I need to use a different method?
Since you only need a scalar value, you can also use an output parameter to retrieve the data, e.g.:
var sql = @"
SELECT @Total_Rows = SUM(st.row_count)
FROM sys.dm_db_partition_stats st
WHERE object_name(object_id) = 'MyLargeTable' AND (index_id < 2)
";
var pTotalRows = new SqlParameter("@Total_Rows", System.Data.SqlDbType.BigInt);
pTotalRows.Direction = System.Data.ParameterDirection.Output;
db.Database.ExecuteSqlRaw(sql, pTotalRows);
var totalRows = (long?)(pTotalRows.Value == DBNull.Value ? null : pTotalRows.Value);
Allow me to recreate a correct answer based on this blog: https://erikej.github.io/efcore/2020/05/26/ef-core-fromsql-scalar.html
We need to create a virtual entity model for our database that will hold the query result, plus a pseudo DbSet<T> for that virtual model, so we can use EF Core's FromSqlRaw method, which returns data, instead of ExecuteSqlRaw, which just returns the number of rows affected by the query.
The example is for returning an integer value, but you can easily adapt it:
Define a class to hold the return value:
public class IntReturn
{
    public int Value { get; set; }
}
Fake a virtual DbSet<IntReturn>; it will not really be present in the db:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    ...
    modelBuilder.Entity<IntReturn>().HasNoKey();
    base.OnModelCreating(modelBuilder);
}
Now we can call FromSqlRaw on this virtual set. In this example the calling method is inside MyContext : DbContext (you'd need to instantiate your own context and use it instead of this):
NOTE the usage of "as Value", the same name as the IntReturn.Value property. In some weird cases you'd have to do the opposite: name your virtual model property after the name of the value the database function is returning.
public int ReserveNextCustomerId()
{
    var sql = $"Select nextval(pg_get_serial_sequence('\"Customers\"', 'Id')) as Value;";
    var i = this.Set<IntReturn>()
        .FromSqlRaw(sql)
        .AsEnumerable()
        .First().Value;
    return i;
}

Using Integer Array in postgres with Spring-boot

I am attempting to accept a List from the browser and use it within a SQL query to a Postgres database. The following code snippet shows the function I have made to do this. Some of the variable names have been changed, in case there appear to be discrepancies.
static public List<Map<String,Object>> fetch(NamedParameterJdbcTemplate jdbcTemplate, List<Integer> id) {
    List<Map<String,Object>> result = new ArrayList<>();
    String sql = "select * from lookup where id && ARRAY[ :ids ]";
    MapSqlParameterSource parameters = new MapSqlParameterSource();
    parameters.addValue("ids", id, Types.INTEGER);
    result = jdbcTemplate.query(sql,
        parameters,
        new RowMapper<Map<String,Object>>() { ...
        }
    );
    return result;
}
The lookup table's id field is a Postgres array, hence my need to use && and the ARRAY constructor.
This function is called from many different endpoints, each passing the NamedParameterJdbcTemplate as well as a list of Integers. The problem I am having is that if any integer in the list is < 100, I get the following message:
Bad value for type int : {20}
Is there another way of doing this, or a way around this error?
EDIT:
It appears part of the problem was what the answer mentions, but I was also using
rs.getInt(col)
instead of
rs.getArray(col)
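For illustration, a minimal sketch of a RowMapper that reads a Postgres integer[] column ("ids" is a hypothetical column name):
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;
import org.springframework.jdbc.core.RowMapper;

public class IdsRowMapper implements RowMapper<List<Integer>> {

    @Override
    public List<Integer> mapRow(ResultSet rs, int rowNum) throws SQLException {
        // Array columns must be read with getArray, not getInt.
        java.sql.Array sqlArray = rs.getArray("ids");
        // The Postgres JDBC driver materializes integer[] as Integer[].
        Integer[] values = (Integer[]) sqlArray.getArray();
        return Arrays.asList(values);
    }
}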
There's an error I can see in the SQL, and probably the wrong choice of API after that. First, in the query:
select * from lookup where id && ARRAY[ :ids ]
To bind an array parameter, it must not be placed in the ARRAY constructor, but rather you need to use JDBC binding like this:
select * from lookup where id && ?
As you've noticed, I'm not using a named parameter in these examples, because NamedParameterJdbcTemplate does not provide a route to obtaining the java.sql.Connection object or a proxy to it. You can access it through a PreparedStatementSetter if you use the JdbcOperations interface instead.
public static List<Map<String,Object>> fetch(NamedParameterJdbcTemplate jdbcTemplate, List<Integer> idlist) {
    List<Map<String,Object>> result = new ArrayList<>();
    String sql = "select * from lookup where id && ?";
    final Integer[] ids = idlist.toArray(new Integer[0]);
    PreparedStatementSetter parameters = new PreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement stmt) throws SQLException {
            Connection conn = stmt.getConnection();
            // this can only be done through the Connection
            java.sql.Array arr = conn.createArrayOf("integer", ids);
            // you can use setObject(1, arr, java.sql.Types.ARRAY) instead of setArray
            // in case the connection wrapper doesn't pass it on to the JDBC driver
            stmt.setArray(1, arr);
        }
    };
    JdbcOperations jdo = jdbcTemplate.getJdbcOperations();
    result = jdo.query(sql,
        parameters,
        new RowMapper<Map<String,Object>>() { ...
        }
    );
    return result;
}
There might be errors in the code, since I normally use a different set of APIs, and the java.sql.SQLException thrown from that setValues method still has to be dealt with, but you should be able to handle it from here on.

How to use BeanWrapperFieldSetMapper to map a subset of fields?

I have a Spring Batch application where BeanWrapperFieldSetMapper is used to map fields using a prototype object. However, the CSV file that is being read (via a FlatFileItemReader) contains one (indicator) field that determines the mapping of another field. If the indicator field has a value of Y, then the value of the other field should be mapped to property foo; otherwise it should be mapped to property bar.
I know that I can use a custom FieldSetMapper to do this, but then I have to code the mapping of all the other fields (of which there are quite a few). Alternatively, I could do this after reading, via an ItemProcessor, but then my domain (prototype) object must have a property representing the indicator field (which I prefer not to do, since it is not really part of the business domain).
Is it possible to use a custom FieldSetMapper to map only these custom fields and delegate the other mappings to BeanWrapperFieldSetMapper? Or is there some other better way to solve this?
Here is my current attempt to use a custom FieldSetMapper and delegate to BeanWrapperFieldSetMapper:
public class DelegatedFieldSetMapper extends BeanWrapperFieldSetMapper<MyProtoClass> {
    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        String indicator = fieldSet.readString("indicator");
        Properties fieldProperties = fieldSet.getProperties();
        if (indicator.equalsIgnoreCase("y")) {
            fieldProperties.put("test.foo", fieldSet.readString("value"));
        } else {
            fieldProperties.put("test.bar", fieldSet.readString("value"));
        }
        fieldProperties.remove("indicator");
        Set<Object> keys = fieldProperties.keySet();
        List<String> names = new ArrayList<String>();
        List<String> values = new ArrayList<String>();
        for (Object key : keys) {
            names.add((String) key);
            values.add(fieldProperties.getProperty((String) key));
        }
        DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
        return super.mapFieldSet(domainObjectFieldSet);
    }
}
However, a FlatFileParseException is thrown. The relevant parts of the batch config class are as follows:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Value("${file}")
    private File file;

    @Bean
    @Scope("prototype")
    public MyProtoClass myProtoClass() {
        return new MyProtoClass();
    }

    @Bean
    public ItemReader<MyProtoClass> reader(LineMapper<MyProtoClass> lineMapper) {
        FlatFileItemReader<MyProtoClass> flatFileItemReader = new FlatFileItemReader<MyProtoClass>();
        flatFileItemReader.setResource(new FileSystemResource(file));
        final int NUMBER_OF_HEADER_LINES = 1;
        flatFileItemReader.setLinesToSkip(NUMBER_OF_HEADER_LINES);
        flatFileItemReader.setLineMapper(lineMapper);
        return flatFileItemReader;
    }

    @Bean
    public LineMapper<MyProtoClass> lineMapper(LineTokenizer lineTokenizer, FieldSetMapper<MyProtoClass> fieldSetMapper) {
        DefaultLineMapper<MyProtoClass> lineMapper = new DefaultLineMapper<MyProtoClass>();
        lineMapper.setLineTokenizer(lineTokenizer);
        lineMapper.setFieldSetMapper(fieldSetMapper);
        return lineMapper;
    }

    @Bean
    public LineTokenizer lineTokenizer() {
        DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
        lineTokenizer.setNames(new String[] {"value", "test.bar", "test.foo", "indicator"});
        return lineTokenizer;
    }

    @Bean
    public FieldSetMapper<MyProtoClass> fieldSetMapper(PropertyEditor emptyStringToNullPropertyEditor) {
        BeanWrapperFieldSetMapper<MyProtoClass> fieldSetMapper = new DelegatedFieldSetMapper();
        fieldSetMapper.setPrototypeBeanName("myProtoClass");
        Map<Class<String>, PropertyEditor> customEditors = new HashMap<Class<String>, PropertyEditor>();
        customEditors.put(String.class, emptyStringToNullPropertyEditor);
        fieldSetMapper.setCustomEditors(customEditors);
        return fieldSetMapper;
    }
}
Finally, the CSV flat file looks like this:
value,bar,foo,indicator
abc,,,y
xyz,,,n
Let's say that BatchWorkObject is the class to be mapped.
Here's sample code, in Spring Boot style, that needs only your custom logic to be added:
new BeanWrapperFieldSetMapper<BatchWorkObject>() {
    {
        this.setTargetType(BatchWorkObject.class);
    }

    @Override
    public BatchWorkObject mapFieldSet(FieldSet fs)
            throws BindException {
        BatchWorkObject tmp = super.mapFieldSet(fs);
        // your custom code here
        return tmp;
    }
};
The code actually accomplishes what is desired except for one issue that results in the FlatFileParseException. The DelegatedFieldSetMapper contains the issue as follows:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(names.toArray(new String[names.size()]), values.toArray(new String[values.size()]));
To resolve, change to:
DefaultFieldSet domainObjectFieldSet = new DefaultFieldSet(values.toArray(new String[values.size()]), names.toArray(new String[names.size()]));
Write your own FieldSetMapper with a set of prepared delegates inside.
Those delegates are pre-built, one for each different kind of field mapping.
In your mapper, route to the correct delegate based on the indicator field (with a Classifier, for example), as in the sketch below.
I can't see any other way, but this solution is quite easy and straightforward to maintain.
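A minimal sketch of that routing idea, reusing the question's names (the two delegates are assumed to be pre-configured BeanWrapperFieldSetMappers, one targeting foo and one targeting bar):
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

// Route each FieldSet to a pre-built delegate based on the indicator field.
public class RoutingFieldSetMapper implements FieldSetMapper<MyProtoClass> {

    private final FieldSetMapper<MyProtoClass> fooMapper; // configured to map "value" onto foo
    private final FieldSetMapper<MyProtoClass> barMapper; // configured to map "value" onto bar

    public RoutingFieldSetMapper(FieldSetMapper<MyProtoClass> fooMapper,
                                 FieldSetMapper<MyProtoClass> barMapper) {
        this.fooMapper = fooMapper;
        this.barMapper = barMapper;
    }

    @Override
    public MyProtoClass mapFieldSet(FieldSet fieldSet) throws BindException {
        FieldSetMapper<MyProtoClass> delegate =
            "y".equalsIgnoreCase(fieldSet.readString("indicator")) ? fooMapper : barMapper;
        return delegate.mapFieldSet(fieldSet);
    }
}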
Processing based on the input format/data can be done using a custom implementation of ItemProcessor, which either changes values in the same entity (that was populated by the ItemReader) or creates a new output entity, for example:
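A short sketch of that ItemProcessor variant (RawRecord, getIndicator(), setFoo() and setBar() are hypothetical names):
import org.springframework.batch.item.ItemProcessor;

// RawRecord is a hypothetical input type that still carries the indicator;
// the processor decides which business property receives the value.
public class IndicatorItemProcessor implements ItemProcessor<RawRecord, MyProtoClass> {

    @Override
    public MyProtoClass process(RawRecord raw) {
        MyProtoClass out = new MyProtoClass();
        if ("y".equalsIgnoreCase(raw.getIndicator())) {
            out.setFoo(raw.getValue());
        } else {
            out.setBar(raw.getValue());
        }
        return out;
    }
}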

Filehelpers and Entity Framework

I'm using FileHelpers to parse a very wide, fixed-format file and want to be able to take the resulting object and load it into a DB using EF. I'm getting a missing-key error when I try to load the object into the DB, and when I try to add an Id I get a FileHelpers error. So it seems like either fix breaks the other. I know I can map a FileHelpers object to a POCO object and load that, but I'm dealing with dozens (sometimes hundreds) of columns, so I would rather not have to go through that hassle.
I'm also open to other suggestions for parsing a fixed width file and loading the results into a DB. One option of course is to use an ETL tool but I'd rather do this in code.
Thanks!
This is the FileHelpers class:
public class AccountBalanceDetail
{
    [FieldHidden]
    public int Id; // Added to try and get EF to work

    [FieldFixedLength(1)]
    public string RecordNuber;

    [FieldFixedLength(3)]
    public string Branch;

    // Additional fields below
}
And this is the method that's processing the file:
public static bool ProcessFile()
{
    var dir = Properties.Settings.Default.DataDirectory;
    var engine = new MultiRecordEngine(typeof(AccountBalanceHeader), typeof(AccountBalanceDetail), typeof(AccountBalanceTrailer));
    engine.RecordSelector = new RecordTypeSelector(CustomSelector);
    var fileName = dir + "\\MOCK_ACCTBAL_L1500.txt";
    var res = engine.ReadFile(fileName);
    foreach (var rec in res)
    {
        var type = rec.GetType();
        if (type.Name == "AccountBalanceHeader") continue;
        if (type.Name == "AccountBalanceTrailer") continue;
        var data = rec as AccountBalanceDetail; // Throws an error if AccountBalanceDetail.Id has a getter and setter
        using (var ctx = new ApplicationDbContext())
        {
            // Throws an error if there is no valid Id on AccountBalanceDetail:
            // EntityType 'AccountBalanceDetail' has no key defined. Define the key for this EntityType.
            ctx.AccountBalanceDetails.Add(data);
            ctx.SaveChanges();
        }
        //Console.WriteLine(rec.ToString());
    }
    return true;
}
Entity Framework needs the key to be a property, not a field, so you could try declaring it instead as:
public int Id { get; set; }
I suspect FileHelpers might well be confused by the autogenerated backing field, so you might need to do it long form in order to be able to mark the backing field with the [FieldHidden] attribute, i.e.,
[FieldHidden]
private int _Id;

public int Id
{
    get { return _Id; }
    set { _Id = value; }
}
However, you are trying to use the same class for two unrelated purposes, and this is generally bad design. On the one hand, AccountBalanceDetail is the spec for the import format. On the other, you are also trying to use it to describe the entity. Instead, you should create separate classes and map between the two with a LINQ function or a library like AutoMapper.

QueryBuilder: get parameters for Dao.queryRaw

I'm using QueryBuilder to create a raw query, but I need to fill in the raw query's parameters manually.
The properties 'from' and 'to' are filled in twice: once in the 'where' section of the QueryBuilder, and once in the queryRaw method as parameters.
The method StatementBuilder.prepareStatementString() returns the query string with "?" placeholders for substitution.
Is there any way to get these parameters directly from the QueryBuilder instance?
For example, imagine a new method in ormlite: StatementBuilder.getPreparedStatementParameters().
QueryBuilder<AccountableItemEntity, Long> accountableItemQb = accountableItemDao.queryBuilder();
QueryBuilder<AccountingEntryEntity, Long> accountingEntryQb = accountingEntryDao.queryBuilder();
accountingEntryQb.where().eq(
        AccountingEntryEntity.ACCOUNTING_ENTRY_STATE_FIELD_NAME,
        AccountingEntryStateEnum.CREATED);
accountingEntryQb.join(accountableItemQb);

QueryBuilder<AccountingTransactionEntity, Long> accountingTransactionQb =
        accountingTransactionDao.queryBuilder();
accountingTransactionQb.selectRaw("ACCOUNTINGENTRYENTITY.TITLE, " +
        "ACCOUNTINGENTRYENTITY.ACCOUNTABLE_ITEM_ID, " +
        "SUM(ACCOUNTINGENTRYENTITY.COUNT), " +
        "SUM(ACCOUNTINGENTRYENTITY.COUNT * CONVERT(ACCOUNTINGENTRYENTITY.PRICEAMOUNT,DECIMAL(20, 2)))");
accountingTransactionQb.join(accountingEntryQb);
accountingTransactionQb.where().eq(
        AccountingTransactionEntity.ACCOUNTING_TRANSACTION_STATE_FIELD_NAME,
        AccountingTransactionStateEnum.PRINTED)
        .and().between(AccountingTransactionEntity.CREATE_TIME_FIELD_NAME, from, to);
accountingTransactionQb.groupByRaw(
        "ACCOUNTINGENTRYENTITY.ACCOUNTABLE_ITEM_ID, ACCOUNTINGENTRYENTITY.TITLE");

String query = accountingTransactionQb.prepareStatementString();
accountingTransactionQb.prepare().getStatement();

Timestamp fromTimestamp = new Timestamp(from.getTime());
Timestamp toTimestamp = new Timestamp(to.getTime());
//TODO: get parameters from accountingTransactionQb
GenericRawResults<Object[]> genericRawResults =
        accountingEntryDao.queryRaw(query,
                new DataType[] { DataType.STRING, DataType.LONG, DataType.LONG, DataType.BIG_DECIMAL },
                fromTimestamp.toString(), toTimestamp.toString());
Is there any way to get these parameters directly from the QueryBuilder instance?
Yes, there is a way. You need to subclass QueryBuilder, and then you can use the appendStatementString(...) method. You provide the argList, which can then be used to get the list of arguments:
protected void appendStatementString(StringBuilder sb,
        List<ArgumentHolder> argList) throws SQLException {
    appendStatementStart(sb, argList);
    appendWhereStatement(sb, argList, true);
    appendStatementEnd(sb, argList);
}
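A minimal sketch of such a subclass (the constructor arguments are assumed to match the ORMLite version in use):
import java.sql.SQLException;
import java.util.List;
import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.db.DatabaseType;
import com.j256.ormlite.stmt.ArgumentHolder;
import com.j256.ormlite.stmt.QueryBuilder;
import com.j256.ormlite.table.TableInfo;

// Expose the statement string together with its bound arguments.
public class ArgAwareQueryBuilder<T, ID> extends QueryBuilder<T, ID> {

    public ArgAwareQueryBuilder(DatabaseType databaseType, TableInfo<T, ID> tableInfo, Dao<T, ID> dao) {
        super(databaseType, tableInfo, dao);
    }

    // Fills argList with the bound arguments while building the statement.
    public String prepareStatementStringWithArgs(List<ArgumentHolder> argList) throws SQLException {
        StringBuilder sb = new StringBuilder();
        appendStatementString(sb, argList); // protected, hence the subclass
        return sb.toString();
    }
}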
For example, imagine a new method in ormlite: StatementBuilder.getPreparedStatementParameters().
Good idea. I've made the following changes to the Github repo.
public StatementInfo prepareStatementInfo() throws SQLException {
    List<ArgumentHolder> argList = new ArrayList<ArgumentHolder>();
    String statement = buildStatementString(argList);
    return new StatementInfo(statement, argList);
}
...
public static class StatementInfo {
    private final String statement;
    private final List<ArgumentHolder> argList;
    ...
The feature will be in version 4.46. You can build from the current trunk if you don't want to wait for that release.
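Assuming StatementInfo exposes getters for those two fields, usage would look something like this (the getter names are assumptions based on the snippet above):
// Build the statement once and get both the SQL and its arguments.
StatementInfo info = accountingTransactionQb.prepareStatementInfo();
String query = info.getStatement();
List<ArgumentHolder> argList = info.getArgList();

// Convert the bound arguments for queryRaw's String... parameters.
String[] rawArgs = new String[argList.size()];
for (int i = 0; i < argList.size(); i++) {
    rawArgs[i] = String.valueOf(argList.get(i).getSqlArgValue());
}

GenericRawResults<Object[]> results = accountingEntryDao.queryRaw(
        query,
        new DataType[] { DataType.STRING, DataType.LONG, DataType.LONG, DataType.BIG_DECIMAL },
        rawArgs);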