I have a requirement where the CSV input file is created dynamically, so specifying a Mapper class is not possible.
Is there a way to avoid setting the class and still be able to read and write in Spring Batch?
BeanWrapperFieldSetMapper fieldSetMapper = new BeanWrapperFieldSetMapper();
fieldSetMapper.setTargetType(Target.class); // I want to avoid this.
Additional information:
1) Run some logic and create the CSV (comma delimited).
2) The columns are in a fixed order, and I store that information statically in a properties file (c1,c2,c3), which I also use to pass into lineTokenizer.setNames(properties.get(jobName.columnValues)).
3) The same code executes for different jobNames, and all the required information is fetched from the properties.
4) Now the problem, for the FieldSetMapper:
Class classInstance = Class.forName(getClassProperty(jobName));
fieldSetMapper.setTargetType(classInstance);
For point 4 I have to maintain a class for each job, which I want to avoid.
Alternatively, the question is: I have a requirement where I am not sure how many fields will be in the input file.
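A minimal sketch of one way around this (MapFieldSetMapper is my own illustrative name, not a Spring Batch class): map each line to a Map keyed by the same column names already passed to lineTokenizer.setNames(...), so no per-job target class is needed.

import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;

public class MapFieldSetMapper implements FieldSetMapper<Map<String, String>> {

    @Override
    public Map<String, String> mapFieldSet(FieldSet fieldSet) {
        // LinkedHashMap preserves the column order declared on the tokenizer
        Map<String, String> item = new LinkedHashMap<>();
        // getNames() returns the names set via lineTokenizer.setNames(...)
        for (String name : fieldSet.getNames()) {
            item.put(name, fieldSet.readString(name));
        }
        return item;
    }
}

The reader then emits Map<String, String> items for every job, and the writer can work off the map keys instead of a typed bean.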
I'm working on a batch job using spring-batch with one reader, one writer, and one processor. I have a CSV file as the input of my reader.
I wanted to use OpenCSV to convert one line to one bean, but what I see from the documentation is that OpenCSV takes a whole file and uses the CsvToBeanBuilder object to map all the lines of the file to a list of objects.
I saw this post: Configuring openCSV instead of FlatFileItemReader in spring batch step,
but there is no explanation of how to map a single String line to a bean object using OpenCSV. Does someone know if it's possible? Thanks.
The explanation is in the comments. OpenCSV does both the reading and the mapping. If you want to use OpenCSV in your Spring Batch app with a FlatFileItemReader, you only need the mapping part, i.e. a LineMapper implementation based on OpenCSV.
Now if OpenCSV does not provide a way to map a single line to a POJO, then it is probably not suitable to be used in Spring Batch in that way. In that case, you would need to implement a custom ItemReader based on OpenCSV that does both the reading and the mapping.
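For what it's worth, OpenCSV's CsvToBeanBuilder accepts any Reader, so a single line can be wrapped in a StringReader. A minimal sketch of the LineMapper approach (OpenCsvLineMapper is my own illustrative class; it assumes position-based @CsvBindByPosition mappings on the bean, since a lone line carries no header, and it is not tuned for performance because it builds a parser per line):

import java.io.StringReader;
import java.util.List;

import com.opencsv.bean.CsvToBeanBuilder;
import org.springframework.batch.item.file.LineMapper;

public class OpenCsvLineMapper<T> implements LineMapper<T> {

    private final Class<? extends T> type;

    public OpenCsvLineMapper(Class<? extends T> type) {
        this.type = type;
    }

    @Override
    public T mapLine(String line, int lineNumber) {
        // Wrap the single line in a Reader so OpenCSV can parse it
        List<T> items = new CsvToBeanBuilder<T>(new StringReader(line))
                .withType(type)
                .build()
                .parse();
        return items.get(0);
    }
}

You would then plug this into the FlatFileItemReader via setLineMapper(new OpenCsvLineMapper<>(MyBean.class)), keeping Spring Batch in charge of the reading.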
So, I was trying to find a way to remove/rename (and change the field's value of) the _class field from the documents generated by Spring Data Couchbase, as the documents are going to be stored by one service and in all likelihood consumed by someone totally different.
I was playing around with the API for Spring Data Couchbase and, through a bit of trial and error, found that I can rename the _class field to a custom value in the following way:
1) Override the typeKey method in a class inheriting AbstractCouchbaseConfiguration. For example, let's say we override typeKey to do the following:
@Override
public String typeKey() {
    return "type";
}
2) In the POJO that stores the data into Couchbase, add a field whose name matches the return value of the typeKey method and give it the custom value as needed:
private final String type = "studentDoc";
I wanted to check if this is a valid way of going about this, or whether some better way is available to do something like this now.
That is the only way to do it with Spring Data at the moment. We would like to add a few extra ways to do that, but we are limited by the Spring Data interface contracts. That is why most of the extra configuration is done via AbstractCouchbaseConfiguration.
The Spring Data library needs a field holding the fully qualified class name as its value in order to understand which class to deserialize the Couchbase data into. By default this field is named _class, but it can be renamed by overriding the typeKey() method in your Couchbase configuration (extending AbstractCouchbaseConfiguration), as you mentioned.
@Override
public String typeKey() {
    return "customType";
}
But as far as I know, you shouldn't modify the value of the field, since the library would then not be able to tell which class to deserialize the data into.
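Putting both pieces together, a minimal configuration sketch (assuming Spring Data Couchbase 4.x; the abstract connection methods differ between versions, and all connection values below are placeholders):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;

@Configuration
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {

    // Placeholder connection settings; use your environment's values.
    @Override
    public String getConnectionString() { return "couchbase://127.0.0.1"; }

    @Override
    public String getUserName() { return "user"; }

    @Override
    public String getPassword() { return "password"; }

    @Override
    public String getBucketName() { return "students"; }

    // Renames the default "_class" discriminator key to "type".
    @Override
    public String typeKey() {
        return "type";
    }
}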
Is it possible to generate one pivot script file per part in CFE (CodeFluent Entities)?
In our model, we imagine using the pivot runner to update the database later on. We have one part that would be used to instantiate many structures (let's call it "Common"), while another one named "Global" is shared across all of them.
I would like my producer to generate one pivot file based on the Common part only, thus not having any reference to the Global entities.
Is it achievable?
Thanks for your answer,
XML parts are storage units. They allow you to split a large model into multiple files, but this doesn't change the inferred model, and producers work on the inferred model.
What I would do is separate the entities into different schemas, so you'll have two schemas: "Common" and "Global". The file generated by the Pivot Script producer will still contain all the objects, but you'll be able to distinguish them thanks to the schema. Then you can use the PivotRunner and change its behavior a little to only preserve objects in a specific schema:
// References: CodeFluent.Runtime.dll and CodeFluent.Runtime.Database.dll
PivotRunner runner = new PivotRunner("pivot.xml");

// Drop every table that is not in the "Common" schema
foreach (var table in runner.Tables.Where(t => t.Schema != "Common").ToList())
{
    runner.Tables.Remove(table);
}
// TODO: stored procedures, functions, views, table types, etc.

runner.ConnectionString = "...";
runner.Run();
http://blog.codefluententities.com/2013/10/10/the-new-sql-server-pivot-script-producer/
I have a requirement to implement in Spring Batch: I need to read from a file and from a DB, and the data needs to be processed and written to an email.
I have gone through the Spring Batch documentation but was unable to find a chunk tasklet which would read data from multiple readers.
So essentially I have to read from 2 different sources of data (one from a file and another from a DB, each with its own mapper).
Regards,
Tar
I see two options, depending on how the data is structured:
1) Spring Batch relies heavily on composition when building batch components. One option would be to create a custom composite ItemReader that delegates to other readers (ones Spring Batch provides or otherwise) and provides the logic to assemble a single object based on the results of those delegated ItemReaders (see the sketch below).
2) You could use an ItemReader to provide the base information (say, from a database) and an ItemProcessor to enrich the item (say, by reading from a file).
Either of the above is a normal way to handle this type of input scenario.
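A minimal sketch of the first option (CompositeItemReader and the DbRecord/FileRecord/ReportItem types are illustrative placeholders, not Spring Batch classes; it also assumes both sources are sorted so their records pair up one-to-one):

import org.springframework.batch.item.ItemReader;

public class CompositeItemReader implements ItemReader<ReportItem> {

    private final ItemReader<DbRecord> dbReader;     // e.g. a JdbcCursorItemReader
    private final ItemReader<FileRecord> fileReader; // e.g. a FlatFileItemReader

    public CompositeItemReader(ItemReader<DbRecord> dbReader,
                               ItemReader<FileRecord> fileReader) {
        this.dbReader = dbReader;
        this.fileReader = fileReader;
    }

    @Override
    public ReportItem read() throws Exception {
        DbRecord dbRecord = dbReader.read();
        if (dbRecord == null) {
            return null; // no more input; ends the step
        }
        // Pair the database row with the matching file line
        FileRecord fileRecord = fileReader.read();
        return new ReportItem(dbRecord, fileRecord);
    }
}

// Placeholder types for the sketch
class DbRecord { /* columns read from the database */ }
class FileRecord { /* fields read from the file */ }
class ReportItem {
    ReportItem(DbRecord db, FileRecord file) { /* merge the two sources */ }
}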
I am implementing a data mapper in my Zend Framework 1.12 project and it is working as expected. To enhance it further, I want to optimize it in the following way.
While fetching data, what if I want to fetch only, say, 3 fields out of the 10 fields in my model table? The current issue is that if I fetch only the required values, the other values in the domain object remain blank, and when saving that data I save the whole model object, not a single field value.
Can anyone suggest an efficient way of doing this, so that I can fetch/update only the required values and don't need to fetch all field data to update the record?
If a property is NULL, ignore it when crafting the update? If NULLs are valid values, then I think you would need to track loaded/dirty state per property.
How do you go about white-listing the fields to retrieve when making the call to the mapper? If you can persist that information, I think it would make sense to leverage that knowledge when crafting the update.
I don't typically go down this path. I will lazy load certain fields on a model when it makes sense, but I don't allow loading parts of the object like this; rather, I create an alternate object for use in rendering a list when loading the full object is too resource-intensive: a generic dummy list object I just use with tabular data, populated from SQL or stored-procedure result sets, usually with my generic table mapper.
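To illustrate the per-property dirty-state idea (a sketch in Java for brevity; the same pattern translates directly to a PHP mapper, and all names here are hypothetical):

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class TrackedRecord {

    private final Map<String, Object> values = new HashMap<>();
    private final Map<String, Object> dirty = new LinkedHashMap<>();

    public void set(String field, Object value) {
        values.put(field, value);
        dirty.put(field, value); // remember that this field changed
    }

    public Object get(String field) {
        return values.get(field);
    }

    // The mapper builds its UPDATE statement from the dirty fields only,
    // so untouched (never-loaded) fields are never written back.
    public Map<String, Object> dirtyFields() {
        return dirty;
    }

    public void markClean() {
        dirty.clear();
    }
}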