Writing a flat file using BeanIO (beanio.org) when the POJOs have a parent class

I need to sort POJOs of different types (Student, Employee, Patient) by age, store them in an array, and then write them to a flat file using BeanIO.
The JSON request I send can contain arrays of students, employees and patients. On the Java side I have three POJOs (Student, Employee, Patient) to hold the data from the JSON request.
I am able to merge all three arrays of objects into a single array of their base class, Human, and sort it. I introduced the Human class so that I can sort all three child classes with a Comparator on the age property.
// Used for sorting in ascending order of age
class SortbyAge implements Comparator<Human> {
    public int compare(Human a, Human b) {
        return a.getAge() - b.getAge();
    }
}
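For context, the merge-and-sort step described above could look roughly like this (the input list names are assumptions):
List<Human> merged = new ArrayList<Human>();
merged.addAll(students);    // List<Student> parsed from the JSON request
merged.addAll(employees);   // List<Employee>
merged.addAll(patients);    // List<Patient>

Collections.sort(merged, new SortbyAge());                          // ascending by age
Human[] listFinalArray = merged.toArray(new Human[merged.size()]);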
Up to this point everything is fine: I am able to sort the data by age and store it in a Human array.
The problem comes when I write the sorted data to the flat file using BeanIO.
When writing the data to the flat file I get the exception below:
org.beanio.BeanWriterException: Bean identification failed: no record or group mapping for bean class 'class [Lcom.amex.ibm.model.Human;' at the current position
I have defined all four records in my mapping XML file like below:
<record name="student" class="com.amex.ibm.model.Student" occurs="0+" maxLength="unbounded">
<field name="name" length="3"/>
<field name="age" length="8"/>
<field name="address" length="15"/>
</record>
<record name="employee" class="com.amex.ibm.model.Employee" occurs="0+" maxLength="unbounded">
<field name="name" length="3"/>
<field name="age" length="8"/>
<field name="address" length="15"/>
</record>
<record name="patient" class="com.amex.ibm.model.Patient" occurs="0+" maxLength="unbounded">
<field name="name" length="3"/>
<field name="age" length="8"/>
<field name="address" length="15"/>
</record>
<record name="human" class="com.amex.ibm.model.Human" occurs="0+" maxLength="unbounded">
<field name="age" length="3"/>
</record>
How do I define a parent class mapping in BeanIO?

The problem you are seeing is that BeanIO doesn't know how to map an array of type Human. You need to pass each of the individual objects to BeanIO to write them out to your file. Loop over your array and pass each object to BeanIO one at a time.
Change
b.write(listFinalArray);
to
for (int i = 0; i < listFinalArray.length; i++) {
    b.write(listFinalArray[i]);
}
Or, with less typing:
for (final Human human : listFinalArray) {
    b.write(human);
}
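For completeness, a minimal sketch of the full write flow (the mapping file name and stream name here are assumptions):
StreamFactory factory = StreamFactory.newInstance();
factory.load("mapping.xml");   // the mapping containing the student/employee/patient records
BeanWriter b = factory.createWriter("humanStream", new File("output.txt"));
for (final Human human : listFinalArray) {
    b.write(human);            // BeanIO matches each object to the record for its concrete class
}
b.flush();
b.close();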

Related

How to write a fixed length file to a CSV file with BeanIO so that all values end up in different columns of a record

This code is able to write data to a CSV file, but the problem is that the data ends up in a single column only.
I want the data to appear in different columns. I am new to BeanIO and cannot figure it out.
I have tried the code below and cannot get the output in the proper format:
import java.io.File;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

import org.beanio.BeanReader;
import org.beanio.BeanWriter;
import org.beanio.StreamFactory;

public class XlsWriter {

    public static void main(String[] args) throws Exception {
        StreamFactory factory = StreamFactory.newInstance();
        factory.load("C:\\Users\\PV5057094\\Demo_workspace\\XlsxMapper\\src\\main\\resources\\Employee.xml");

        Field[] fields = Employee.class.getDeclaredFields();
        System.out.println("fields" + fields.length);
        List<Object> list = new ArrayList<Object>();
        for (Field field : fields) {
            list.add(field.getName());
        }

        BeanReader in = factory.createReader("EmployeeInfo", new File("C:\\Temp\\Soc\\textInput.txt"));
        BeanWriter out = factory.createWriter("EmployeeInfo", new File("C:\\Temp\\Soc\\output.csv"));
        Object record;
        while ((record = in.read()) != null) {
            System.out.println(record.toString().length());
            out.write(record);
            System.out.println("Record Written:" + record.toString());
        }
        in.close();
        out.flush();
        out.close();
    }
}
textInput.txt
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
<?xml version="1.0" encoding="UTF-8"?>
<beanio xmlns="http://www.beanio.org/2012/03"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.beanio.org/2012/03 http://www.beanio.org/2012/03/mapping.xsd">
<stream name="EmployeeInfo" format="fixedlength">
<record name="employee"
class="com.aexp.gmnt.imc.record.submission.Employee" minOccurs="0"
maxOccurs="unbounded" order="1">
<field name="firstName" length="5" padding="0" justify="right" />
<field name="lastName" length="5" padding="0" justify="right"/>
<field name="title" length="5" padding="0" justify="right"/>
</record>
</stream>
I want every record value in a different column of the CSV file, but currently it is coming in a single column only. Please help.
You need to have a different stream definition in your mapping file for writing to the CSV file. The EmployeeInfo stream can only deal with fixed length content because that is how it is configured.
You need to add a second <stream> definition to handle the CSV file you want to generate, and your BeanWriter needs to reference the new CSV stream instead of the fixed length one.
Add a new <stream> definition to your existing mapping.xml file:
<stream name="EmployeeInfoCSV" format="csv">
<record name="employee" class="com.aexp.gmnt.imc.record.submission.Employee" minOccurs="0" maxOccurs="unbounded">
<field name="firstName" />
<field name="lastName" />
<field name="title" />
</record>
</stream>
Note the change in the name of the <stream> and the format set to csv. In this <stream> definition you can also change the order in which the data is written to the CSV file, if you want to, without affecting the order in which your BeanReader expects to read the data. The length, padding and justify attributes are not required for a CSV file.
Now you only need to change how you configure your BeanWriter from:
BeanWriter out = factory.createWriter("EmployeeInfo", new File("C:\\Temp\\Soc\\output.csv"));
to
BeanWriter out = factory.createWriter("EmployeeInfoCSV", new File("C:\\Temp\\Soc\\output.csv"));
Note the change to use the csv stream name in the createWriter method parameters.
Edit to answer this question from the comments:
Just an added question: I need to add a first line with the header values as field names, without writing them as a header record type in BeanIO. Is that possible through reflection or something?
No need for reflection or jumping through hoops to get it done. You can create a Writer that you can use to write out the header (column) names to the file first before passing the writer to the BeanWriter for appending the rest of the output.
Instead of using the BeanWriter like above:
BeanWriter out = factory.createWriter("EmployeeInfoCSV", new File("C:\\Temp\\Soc\\output.csv"));
You would now do something like:
BufferedWriter writer = new BufferedWriter(new FileWriter(new File("C:\\Temp\\Soc\\output.csv")));
writer.write("First Name,Last Name,Title");
writer.newLine();
BeanWriter out = factory.createWriter("EmployeeInfoCSV", writer);
BeanIO would then carry on writing its output to the writer which will append the data to the existing file. Remember to close() the writer as well when you are done.
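Putting the pieces together, the writing part could then look roughly like this (paths as in the question, the rest is a sketch):
BeanReader in = factory.createReader("EmployeeInfo", new File("C:\\Temp\\Soc\\textInput.txt"));

// Plain writer used to emit the header line before BeanIO takes over
BufferedWriter writer = new BufferedWriter(new FileWriter(new File("C:\\Temp\\Soc\\output.csv")));
writer.write("First Name,Last Name,Title");
writer.newLine();

// BeanIO appends the CSV records to the same writer
BeanWriter out = factory.createWriter("EmployeeInfoCSV", writer);
Object record;
while ((record = in.read()) != null) {
    out.write(record);
}
in.close();
out.flush();
out.close();
writer.close();   // per the note above, make sure the writer is closed too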

Dozer mapping class A -> Class B

I am mapping two objects via Dozer and I am confused. Class A has a typeCode field, and class B has both a typeCode field (same spelling) and an isCashFund field.
I have written a custom converter for typeCode => isCashFund, but will Dozer now no longer populate the typeCode of class B, since it is no longer a one-to-one mapping?
<mapping>
    <class-a>TsmfFund</class-a>
    <class-b>Fund</class-b>
    <field type="one-way">
        <a>priceTypeSet.description</a>
        <b>priceTypeSetDescription</b>
    </field>
    <field>
        <a>currencyCode</a>
        <b>currency</b>
    </field>
    <field type="one-way" custom-converter="FundTypeConverter">
        <a>typeCode</a>
        <b>cashFund</b>
    </field>
</mapping>
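For reference, a converter for that field pair could look roughly like this (a sketch only; the class name matches the mapping above, but the code value checked is an assumption):
import org.dozer.DozerConverter;

public class FundTypeConverter extends DozerConverter<String, Boolean> {

    public FundTypeConverter() {
        super(String.class, Boolean.class);
    }

    @Override
    public Boolean convertTo(String typeCode, Boolean destination) {
        // Assumed rule: a typeCode of "CASH" marks a cash fund
        return "CASH".equalsIgnoreCase(typeCode);
    }

    @Override
    public String convertFrom(Boolean source, String destination) {
        // Not needed: the field mapping above is declared one-way
        throw new UnsupportedOperationException("one-way mapping only");
    }
}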

Indexed getter only in Dozer

I'm struggling with a mapping in Dozer. The basic structure of my classes is this:
class Foo {
    private String someString;

    public String getSomeString() {
        return someString;
    }

    public void setSomeString(String someString) {
        this.someString = someString;
    }
}
and the interesting part:
class Bar {
    // note that no field is declared

    public String[] getSomeBarString() {
        // returns an array where the actually desired string is at index 0
    }

    public void setSomeBarString(String someString) {
        // stores the string otherwise
    }
}
Compensating for the absence of a field and the differently named getter/setter methods was quite easy:
<mapping>
    <class-a>Foo</class-a>
    <class-b>Bar</class-b>
    <field>
        <a>someString</a>
        <b get-method="getSomeBarString" set-method="setSomeBarString">someBarString</b>
    </field>
</mapping>
From my understanding I could even omit get-method and set-method, as there is no field access by default.
My problem is that the getter is indexed and the setter isn't. I've already read about indexed property mapping but it does it both ways. Is there a way to make only one direction indexed? E.g. would get-method="getSomeBarString[0]" work?
After a night of sleep I had an idea myself: define two one-way mappings and make only one of them indexed. It also turns out that indexing is declared the same way (after the property name) even if you specify a different get-method or set-method.
<mapping type="one-way">
    <class-a>Foo</class-a>
    <class-b>Bar</class-b>
    <field>
        <a>someString</a>
        <b set-method="setSomeBarString">someBarString</b>
    </field>
</mapping>
<mapping type="one-way">
    <class-a>Bar</class-a>
    <class-b>Foo</class-b>
    <field>
        <a get-method="getSomeBarString">someBarString[0]</a>
        <b>someString</b>
    </field>
</mapping>
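A quick usage sketch with both one-way mappings loaded (the mapping file name is an assumption):
DozerBeanMapper mapper = new DozerBeanMapper();
mapper.setMappingFiles(Collections.singletonList("dozer-mapping.xml"));

Bar bar = mapper.map(foo, Bar.class);    // uses the Foo -> Bar mapping (plain setter)
Foo back = mapper.map(bar, Foo.class);   // uses the Bar -> Foo mapping (someBarString[0])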

Getting Data from Multiple Tables in Liferay 6.0.6

I'm trying to get data from multiple tables in Liferay 6.0.6 using custom SQL, but for now I'm only able to display data from one table. Does anyone know how to do that? Thanks.
UPDATE:
I did find this link http://www.liferaysavvy.com/2013/02/getting-data-from-multiple-tables-in.html but it's not working for me because it gives a "BeanLocator is null" error, and it seems that this is a bug in Liferay 6.0.6.
The following technique also works with Liferay 6.2-ga1.
We will assume we are in the portlet project fooproject.
Let's say you have two tables, article and author. Here are the entities in your service.xml:
<entity name="Article" local-service="true">
<column name="id_article" type="long" primary="true" />
<column name="id_author" type="long" />
<column name="title" type="String" />
<column name="content" type="String" />
<column name="writing_date" type="Date" />
</entity>
<entity name="Author" local-service="true">
<column name="id_author" type="long" primary="true" />
<column name="full_name" type="String" />
</entity>
At that point, run the service builder to generate the persistence and service layers.
You have to use custom SQL queries, as described in Liferay's documentation, to fetch information from multiple tables.
Here is the content of fooproject-portlet/src/main/resources/custom-sql/default.xml:
<?xml version="1.0"?>
<custom-sql>
<sql file="custom-sql/full_article.xml" />
</custom-sql>
And the custom query in fooproject-portlet/src/main/resources/custom-sql/full_article.xml:
<?xml version="1.0"?>
<custom-sql>
<sql
id="com.myCompany.fooproject.service.persistence.ArticleFinder.findByAuthor">
<![CDATA[
SELECT
Author.full_name AS author_name
Article.title AS article_title,
Article.content AS article_content
Article.writing_date AS writing_date
FROM
fooproject_Article AS Article
INNER JOIN
fooproject_Author AS Author
ON Article.id_author=Author.id_author
WHERE
author_name LIKE ?
]]>
</sql>
</custom-sql>
As you can see, we want to fetch the author's name, the article's title, the article's content and the article's date.
So let's allow the service builder to generate a bean that can store all this information. How? By adding it to the service.xml! Be careful: the fields of the bean and the field names returned by the query must match.
<entity name="ArticleBean">
<column name="author_name" type="String" primary="true" />
<column name="article_title" type="String" primary="true" />
<column name="article_content" type="String" />
<column name="article_date" type="Date" />
</entity>
Note: defining which field is primary here does not really matter as there will never be anything in the ArticleBean table. It is all about not having exceptions thrown by the service builder while generating the Bean.
The finder method must be implemented then. To do so, create the class com.myCompany.fooproject.service.persistence.impl.ArticleFinderImpl. Populate it with the following content:
public class ArticleFinderImpl extends BasePersistenceImpl<Article> {
}
Use the correct import statements and run the service builder. Let's make that class implement the interface generated by the service builder:
public class ArticleFinderImpl extends BasePersistenceImpl<Article> implements ArticleFinder {
}
And populate it with the actual finder implementation:
public class ArticleFinderImpl extends BasePersistenceImpl<Article> implements ArticleFinder {

    // Query id according to Liferay's query naming convention
    public static final String FIND_BY_AUTHOR = ArticleFinder.class.getName() + ".findByAuthor";

    public List<ArticleBean> findByAuthor(String author) {
        Session session = null;
        try {
            session = openSession();

            // Retrieve the query from custom-sql/full_article.xml
            String sql = CustomSQLUtil.get(FIND_BY_AUTHOR);
            SQLQuery q = session.createSQLQuery(sql);
            q.setCacheable(false);

            // Set the expected output type
            q.addEntity("ArticleBean", ArticleBeanImpl.class);

            // Bind the arguments to the query
            QueryPos qpos = QueryPos.getInstance(q);
            qpos.add(author);

            // Fetch all elements and return them as a list
            return (List<ArticleBean>) QueryUtil.list(q, getDialect(), QueryUtil.ALL_POS, QueryUtil.ALL_POS);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            closeSession(session);
        }
        return null;
    }
}
You can then call this method from your ArticleLocalServiceImpl or ArticleServiceImpl, depending on whether you want a local or a remote API.
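For example, a minimal sketch of the local service method delegating to the finder (assuming Service Builder has generated the articleFinder reference in the base class):
public class ArticleLocalServiceImpl extends ArticleLocalServiceBaseImpl {

    public List<ArticleBean> getArticlesByAuthor(String author) {
        // articleFinder is wired in by Service Builder
        return articleFinder.findByAuthor(author);
    }
}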
Note: it is a hack. This is not a perfectly clean way to retrieve data, but it is the least bad option if you want to use Liferay's Service Builder.

liferay-6.1 - Implement own service

Hey, I have created my own service.xml with a Student entity. Now I want to add my own searchByName method for Student. Can you please explain what to write in StudentLocalServiceImpl?
public class StudentLocalServiceImpl extends StudentLocalServiceBaseImpl {
    /*
     * NOTE FOR DEVELOPERS:
     *
     */
    public List<Student> getAll() throws SystemException {
        return studentPersistence.findAll();
    }

    public Student getStudentByName(String name) {
        return studentPersistence.
    }

    // I have created one method, getAll. I need help with the other one.
}
Thanks in Advance.
You would first declare this as a "finder" element in the service.xml within the entity you defined.
e.g.
<finder name="Name" return-type="Student">
<finder-column name="name" />
</finder>
The return-type could also be Collection if you want a List<Student> as the return type, e.g. if name is not unique.
<finder name="Name" return-type="Collection">
<finder-column name="name" />
</finder>
You can also state a comparison operator for the column:
<finder name="NotName" return-type="Collection">
<finder-column name="name" comparator="!=" />
</finder>
A finder can also declare a unique index to be generated for this relation (it will be applied to the DB table) by specifying the unique="true" attribute on the finder:
<finder name="Name" return-type="Student" unique="true">
<finder-column name="name" />
</finder>
With this definition, and after re-running ant build-service, studentPersistence will contain new methods whose names combine a prefix (countBy, findBy, fetchBy, removeBy, etc.) with the name of the finder from the XML element.
Finally, your service method would only need to contain the following (based on the above):
public Student getStudentByName(String name) throws SystemException {
    return studentPersistence.findByName(name);
}
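The other generated variants can be used the same way; for example, a sketch using fetchBy (which returns null instead of throwing when nothing matches) and countBy:
public Student fetchStudentByName(String name) throws SystemException {
    return studentPersistence.fetchByName(name);   // null if no student has that name
}

public int countStudentsByName(String name) throws SystemException {
    return studentPersistence.countByName(name);
}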
HTH