How to write a fixed-length file to a CSV file with BeanIO, with all values in different columns of a record - bean-io

This code is able to write data to a CSV file, but the problem is that the data is written into a single column only.
I want the data to appear in different columns. I am new to BeanIO and not able to figure it out.
I have tried the code below and am not able to get the output in the proper format:
public class XlsWriter {
    public static void main(String[] args) throws Exception {
        StreamFactory factory = StreamFactory.newInstance();
        factory.load("C:\\Users\\PV5057094\\Demo_workspace\\XlsxMapper\\src\\main\\resources\\Employee.xml");

        Field[] fields = Employee.class.getDeclaredFields();
        System.out.println("fields: " + fields.length);
        List<Object> list = new ArrayList<Object>();
        for (Field field : fields) {
            list.add(field.getName());
        }

        BeanReader in = factory.createReader("EmployeeInfo", new File("C:\\Temp\\Soc\\textInput.txt"));
        BeanWriter out = factory.createWriter("EmployeeInfo", new File("C:\\Temp\\Soc\\output.csv"));

        Object record;
        while ((record = in.read()) != null) {
            System.out.println(record.toString().length());
            out.write(record);
            System.out.println("Record written: " + record.toString());
        }
        in.close();
        out.flush();
        out.close();
    }
}
textInput.txt
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
AAAAABBBBBCCCCC
Employee.xml
<?xml version="1.0" encoding="UTF-8"?>
<beanio xmlns="http://www.beanio.org/2012/03"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.beanio.org/2012/03 http://www.beanio.org/2012/03/mapping.xsd">
    <stream name="EmployeeInfo" format="fixedlength">
        <record name="employee"
            class="com.aexp.gmnt.imc.record.submission.Employee" minOccurs="0"
            maxOccurs="unbounded" order="1">
            <field name="firstName" length="5" padding="0" justify="right" />
            <field name="lastName" length="5" padding="0" justify="right" />
            <field name="title" length="5" padding="0" justify="right" />
        </record>
    </stream>
</beanio>
I want every record value in a different column of the CSV file, but currently everything is coming out in a single column only. Please help.

You need a different stream definition in your mapping file for writing to the CSV file. The EmployeeInfo stream can only deal with fixed-length content because that is how it is configured.
You need to add a second <stream> definition to handle the CSV file you want to generate, and your BeanWriter needs to reference the new CSV stream instead of the fixed-length one.
Add a new <stream> definition to your existing mapping.xml file:
<stream name="EmployeeInfoCSV" format="csv">
<record name="employee" class="com.aexp.gmnt.imc.record.submission.Employee" minOccurs="0" maxOccurs="unbounded">
<field name="firstName" />
<field name="lastName" />
<field name="title" />
</record>
</stream>
Note the change in the name of the <stream> and the format set to csv. In this <stream> definition you can also change the order in which the data is written to the CSV file, if you want, without affecting the order in which your BeanReader expects to read the data. The length, padding and justify attributes are not required for a CSV file.
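For example, here is a sketch of the same record with the fields reordered so that title is written first (purely illustrative; the stream name here is made up to avoid clashing with the one above, and the field names come from the existing mapping):
<stream name="EmployeeInfoTitleFirstCSV" format="csv">
    <record name="employee" class="com.aexp.gmnt.imc.record.submission.Employee" minOccurs="0" maxOccurs="unbounded">
        <field name="title" />
        <field name="firstName" />
        <field name="lastName" />
    </record>
</stream>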
Now you only need to change how you configure your BeanWriter from:
BeanWriter out = factory.createWriter("EmployeeInfo", new File("C:\\Temp\\Soc\\output.csv"));
to
BeanWriter out = factory.createWriter("EmployeeInfoCSV", new File("C:\\Temp\\Soc\\output.csv"));
Note the change to use the csv stream name in the createWriter method parameters.
Edit to answer this question from the comments:
Just an added question: I need to add a first line with the field names as header values, without writing them as a header record type in BeanIO. Is that possible through reflection or something?
No need for reflection or jumping through hoops to get it done. You can create a Writer that you use to write the header (column) names to the file first, before passing the writer to the BeanWriter for appending the rest of the output.
Instead of using the BeanWriter like above:
BeanWriter out = factory.createWriter("EmployeeInfoCSV", new File("C:\\Temp\\Soc\\output.csv"));
You would now do something like:
BufferedWriter writer = new BufferedWriter(new FileWriter(new File("C:\\Temp\\Soc\\output.csv")));
writer.write("First Name,Last Name,Title");
writer.newLine();
BeanWriter out = factory.createWriter("EmployeeInfoCSV", writer);
BeanIO would then carry on writing its output to the writer, which appends the data after the header line. Remember to close() the writer as well when you are done.
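Putting it together, a minimal sketch of the write path (factory and in are the StreamFactory and BeanReader from the question's code; the try/finally is an assumption about how you want to manage the resources):
BufferedWriter writer = new BufferedWriter(new FileWriter("C:\\Temp\\Soc\\output.csv"));
BeanWriter out = null;
try {
    // Write the header line first, then hand the same writer to BeanIO
    writer.write("First Name,Last Name,Title");
    writer.newLine();
    out = factory.createWriter("EmployeeInfoCSV", writer);
    Object record;
    while ((record = in.read()) != null) {
        out.write(record);
    }
    out.flush();
} finally {
    if (out != null) {
        out.close();
    }
    writer.close(); // safe even if out.close() already closed the underlying writer
}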

Related

Writing a flat file using BeanIO (beanio.org). POJOs have a parent class

I need to sort POJOs of different types (Student, Employee, Patient) by age, store them in an array, and then write them to a flat file using BeanIO.
Via JSON I send a request that can contain arrays of students, employees and patients. On the Java side I have three POJOs (Student, Employee, Patient) to hold the data from the JSON request.
I am able to merge and then sort all the objects (students, employees, patients) into a single array of the base class of all three: Human. I had to create the Human class so I could sort all three child classes with a Comparator on the age property.
class SortbyAge implements Comparator<Human>
{
    // Used for sorting in ascending order of age
    public int compare(Human a, Human b)
    {
        return a.getAge() - b.getAge();
    }
}
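For reference, a minimal sketch of how the comparator is applied (listFinalArray is assumed to be the merged Human[] described above):
// Sort the merged array in ascending age order (uses java.util.Arrays)
Arrays.sort(listFinalArray, new SortbyAge());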
Up to this point everything is fine: I am able to sort the data by age and store it in the Human array.
The problem comes when I write the sorted data to the flat file using BeanIO.
When writing the data to the flat file I get the exception below:
org.beanio.BeanWriterException: Bean identification failed: no record or group mapping for bean class 'class [Lcom.amex.ibm.model.Human;' at the current position
I have written all four record tags in the XML file as shown below.
<record name="student" class="com.amex.ibm.model.Student" occurs="0+" maxLength="unbounded">
<field name="name" length="3"/>
<field name="age" length="8"/>
<field name="address" length="15"/>
</record>
<record name="employee" class="com.amex.ibm.model.Employee" occurs="0+" maxLength="unbounded">
<field name="name" length="3"/>
<field name="age" length="8"/>
<field name="address" length="15"/>
</record>
<record name="patient" class="com.amex.ibm.model.Patient" occurs="0+" maxLength="unbounded">
<field name="name" length="3"/>
<field name="age" length="8"/>
<field name="address" length="15"/>
</record>
<record name="human" class="com.amex.ibm.model.Human" occurs="0+" maxLength="unbounded">
<field name="age" length="3"/>
</record>
How do I define the parent class mapping in BeanIO?
The problem you are seeing is that BeanIO doesn't know how to map an array of type Human. You need to pass each of the individual objects to BeanIO to write them out to your file. Try this: loop over your array and pass each of the objects to BeanIO. BeanIO identifies the record mapping to use from the concrete class of each written object, so each Student, Employee and Patient row will still use its own record definition.
Change
b.write(listFinalArray);
to
for (int i = 0; i < listFinalArray.length; i++) {
    b.write(listFinalArray[i]);
}
or less typing:
for (final Human human : listFinalArray) {
    b.write(human);
}

MyBatis - Returning a HashMap

I want the returned result of the select statement below to be Map<String, Profile>:
<select id="getLatestProfiles" parameterType="string" resultMap="descProfileMap">
select ml.layerdescription, p1.*
from ( select max(profile_id) as profile_id
from SyncProfiles
group by map_layer_id) p2
inner join SyncProfiles p1 on p1.profile_id = p2.profile_id
inner join maplayers ml on ml.LAYERID = p1.MAP_LAYER_ID
where ml.maxsite = #{site}
</select>
I have seen this post which maps a String to a custom class, but the key was part of the custom class. In my query above, the layerdescription field is not part of the Profile class since I'm aiming to have the Profile class strictly represent the syncprofiles table and the layerdescription field is in another table.
My interface looks like:
public Map<String, Profile> getLatestProfiles(final String site);
How should descProfileMap be defined? I want to do something like:
<resultMap id="descProfileMap" type="java.util.HashMap">
<id property="key" column="layerdescription" />
<result property="value" javaType="Profile"/>
</resultMap>
But this is clearly wrong. Thanks for your help!
Achieving this requires 2 steps:
-Use association and nested resultMap:
<resultMap type="Profile" id="profileResultMap">
<!-- columns to properties mapping -->
</resultMap
<resultMap type="map" id="descProfileMap">
<id property="key" column="layerdescription" />
<association property="value" resultMap="profileResultMap" />
</resultMap>
-Add every record to a Map with expected structure using ResultHandler:
final Map<String, Profile> finalMap = new HashMap<String, Profile>();
ResultHandler handler = new ResultHandler() {
    @Override
    public void handleResult(ResultContext resultContext) {
        Map<String, Object> map = (Map) resultContext.getResultObject();
        finalMap.put(map.get("key").toString(), (Profile) map.get("value"));
    }
};
session.select("getLatestProfiles", site, handler); // site binds the #{site} parameter
If you run that as is, the following exception will likely be raised:
org.apache.ibatis.executor.ExecutorException: Mapped Statements with
nested result mappings cannot be safely used with a custom
ResultHandler. Use safeResultHandlerEnabled=false setting to bypass
this check or ensure your statement returns ordered data and set
resultOrdered=true on it.
Then, following the suggestion, you can either disable the check globally in the MyBatis config:
According to the documentation:
safeResultHandlerEnabled: Allows using a ResultHandler on nested statements. To allow this, set it to false. Default: true.
<settings>
<setting name="safeResultHandlerEnabled" value="false"/>
</settings>
or specify your result is ordered in the statement:
The documentation states:
resultOrdered This is only applicable for nested result select
statements: If this is true, it is assumed that nested results are
contained or grouped together such that when a new main result row is
returned, no references to a previous result row will occur anymore.
This allows nested results to be filled much more memory friendly.
Default: false.
<select id="getLatestProfiles" parameterType="string" resultMap="descProfileMap" resultOrdered="true">
But I have not found any way to specify this statement option when using annotations.

Getting Data from Multiple tables in Liferay 6.0.6

I'm trying to get data from multiple tables in Liferay 6.0.6 using custom SQL, but for now I'm only able to display data from one table. Does anyone know how to do that? Thanks.
UPDATE:
I did find this link http://www.liferaysavvy.com/2013/02/getting-data-from-multiple-tables-in.html but for me it's not working: it gives a "BeanLocator is null" error, and it seems that this is a bug in Liferay 6.0.6.
The following technique also works with Liferay 6.2-ga1.
We will assume we are in the portlet project fooproject.
Let's say you have two tables: article and author. Here are the entities in your service.xml:
<entity name="Article" local-service="true">
<column name="id_article" type="long" primary="true" />
<column name="id_author" type="long" />
<column name="title" type="String" />
<column name="content" type="String" />
<column name="writing_date" type="Date" />
</entity>
<entity name="Author" local-service="true">
<column name="id_author" type="long" primary="true" />
<column name="full_name" type="String" />
</entity>
At that point, run the service builder to generate the persistence and service layers.
You have to use custom SQL queries, as described in Liferay's documentation, to fetch information from multiple tables.
Here is the content of fooproject-portlet/src/main/resources/default.xml:
<?xml version="1.0"?>
<custom-sql>
    <sql file="custom-sql/full_article.xml" />
</custom-sql>
And the custom query in fooproject-portlet/src/main/resources/custom-sql/full_article.xml:
<?xml version="1.0"?>
<custom-sql>
<sql
id="com.myCompany.fooproject.service.persistence.ArticleFinder.findByAuthor">
<![CDATA[
SELECT
Author.full_name AS author_name
Article.title AS article_title,
Article.content AS article_content
Article.writing_date AS writing_date
FROM
fooproject_Article AS Article
INNER JOIN
fooproject_Author AS Author
ON Article.id_author=Author.id_author
WHERE
author_name LIKE ?
]]>
</sql>
</custom-sql>
As you can see, we want to fetch the author's name and the article's title, content and date.
So let's allow the service builder to generate a bean that can store all this information. How? By adding it to the service.xml! Be careful: the fields of the bean and the field names returned by the query must match.
<entity name="ArticleBean">
<column name="author_name" type="String" primary="true" />
<column name="article_title" type="String" primary="true" />
<column name="article_content" type="String" />
<column name="article_date" type="Date" />
</entity>
Note: defining which fields are primary here does not really matter, as there will never be anything in the ArticleBean table. It is all about not having exceptions thrown by the service builder while generating the bean.
The finder method must then be implemented. To do so, create the class com.myCompany.fooproject.service.persistence.impl.ArticleFinderImpl and populate it with the following content:
public class ArticleFinderImpl extends BasePersistenceImpl<Article> {
}
Use the correct import statements and run the service builder. Then make the class implement the interface generated by the service builder:
public class ArticleFinderImpl extends BasePersistenceImpl<Article> implements ArticleFinder {
}
And populate it with the actual finder implementation:
public class ArticleFinderImpl extends BasePersistenceImpl<Article> implements ArticleFinder {

    // Query id according to Liferay's query naming convention
    public static final String FIND_BY_AUTHOR = ArticleFinder.class.getName() + ".findByAuthor";

    public List<ArticleBean> findByAuthor(String author) {
        Session session = null;
        try {
            session = openSession();

            // Retrieve the query from the custom-sql file
            String sql = CustomSQLUtil.get(FIND_BY_AUTHOR);
            SQLQuery q = session.createSQLQuery(sql);
            q.setCacheable(false);

            // Set the expected output type
            q.addEntity("ArticleBean", ArticleBeanImpl.class);

            // Bind the arguments to the query
            QueryPos qpos = QueryPos.getInstance(q);
            qpos.add(author);

            // Fetch all elements and return them as a list
            return (List<ArticleBean>) QueryUtil.list(q, getDialect(), QueryUtil.ALL_POS, QueryUtil.ALL_POS);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            closeSession(session);
        }
        return null;
    }
}
You can then call this method from your ArticleLocalServiceImpl or ArticleServiceImpl, depending on whether you want to expose it as a local or a remote API.
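For example, a minimal sketch of exposing it through the local service (the method name getFullArticlesByAuthor is an assumption; articleFinder is the finder reference the service builder injects into the generated base class):
public class ArticleLocalServiceImpl extends ArticleLocalServiceBaseImpl {
    // Delegate to the custom finder implemented above
    public List<ArticleBean> getFullArticlesByAuthor(String author) {
        return articleFinder.findByAuthor(author);
    }
}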
Note: this is a hack. It is not a perfectly clean way to retrieve data, but it is the "least bad" you can do if you want to use Liferay's Service Builder.

Mule ESB: how to filter emails based on subject or sender?

I am new to Mule 3.3 and I am trying to use it to retrieve emails from a POP3 server and download the CSV attachments if the sender and subject fields contain certain keywords. I have used the example provided on the MuleSoft website and have successfully managed to scan my inbox for new emails and download only the CSV attachments.
Doing some research I have come across a message-property-filter pattern tag that can be applied to an endpoint, but I am not sure exactly to which endpoint to apply it, incoming or outgoing. Neither approach seems to work and I can't find a decent example showing how to use this tag. The basic algorithm I want to implement is as follows:
if email is from "Bob"
    if attachment is of type "CSV"
        then download CSV attachment
if email subject contains "keyword"
    if attachment is of type "CSV"
        then download CSV attachment
Here's the Mule xml I have so far:
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:file="http://www.mulesoft.org/schema/mule/file" xmlns:pop3s="http://www.mulesoft.org/schema/mule/pop3s" xmlns:pop3="http://www.mulesoft.org/schema/mule/pop3"
xmlns="http://www.mulesoft.org/schema/mule/core"
xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans" version="CE-3.3.1"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/pop3s http://www.mulesoft.org/schema/mule/pop3s/current/mule-pop3s.xsd
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/pop3 http://www.mulesoft.org/schema/mule/pop3/current/mule-pop3.xsd ">
    <expression-transformer expression="#[attachments-list:*.csv]"
        name="returnAttachments" doc:name="Expression">
    </expression-transformer>

    <pop3s:connector name="POP3Connector"
        checkFrequency="5000"
        deleteReadMessages="false"
        defaultProcessMessageAction="RECENT"
        doc:name="POP3"
        validateConnections="true">
    </pop3s:connector>

    <file:connector name="fileName" doc:name="File">
        <file:expression-filename-parser />
    </file:connector>

    <flow name="incoming-orders" doc:name="incoming-orders">
        <pop3s:inbound-endpoint user="my_username"
            password="my_password"
            host="pop.gmail.com"
            port="995"
            transformer-refs="returnAttachments"
            doc:name="GetMail"
            connector-ref="POP3Connector"
            responseTimeout="10000"/>
        <collection-splitter doc:name="Collection Splitter"/>
        <echo-component doc:name="Echo"/>
        <file:outbound-endpoint path="/attachments"
            outputPattern="#[function:datestamp].csv"
            doc:name="File" responseTimeout="10000">
            <expression-transformer expression="payload.inputStream"/>
            <message-property-filter pattern="from=(.*)(bob@email.com)(.*)" caseSensitive="false"/>
        </file:outbound-endpoint>
    </flow>
</mule>
What is the best way to tackle this problem?
Thanks in advance.
To help you, here are two configuration bits:
The following filter accepts only messages where fromAddress is 'Bob' and where subject contains 'keyword':
<expression-filter
expression="#[message.inboundProperties.fromAddress == 'Bob' || message.inboundProperties.subject contains 'keyword']" />
The following transformer extracts all the attachments whose names end with '.csv':
<expression-transformer
expression="#[($.value in message.inboundAttachments.entrySet() if $.key ~= '.*\\.csv')]" />
Welcome to Mule! A few months ago I implemented a similar project for a customer. I took a look at your flow; let's start refactoring.
Remove the transformer-refs="returnAttachments" from inbound-endpoint
Add the following elements to your flow
<pop3:inbound-endpoint ... />
<custom-filter class="com.benasmussen.mail.filter.RecipientFilter">
    <spring:property name="regex" value=".*bob.bent@.*" />
</custom-filter>
<expression-transformer>
    <return-argument expression="*.csv" evaluator="attachments-list" />
</expression-transformer>
<collection-splitter doc:name="Collection Splitter" />
Add my RecipientFilter as a Java class to your project. All messages that don't match the regex pattern will be discarded.
package com.benasmussen.mail.filter;

import java.util.regex.Pattern;

import org.mule.api.MuleMessage;
import org.mule.api.lifecycle.Initialisable;
import org.mule.api.lifecycle.InitialisationException;
import org.mule.api.routing.filter.Filter;
import org.mule.config.i18n.CoreMessages;
import org.mule.transport.email.MailProperties;

public class RecipientFilter implements Filter, Initialisable
{
    private String regex;
    private Pattern pattern;

    public boolean accept(MuleMessage message)
    {
        String from = message.findPropertyInAnyScope(MailProperties.FROM_ADDRESS_PROPERTY, null);
        return isMatch(from);
    }

    public void initialise() throws InitialisationException
    {
        if (regex == null)
        {
            throw new InitialisationException(CoreMessages.createStaticMessage("Property regex is not set"), this);
        }
        pattern = Pattern.compile(regex);
    }

    public boolean isMatch(String from)
    {
        return pattern.matcher(from).matches();
    }

    public void setRegex(String regex)
    {
        this.regex = regex;
    }
}
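A quick way to sanity-check the regex matching outside Mule (the class name and addresses here are made up for illustration):
public class RecipientFilterTest {
    public static void main(String[] args) throws Exception {
        RecipientFilter filter = new RecipientFilter();
        filter.setRegex(".*bob\\.bent@.*");
        filter.initialise(); // compiles the pattern; fails fast if regex is unset
        System.out.println(filter.isMatch("Bob Bent <bob.bent@example.com>")); // true
        System.out.println(filter.isMatch("alice@example.com"));               // false
    }
}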
The Mule expression framework is powerful, but in some use cases I prefer my own business logic.
Improvement
Use application properties (mule-app.properties); see the Mule documentation.
Documentation
MailProperties shows you all the available message properties (email).
Take a look at the Mule schema documentation to see all available elements.
Incoming payloads (mails, etc.) are transported as a DefaultMuleMessage (payload, properties, attachments).

liferay-6.1 - Implement own service

Hey, I have created my own service.xml with a Student entity. Now I want to add my own searchByName method for Student. Can you please explain what to write in StudentLocalServiceImpl?
public class StudentLocalServiceImpl extends StudentLocalServiceBaseImpl {
    /*
     * NOTE FOR DEVELOPERS:
     *
     */
    public List<Student> getAll() throws SystemException {
        return studentPersistence.findAll();
    }

    public Student getStudentByName(String name) {
        return studentPersistence.
    }
}
// I have created one method, getAll. I need help with the other one.
Thanks in advance.
You would first declare this as a "finder" element in the service.xml within the entity you defined.
e.g.
<finder name="Name" return-type="Student">
<finder-column name="name" />
</finder>
The return-type could also be Collection if you want a List<Student> as the return type, e.g. when name is not unique.
<finder name="Name" return-type="Collection">
<finder-column name="name" />
</finder>
You can also state a comparison operator for the column:
<finder name="NotName" return-type="Collection">
<finder-column name="name" comparator="!=" />
</finder>
A finder can also declare a unique index to be generated for this relation (it will be applied to the DB table) by specifying the unique="true" attribute on the finder:
<finder name="Name" return-type="Student" unique="true">
<finder-column name="name" />
</finder>
With this definition, and after re-running ant build-service, studentPersistence will contain new methods named after the finder from the XML element, prefixed with countBy, findBy, fetchBy, removeBy, etc.
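For the unique Name finder above, the generated persistence methods look roughly like this (a sketch based on Service Builder naming conventions; exact signatures and exception types vary by Liferay version):
Student findByName(String name) throws NoSuchStudentException, SystemException;
Student fetchByName(String name) throws SystemException;
Student removeByName(String name) throws NoSuchStudentException, SystemException;
int countByName(String name) throws SystemException;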
Finally, your service method would only need to contain the following (based on the above):
public Student getStudentByName(String name) throws SystemException {
    return studentPersistence.findByName(name);
}
HTH