Streaming in DataMapper in Mule ESB

I need to take data (input.xml) from a file that is 100 MB-200 MB in size and write it into four different files based on some logic.
Input XML:
<?xml version="1.0"?>
<Orders>
<Order><OrderId>1</OrderId><Total>10</Total><Name>jon1</Name></Order>
<Order><OrderId>2</OrderId><Total>20</Total><Name>jon2</Name></Order>
<Order><OrderId>3</OrderId><Total>30</Total><Name>jon3</Name></Order>
<Order><OrderId>4</OrderId><Total>40</Total><Name>jon4</Name></Order>
</Orders>
The logic: if Total is 1-10, write to file1; if Total is 11-20, write to file2; and so on.
Expected output:
1 10 jon1 -->write into file1
2 20 jon2 -->write into file2
3 30 jon3 -->write into file3
4 40 jon4 -->write into file4
Here I have enabled streaming in DataMapper (under Configuration), but I'm not getting the proper output. The problem is that only some of the records that satisfy the condition end up in the corresponding file.
If I disable the streaming option in DataMapper, it works fine. But as there are lakhs of records, I must use the streaming option.
Is there any other way to configure DataMapper to enable the streaming option?
Please suggest me on this. Thanks.

It is difficult to see the problem without a little more detail on what you are doing.
Nevertheless, I think this will probably help you to try another approach.
DataMapper will load the full XML document into memory even if you activate streaming; it has to in order to support XPath (it loads the full XML input into a DOM).
So if you cannot afford to load a 200 MB document into memory, you will need a workaround.
What I have done before is create a Java component that transforms the input stream into an iterator with the help of a StAX parser. With a very simple implementation you can code an iterator that pulls from the stream to create the next element (a POJO, a map, a string...). In the Mule flow, after the Java component, you should be able to use a for-each with a choice inside and apply your logic.
A quick example for your data:
package tests;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import javax.xml.stream.FactoryConfigurationError;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
public class OrdersStreamIterator implements Iterator<Map<String,String>> {
final static Log LOGGER = LogFactory.getLog(OrdersStreamIterator.class);
final InputStream is;
final XMLStreamReader xmlReader;
boolean end = false;
HashMap<String,String> next;
public OrdersStreamIterator(InputStream is)
throws XMLStreamException, FactoryConfigurationError {
this.is = is;
xmlReader = XMLInputFactory.newInstance().createXMLStreamReader(is);
}
protected HashMap<String,String> _next() throws XMLStreamException {
int event;
HashMap<String,String> order = null;
String orderChild = null;
String orderChildValue = null;
while (xmlReader.hasNext()) {
event = xmlReader.getEventType();
if (event == XMLStreamConstants.START_ELEMENT) {
if (order==null) {
if (checkOrder()) {
order = new HashMap<String,String>();
}
}
else {
orderChild = xmlReader.getLocalName();
}
}
else if (event == XMLStreamConstants.END_ELEMENT) {
if (checkOrders()) {
end = true;
return null;
}
else if (checkOrder()) {
xmlReader.next();
return order;
}
else if (order!=null) {
order.put(orderChild, orderChildValue);
orderChild = null;
orderChildValue = null;
}
}
else if (order!=null && orderChild!=null){
switch (event) {
case XMLStreamConstants.SPACE:
case XMLStreamConstants.CHARACTERS:
case XMLStreamConstants.CDATA:
int start = xmlReader.getTextStart();
int length = xmlReader.getTextLength();
if (orderChildValue==null) {
orderChildValue = new String(xmlReader.getTextCharacters(), start, length);
}
else {
orderChildValue += new String(xmlReader.getTextCharacters(), start, length);
}
break;
}
}
xmlReader.next();
}
end = true;
return null;
}
protected boolean checkOrder() {
return "Order".equals(xmlReader.getLocalName());
}
protected boolean checkOrders() {
return "Orders".equals(xmlReader.getLocalName());
}
@Override
public boolean hasNext() {
if (end) {
return false;
}
else if (next==null) {
try {
next = _next();
} catch (XMLStreamException e) {
LOGGER.error(e.getMessage(), e);
end = true;
}
return !end;
}
else {
return true;
}
}
@Override
public Map<String,String> next() {
if (hasNext()) {
final HashMap<String,String> n = next;
next = null;
return n;
}
else {
return null;
}
}
@Override
public void remove() {
throw new RuntimeException("ReadOnly!");
}
// Test
public static String dump(Map<String,String> o) {
String s = "{";
for (Entry<String,String> e : o.entrySet()) {
if (s.length()>1) {
s+=", ";
}
s+= "\"" + e.getKey() + "\" : \"" + e.getValue() + "\"";
}
return s + "}";
}
public static void main(String[] argv) throws XMLStreamException, FactoryConfigurationError {
final InputStream is = OrdersStreamIterator.class.getClassLoader().getResourceAsStream("orders.xml");
final OrdersStreamIterator i = new OrdersStreamIterator(is);
while (i.hasNext()) {
System.out.println(dump(i.next()));
}
}
}
An example flow:
<flow name="testsFlow">
<http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP"/>
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy"><![CDATA[return tests.OrdersStreamIterator.class.getClassLoader().getResourceAsStream("orders.xml");]]></scripting:script>
</scripting:component>
<set-payload value="#[new tests.OrdersStreamIterator(payload)]" doc:name="Iterator"/>
<foreach doc:name="For Each">
<logger message="#[tests.OrdersStreamIterator.dump(payload)]" level="INFO" doc:name="Logger"/>
</foreach>
</flow>
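The flow above only logs each order; to apply the Total-based routing from the question, you can put a choice router inside the for-each. A rough sketch (untested; the paths, output patterns and the set-variable/set-payload steps are illustrative, and appending to an existing file requires a file:connector configured with outputAppend="true"):
<foreach doc:name="For Each">
    <set-variable variableName="total" value="#[Integer.parseInt(payload['Total'])]" doc:name="Total"/>
    <set-payload value="#[payload['OrderId'] + ' ' + payload['Total'] + ' ' + payload['Name'] + '\n']" doc:name="Line"/>
    <choice doc:name="Choice">
        <when expression="#[flowVars['total'] &lt;= 10]">
            <file:outbound-endpoint path="/tmp/out" outputPattern="file1.txt" doc:name="File 1"/>
        </when>
        <when expression="#[flowVars['total'] &lt;= 20]">
            <file:outbound-endpoint path="/tmp/out" outputPattern="file2.txt" doc:name="File 2"/>
        </when>
        <when expression="#[flowVars['total'] &lt;= 30]">
            <file:outbound-endpoint path="/tmp/out" outputPattern="file3.txt" doc:name="File 3"/>
        </when>
        <otherwise>
            <file:outbound-endpoint path="/tmp/out" outputPattern="file4.txt" doc:name="File 4"/>
        </otherwise>
    </choice>
</foreach>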

Related

How to get the Table inside a MS Word bookmark and add multiple rows based on its Mergefield cells using docx4j?

So far I am able to parse a docx file using docx4j, and to find the bookmarks and all the tables in the docx file, using the code below:
WordprocessingMLPackage wordMLPackage = WordprocessingMLPackage.load(new java.io.File(docxFile));
List<Object> paragraphs = getAllElementFromObject(template.getMainDocumentPart(), P.class);
for (Object p : paragraphs) {
RangeFinder rt = new RangeFinder("CTBookmark", "CTMarkupRange");
new TraversalUtil(p, rt);
for (CTBookmark content : rt.getStarts()) {
if (content.getName().equals("if_supdef")) {
List<Object> tbl = getAllElementFromObject(content, Tbl.class);
System.out.println("tbl==" + tbl.size());
}
}
}
TableFinder finder = new TableFinder();
new TraversalUtil(documentPart.getContent(), finder);
System.out.println("Found " + finder.tblList.size() + " tables");
I've got these lines of code from some blogs and answers to other questions.
Now I would like to find the table only inside a bookmark (here my bookmark name is if_supdef) rather than searching the entire document. Once I find the table, I would add rows based on the number of records I receive from the SQL table and the MERGEFIELDs available.
The bookmark and its table look something like the picture below:
Once processed through docx4j it should look like:
In document.xml I see that the parent tag of w:tbl is body, not the bookmark.
Is it possible to read the table inside the bookmark? If so, how?
If not, what is the alternative way to uniquely identify a table and add contents to it?
Try something along the lines of the below.
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.docx4j.TraversalUtil;
import org.docx4j.TraversalUtil.CallbackImpl;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.docx4j.openpackaging.parts.WordprocessingML.MainDocumentPart;
import org.docx4j.wml.CTBookmark;
import org.docx4j.wml.CTMarkupRange;
import org.docx4j.wml.Tbl;
import jakarta.xml.bind.JAXBContext;
public class TableInBookmarkFinder {
public static JAXBContext context = org.docx4j.jaxb.Context.jc;
public static void main(String[] args) throws Exception {
String inputfilepath = System.getProperty("user.dir")
+ "/tbl_bookmarks.docx";
WordprocessingMLPackage wordMLPackage = WordprocessingMLPackage
.load(new java.io.File(inputfilepath));
MainDocumentPart documentPart = wordMLPackage.getMainDocumentPart();
// find
TableInBookmarkFinderCallback finder = new TableInBookmarkFinderCallback();
new TraversalUtil(documentPart.getContent(), finder);
List<TableInfo> tableInfos = finder.getTableInfos();
// result?
for (TableInfo ti : tableInfos) {
System.out.println("table contained in bookmarks:");
for (String s: ti.getBookmarkNames()) {
System.out.println("bookmark name: " + s);
}
}
}
public static class TableInfo {
TableInfo(Tbl tbl, List<String> bookmarkNames) {
this.tbl = tbl;
this.bookmarkNames = bookmarkNames;
}
private Tbl tbl;
public Tbl getTbl() {
return tbl;
}
private List<String> bookmarkNames;
public List<String> getBookmarkNames() {
return bookmarkNames;
}
}
public static class TableInBookmarkFinderCallback extends CallbackImpl {
public TableInBookmarkFinderCallback() {
}
/**
* Keep this set to true unless you don't
* want to traverse a table (eg a nested table).
* NB: If traversing from body level, you'll need to set it to true!
*/
private boolean traverseTables=true;
/**
* Track bookmarks encountered
*/
private Map<BigInteger, String> bookmarkInfos = new HashMap<BigInteger, String>();
/**
* What bookmarks are we currently in?
*/
private Set<BigInteger> currentBookmarks = new HashSet<BigInteger>();
/**
* What tables did we encounter?
*/
private List<TableInfo> tableInfos = new ArrayList<TableInfo>();
public List<TableInfo> getTableInfos() {
return tableInfos;
}
@Override
public List<Object> apply(Object o) {
System.out.println(o.getClass().getName());
if (o instanceof CTBookmark) {
CTBookmark bmStart = (CTBookmark)o;
bookmarkInfos.put(bmStart.getId(), bmStart.getName());
if (currentBookmarks.add(bmStart.getId()) ) {
// ok
System.out.println("added " + bmStart.getId());
} else {
System.out.println("ERROR: duplicate bookmarks with id " + bmStart.getId());
}
} else /* need this else because CTBookmark extends CTMarkupRange */
if (o instanceof CTMarkupRange) {
CTMarkupRange bmEnd = (CTMarkupRange)o;
if (currentBookmarks.remove(bmEnd.getId()) ) {
// ok
System.out.println("removed " + bmEnd.getId());
} else {
System.out.println("ERROR: no start element for bookmark with id " + bmEnd.getId());
}
}
if (o instanceof Tbl ) {
System.out.println("tbl");
List<String> bookmarkNames = new ArrayList<String>();
for (BigInteger bmId : currentBookmarks) {
bookmarkNames.add(bookmarkInfos.get(bmId));
}
tableInfos.add( new TableInfo( (Tbl)o, bookmarkNames));
}
return null;
}
@Override
public boolean shouldTraverse(Object o) {
if (traverseTables) {
return true;
} else {
// Yes, unless its a nested Tbl
return !(o instanceof Tbl);
}
}
}
}
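Once you have the TableInfo list, filtering it by bookmark name gives you the table to modify, and from there you can append rows. A rough sketch you could add at the end of the main method above (untested; it assumes the table's last row is a template row whose MERGEFIELD cells you then fill from your SQL data):
// Sketch: append a copy of the last row of each table found inside the
// "if_supdef" bookmark, then save the package.
for (TableInfo ti : tableInfos) {
    if (ti.getBookmarkNames().contains("if_supdef")) {
        Tbl tbl = ti.getTbl();
        List<Object> rows = tbl.getContent();
        // assumed template row: the last w:tr, containing the MERGEFIELDs
        Object templateRow = rows.get(rows.size() - 1);
        rows.add(org.docx4j.XmlUtils.deepCopy(templateRow));
    }
}
wordMLPackage.save(new java.io.File(inputfilepath + ".out.docx"));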

How can I force spark/hadoop to ignore the .gz extension on a file and read it as uncompressed plain text?

I have code along the lines of:
val lines: RDD[String] = sparkSession.sparkContext.textFile("s3://mybucket/file.gz")
The URL ends in .gz, but this is a result of legacy code. The file is plain text with no compression involved. However, Spark insists on reading it as a gzip file, which obviously fails. How can I make it ignore the extension and simply read the file as text?
Based on this article I've tried setting the configuration in various places so that it doesn't include the gzip codec, e.g.:
sparkContext.getConf.set("spark.hadoop.io.compression.codecs", classOf[DefaultCodec].getCanonicalName)
This doesn't seem to have any effect.
Since the files are on S3, I can't simply rename them without copying the entire file.
First solution: Shading GzipCodec
The idea is to shadow/shade the GzipCodec as defined in the package org.apache.hadoop.io.compress by including this java file in your own sources and replacing this line:
public String getDefaultExtension() {
return ".gz";
}
with:
public String getDefaultExtension() {
return ".whatever";
}
When building your project, this will have the effect of using your definition of GzipCodec instead of the one provided by the dependencies (this is the shadowing of GzipCodec).
This way, when parsing your file, textFile() will be forced to apply the default codec, as the one for gzip no longer fits the naming of your file.
The inconvenience of this solution is that you won't be able to also process real gzip files within the same app.
Second solution: Using newAPIHadoopFile with a custom/modified TextInputFormat
You can use newAPIHadoopFile (instead of textFile) with a custom/modified TextInputFormat which forces the use of the DefaultCodec (plain text).
We'll write our own line reader based on the default one (TextInputFormat). The idea is to remove the part of TextInputFormat which finds out the file is named .gz and thus uncompresses it before reading it.
Instead of calling sparkContext.textFile,
// plain text file with a .gz extension:
sparkContext.textFile("s3://mybucket/file.gz")
we can use the underlying sparkContext.newAPIHadoopFile which allows us to specify how to read the input:
import org.apache.hadoop.mapreduce.lib.input.FakeGzInputFormat
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
sparkContext
.newAPIHadoopFile(
"s3://mybucket/file.gz",
classOf[FakeGzInputFormat], // This is our custom reader
classOf[LongWritable],
classOf[Text],
new Configuration(sparkContext.hadoopConfiguration)
)
.map { case (_, text) => text.toString }
The usual way of calling newAPIHadoopFile would be with TextInputFormat. This is the part which wraps how the file is read and where the compression codec is chosen based on the file extension.
Let's call it FakeGzInputFormat and implement it as follows as an extension of TextInputFormat (this is a Java file; let's put it in the package src/main/java/org/apache/hadoop/mapreduce/lib/input):
package org.apache.hadoop.mapreduce.lib.input;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import com.google.common.base.Charsets;
public class FakeGzInputFormat extends TextInputFormat {
public RecordReader<LongWritable, Text> createRecordReader(
InputSplit split,
TaskAttemptContext context
) {
String delimiter =
context.getConfiguration().get("textinputformat.record.delimiter");
byte[] recordDelimiterBytes = null;
if (null != delimiter)
recordDelimiterBytes = delimiter.getBytes(Charsets.UTF_8);
// Here we use our custom `FakeGzLineRecordReader` instead of
// `LineRecordReader`:
return new FakeGzLineRecordReader(recordDelimiterBytes);
}
@Override
protected boolean isSplitable(JobContext context, Path file) {
return true; // plain text is splittable (as opposed to gzip)
}
}
In fact we have to go one level deeper and also replace the default LineRecordReader (Java) with our own (let's call it FakeGzLineRecordReader).
As it's quite difficult to inherit from LineRecordReader, we can copy LineRecordReader (in src/main/java/org/apache/hadoop/mapreduce/lib/input) and slightly modify (and simplify) the initialize(InputSplit genericSplit, TaskAttemptContext context) method by forcing the usage of the default codec (plain text):
(the only changes compared to the original LineRecordReader have been given a comment explaining what's happening)
package org.apache.hadoop.mapreduce.lib.input;
import java.io.IOException;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Seekable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@InterfaceAudience.LimitedPrivate({"MapReduce", "Pig"})
@InterfaceStability.Evolving
public class FakeGzLineRecordReader extends RecordReader<LongWritable, Text> {
private static final Logger LOG =
LoggerFactory.getLogger(FakeGzLineRecordReader.class);
public static final String MAX_LINE_LENGTH =
"mapreduce.input.linerecordreader.line.maxlength";
private long start;
private long pos;
private long end;
private SplitLineReader in;
private FSDataInputStream fileIn;
private Seekable filePosition;
private int maxLineLength;
private LongWritable key;
private Text value;
private byte[] recordDelimiterBytes;
public FakeGzLineRecordReader(byte[] recordDelimiter) {
this.recordDelimiterBytes = recordDelimiter;
}
// This has been simplified a lot since we don't need to handle compression
// codecs.
public void initialize(
InputSplit genericSplit,
TaskAttemptContext context
) throws IOException {
FileSplit split = (FileSplit) genericSplit;
Configuration job = context.getConfiguration();
this.maxLineLength = job.getInt(MAX_LINE_LENGTH, Integer.MAX_VALUE);
start = split.getStart();
end = start + split.getLength();
final Path file = split.getPath();
final FileSystem fs = file.getFileSystem(job);
fileIn = fs.open(file);
fileIn.seek(start);
in = new UncompressedSplitLineReader(
fileIn, job, this.recordDelimiterBytes, split.getLength()
);
filePosition = fileIn;
if (start != 0) {
start += in.readLine(new Text(), 0, maxBytesToConsume(start));
}
this.pos = start;
}
// Simplified as input is not compressed:
private int maxBytesToConsume(long pos) {
return (int) Math.max(Math.min(Integer.MAX_VALUE, end - pos), maxLineLength);
}
// Simplified as input is not compressed:
private long getFilePosition() {
return pos;
}
private int skipUtfByteOrderMark() throws IOException {
int newMaxLineLength = (int) Math.min(3L + (long) maxLineLength,
Integer.MAX_VALUE);
int newSize = in.readLine(value, newMaxLineLength, maxBytesToConsume(pos));
pos += newSize;
int textLength = value.getLength();
byte[] textBytes = value.getBytes();
if ((textLength >= 3) && (textBytes[0] == (byte)0xEF) &&
(textBytes[1] == (byte)0xBB) && (textBytes[2] == (byte)0xBF)) {
LOG.info("Found UTF-8 BOM and skipped it");
textLength -= 3;
newSize -= 3;
if (textLength > 0) {
textBytes = value.copyBytes();
value.set(textBytes, 3, textLength);
} else {
value.clear();
}
}
return newSize;
}
public boolean nextKeyValue() throws IOException {
if (key == null) {
key = new LongWritable();
}
key.set(pos);
if (value == null) {
value = new Text();
}
int newSize = 0;
while (getFilePosition() <= end || in.needAdditionalRecordAfterSplit()) {
if (pos == 0) {
newSize = skipUtfByteOrderMark();
} else {
newSize = in.readLine(value, maxLineLength, maxBytesToConsume(pos));
pos += newSize;
}
if ((newSize == 0) || (newSize < maxLineLength)) {
break;
}
LOG.info("Skipped line of size " + newSize + " at pos " +
(pos - newSize));
}
if (newSize == 0) {
key = null;
value = null;
return false;
} else {
return true;
}
}
@Override
public LongWritable getCurrentKey() {
return key;
}
@Override
public Text getCurrentValue() {
return value;
}
public float getProgress() {
if (start == end) {
return 0.0f;
} else {
return Math.min(1.0f, (getFilePosition() - start) / (float)(end - start));
}
}
public synchronized void close() throws IOException {
try {
if (in != null) {
in.close();
}
} finally {}
}
}

find matching 'begin' - 'end' and highlight them

I am a newbie at developing a DSL in Xtext. I would like to add a feature that highlights the matching start and end of a code block. In my language every function must start with 'begin' and end with 'end', and the same goes for if conditions and loops. I need to highlight the matching/corresponding 'end' when the cursor is on the 'begin'. For example, below, I need to highlight the corresponding first 'end' when the cursor is on the second 'begin':
function X
begin
if a>b
begin
a=b;
end
end
Any help with that please?
Thanks
I don't think the support for that is very good. If you can live with a minor solution, you could try the following.
The Grammar
Model:
greetings+=Greeting*;
Greeting:
'if' name=ID 'endif';
In the UiModule
@Override
public ICharacterPairMatcher bindICharacterPairMatcher() {
return new MyDslCharacterPairMatcher(new char[] { '(', ')', '{', '}', '[', ']' });
}
with
public class MyDslCharacterPairMatcher extends DefaultCharacterPairMatcher {
public MyDslCharacterPairMatcher(char[] chars) {
super(chars);
}
@Override
public IRegion match(IDocument document, int offset, int length) {
if (document instanceof IXtextDocument) {
IXtextDocument xtextDocument = (IXtextDocument)document;
IRegion result = xtextDocument.readOnly(new IUnitOfWork<IRegion, XtextResource>() {
@Override
public IRegion exec(XtextResource state) throws Exception {
MyDslGrammarAccess ga = state.getResourceServiceProvider().get(MyDslGrammarAccess.class);
ICompositeNode root = state.getParseResult().getRootNode();
ILeafNode matchNode = NodeModelUtils.findLeafNodeAtOffset(root, offset);
EObject object = NodeModelUtils.findActualSemanticObjectFor(matchNode);
if (object instanceof Greeting) {
Greeting g = (Greeting)object;
if (matchNode.getGrammarElement() == ga.getGreetingAccess().getIfKeyword_0()) {
ICompositeNode objectNode = NodeModelUtils.findActualNodeFor(object);
for (INode n : objectNode.getAsTreeIterable()) {
if (n.getGrammarElement() != null && n.getGrammarElement() == ga.getGreetingAccess().getEndifKeyword_2()) {
return new Region(n.getOffset(), n.getLength());
}
}
}
if (matchNode.getGrammarElement() == ga.getGreetingAccess().getEndifKeyword_2()) {
ICompositeNode objectNode = NodeModelUtils.findActualNodeFor(object);
for (INode n : objectNode.getAsTreeIterable()) {
if (n.getGrammarElement() != null && n.getGrammarElement() == ga.getGreetingAccess().getIfKeyword_0()) {
return new Region(n.getOffset(), n.getLength());
}
}
}
}
return null;
}
});
if (result != null) {
return result;
}
}
return super.match(document, offset, length);
}
}
Unfortunately, JFace only highlights a single character.
If you want more, I think you have to hook into IOccurrenceComputer
(you have to have Mark Occurrences turned on).
package org.xtext.example.mydsl1.ui;
import java.util.HashMap;
import java.util.Map;
import org.eclipse.core.runtime.SubMonitor;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.jface.text.ITextSelection;
import org.eclipse.jface.text.Position;
import org.eclipse.jface.text.source.Annotation;
import org.eclipse.xtext.nodemodel.ICompositeNode;
import org.eclipse.xtext.nodemodel.ILeafNode;
import org.eclipse.xtext.nodemodel.INode;
import org.eclipse.xtext.nodemodel.util.NodeModelUtils;
import org.eclipse.xtext.resource.XtextResource;
import org.eclipse.xtext.ui.editor.XtextEditor;
import org.eclipse.xtext.ui.editor.model.IXtextDocument;
import org.eclipse.xtext.ui.editor.occurrences.DefaultOccurrenceComputer;
import org.eclipse.xtext.util.CancelIndicator;
import org.eclipse.xtext.util.TextRegion;
import org.eclipse.xtext.util.concurrent.CancelableUnitOfWork;
import org.xtext.example.mydsl1.myDsl.Greeting;
import org.xtext.example.mydsl1.services.MyDslGrammarAccess;
public class MyDslOccurrenceComputer extends DefaultOccurrenceComputer {
@Override
public Map<Annotation, Position> createAnnotationMap(XtextEditor editor,
ITextSelection selection, SubMonitor monitor) {
final IXtextDocument document = editor.getDocument();
if (document != null) {
Map<Annotation, Position> result = document.readOnly(new CancelableUnitOfWork<Map<Annotation, Position>, XtextResource>() {
@Override
public Map<Annotation, Position> exec(XtextResource state,
final CancelIndicator cancelIndicator) throws Exception {
MyDslGrammarAccess ga = state.getResourceServiceProvider()
.get(MyDslGrammarAccess.class);
ICompositeNode root = state.getParseResult().getRootNode();
ILeafNode matchNode = NodeModelUtils.findLeafNodeAtOffset(
root, selection.getOffset());
EObject object = NodeModelUtils
.findActualSemanticObjectFor(matchNode);
if (object instanceof Greeting) {
Greeting g = (Greeting) object;
if (matchNode.getGrammarElement() == ga
.getGreetingAccess().getIfKeyword_0()) {
ICompositeNode objectNode = NodeModelUtils
.findActualNodeFor(object);
for (INode n : objectNode.getAsTreeIterable()) {
if (n.getGrammarElement() != null
&& n.getGrammarElement() == ga
.getGreetingAccess()
.getEndifKeyword_2()) {
Map<Annotation, Position> result = new HashMap<>();
addOccurrenceAnnotation(
DECLARATION_ANNOTATION_TYPE,
document,
new TextRegion(matchNode
.getOffset(), matchNode
.getLength()), result);
addOccurrenceAnnotation(
OCCURRENCE_ANNOTATION_TYPE,
document,
new TextRegion(n.getOffset(), n
.getLength()), result);
return result;
}
}
}
if (matchNode.getGrammarElement() == ga
.getGreetingAccess().getEndifKeyword_2()) {
ICompositeNode objectNode = NodeModelUtils
.findActualNodeFor(object);
for (INode n : objectNode.getAsTreeIterable()) {
if (n.getGrammarElement() != null
&& n.getGrammarElement() == ga
.getGreetingAccess()
.getIfKeyword_0()) {
Map<Annotation, Position> result = new HashMap<>();
addOccurrenceAnnotation(
DECLARATION_ANNOTATION_TYPE,
document,
new TextRegion(matchNode
.getOffset(), matchNode
.getLength()), result);
addOccurrenceAnnotation(
OCCURRENCE_ANNOTATION_TYPE,
document,
new TextRegion(n.getOffset(), n
.getLength()), result);
return result;
}
}
}
}
return null;
}
});
if (result != null) {
return result;
}
}
return super.createAnnotationMap(editor, selection, monitor);
}
}
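For MyDslOccurrenceComputer to be picked up, it also has to be registered in the UiModule; Xtext's Guice modules bind by method-naming convention, so something along these lines should do (a sketch, not from a tested project):
// In MyDslUiModule: register the custom occurrence computer.
// IOccurrenceComputer is org.eclipse.xtext.ui.editor.occurrences.IOccurrenceComputer.
public Class<? extends IOccurrenceComputer> bindIOccurrenceComputer() {
    return MyDslOccurrenceComputer.class;
}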

wicket download output stream

I want to download a CSV file. I take the response content and write to it, but apparently Wicket writes after me, and the content I am getting is the page HTML where my CSV should be.
I have seen in the example the usage of throw new AbortException();
I am using version 6.7. Do you know if my version of Wicket has something instead of it?
Or rather, am I doing something wrong?
Can you please help me?
add(new Link<Void>("export") {
@Override
public void onClick() {
WebResponse response = (WebResponse) getResponse();
response.setAttachmentHeader("export.csv");
response.setContentType("text/csv");
OutputStream out = getResponse().getOutputStream();
try {
c.exportData(dataSource.getListForExport(), columns, out);
} catch (Exception ex) {
System.err.println(ex);
}
}
});
public <T> void exportData(List<T> list, List<IGridColumn<IDataSource<T>, T, String>> columns, OutputStream outputStream)
throws IOException {
long startTime = System.nanoTime();
PrintWriter out = new PrintWriter(new OutputStreamWriter(outputStream, Charset.forName(characterSet)));
try {
if (isExportHeadersEnabled()) {
boolean first = true;
for (IGridColumn<IDataSource<T>, T, String> col : columns) {
if (first) {
first = false;
} else {
out.print(delimiter);
System.out.println(delimiter);
}
if (col.getId().equals("checkBox")) {
continue;
}
out.print(quoteValue(col.getId()));
System.out.println(col.getId());
}
out.print("\r\n");
System.out.println("\r\n");
}
Iterator<? extends T> rowIterator = list.iterator();
while (rowIterator.hasNext()) {
T row = rowIterator.next();
boolean first = true;
for (IGridColumn<IDataSource<T>, T, String> col : columns) {
if (first) {
first = false;
} else {
out.print(delimiter);
System.out.println(delimiter);
}
if (col.getId().equals("checkBox")) {
continue;
}
Object o = (new PropertyModel<>(row, col.getId())).getObject();
if (o != null) {
Class<?> c = o.getClass();
String s;
IConverter converter = Application.get().getConverterLocator().getConverter(c);
if (converter == null) {
s = o.toString();
} else {
s = converter.convertToString(o, Session.get().getLocale());
}
out.print(quoteValue(s));
System.out.println(quoteValue(s));
}
}
out.print("\r\n");
System.out.println("\r\n");
}
} catch (Exception ex) {
System.out.println(ex);
} finally {
System.out.println(out);
out.close();
// measure
System.out.print(System.nanoTime() - startTime);
}
}
The best way to do this is using dynamic resources. I suggest you read the chapter "Resource management with Wicket" of this magnificent free Wicket guide: https://code.google.com/p/wicket-guide/.
Here you have a similar example given in this guide in the section "Custom resources".
public class RSSProducerResource extends AbstractResource {
@Override
protected ResourceResponse newResourceResponse(Attributes attributes) {
ResourceResponse resourceResponse = new ResourceResponse();
resourceResponse.setContentType("text/xml");
resourceResponse.setTextEncoding("utf-8");
resourceResponse.setWriteCallback(new WriteCallback()
{
@Override
public void writeData(Attributes attributes) throws IOException
{
OutputStream outputStream = attributes.getResponse().getOutputStream();
Writer writer = new OutputStreamWriter(outputStream);
SyndFeedOutput output = new SyndFeedOutput();
try {
output.output(getFeed(), writer);
} catch (FeedException e) {
throw new WicketRuntimeException("Problems writing feed to response...");
}
}
});
return resourceResponse;
}
// method getFeed()...
}
And then you need to add the link in the desired page or component:
add(new ResourceLink("rssLink", new RSSProducerResource()));
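Adapted to your CSV case, a minimal sketch could look like the following (untested; c, dataSource and columns are the objects from your page, and the resource simply delegates to your existing exportData method; ContentDisposition is org.apache.wicket.request.resource.ContentDisposition):
public class CsvExportResource extends AbstractResource {
    @Override
    protected ResourceResponse newResourceResponse(Attributes attributes) {
        ResourceResponse resourceResponse = new ResourceResponse();
        resourceResponse.setContentType("text/csv");
        // sent as Content-Disposition: attachment; filename="export.csv"
        resourceResponse.setFileName("export.csv");
        resourceResponse.setContentDisposition(ContentDisposition.ATTACHMENT);
        resourceResponse.setWriteCallback(new WriteCallback() {
            @Override
            public void writeData(Attributes attributes) throws IOException {
                OutputStream out = attributes.getResponse().getOutputStream();
                // delegate to the existing export logic from the question
                c.exportData(dataSource.getListForExport(), columns, out);
            }
        });
        return resourceResponse;
    }
}
And the link becomes:
add(new ResourceLink<Void>("export", new CsvExportResource()));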

GMail like file upload progress bar with GWT?

All Gmail users should have already noticed that the file upload progress bar was updated recently.
I'm wondering whether such an effect is possible to implement with GWT.
I'm fairly new to GWT, so any GWT source code that could help me test out the function would be very helpful.
Update
I ended up going with SWFUpload. However, other suggestions under this question are all valid. Just try different options and choose the one you like!
Take a look at this library: http://code.google.com/p/gwtupload/. It is really easy to use and works fine in all browsers and OSes I've checked. It uses ajax requests to calculate progress. BTW, SWFUpload doesn't do well on Linux and Mac.
I've used this tool before:
http://code.google.com/p/gwt-fileapi/
Unlike the other suggestions here, not only does it give the proper API to show upload progress, it also gives the ability to do batch uploads by selecting multiple files, and it gives drag-and-drop support. It also has a pre-HTML5 fallback mechanism.
I've had great luck with gwt-fileapi. Recently it broke in Firefox 7 and 8 and I had to apply this patch to it - but otherwise it works really great:
@@ -57,26 +57,33 @@
/**
* gets the filename
- *
+ *
* @return the filename
*/
public final native String getFileName() /*-{
- return this.fileName;
+ if(this.name)
+ return this.name;
+ else
+ return this.fileName;
+
}-*/;
/**
* gets the file size in bytes
- *
+ *
* @return the file size in bytes
*/
public final native int getFileSize() /*-{
- return this.fileSize;
+ if(this.size)
+ return this.size;
+ else
+ return this.fileSize;
}-*/;
/**
* gets the MIME type of the file, may be null if the browser cannot detect
* the type
I also had to add the following lines to http://code.google.com/p/gwt-fileapi/source/browse/trunk/gwt-fileapi/src/com/gwtpro/html5/fileapi/Html5FileApi.gwt.xml - these lines describe how the fallback mechanism works. You can do something similar if you want your code to fall back on the SWFUpload implementation shown below in case HTML5 support is missing.
<define-property name="fileapi.support" values="yes,no" />
<property-provider name="fileapi.support"><![CDATA[
var input=document.createElement('input');
input.setAttribute('type','file');
return input.files==null?'no':'yes';
]]></property-provider>
<replace-with
class="com.gwtpro.html5.fileapi.client.ui.FileInput.FileInputImplHtml5">
<when-type-is
class="com.gwtpro.html5.fileapi.client.ui.FileInput.FileInputImpl" />
<when-property-is name="fileapi.support" value="yes" />
<any>
<when-property-is name="user.agent" value="ie8" />
<when-property-is name="user.agent" value="safari" />
<when-property-is name="user.agent" value="gecko1_8" />
<when-property-is name="user.agent" value="opera" />
<when-property-is name="user.agent" value="chrome" />
</any>
</replace-with>
This is how I use it in my application:
This is the interface that describes the abstraction:
public interface FileUpload {
public void uploadFiles();
public Widget getWidget();
public void initialize(Grid updateTable, Uploader uploader, String url, boolean createDropHandler);
public void setDisabled(boolean b);
public void readyToPaint();
public void reset();
}
The following is the gwt-fileapi implementation of the interface:
package com.hierarchycm.gxt.client.fileUpload;
import com.google.gwt.core.client.JsArray;
import com.google.gwt.event.dom.client.ChangeEvent;
import com.google.gwt.event.dom.client.ChangeHandler;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Grid;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;
import com.gwtpro.html5.fileapi.client.FileApiSupport;
import com.gwtpro.html5.fileapi.client.drop.DropHandler;
import com.gwtpro.html5.fileapi.client.file.File;
import com.gwtpro.html5.fileapi.client.file.FileEvent;
import com.gwtpro.html5.fileapi.client.file.FileEvent.FileEventHandler;
import com.gwtpro.html5.fileapi.client.ui.FileInput;
import com.gwtpro.html5.fileapi.client.upload.UploadRequest;
import com.gwtpro.html5.fileapi.client.upload.UploadRequestBuilder;
import com.gwtpro.html5.fileapi.client.upload.UploadRequestCallback;
public class FileUploadHtmlImpl extends FileInput implements FileUpload {
private Grid uploadTable;
int currentFile =0;
String url;
File[] files;
UploadRequestBuilder fileUploader;
Uploader uploader;
public FileUploadHtmlImpl() {
}
FileUploadHtmlImpl(Grid updateTable, Uploader uploader, String url) {
this(updateTable, uploader, url, true);
}
FileUploadHtmlImpl(Grid updateTable, Uploader uploader, String url, boolean createDropHandler) {
initialize(updateTable, uploader, url, createDropHandler);
//this.setCallback(getMyCallback());
}
public void initialize(Grid updateTable, Uploader uploader, String url, boolean createDropHandler){
this.url = url;
this.uploadTable = updateTable;
this.uploader = uploader;
this.setAllowMultipleFiles(true);
this.addChangeHandler(new ChangeHandler() {
@Override
public void onChange(ChangeEvent event) {
addFiles(FileUploadHtmlImpl.this.getFiles());
uploadFiles();
}
});
if (createDropHandler) {
createDropHandler();
}
}
private File[] jsArrToArr (JsArray<File> ipFiles) {
File [] result = new File [ipFiles.length()];
for (int i = 0; i < ipFiles.length(); ++i) {
result[i] = ipFiles.get(i);
}
return result;
}
private UploadRequestCallback getMyCallback() {
return new UploadRequestCallback() {
@Override
public void onError(UploadRequest request, Throwable exception) {
uploadTable.setText(currentFile + 1, 2, "failed: " + exception.getMessage());
uploadNextFile(currentFile + 1);
}
@Override
public void onResponseReceived(UploadRequest request, Response response) {
uploadTable.setText(currentFile + 1, 2, "success: " + response.getText());
uploadNextFile(currentFile + 1);
//If we just finished uploading do your thing
if (currentFile == files.length) {
setDisabled(false);
uploader.uploadDoneEventHandler();
}
}
@Override
public void onUploadProgress(UploadRequest request, int bytesUploaded) {
uploadTable.setText(currentFile + 1, 2, bytesUploaded + "");
}
};
}
public void createDropHandler() {
RootPanel rootPanel = RootPanel.get();
DropHandler dropHandler = new DropHandler(rootPanel);
this.fileUploader = new UploadRequestBuilder(url);
this.fileUploader.setCallback(getMyCallback());
dropHandler.addFileEventHandler(new FileEventHandler() {
@Override
public void onFiles(FileEvent event) {
addFiles(jsArrToArr(event.getFiles()));
uploadFiles();
}
});
}
private void addFiles (File[] ipFiles) {
files = ipFiles;
uploadTable.clear();
uploadTable.resize(files.length + 1, 3);
uploadTable.setText(0, 0, "File name");
uploadTable.setText(0, 1, "File size");
uploadTable.setText(0, 2, "Progress");
for (int i = 0; i < files.length; ++i) {
uploadTable.setText(i + 1, 0, files[i].getFileName());
uploadTable.setText(i + 1, 1, files[i].getFileSize() + "");
uploadTable.setText(i + 1, 2, "");
}
}
public void uploadNextFile(int index) {
for (String paramName : uploader.getPostParams().keySet()) {
fileUploader.setHeader(paramName, uploader.getPostParams().get(paramName));
}
currentFile = index;
this.setDisabled(true);
if (index < this.files.length) {
try {
this.fileUploader.setHeader("itemName", files[currentFile].getFileName());
this.fileUploader.sendFile(files[currentFile]);
} catch (RequestException e) {
this.uploadTable.setText(index + 1, 2, "failed: " + e.getMessage());
uploadNextFile(index + 1);
}
}
}
public void uploadFiles() {
uploadNextFile(0);
}
@Override
public Widget getWidget() {
return this;
}
@Override
public void readyToPaint() {
//no need to do anything - already painted for non swf
}
@Override
public void reset() {
// TODO Auto-generated method stub
}
private void showCapabilities() {
RootPanel
.get("status")
.getElement()
.setInnerHTML(
"Drag and Drop Support: "
+ (FileApiSupport.isDragDropSupported() ? "Yes"
: "No")
+ "<br/>HTTPXmlRequest Level 2: "
+ (FileApiSupport.isHttpXmlRequestLevel2() ? "Yes"
: "No")
+ "<br/>File input supports multiple files: "
+ (FileApiSupport
.isMultipleFileInputSupported() ? "Yes"
: "No")+"<br/><br/>");
}
}
This is the SWFUpload http://code.google.com/p/swfupload-gwt/ implementation of the same interface:
package com.hierarchycm.gxt.client.fileUpload;
import com.extjs.gxt.ui.client.widget.Html;
import com.google.gwt.user.client.ui.Grid;
import com.google.gwt.user.client.ui.Widget;
public class FileUploadSwfImpl extends Html implements FileUpload {
SwfUploadUtil swfUploadUtil = null;
private Uploader uploader;
private String url;
private boolean createDropHandler;
private Grid updateTable;
static int uploadId = 0;
static String divTagId;
public FileUploadSwfImpl() {
divTagId = "swfupload" + uploadId++;
String divTag = "<div id=\"" + divTagId + "\"></div>";
this.setHtml(divTag);
}
@Override
public void uploadFiles() {
swfUploadUtil.startUpload();
}
@Override
public Widget getWidget() {
return this;
}
public void readyToPaint() {
swfUploadUtil = new SwfUploadUtil(uploader, updateTable, divTagId, url);
}
@Override
public void initialize(Grid updateTable, Uploader uploader, String url, boolean createDropHandler) {
this.uploader = uploader;
this.url = url;
this.createDropHandler = createDropHandler;
this.updateTable = updateTable;
}
@Override
public void setDisabled(boolean b) {
swfUploadUtil.setDisabled(b);
this.disabled = b;
}
@Override
public void reset() {
swfUploadUtil.reset();
}
}
And this is the utility the FileUploadSwfImpl depends on:
package com.hierarchycm.gxt.client.fileUpload;
import java.util.HashMap;
import org.swfupload.client.File;
import org.swfupload.client.SWFUpload;
import org.swfupload.client.UploadBuilder;
import org.swfupload.client.SWFUpload.ButtonAction;
import org.swfupload.client.SWFUpload.ButtonCursor;
import org.swfupload.client.event.DialogStartHandler;
import org.swfupload.client.event.FileDialogCompleteHandler;
import org.swfupload.client.event.FileQueuedHandler;
import org.swfupload.client.event.UploadCompleteHandler;
import org.swfupload.client.event.UploadErrorHandler;
import org.swfupload.client.event.UploadProgressHandler;
import org.swfupload.client.event.UploadSuccessHandler;
import org.swfupload.client.event.FileDialogCompleteHandler.FileDialogCompleteEvent;
import org.swfupload.client.event.FileQueuedHandler.FileQueuedEvent;
import org.swfupload.client.event.UploadCompleteHandler.UploadCompleteEvent;
import org.swfupload.client.event.UploadErrorHandler.UploadErrorEvent;
import org.swfupload.client.event.UploadProgressHandler.UploadProgressEvent;
import org.swfupload.client.event.UploadSuccessHandler.UploadSuccessEvent;
import com.extjs.gxt.ui.client.widget.form.TextArea;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Grid;
public class SwfUploadUtil {
HashMap<String, Integer> filenameRowHm = new HashMap<String, Integer>();
private boolean resetIssued;
SWFUpload swfUpload = null;
private HashMap <String, File> files = new HashMap<String, File>();
int tableRow = 5;
Uploader uploader = null;
private Grid updateTable;
private String divName;
private String url;
synchronized private void removeFile(String id) {
files.remove(id);
}
public SwfUploadUtil(Uploader uploader, Grid updateTable, String divName, String url){
reset();
this.uploader = uploader;
this.updateTable = updateTable;
this.divName = divName;
this.url = url;
this.swfUpload = loadSWFUpload();
updateTable.resize(5, 5);
updateTable.setText(2, 0, "Upload URL:" );
updateTable.setText(2, 1, url );
updateTable.setText(4, 0, "File Name" );
updateTable.setText(4, 1, "Bytes In");
updateTable.setText(4, 2, "Status");
updateTable.setText(4, 3, "File Size" );
updateTable.setText(4, 4, "Server response" );
}
public SWFUpload loadSWFUpload() {
if (swfUpload == null) {
final UploadBuilder builder1 = new UploadBuilder();
builder1.setHTTPSuccessCodes(200, 201);
builder1.setFileTypes("*.webm;*.asf;*.wma;*.wmv;*.avi;*.flv;*.swf;*.mpg;*.mpeg;*.mp4;*.mov;*.m4v;*.aac;*.mp3;*.wav;*.png;*.jpg;*.jpeg;*.gif");
builder1.setFileTypesDescription("Images, Video & Sound");
builder1.setButtonPlaceholderID(divName);
builder1.setButtonImageURL("./images/XPButtonUploadText_61x22.png");
builder1.setButtonCursor(ButtonCursor.HAND);
builder1.setButtonWidth(61);
builder1.setButtonHeight(22);
builder1.setButtonAction(ButtonAction.SELECT_FILES);
builder1.setUploadProgressHandler(new UploadProgressHandler() {
public void onUploadProgress(UploadProgressEvent e) {
File f = e.getFile();
updateTable.setText(getFilenameRow(f), 2, String.valueOf(e.getBytesComplete()));
}
});
builder1.setUploadSuccessHandler(new UploadSuccessHandler() {
public void onUploadSuccess(UploadSuccessEvent e) {
File f = e.getFile();
updateTable.setText(getFilenameRow(f), 4, e.getServerData());
}
});
builder1.setUploadErrorHandler(new UploadErrorHandler() {
public void onUploadError(UploadErrorEvent e) {
File ff = e.getFile();
String message = e.getMessage();
if (message == null || message.trim().length() == 0) {
message = "upload failed";
}
updateTable.setText(getFilenameRow(ff), 2, String.valueOf(message));
removeFile(ff.getId());
if (files.values().size() > 0) {
ff = files.values().iterator().next();
updateTable.setText(getFilenameRow(ff), 2, "Started");
swfUpload.startUpload(ff.getId());
}
}
});
builder1.setUploadURL(url);
builder1.setDialogStartHandler(new DialogStartHandler() {
@Override
public void onDialogStart() {
if(resetIssued == true) {
filenameRowHm.clear();
resetIssued = false;
}
}
}
);
builder1.setUploadCompleteHandler(new UploadCompleteHandler() {
public void onUploadComplete(UploadCompleteEvent e) {
File f = e.getFile();
updateTable.setText(getFilenameRow(f), 2, "Done");
removeFile(f.getId());
if (files.values().size() > 0) {
File ff = files.values().iterator().next();
updateTable.setText(getFilenameRow(ff), 2, "Started");
swfUpload.startUpload(ff.getId());
} else {
uploader.uploadDoneEventHandler();
}
}
});
builder1.setFileQueuedHandler(new FileQueuedHandler() {
public void onFileQueued(FileQueuedEvent event) {
File f = event.getFile();
updateTable.setText(getFilenameRow(f), 2, "Queued");
files.put(f.getId(), f);
}
});
builder1.setFileDialogCompleteHandler(new FileDialogCompleteHandler() {
public void onFileDialogComplete(FileDialogCompleteEvent e) {
updateTable.setText(2, 0, "Number of files");
updateTable.setText(2, 1, String.valueOf(files.values().size()));
for(File f : files.values()) {
getFilenameRow(f);
}
if (files.values().size() > 0) {
for (String paramName : uploader.getPostParams().keySet()) {
swfUpload.addPostParam(paramName,uploader.getPostParams().get(paramName));
}
}
}
});
swfUpload = builder1.build();
}
return swfUpload;
}
public int getFilenameRow (File f) {
Integer filenamerow = filenameRowHm.get(f.getId());
if (filenamerow == null) {
updateTable.resize(tableRow+1, 5);
filenamerow = new Integer(tableRow++);
updateTable.setText(filenamerow.intValue(), 0, f.getName());
updateTable.setText(filenamerow.intValue(), 3, String.valueOf(f.getSize()));
//updateTable.setText(filenamerow.intValue(), 3, String.valueOf(f));
filenameRowHm.put(f.getId(), filenamerow);
}
return filenamerow.intValue();
}
public void startUpload() {
uploader.uploadStartedEventHandler();
swfUpload.startUpload();
}
public void setDisabled(boolean disabled) {
swfUpload.setButtonDisabled(disabled);
}
public void reset() {
// TODO Auto-generated method stub
resetIssued = true;
}
}
Use SWFUpload via swfupload-gwt.
The main advantage over the other methods is that this does not require any special server code. You could even upload to another domain (if there is a crossdomain.xml which allows it).
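For reference, such a crossdomain.xml, served from the root of the target domain, would look something like this (the domain value is illustrative; "*" would open it to any origin):
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <!-- allow the Flash uploader served from this origin to POST here -->
    <allow-access-from domain="www.example.com"/>
</cross-domain-policy>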
Check out GWTC Upload, which has an implementation of exactly what you're looking for.
It's trivial to write your own if you have a Java back end: you just start a file upload and then poll the server on a timer to see where it's up to (say every second or two). The Java file upload libraries (the Apache Commons ones) support telling you the current progress, so it's trivial to do; see the sketch below.
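For illustration, a minimal server-side sketch of that approach with Commons FileUpload; the servlet wiring and the "uploadPercent" session attribute are my own illustrative choices:
import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.ProgressListener;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        final HttpSession session = request.getSession();
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        // Commons FileUpload reports progress while it parses the multipart body.
        upload.setProgressListener(new ProgressListener() {
            public void update(long bytesRead, long contentLength, int item) {
                // contentLength is -1 when the client did not send it
                int percent = contentLength > 0
                        ? (int) (bytesRead * 100 / contentLength) : -1;
                session.setAttribute("uploadPercent", percent);
            }
        });
        try {
            List<FileItem> items = upload.parseRequest(request);
            // ... store the uploaded items ...
        } catch (FileUploadException e) {
            throw new ServletException(e);
        }
    }
    // A second endpoint (e.g. a doGet on this or another servlet), polled by
    // the GWT client every second or two, just returns
    // session.getAttribute("uploadPercent").
}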
Recently I started a project of my own called gwtupld
http://github.com/kompot/gwtupld/
The main goal is to provide the best file upload experience for cutting-edge browsers
and acceptable usability for all others. At the moment, the following key features are present:
multiple file selection
drag'n'drop
progress bars
slick and simple exterior
consistent behavior for all browsers
ease of visual customization
no external dependencies but GWT
Feel free to fork and submit bugs/feature proposals.
You can check out the source code, then type
gradlew gwtcompile devmode
and it will start a fully functional sandbox (the server side with real file saving should work).
You can use GwtSwfExt, which is a wrapper on top of SWFUpload (it's the same idea as the swfupload-gwt lib). You can download an example and the source code from http://code.google.com/p/gwtswfext.
When creating your own file upload progress, instead of polling the server at a small set interval, you can have the client display an indeterminate bar for 2 seconds and have the server calculate the estimated finish time, then change back to determinate and pull new estimates every 5-10 seconds instead. That should have little to no effect on the traffic.
There is a custom multi-upload plugin demo at http://ext4all.com/post/extjs-4-multiple-file-upload