QuickFixJ: create Message from XML string

The QuickFixJ Message class has a toXML() method which converts a message into an XML string.
Is there any way to create a Message object from that XML string?
I need the reverse of toXML(), i.e. I want to build a Message from the XML.

There's nothing like that built in. There doesn't really need to be, as there normally wouldn't be a use-case for it.
I've written a class which does it. The order of the tags might be different from the input message (but the FIX spec makes no guarantees about tag order, except within groups) because the XML exporter sorts by tag number, and so the original tag order is lost.
It only works on a single message in an XML file, but could easily be adapted to work on multiple messages.
You can use the standard MessageUtils.parse to create a Message from the resultant string.
Let me know if you have any problems.
class XmlMessage
{
private final String xml;
private final String delimiter;
XmlMessage(final String xml, final String delimiter)
{
this.xml = xml;
this.delimiter = delimiter;
}
public String toFixMessage() throws IOException, SAXException, ParserConfigurationException
{
final Document doc = DocumentBuilderFactory.newInstance()
.newDocumentBuilder()
.parse(new ByteArrayInputStream(xml.getBytes()));
final StringBuilder messageBuilder = new StringBuilder();
build(messageBuilder, doc, "header");
build(messageBuilder, doc, "body");
build(messageBuilder, doc, "trailer");
return messageBuilder.toString();
}
private void build(final StringBuilder messageBuilder, final Document doc, final String section)
{
final NodeList sectionRoot = doc.getElementsByTagName(section);
final NodeList sectionChildren = sectionRoot.item(0).getChildNodes();
build(messageBuilder, sectionChildren);
}
private void build(final StringBuilder messageBuilder, final NodeList nodeList)
{
final Set<String> numInGroupTags = getNumInGroupTags(nodeList);
for (int i = 0; i < nodeList.getLength(); i++)
{
final Node node = nodeList.item(i);
if (node.getNodeName().equals("field") && !numInGroupTags.contains(getTagNumber(node)))
{
messageBuilder.append(getTagNumber(node))
.append('=')
.append(node.getTextContent())
.append(delimiter);
}
else if (node.getNodeName().equals("groups"))
{
final NodeList groupElems = node.getChildNodes();
messageBuilder.append(getTagNumber(node))
.append('=')
.append(getGroupCount(groupElems))
.append(delimiter);
for (int j = 0; j < groupElems.getLength(); j++)
{
build(messageBuilder, groupElems.item(j).getChildNodes());
}
}
}
}
private Set<String> getNumInGroupTags(final NodeList nodeList)
{
final Set<String> numInGroupTags = new HashSet<>();
for (int i = 0; i < nodeList.getLength(); i++)
{
if (nodeList.item(i).getNodeName().equals("groups"))
{
numInGroupTags.add(getTagNumber(nodeList.item(i)));
}
}
return numInGroupTags;
}
private String getTagNumber(final Node node)
{
return node.getAttributes().getNamedItem("tag").getTextContent();
}
private int getGroupCount(final NodeList groupRoot)
{
int count = 0;
for (int j = 0; j < groupRoot.getLength(); j++)
{
if (groupRoot.item(j).getNodeName().equals("group")) count++;
}
return count;
}
}
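For example, a round trip might look like this. This is only a sketch: originalMessage and dataDictionary are assumed to exist in your code, and MessageUtils.parse throws InvalidMessage if the reconstructed string cannot be parsed.
// Rebuild the raw FIX string from the XML and hand it to MessageUtils.parse.
String xml = originalMessage.toXML();
String rawFix = new XmlMessage(xml, "\u0001").toFixMessage(); // SOH as the field delimiter
quickfix.Message rebuilt = quickfix.MessageUtils.parse(
        new quickfix.DefaultMessageFactory(), dataDictionary, rawFix);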

Related

Replacement for "GROUP BY" in ContentResolver query in Android Q (Android 10, API 29 changes)

I'm upgrading some legacy code to target Android Q, and of course this code stopped working:
String[] PROJECTION_BUCKET = {MediaStore.Images.ImageColumns.BUCKET_ID,
MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME,
MediaStore.Images.ImageColumns.DATE_TAKEN,
MediaStore.Images.ImageColumns.DATA,
"COUNT(" + MediaStore.Images.ImageColumns._ID + ") AS COUNT",
MediaStore.Files.FileColumns.MEDIA_TYPE,
MediaStore.MediaColumns._ID};
String BUCKET_GROUP_BY = " 1) and " + BUCKET_WHERE.toString() + " GROUP BY 1,(2";
cur = context.getContentResolver().query(images, PROJECTION_BUCKET,
BUCKET_GROUP_BY, null, BUCKET_ORDER_BY);
android.database.sqlite.SQLiteException: near "GROUP": syntax error (code 1 SQLITE_ERROR[1])
This is supposed to obtain a list of images with album name, date, and picture count - one image per album - so we can build an album picker screen without querying all pictures and looping through them to create the albums.
Is it possible to group query results with ContentResolver now that raw SQL clauses have stopped working?
(I know that ImageColumns.DATA and "COUNT() AS COUNT" are deprecated too, but this is a question about GROUP BY.)
(There is a way to query the albums and then separately query a photo per album to obtain a photo URI for the album cover, but I want to avoid that overhead.)
Unfortunately, GROUP BY is no longer supported in Android 10 and above, nor are aggregate functions such as COUNT. This is by design and there is no workaround.
The solution is what you are actually trying to avoid: query, iterate, and compute the metrics yourself.
To get you started you can use the following snippet, which resolves the buckets (albums) and the number of records in each one.
I haven't added code to resolve the thumbnails, but that is easy (see the sketch after the class below): perform a query for each bucket id from the Album instances and use the image from the first record.
public final class AlbumQuery
{
@NonNull
public static HashMap<String, AlbumQuery.Album> get(@NonNull final Context context)
{
final HashMap<String, AlbumQuery.Album> output = new HashMap<>();
final Uri contentUri = MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
final String[] projection = {MediaStore.Images.Media.BUCKET_DISPLAY_NAME, MediaStore.Images.Media.BUCKET_ID};
try (final Cursor cursor = context.getContentResolver().query(contentUri, projection, null, null, null))
{
if ((cursor != null) && (cursor.moveToFirst() == true))
{
final int columnBucketName = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.BUCKET_DISPLAY_NAME);
final int columnBucketId = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.BUCKET_ID);
do
{
final String bucketId = cursor.getString(columnBucketId);
final String bucketName = cursor.getString(columnBucketName);
if (output.containsKey(bucketId) == false)
{
final int count = AlbumQuery.getCount(context, contentUri, bucketId);
final AlbumQuery.Album album = new AlbumQuery.Album(bucketId, bucketName, count);
output.put(bucketId, album);
}
} while (cursor.moveToNext());
}
}
return output;
}
private static int getCount(@NonNull final Context context, @NonNull final Uri contentUri, @NonNull final String bucketId)
{
try (final Cursor cursor = context.getContentResolver().query(contentUri,
null, MediaStore.Images.Media.BUCKET_ID + "=?", new String[]{bucketId}, null))
{
return ((cursor == null) || (cursor.moveToFirst() == false)) ? 0 : cursor.getCount();
}
}
public static final class Album
{
@NonNull
public final String bucketId;
@NonNull
public final String bucketName;
public final int count;
Album(@NonNull final String bucketId, @NonNull final String bucketName, final int count)
{
this.bucketId = bucketId;
this.bucketName = bucketName;
this.count = count;
}
}
}
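For the album covers/thumbnails, a helper along these lines could be added to AlbumQuery. This is only a sketch: the projection and sort order are my assumptions, it simply takes the newest image in the bucket, and android.content.ContentUris needs to be imported.
private static Uri getAlbumCover(@NonNull final Context context, @NonNull final Uri contentUri, @NonNull final String bucketId)
{
    final String[] projection = {MediaStore.Images.Media._ID};
    try (final Cursor cursor = context.getContentResolver().query(contentUri, projection,
            MediaStore.Images.Media.BUCKET_ID + "=?", new String[]{bucketId},
            MediaStore.Images.Media.DATE_TAKEN + " DESC"))
    {
        if ((cursor != null) && cursor.moveToFirst())
        {
            final long id = cursor.getLong(cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID));
            // the resulting content:// Uri can be handed to an image loader for the album cover
            return ContentUris.withAppendedId(contentUri, id);
        }
    }
    return null;
}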
This is a more efficient (though not perfect) way to do it.
I am doing it for videos, but it works the same for images too: just change MediaStore.Video.Media.X to MediaStore.Images.Media.X.
public class QUtils {
/*created by Nasib June 6, 2020*/
@RequiresApi(api = Build.VERSION_CODES.Q)
public static ArrayList<FolderHolder> loadListOfFolders(Context context) {
ArrayList<FolderHolder> allFolders = new ArrayList<>();//list that we need
HashMap<Long, String> folders = new HashMap<>(); //hashmap to track(no duplicates) folders by using their ids
String[] projection = {MediaStore.Video.Media._ID,
MediaStore.Video.Media.BUCKET_ID,
MediaStore.Video.Media.BUCKET_DISPLAY_NAME,
MediaStore.Video.Media.DATE_ADDED};
ContentResolver CR = context.getContentResolver();
Uri root = MediaStore.Video.Media.getContentUri(MediaStore.VOLUME_EXTERNAL);
Cursor c = CR.query(root, projection, null, null, MediaStore.Video.Media.DATE_ADDED + " desc");
if (c != null && c.moveToFirst()) {
int folderIdIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media.BUCKET_ID);
int folderNameIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media.BUCKET_DISPLAY_NAME);
int thumbIdIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media._ID);
int dateAddedIndex = c.getColumnIndexOrThrow(MediaStore.Video.Media.DATE_ADDED);
do {
Long folderId = c.getLong(folderIdIndex);
if (folders.containsKey(folderId) == false) { //proceed only if the folder data has not been inserted already :)
long thumbId = c.getLong(thumbIdIndex);
String folderName = c.getString(folderNameIndex);
String dateAdded = c.getString(dateAddedIndex);
Uri thumbPath = ContentUris.withAppendedId(MediaStore.Video.Media.EXTERNAL_CONTENT_URI, thumbId);
folders.put(folderId, folderName);
allFolders.add(new FolderHolder(String.valueOf(thumbPath), folderName, dateAdded));
}
} while (c.moveToNext());
c.close(); //close cursor
folders.clear(); // clear the hashmap because it's no longer needed
}
return allFolders;
}
}
FolderHolder model class
public class FolderHolder {
private String folderName;
public String dateAdded;
private String thumbnailPath;
public long folderId;
public void setPath(String thumbnailPath) {
this.thumbnailPath = thumbnailPath;
}
public String getThumbnailPath() {
return thumbnailPath;
}
// constructor matching the call in QUtils.loadListOfFolders(thumbPath, folderName, dateAdded)
public FolderHolder(String thumbnailPath, String folderName, String dateAdded) {
this.thumbnailPath = thumbnailPath;
this.folderName = folderName;
this.dateAdded = dateAdded;
}
public String getFolderName() {
return folderName;
}
}
GROUP BY is supported when querying with a Bundle:
val bundle = Bundle().apply {
putString(
ContentResolver.QUERY_ARG_SQL_SORT_ORDER,
"${MediaStore.MediaColumns.DATE_MODIFIED} DESC"
)
putString(
ContentResolver.QUERY_ARG_SQL_GROUP_BY,
MediaStore.Images.ImageColumns.BUCKET_ID
)
}
contentResolver.query(
uri,
arrayOf(
MediaStore.Images.ImageColumns.BUCKET_ID,
MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME,
MediaStore.Images.ImageColumns.DATE_TAKEN,
MediaStore.Images.ImageColumns.DATA
),
bundle,
null
)
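The same Bundle-based query in Java, for reference; note that ContentResolver.QUERY_ARG_SQL_GROUP_BY is, as far as I know, only honored on API 30 and above:
Bundle queryArgs = new Bundle();
queryArgs.putString(ContentResolver.QUERY_ARG_SQL_SORT_ORDER,
        MediaStore.MediaColumns.DATE_MODIFIED + " DESC");
queryArgs.putString(ContentResolver.QUERY_ARG_SQL_GROUP_BY,
        MediaStore.Images.ImageColumns.BUCKET_ID);
try (Cursor cursor = getContentResolver().query(
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        new String[]{
                MediaStore.Images.ImageColumns.BUCKET_ID,
                MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME,
                MediaStore.Images.ImageColumns.DATE_TAKEN,
                MediaStore.Images.ImageColumns.DATA
        },
        queryArgs,
        null)) {
    // iterate: one row per bucket thanks to the GROUP BY argument
}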

How can I make dynamic header columns and dynamic data to export to Excel/CSV/PDF using MongoDB, Spring Boot, and Apache POI

I want to build an export function using Spring Boot. I have data in MongoDB and want to export my MongoDB documents dynamically using Apache POI (or any better dependency you can recommend).
I don't want to declare header columns, an entity model, etc.; I want to export the data dynamically, exactly as it appears in my document database. Can anyone give me an example of this?
Please try this code; it may help you.
Add the Gson dependency in pom.xml.
Controller code
MasterController.java
@Autowired
MasterServiceImpl masterServiceImpl;
@GetMapping(value="/dynamicfile/{flag}/{fileType}/{fileName}")
public ResponseEntity<InputStreamResource> downloadsFiles(@PathVariable("flag") int flag,@PathVariable("fileType") String fileType,@PathVariable("fileName") String fileName) throws IOException{
List<?> objects=new ArrayList<>();
if(flag==1) {
objects=masterServiceImpl.getData();
}
ByteArrayInputStream in = masterServiceImpl.downloadsFiles(objects,fileType);
HttpHeaders headers = new HttpHeaders();
if(fileType.equals("Excel")) {
headers.add("Content-Disposition", "attachment; filename="+fileName+".xlsx");
}else if(fileType.equals("Pdf")){
headers.add("Content-Disposition", "attachment; filename="+fileName+".pdf");
}else if(fileType.equals("Csv")) {
headers.add("Content-Disposition", "attachment; filename="+fileName+".csv");
}
return ResponseEntity.ok().headers(headers).body(new InputStreamResource(in));
}
Service code :
MasterServiceImpl.java
public static List<HashMap<Object, Object>> getListOfObjectToListOfHashMap(List<?> objects) {
List<HashMap<Object,Object>> list=new ArrayList<>();
for(int i=0;i<objects.size();i++) {
HashMap<Object,Object> map=new HashMap<>();
String temp=new Gson().toJson(objects.get(i)).toString();
String temo1= temp.substring(1, temp.length()-1);
String[] temp2=temo1.split(",");
for(int j=-1;j<temp2.length;j++) {
if(j==-1) {
map.put("SrNo",i+1);
}else {
String tempKey=temp2[j].toString().split(":")[0].toString();
String tempValue=temp2[j].toString().split(":")[1].toString();
char ch=tempValue.charAt(0);
if(ch=='"') {
map.put(tempKey.substring(1, tempKey.length()-1), tempValue.substring(1, tempValue.length()-1));
}else {
map.put(tempKey.substring(1, tempKey.length()-1), tempValue);
}
}
}
list.add(map);
}
return list;
}
public static ByteArrayInputStream downloadsFiles(List<?> objects,String fileType) throws IOException {
ByteArrayOutputStream out = new ByteArrayOutputStream();
List<HashMap<Object, Object>> list = getListOfObjectToListOfHashMap(objects);
String[] COLUMNs = getColumnsNameFromListOfObject(objects);
try{
if(fileType.equals("Excel")) {
generateExcel(list, COLUMNs,out);
}else if(fileType.equals("Pdf")) {
generatePdf(list, COLUMNs,out);
}else if(fileType.equals("Csv")) {
generatePdf(list, COLUMNs,out);
}
}catch(Exception ex) {
System.out.println("Error occurred:"+ ex);
}
return new ByteArrayInputStream(out.toByteArray());
}
public static final String[] getColumnsNameFromListOfObject(List<?> objects) {
String strObjects=new Gson().toJson(objects.get(0)).toString();
String[] setHeader= strObjects.substring(1, strObjects.length()-1).split(",");
String header="SrNo";
for(int i=0;i<setHeader.length;i++) {
String str=setHeader[i].toString().split(":")[0].toString();
header=header+","+str.substring(1, str.length()-1);
}
return header.split(",");
}
public static final void generateExcel(List<HashMap<Object, Object>> list, String[] COLUMNs,ByteArrayOutputStream out) throws IOException {
Workbook workbook = new XSSFWorkbook();
Sheet sheet = workbook.createSheet("Excelshit");
Font headerFont = workbook.createFont();
headerFont.setBold(true);
headerFont.setColor(IndexedColors.BLUE.getIndex());
CellStyle headerCellStyle = workbook.createCellStyle();
headerCellStyle.setFont(headerFont);
Row headerRow = sheet.createRow(0);
for (int col = 0; col < COLUMNs.length; col++) {
Cell cell = headerRow.createCell(col);
cell.setCellValue(COLUMNs[col]);
cell.setCellStyle(headerCellStyle);
}
int rowIdx = 1;
for(int k = 0; k < list.size(); k++){
Row row =sheet.createRow(rowIdx++);
for (Map.Entry<Object, Object> entry : list.get(k).entrySet()){
Object key = entry.getKey();
Object value = entry.getValue();
for (int col = 0; col < COLUMNs.length; col++) {
if(key.toString().equals(COLUMNs[col].toString())) {
row.createCell(col).setCellValue(value.toString());
}
}
}
}
workbook.write(out);
System.out.println(workbook);
}
private static final void generatePdf(List<HashMap<Object, Object>> list, String[] COLUMNs,ByteArrayOutputStream out) throws DocumentException {
Document document = new Document();
com.itextpdf.text.Font headerFont =FontFactory.getFont(FontFactory.HELVETICA_BOLD,8);
com.itextpdf.text.Font dataFont =FontFactory.getFont(FontFactory.TIMES_ROMAN,8);
PdfPCell hcell=null;
PdfPTable table = new PdfPTable(COLUMNs.length);
for (int col = 0; col < COLUMNs.length; col++) {
hcell = new PdfPCell(new Phrase(COLUMNs[col], headerFont));
hcell.setHorizontalAlignment(Element.ALIGN_CENTER);
table.addCell(hcell);
}
for(int index = 0; index < list.size(); index++){
PdfPCell cell = null;
for (Map.Entry<Object, Object> entry : list.get(index).entrySet()){
Object key = entry.getKey();
Object value = entry.getValue();
for (int col = 0; col < COLUMNs.length; col++) {
if(key.toString().equals(COLUMNs[col].toString())) {
cell = new PdfPCell(new Phrase(value.toString(),dataFont));
cell.setVerticalAlignment(Element.ALIGN_MIDDLE);
cell.setHorizontalAlignment(Element.ALIGN_CENTER);
}
}
table.addCell(cell);
}
}
PdfWriter.getInstance(document, out);
document.open();
document.add(table);
document.close();
}
public List<TblDepartment> getData() {
List<TblDepartment> list = new ArrayList<>();
departmentRepository.findAll().forEach(list::add);
return list;
}
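As a side note, the string-splitting in getListOfObjectToListOfHashMap breaks as soon as a field value contains a comma or an extra colon (dates, nested objects, etc.). A less fragile sketch that lets Gson build a JsonObject and copies its entries straight into the row map could look like this (the method name toListOfMaps and the imports com.google.gson.JsonElement / com.google.gson.JsonObject are mine):
public static List<HashMap<Object, Object>> toListOfMaps(List<?> objects) {
    List<HashMap<Object, Object>> list = new ArrayList<>();
    Gson gson = new Gson();
    for (int i = 0; i < objects.size(); i++) {
        HashMap<Object, Object> map = new HashMap<>();
        map.put("SrNo", i + 1);
        // serialize the object to a JSON tree and copy every property into the row map
        JsonObject json = gson.toJsonTree(objects.get(i)).getAsJsonObject();
        for (Map.Entry<String, JsonElement> entry : json.entrySet()) {
            JsonElement value = entry.getValue();
            map.put(entry.getKey(), value.isJsonPrimitive() ? value.getAsString() : value.toString());
        }
        list.add(map);
    }
    return list;
}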

Creating a custom plugin for Chinese tokenization

I'm working towards properly integrating the Stanford segmenter within Solr for Chinese tokenization.
This plugin involves loading other jar files and model files. I've got it working in a crude manner by hardcoding the complete paths to the files.
I'm looking for a way to build the plugin so that the paths need not be hardcoded, and to have the plugin conform to the Solr plugin architecture. Please let me know if there are any recommended sites or tutorials for this.
I've added my code below:
public class ChineseTokenizerFactory extends TokenizerFactory {
/** Creates a new ChineseTokenizerFactory */
public ChineseTokenizerFactory(Map<String,String> args) {
super(args);
assureMatchVersion();
if (!args.isEmpty()) {
throw new IllegalArgumentException("Unknown parameters: " + args);
}
}
@Override
public ChineseTokenizer create(AttributeFactory factory, Reader input) {
Reader processedStringReader = new ProcessedStringReader(input);
return new ChineseTokenizer(luceneMatchVersion, factory, processedStringReader);
}
}
public class ProcessedStringReader extends java.io.Reader {
private static final int BUFFER_SIZE = 1024 * 8;
//private static TextProcess m_textProcess = null;
private static final String basedir = "/home/praveen/PDS_Meetup/solr-4.9.0/custom_plugins/";
static Properties props = null;
static CRFClassifier<CoreLabel> segmenter = null;
private char[] m_inputData = null;
private int m_offset = 0;
private int m_length = 0;
public ProcessedStringReader(Reader input){
char[] arr = new char[BUFFER_SIZE];
StringBuffer buf = new StringBuffer();
int numChars;
if(segmenter == null)
{
segmenter = new CRFClassifier<CoreLabel>(getProperties());
segmenter.loadClassifierNoExceptions(basedir + "ctb.gz", getProperties());
}
try {
while ((numChars = input.read(arr, 0, arr.length)) > 0) {
buf.append(arr, 0, numChars);
}
} catch (IOException e) {
e.printStackTrace();
}
m_inputData = processText(buf.toString()).toCharArray();
m_offset = 0;
m_length = m_inputData.length;
}
@Override
public int read(char[] cbuf, int off, int len) throws IOException {
int charNumber = 0;
for(int i = m_offset + off;i<m_length && charNumber< len; i++){
cbuf[charNumber] = m_inputData[i];
m_offset ++;
charNumber++;
}
if(charNumber == 0){
return -1;
}
return charNumber;
}
@Override
public void close() throws IOException {
m_inputData = null;
m_offset = 0;
m_length = 0;
}
public String processText(String inputText)
{
List<String> segmented = segmenter.segmentString(inputText);
String output = "";
if(segmented.size() > 0)
{
output = segmented.get(0);
for(int i=1;i<segmented.size();i++)
{
output = output + " " +segmented.get(i);
}
}
System.out.println(output);
return output;
}
static Properties getProperties()
{
if (props == null) {
props = new Properties();
props.setProperty("sighanCorporaDict", basedir);
// props.setProperty("NormalizationTable", "data/norm.simp.utf8");
// props.setProperty("normTableEncoding", "UTF-8");
// below is needed because CTBSegDocumentIteratorFactory accesses it
props.setProperty("serDictionary",basedir+"dict-chris6.ser.gz");
props.setProperty("inputEncoding", "UTF-8");
props.setProperty("sighanPostProcessing", "true");
}
return props;
}
}
public final class ChineseTokenizer extends CharTokenizer {
public ChineseTokenizer(Version matchVersion, Reader in) {
super(matchVersion, in);
}
public ChineseTokenizer(Version matchVersion, AttributeFactory factory, Reader in) {
super(matchVersion, factory, in);
}
/** Collects only characters which do not satisfy
* {@link Character#isWhitespace(int)}. */
@Override
protected boolean isTokenChar(int c) {
return !Character.isWhitespace(c);
}
}
You can pass the argument through the factory's args parameter (the Map<String,String> handed to the constructor). For example:
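The factory constructor could read a basedir attribute from the args map (the attribute name is just an example), so the model location comes from the <tokenizer .../> definition in schema.xml instead of a hardcoded constant. A sketch:
public class ChineseTokenizerFactory extends TokenizerFactory {
    private final String basedir; // where the segmenter model files live

    public ChineseTokenizerFactory(Map<String, String> args) {
        super(args);
        assureMatchVersion();
        // consume our custom attribute before the "unknown parameters" check
        basedir = args.remove("basedir");
        if (basedir == null) {
            throw new IllegalArgumentException("Missing required parameter: basedir");
        }
        if (!args.isEmpty()) {
            throw new IllegalArgumentException("Unknown parameters: " + args);
        }
    }

    @Override
    public ChineseTokenizer create(AttributeFactory factory, Reader input) {
        // pass basedir on to ProcessedStringReader instead of its hardcoded constant
        Reader processedStringReader = new ProcessedStringReader(input, basedir);
        return new ChineseTokenizer(luceneMatchVersion, factory, processedStringReader);
    }
}
The corresponding field type in schema.xml would then carry basedir="/path/to/models" on the <tokenizer> element, and ProcessedStringReader would take the path as a second constructor argument rather than using its static constant.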

Can I drag items from Outlook into my SWT application?

Background
Our Eclipse RCP 3.6-based application lets people drag files in for storage/processing. This works fine when the files are dragged from a filesystem, but not when people drag items (messages or attachments) directly from Outlook.
This appears to be because Outlook wants to feed our application the files via a FileGroupDescriptorW and FileContents, but SWT only includes a FileTransfer type. (In a FileTransfer, only the file paths are passed, with the assumption that the receiver can locate and read them. The FileGroupDescriptorW/FileContents approach can supply files directly application-to-application without writing temporary files out to disk.)
We have tried to produce a ByteArrayTransfer subclass that could accept FileGroupDescriptorW and FileContents. Based on some examples on the Web, we were able to receive and parse the FileGroupDescriptorW, which (as the name implies) describes the files available for transfer. (See code sketch below.) But we have been unable to accept the FileContents.
This seems to be because Outlook offers the FileContents data only as TYMED_ISTREAM or TYMED_ISTORAGE, but SWT only understands how to exchange data as TYMED_HGLOBAL. Of those, it appears that TYMED_ISTORAGE would be preferable, since it's not clear how TYMED_ISTREAM could provide access to multiple files' contents.
(We also have some concerns about SWT's desire to pick and convert only a single TransferData type, given that we need to process two, but we think we could probably hack around that in Java somehow: it seems that all the TransferDatas are available at other points of the process.)
Questions
Are we on the right track here? Has anyone managed to accept FileContents in SWT yet? Is there any chance that we could process the TYMED_ISTORAGE data without leaving Java (even if by creating a fragment-based patch to, or a derived version of, SWT), or would we have to build some new native support code too?
Relevant code snippets
Sketch code that extracts file names:
// THIS IS NOT PRODUCTION-QUALITY CODE - FOR ILLUSTRATION ONLY
final Transfer transfer = new ByteArrayTransfer() {
private final String[] typeNames = new String[] { "FileGroupDescriptorW", "FileContents" };
private final int[] typeIds = new int[] { registerType(typeNames[0]), registerType(typeNames[1]) };
@Override
protected String[] getTypeNames() {
return typeNames;
}
@Override
protected int[] getTypeIds() {
return typeIds;
}
@Override
protected Object nativeToJava(TransferData transferData) {
if (!isSupportedType(transferData))
return null;
final byte[] buffer = (byte[]) super.nativeToJava(transferData);
if (buffer == null)
return null;
try {
final DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
long count = 0;
for (int i = 0; i < 4; i++) {
count += in.readUnsignedByte() << (8 * i); // little-endian DWORD: number of file descriptors
}
for (int i = 0; i < count; i++) {
final byte[] filenameBytes = new byte[260 * 2];
in.skipBytes(72); // probable architecture assumption(s) - may be wrong outside standard 32-bit Win XP
in.read(filenameBytes);
final String fileNameIncludingTrailingNulls = new String(filenameBytes, "UTF-16LE");
int stringLength = fileNameIncludingTrailingNulls.indexOf('\0');
if (stringLength == -1)
stringLength = 260;
final String fileName = fileNameIncludingTrailingNulls.substring(0, stringLength);
System.out.println("File " + i + ": " + fileName);
}
in.close();
return buffer;
}
catch (final Exception e) {
return null;
}
}
};
In the debugger, we see that ByteArrayTransfer's isSupportedType() ultimately returns false for the FileContents because the following test is not passed (since its tymed is TYMED_ISTREAM | TYMED_ISTORAGE):
if (format.cfFormat == types[i] &&
(format.dwAspect & COM.DVASPECT_CONTENT) == COM.DVASPECT_CONTENT &&
(format.tymed & COM.TYMED_HGLOBAL) == COM.TYMED_HGLOBAL )
return true;
This excerpt from org.eclipse.swt.internal.ole.win32.COM leaves us feeling less hope for an easy solution:
public static final int TYMED_HGLOBAL = 1;
//public static final int TYMED_ISTORAGE = 8;
//public static final int TYMED_ISTREAM = 4;
Thanks.
Even though TYMED_ISTREAM is commented out in SWT's COM class (//public static final int TYMED_ISTREAM = 4;), the code below should work. Try it:
package com.nagarro.jsag.poc.swtdrag;
imports ...
public class MyTransfer extends ByteArrayTransfer {
private static int BYTES_COUNT = 592;
private static int SKIP_BYTES = 72;
private final String[] typeNames = new String[] { "FileGroupDescriptorW", "FileContents" };
private final int[] typeIds = new int[] { registerType(typeNames[0]), registerType(typeNames[1]) };
@Override
protected String[] getTypeNames() {
return typeNames;
}
@Override
protected int[] getTypeIds() {
return typeIds;
}
@Override
protected Object nativeToJava(TransferData transferData) {
String[] result = null;
if (!isSupportedType(transferData) || transferData.pIDataObject == 0)
return null;
IDataObject data = new IDataObject(transferData.pIDataObject);
data.AddRef();
// Check for descriptor format type
try {
FORMATETC formatetcFD = transferData.formatetc;
STGMEDIUM stgmediumFD = new STGMEDIUM();
stgmediumFD.tymed = COM.TYMED_HGLOBAL;
transferData.result = data.GetData(formatetcFD, stgmediumFD);
if (transferData.result == COM.S_OK) {
// Check for contents format type
long hMem = stgmediumFD.unionField;
long fileDiscriptorPtr = OS.GlobalLock(hMem);
int[] fileCount = new int[1];
try {
OS.MoveMemory(fileCount, fileDiscriptorPtr, 4);
fileDiscriptorPtr += 4;
result = new String[fileCount[0]];
for (int i = 0; i < fileCount[0]; i++) {
String fileName = handleFile(fileDiscriptorPtr, data);
System.out.println("FileName : = " + fileName);
result[i] = fileName;
fileDiscriptorPtr += BYTES_COUNT;
}
} catch (Exception e) {
e.printStackTrace();
} finally {
OS.GlobalFree(hMem);
}
}
} finally {
data.Release();
}
return result;
}
private String handleFile(long fileDiscriptorPtr, IDataObject data) throws Exception {
// GetFileName
char[] fileNameChars = new char[OS.MAX_PATH];
byte[] fileNameBytes = new byte[OS.MAX_PATH];
COM.MoveMemory(fileNameBytes, fileDiscriptorPtr, BYTES_COUNT);
// Skip some bytes.
fileNameBytes = Arrays.copyOfRange(fileNameBytes, SKIP_BYTES, fileNameBytes.length);
String fileNameIncludingTrailingNulls = new String(fileNameBytes, "UTF-16LE");
fileNameChars = fileNameIncludingTrailingNulls.toCharArray();
StringBuilder builder = new StringBuilder(OS.MAX_PATH);
for (int i = 0; i < fileNameChars.length && fileNameChars[i] != 0; i++) {
builder.append(fileNameChars[i]);
}
String name = builder.toString();
try {
File file = saveFileContent(name, data);
if (file != null) {
System.out.println("File Saved # " + file.getAbsolutePath());
;
}
} catch (IOException e) {
System.out.println("Count not save file content");
;
}
return name;
}
private File saveFileContent(String fileName, IDataObject data) throws IOException {
File file = null;
FORMATETC formatetc = new FORMATETC();
formatetc.cfFormat = typeIds[1];
formatetc.dwAspect = COM.DVASPECT_CONTENT;
formatetc.lindex = 0;
formatetc.tymed = 4; // TYMED_ISTREAM (the constant is commented out in SWT's COM class)
STGMEDIUM stgmedium = new STGMEDIUM();
stgmedium.tymed = 4; // TYMED_ISTREAM
if (data.GetData(formatetc, stgmedium) == COM.S_OK) {
file = new File(fileName);
IStream iStream = new IStream(stgmedium.unionField);
iStream.AddRef();
try (FileOutputStream outputStream = new FileOutputStream(file)) {
int increment = 1024 * 4;
long pv = COM.CoTaskMemAlloc(increment);
int[] pcbWritten = new int[1];
while (iStream.Read(pv, increment, pcbWritten) == COM.S_OK && pcbWritten[0] > 0) {
byte[] buffer = new byte[pcbWritten[0]];
OS.MoveMemory(buffer, pv, pcbWritten[0]);
outputStream.write(buffer);
}
COM.CoTaskMemFree(pv);
} finally {
iStream.Release();
}
return file;
} else {
return null;
}
}
}
Have you looked at https://bugs.eclipse.org/bugs/show_bug.cgi?id=132514 ?
Attached to that Bugzilla entry is a patch (against a rather old version of SWT) that might be of interest.
I had the same problem and created a small library providing a drag-and-drop Transfer class for Java SWT. It can be found here:
https://github.com/HendrikHoetker/OutlookItemTransfer
Currently it supports dropping Mail Items from Outlook to your Java SWT application and will provide a list of OutlookItems with the Filename and a byte array of the file contents.
All is pure Java and in-memory (no temp files).
Usage in your SWT java application:
if (OutlookItemTransfer.getInstance().isSupportedType(event.currentDataType)) {
Object o = OutlookItemTransfer.getInstance().nativeToJava(event.currentDataType);
if (o != null && o instanceof OutlookMessage[]) {
OutlookMessage[] outlookMessages = (OutlookMessage[])o;
for (OutlookMessage msg: outlookMessages) {
//...
}
}
}
The OutlookItem will then provide two elements: filename as String and file contents as array of byte.
From here on, one could write it to a file or process the byte array further, for example:
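A minimal processing sketch; the accessor names on OutlookMessage are guesses here, so check the OutlookItemTransfer sources for the actual API:
for (OutlookMessage msg : outlookMessages) {
    // getFilename()/getData() are assumed names for the two elements described above
    String fileName = msg.getFilename();
    byte[] content = msg.getData();
    try {
        // write the in-memory content out to the temp directory (java.nio.file.Files/Paths)
        Path target = Paths.get(System.getProperty("java.io.tmpdir"), fileName);
        Files.write(target, content);
    } catch (IOException e) {
        e.printStackTrace();
    }
}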
To your question above:
- What you find in the file descriptor is the filename of the Outlook item and a pointer to an IDataObject.
- The IDataObject can be parsed and will provide an IStorage object.
- The IStorage object is then a root container providing further sub-IStorage objects or IStreams, similar to a filesystem (directory = IStorage, file = IStream).
You find those elements in the following lines of code:
Get File Contents, see OutlookItemTransfer.java, method nativeToJava:
FORMATETC format = new FORMATETC();
format.cfFormat = getTypeIds()[1];
format.dwAspect = COM.DVASPECT_CONTENT;
format.lindex = <fileIndex>;
format.ptd = 0;
format.tymed = TYMED_ISTORAGE | TYMED_ISTREAM | COM.TYMED_HGLOBAL;
STGMEDIUM medium = new STGMEDIUM();
if (data.GetData(format, medium) == COM.S_OK) {
// medium.tymed will now contain TYMED_ISTORAGE
// in medium.unionfield you will find the root IStorage
}
Read the root IStorage, see CompoundStorage, method readOutlookStorage:
// open IStorage object
IStorage storage = new IStorage(pIStorage);
storage.AddRef();
// walk through the content of the IStorage object
long[] pEnumStorage = new long[1];
if (storage.EnumElements(0, 0, 0, pEnumStorage) == COM.S_OK) {
// get storage iterator
IEnumSTATSTG enumStorage = new IEnumSTATSTG(pEnumStorage[0]);
enumStorage.AddRef();
enumStorage.Reset();
// prepare statstg structure which tells about the object found by the iterator
long pSTATSTG = OS.GlobalAlloc(OS.GMEM_FIXED | OS.GMEM_ZEROINIT, STATSTG.sizeof);
int[] fetched = new int[1];
while (enumStorage.Next(1, pSTATSTG, fetched) == COM.S_OK && fetched[0] == 1) {
// get the description of the object found
STATSTG statstg = new STATSTG();
COM.MoveMemory(statstg, pSTATSTG, STATSTG.sizeof);
// get the name of the object found
String name = readPWCSName(statstg);
// depending on type of object
switch (statstg.type) {
case COM.STGTY_STREAM: { // load an IStream (=File)
long[] pIStream = new long[1];
// get the pointer to the IStream
if (storage.OpenStream(name, 0, COM.STGM_DIRECT | COM.STGM_READ | COM.STGM_SHARE_EXCLUSIVE, 0, pIStream) == COM.S_OK) {
// load the IStream
}
}
case COM.STGTY_STORAGE: { // load an IStorage (= subdirectory) - requires recursion to traverse the subdirectories
}
}
}
}
// close the iterator
enumStorage.Release();
}
// close the IStorage object
storage.Release();

kSOAP Marshalling help needed

Does anyone have a good complex object marshalling example using the kSOAP package?
Although this example is neither complete nor compilable as-is, the basic idea is to have a class that tells kSOAP how to turn an XML tag into an object (i.e. readInstance()) and how to turn an object into an XML tag (i.e. writeInstance()).
public class MarshalBase64File implements Marshal {
public static Class FILE_CLASS = File.class;
public Object readInstance(XmlPullParser parser, String namespace, String name, PropertyInfo expected)
throws IOException, XmlPullParserException {
return Base64.decode(parser.nextText());
}
public void writeInstance(XmlSerializer writer, Object obj) throws IOException {
File file = (File)obj;
int total = (int)file.length();
FileInputStream in = new FileInputStream(file);
byte b[] = new byte[4096];
int pos = 0;
int num = b.length;
if ((pos + num) > total) {
num = total - pos;
}
int len = in.read(b, 0, num);
while ((len != -1) && ((pos + len) < total)) {
writer.text(Base64.encode(b, 0, len, null).toString());
pos += len;
if ((pos + num) > total) {
num = total - pos;
}
len = in.read(b, 0, num);
}
if (len != -1) {
writer.text(Base64.encode(b, 0, len, null).toString());
}
}
public void register(SoapSerializationEnvelope cm) {
cm.addMapping(cm.xsd, "base64Binary", MarshalBase64File.FILE_CLASS, this);
}
}
Later, when you invoke the SOAP service, you'll map the object type (in this case, File objects) to the marshalling class. The SOAP envelope will automatically match the object type of each argument and, if it is not a built-in type, invoke the associated marshaller to convert it to/from XML.
public class MarshalDemo {
public String storeFile(File file) throws IOException, XmlPullParserException {
SoapObject soapObj = new SoapObject("http://www.example.com/ws/service/file/1.0", "storeFile");
soapObj.addProperty("file", file);
SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11);
new MarshalBase64File().register(envelope);
envelope.encodingStyle = SoapEnvelope.ENC;
envelope.setOutputSoapObject(soapObj);
HttpTransport ht = new HttpTransport(new URL(server, "/soap/file"));
ht.call("http://www.example.com/ws/service/file/1.0/storeFile", envelope);
String retVal = "";
SoapObject writeResponse = (SoapObject)envelope.bodyIn;
Object obj = writeResponse.getProperty("statusString");
if (obj instanceof SoapPrimitive) {
SoapPrimitive statusString = (SoapPrimitive)obj;
String content = statusString.toString();
retVal = content;
}
return retVal;
}
}
In this case, I am using Base64 encoding to marshal File objects.
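Note that readInstance() above returns the decoded byte[] rather than a File. If the receiving side should also get a File (to mirror the FILE_CLASS mapping), a variant along these lines - a sketch only, writing the payload to a temporary file - would keep the mapping symmetric:
public Object readInstance(XmlPullParser parser, String namespace, String name, PropertyInfo expected)
        throws IOException, XmlPullParserException {
    // decode the base64 payload and materialize it as a temporary file
    byte[] decoded = Base64.decode(parser.nextText());
    File tmp = File.createTempFile("ksoap", ".bin");
    FileOutputStream out = new FileOutputStream(tmp);
    try {
        out.write(decoded);
    } finally {
        out.close();
    }
    return tmp;
}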