Creating a chart in Apache POI - MS Word

I need to create a Microsoft Word document containing charts in Java. I'm trying out Apache POI but haven't found a way to do it. Are there any examples of how to do this?

You can create the chart using a temporary MS Word file: create the charts in the temporary Word file, read them using the customised POI jar below, and write them back to your actual Word file.
https://github.com/sandeeptiwari32/POI_ENHN/blob/master/POI3.14.jar
You can also get this functionality in the official POI version 4.0.
Code example:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.poi.POIXMLDocumentPart;
import org.apache.poi.openxml4j.exceptions.InvalidFormatException;
import org.apache.poi.xwpf.usermodel.XWPFChart;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.openxmlformats.schemas.drawingml.x2006.chart.CTChart;
import org.openxmlformats.schemas.drawingml.x2006.chart.CTTitle;
import org.openxmlformats.schemas.drawingml.x2006.chart.CTTx;
import org.openxmlformats.schemas.drawingml.x2006.main.CTRegularTextRun;
import org.openxmlformats.schemas.drawingml.x2006.main.CTTextBody;
import org.openxmlformats.schemas.drawingml.x2006.main.CTTextParagraph;
public class TestXWPFChart {
public static void main(String[] args) throws Exception {
FileInputStream inputFile = new FileInputStream("input.docx");
FileOutputStream outFile = new FileOutputStream("output.docx");
@SuppressWarnings("resource")
XWPFDocument document = new XWPFDocument(inputFile);
XWPFChart chart=null;
for (POIXMLDocumentPart part : document.getRelations()) {
if (part instanceof XWPFChart) {
chart = (XWPFChart) part;
break;
}
}
//change chart title from "Chart Title" to XWPF CHART
CTChart ctChart = chart.getCTChart();
CTTitle title = ctChart.getTitle();
CTTx tx = title.addNewTx();
CTTextBody rich = tx.addNewRich();
rich.addNewBodyPr();
rich.addNewLstStyle();
CTTextParagraph p = rich.addNewP();
CTRegularTextRun r = p.addNewR();
r.addNewRPr();
r.setT("XWPF CHART");
//write modified chart in output docx file
document.write(outFile);
}
}

Related

Get "holes" in dates in MogoDB collection

I have a MongoDB collection that stores data for each hour since 2011.
For example:
{
"dateEntity" : ISODate("2011-01-01T08:00:00Z"),
"price" : 0.3
}
{
"dateEntity" : ISODate("2011-01-01T09:00:00Z"),
"price" : 0.35
}
I'd like to know if there are "holes" in those dates, for example a missing entry for an hour.
Unfortunately, there is no gap-marking aggregator in MongoDB.
I checked whether it is possible to write a custom gap aggregator for MongoDB based on JavaScript functions in map-reduce pipelines, by creating a time raster in the first map stage and then mapping it to its corresponding values, but database reads are discouraged while mapping and reducing, so that would be bad design. In short, this cannot be achieved with MongoDB's own instruments.
I think there are two possible solutions.
Solution one: Use a driver like the Java driver
You can use an idiomatic driver such as the Java driver for your MongoDB data and create a raster of hours, as in the test below.
import com.mongodb.BasicDBObject;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import org.junit.Test;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
public class HourGapsTest {
@Test
public void testHourValues() {
String host = "127.0.0.1:27017";
ServerAddress addr = new ServerAddress(host);
MongoClient mongoClient = new MongoClient(addr);
MongoCollection<Document> collection = mongoClient.getDatabase("sotest").getCollection("hourhole");
LocalDateTime start = LocalDateTime.of(2011, 1, 1, 8, 0, 0);
LocalDateTime end = LocalDateTime.of(2011, 1, 2, 0, 0, 0);
List<LocalDateTime> allHours = new ArrayList<>();
for (LocalDateTime hour = start; hour.isBefore(end); hour = hour.plusHours(1L)) {
allHours.add(hour);
}
List<LocalDateTime> gaps = new ArrayList<>();
for (LocalDateTime hour : allHours) {
BasicDBObject filter = new BasicDBObject("dateEntity", new Date(hour.toInstant(ZoneOffset.UTC).toEpochMilli()));
if (!collection.find(filter).iterator().hasNext()) {
gaps.add(hour);
}
}
gaps.forEach(System.out::println);
}
}
Solution two: Use a time-series database
Time-series databases like KairosDB provide this functionality out of the box; consider storing this time-value data in a time-series database (a sketch of such a query is shown below).
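As a rough sketch: KairosDB's REST query API offers (if I recall correctly) a gaps aggregator that inserts null data points where values are missing. The metric name, host, and exact field names below are assumptions and should be checked against the KairosDB documentation:
POST http://<kairosdb-host>:8080/api/v1/datapoints/query
{
  "start_absolute": 1293868800000,
  "end_absolute": 1293926400000,
  "metrics": [
    {
      "name": "price",
      "aggregators": [
        { "name": "gaps", "sampling": { "value": 1, "unit": "hours" } }
      ]
    }
  ]
}
The two timestamps correspond to 2011-01-01T08:00:00Z and 2011-01-02T00:00:00Z in milliseconds.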

Unable to get any data when running a Spark Streaming program with textFileStream as the source

I am running the following code in the Spark shell:
spark-shell
scala> import org.apache.spark.streaming._
import org.apache.spark.streaming._
scala> import org.apache.spark._
import org.apache.spark._
scala> object sparkClient{
| def main(args : Array[String])
| {
| val ssc = new StreamingContext(sc,Seconds(1))
| val Dstreaminput = ssc.textFileStream("hdfs:///POC/SPARK/DATA/*")
| val transformed = Dstreaminput.flatMap(word => word.split(" "))
| val mapped = transformed.map(word => if(word.contains("error"))(word,"defect")else(word,"non-defect"))
| mapped.print()
| ssc.start()
| ssc.awaitTermination()
| }
| }
defined object sparkClient
scala> sparkClient.main(null)
The output is blank, as follows. No file is read and no streaming takes place.
Time: 1510663547000 ms
Time: 1510663548000 ms
Time: 1510663549000 ms
Time: 1510663550000 ms
Time: 1510663551000 ms
Time: 1510663552000 ms
Time: 1510663553000 ms
Time: 1510663554000 ms
Time: 1510663555000 ms
The path which I have given as input in the above code is as follows:
[hadoopadmin#master ~]$ hadoop fs -ls /POC/SPARK/DATA/
17/11/14 18:04:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r-- 2 hadoopadmin supergroup 17881 2017-09-21 11:02 /POC/SPARK/DATA/LICENSE
-rw-r--r-- 2 hadoopadmin supergroup 24645 2017-09-21 11:04 /POC/SPARK/DATA/NOTICE
-rw-r--r-- 2 hadoopadmin supergroup 845 2017-09-21 12:35 /POC/SPARK/DATA/confusion.txt
Could anyone please explain where I am going wrong? Or is there anything wrong with the syntax (although I did not encounter any error)? I am new to Spark.
textFileStream won't read pre-existing data. It will include only new files:
created in the dataDirectory by atomically moving or renaming them into the data directory.
https://spark.apache.org/docs/latest/streaming-programming-guide.html#basic-sources
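For example, a producer can write a file elsewhere first and then move it atomically into the monitored directory after the streaming context has started. The following is only a minimal local-filesystem sketch with made-up paths; on HDFS the equivalent is to write the file to a temporary location and move it into the watched directory (e.g. with hdfs dfs -mv) once it is fully written:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
public class MoveIntoStreamDir {
    public static void main(String[] args) throws IOException {
        // Write the file outside the directory that textFileStream is watching.
        Path staging = Paths.get("/tmp/staging/batch-001.txt");
        Files.write(staging, "error in module A".getBytes(StandardCharsets.UTF_8));
        // Move it atomically so Spark sees a brand-new file in the watched directory.
        Path watched = Paths.get("/data/stream-input/batch-001.txt");
        Files.move(staging, watched, StandardCopyOption.ATOMIC_MOVE);
    }
}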
Everyone on earth has a right to be happy, be it Spark itself or a Spark developer.
Spark Streaming's textFileStream() needs files to appear (be created or moved in) after the streaming process has started. This means Spark Streaming will not read pre-existing files.
So you may think you can simply copy new files into the directory, but that is a problem because copying does not update the modified time of the file.
The last option is to create new files on the fly, but that is tedious and has to happen while the Spark cycle is running.
I wrote a simple Java program that creates the files on the fly, so everyone is happy now. :-) (You just need the commons-io library on the classpath, a single jar.)
import java.awt.Button;
import java.awt.FlowLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import javax.swing.JFrame;
import org.apache.commons.io.IOUtils;
public class CreateFileMain extends JFrame {
private static final long serialVersionUID = 1L;
Button b;
public CreateFileMain() {
b = new Button("Create New File");
b.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
String dir = "C:/Users/spratapw/workspace/batchload1/spark-streaming-poc/input/";
deleteExistingFiles(dir);
Random r = new Random();
File f = new File(dir+r.nextInt()+".txt");
createNewFile(f);
}
private void createNewFile(File f) {
try {
f.createNewFile();
List<String> lines = new ArrayList<>();
lines.add("Hello World");
FileOutputStream fos = new FileOutputStream(f);
IOUtils.writeLines(lines, "\n", fos, Charset.defaultCharset());
fos.close();
} catch (IOException e2) {
e2.printStackTrace();
}
}
private void deleteExistingFiles(String dir) {
File filetodelete = new File(dir);
File[] allContents = filetodelete.listFiles();
if (allContents != null) {
for (File file : allContents) {
file.delete();
}
}
}
});
this.add(b);
this.setLayout(new FlowLayout());
}
public static void main(String[] args) throws IOException {
CreateFileMain m = new CreateFileMain();
m.setVisible(true);
m.setSize(200, 200);
m.setLocationRelativeTo(null);
m.setDefaultCloseOperation(EXIT_ON_CLOSE);
}
}
Output: (screenshot omitted)

How do I export AEM reports to Excel?

I'd like to export AEM reports, such as page activity or component activity reports, to an Excel file.
Is this feature available in AEM, or do I have to write something custom for this?
The closest you will get is a CSV selector that can convert report data to CSV, but even that has limitations (pagination and filters may be ignored depending on the report).
This, AFAIK, is not an OOTB function. There are old posts and blogs out there showing how this can be done on both the client side (using JS) and the server side using CSV writers.
If you are going down the route of writing a custom solution (the most likely outcome), have a look at the CSV text library used in the acs-commons user CSV import/export utility; it makes the job really easy and is already part of AEM.
Hope this helps.
The WriteExcel class simply takes the List collection that is used to populate the JTable object and writes the data to an Excel spreadsheet.
This class uses the Java Excel API. The Java Excel API dependency that is required to work with this API is already in the POM dependencies section.
import java.io.File;
import java.io.IOException;
import java.util.List;
import java.util.Locale;
import jxl.CellView;
import jxl.Workbook;
import jxl.WorkbookSettings;
import jxl.format.UnderlineStyle;
import jxl.write.Formula;
import jxl.write.Label;
import jxl.write.Number;
import jxl.write.WritableCellFormat;
import jxl.write.WritableFont;
import jxl.write.WritableSheet;
import jxl.write.WritableWorkbook;
import jxl.write.WriteException;
import jxl.write.biff.RowsExceededException;
public class WriteExcel {
private WritableCellFormat timesBoldUnderline;
private WritableCellFormat times;
private String inputFile;
public void setOutputFile(String inputFile) {
this.inputFile = inputFile;
}
public int write( List<members> memberList) throws IOException, WriteException {
File file = new File(inputFile);
WorkbookSettings wbSettings = new WorkbookSettings();
wbSettings.setLocale(new Locale("en", "EN"));
WritableWorkbook workbook = Workbook.createWorkbook(file, wbSettings);
workbook.createSheet("Comumunity Report", 0);
WritableSheet excelSheet = workbook.getSheet(0);
createLabel(excelSheet) ;
int size = createContent(excelSheet, memberList);
workbook.write();
workbook.close();
return size ;
}
private void createLabel(WritableSheet sheet)
throws WriteException {
// Let's create a Times font
WritableFont times10pt = new WritableFont(WritableFont.TIMES, 10);
// Define the cell format
times = new WritableCellFormat(times10pt);
// Lets automatically wrap the cells
times.setWrap(true);
// Create a bold font with underlines
WritableFont times10ptBoldUnderline = new WritableFont(WritableFont.TIMES, 10, WritableFont.BOLD, false,
UnderlineStyle.SINGLE);
timesBoldUnderline = new WritableCellFormat(times10ptBoldUnderline);
// Lets automatically wrap the cells
timesBoldUnderline.setWrap(true);
CellView cv = new CellView();
cv.setFormat(times);
cv.setFormat(timesBoldUnderline);
cv.setAutosize(true);
// Write a few headers
addCaption(sheet, 0, 0, "Number");
addCaption(sheet, 1, 0, "Points");
addCaption(sheet, 2, 0, "Name");
addCaption(sheet, 3, 0, "Screen Name");
}
private int createContent(WritableSheet sheet, List<members> memberList) throws WriteException,
RowsExceededException {
int size = memberList.size() ;
// This is where we will add Data from the JCR
for (int i = 0; i < size; i++) {
members mem = (members)memberList.get(i) ;
String number = mem.getNum();
String points = mem.getScore();
String name = mem.getName();
String display = mem.getDisplay();
// First column
addLabel(sheet, 0, i+2, number);
// Second column
addLabel(sheet, 1, i+2, points);
// Third column
addLabel(sheet, 2, i+2,name);
// Fourth column
addLabel(sheet, 3, i+2, display);
}
return size;
}
private void addCaption(WritableSheet sheet, int column, int row, String s)
throws RowsExceededException, WriteException {
Label label;
label = new Label(column, row, s, timesBoldUnderline);
sheet.addCell(label);
}
private void addNumber(WritableSheet sheet, int column, int row,
Integer integer) throws WriteException, RowsExceededException {
Number number;
number = new Number(column, row, integer, times);
sheet.addCell(number);
}
private void addLabel(WritableSheet sheet, int column, int row, String s)
throws WriteException, RowsExceededException {
Label label;
label = new Label(column, row, s, times);
sheet.addCell(label);
}
public int exportExcel( List<members> memberList)
{
try
{
setOutputFile("JCRMembers.xls");
int recs = write( memberList);
return recs ;
}
catch(Exception e)
{
e.printStackTrace();
}
return -1;
}
}
You can follow the steps described here: Adobe Forums
Select your required page to see the component report at http://localhost:4502/etc/reports/compreport.html
Now hit the URL below; it gives you JSON output: http://localhost:4502/etc/reports/compreport/jcr:content/report.data.json
Copy and paste the generated JSON output at the URL below and click on JSON to Excel: http://www.convertcsv.com/json-to-csv.htm
You need to write your own logic (a minimal sketch of such a servlet is shown after these steps):
Create a servlet.
Construct the table with data.
In the response object, add the lines below:
response.setContentType("text/csv");
response.setCharacterEncoding("UTF-8");
response.setHeader("Content-Disposition", "attachment; filename=\"" + reportName + ".csv\"");
Cookie cookie = new Cookie("fileDownload", "true");
cookie.setMaxAge(-1);
cookie.setPath("/");
response.addCookie(cookie);
Once you click on the button, you will get the report in Excel (CSV) format.
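A minimal sketch of such a servlet (the class name, the report name, and the buildCsv() helper are made up for illustration; servlet registration via OSGi is omitted):
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
public class ReportCsvExportServlet extends SlingSafeMethodsServlet {
    private static final long serialVersionUID = 1L;
    @Override
    protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response)
            throws ServletException, IOException {
        String reportName = "component-activity"; // hypothetical report name
        response.setContentType("text/csv");
        response.setCharacterEncoding("UTF-8");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + reportName + ".csv\"");
        Cookie cookie = new Cookie("fileDownload", "true");
        cookie.setMaxAge(-1);
        cookie.setPath("/");
        response.addCookie(cookie);
        // Write the CSV rows built from your report data (buildCsv() is a placeholder).
        response.getWriter().write(buildCsv());
    }
    private String buildCsv() {
        // Placeholder: assemble the header and rows from the JCR / report data here.
        return "Page,Views\n/content/my-site/en,42\n";
    }
}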

Producing 3D tracks in Google Earth kml files

I already have some code to generate 2D KML files, but I am interested in reproducing an image similar to this, with an associated depth profile for each position:
Is there a good reference (or perhaps a Python library) for doing this? I have not managed to find anything.
Image reference:
Baird, R.W., S.W. Martin, D.L. Webster, and B.L. Southall. 2014. Assessment of Modeled Received Sound Pressure Levels and Movements of Satellite-Tagged Odontocetes Exposed to Mid-Frequency Active Sonar at the Pacific Missile Range Facility: February 2011 Through February 2013. Prepared for U.S. Pacific Fleet, submitted to NAVFAC PAC by HDR Environmental, Operations and Construction, Inc.
You can use another language to generate KML files like the one at the following link:
https://sites.google.com/site/canadadennischen888/home/kml/3d-tracking
Click to download the attached file.
Select "save as" to see the KML content.
Select "open" to see the result in Google Earth.
Hope this helps.
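The key part of such a KML file is a LineString with an absolute altitude mode and extrusion enabled, so Google Earth draws a vertical line from each point down to the ground. A minimal hand-written sketch (the coordinates are made up):
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Track</name>
    <LineString>
      <extrude>1</extrude>
      <altitudeMode>absolute</altitudeMode>
      <!-- longitude,latitude,altitude (metres) -->
      <coordinates>
        -159.78,22.04,120
        -159.77,22.05,150
        -159.76,22.06,90
      </coordinates>
    </LineString>
  </Placemark>
</kml>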
If you are using Java, I have code to generate KML that displays 3D tracking in Google Earth (plus a vertical line from air to ground for each point).
(Assumption: since you already have code for 2D, you may already have Java POJO code generated from kml21.xsd.)
(P.S. I can attach an image if you know any free site where I can upload one.)
Hope this helps:
package com.googleearth.util;
import java.util.List;
import javax.xml.bind.JAXBElement;
import com.a.googleearth.entities.GoogleEarthView;
import com.a.googleearth.model.AltitudeModeEnum;
import com.a.googleearth.model.DocumentType;
import com.a.googleearth.model.FolderType;
import com.a.googleearth.model.KmlType;
import com.a.googleearth.model.LineStringType;
import com.a.googleearth.model.LineStyleType;
import com.a.googleearth.model.ObjectFactory;
import com.a.googleearth.model.PlacemarkType;
import com.a.googleearth.model.StyleType;
public class KmlService {
public static final byte[] blue = new byte[]{(byte)0x64,(byte)0xF0,(byte)0x00,(byte)0xFF};
private static ObjectFactory factory = new ObjectFactory();
static final String DEFAULT_REGISTRATION_FOR_EMPTY = "EMPTY";
public static JAXBElement<KmlType> createKml(List<GoogleEarthView> listGoogleEarthDBView) {
KmlType kml = factory.createKmlType();
DocumentType document = factory.createDocumentType();
kml.setFeature(factory.createDocument(document));
{
LineStyleType redLineStyle = factory.createLineStyleType();
// http://www.zonums.com/gmaps/kml_color/
redLineStyle.setColor(new byte[]{(byte)0xFF,(byte)0xF0,(byte)0x00,(byte)0x14});
redLineStyle.setWidth(5f);
StyleType style = factory.createStyleType();
style.setId("blueLine");
style.setLineStyle(redLineStyle);
document.getStyleSelector().add(factory.createStyle(style));
}
FolderType folder = factory.createFolderType();
folder.setName(listGoogleEarthDBView.get(0).getFolderName());
document.getFeature().add(factory.createFolder(folder));
PlacemarkType currentPlacemark = null;
for (GoogleEarthView view : listGoogleEarthDBView) {
if (currentPlacemark == null || currentPlacemark.getName().equalsIgnoreCase("F0001") == false) {
if (currentPlacemark != null) {
JAXBElement<LineStringType> lineString = (JAXBElement<LineStringType>) currentPlacemark.getGeometry();
lineString.getValue().getCoordinates().add(view.getLongitude() + "," + view.getLatitude() + "," + view.getPressureAltitude()+"\n");
}
currentPlacemark = createF0001Placemark();
folder.getFeature().add(factory.createPlacemark(currentPlacemark));
}
JAXBElement<LineStringType> lineString = (JAXBElement<LineStringType>) currentPlacemark.getGeometry();
lineString.getValue().getCoordinates().add(view.getLongitude() + "," + view.getLatitude() + "," + view.getPressureAltitude()+"\n");
}
JAXBElement<KmlType> kmlElement = factory.createKml(kml);
return kmlElement;
}
private static PlacemarkType createF0001Placemark() {
PlacemarkType placeMark = factory.createPlacemarkType();
placeMark.setName("F0001");
placeMark.setStyleUrl("#blueLine");
LineStringType flyhtStreamLineString = factory.createLineStringType();
flyhtStreamLineString.setAltitudeMode(AltitudeModeEnum.ABSOLUTE);
flyhtStreamLineString.setExtrude(Boolean.TRUE);
placeMark.setGeometry(factory.createLineString(flyhtStreamLineString));
return placeMark;
}
}

Exporting data from Mongo/Cassandra to HDFS using Apache Sqoop

I have a problem where I have to read data from multiple data sources, i.e. RDBMS (MySQL, Oracle) and NoSQL (MongoDB, Cassandra), into HDFS via Hive (incrementally).
Apache Sqoop works perfectly for RDBMS, but it does not work for NoSQL, or at least I was not able to use it successfully (I tried to use the JDBC driver for Mongo... it was able to connect to Mongo but could not push to HDFS).
If anyone has done any work related to this and can share it, that would be really helpful.
I used an example from the web and was able to transfer files from Mongo to HDFS and the other way round. I cannot recall the exact web page right now, but the program looks like the one below.
You can use this as a starting point and move on.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.bson.BSONObject;
import org.bson.types.ObjectId;
import com.mongodb.hadoop.MongoInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import com.mongodb.hadoop.util.MongoConfigUtil;
public class CopyFromMongodbToHDFS {
public static class ImportWeblogsFromMongo extends
Mapper<Object, BSONObject, Text, Text> {
public void map(Object key, BSONObject value, Context context)
throws IOException, InterruptedException {
System.out.println("Key: " + key);
System.out.println("Value: " + value);
String md5 = value.get("md5").toString();
String url = value.get("url").toString();
String date = value.get("date").toString();
String time = value.get("time").toString();
String ip = value.get("ip").toString();
String output = "\t" + url + "\t" + date + "\t" + time + "\t" + ip;
context.write(new Text(md5), new Text(output));
}
}
public static void main(String[] args) throws IOException,
InterruptedException, ClassNotFoundException {
Configuration conf = new Configuration();
MongoConfigUtil.setInputURI(conf,
"mongodb://127.0.0.1:27017/test.mylogs");
System.out.println("Configuration: " + conf);
@SuppressWarnings("deprecation")
Job job = new Job(conf, "Mongo Import");
Path out = new Path("/user/cloudera/test1/logs.txt");
FileOutputFormat.setOutputPath(job, out);
job.setJarByClass(CopyFromMongodbToHDFS.class);
job.setMapperClass(ImportWeblogsFromMongo.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setInputFormatClass(MongoInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
job.setNumReduceTasks(0);
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
In the case of MongoDB, create a mongodump of the collection you want to export to HDFS:
cd <dir_name>
mongodump -h <IP_address> -d <db_name> -c <collection_name>
This creates a dump in .bson format, e.g. "file.bson". By default, file.bson is stored in the "dump" folder inside your specified <dir_name>. To convert it to .json format:
bsondump file.bson > file.json
Copy the file to HDFS using "copyFromLocal", for example:
hadoop fs -copyFromLocal file.json /user/<target_dir>/