Scala/Jasper - How to merge multiple PDFs into a single PDF

I have 60 PDF files that I would like to show (print) in a single PDF. I have already tried googling it, but it was of no use; all the PDFs get appended onto a single page. How do I resolve this? (I'm new to this.)
Here is my code:
// Fill one report and copy its pages into jasperPrint
// (jasperPrint is assumed to be the combined JasperPrint created earlier)
var jasperPrint1 = JasperFillManager.fillReport(jasperReport1, params1, new JREmptyDataSource())
var pages = jasperPrint1.getPages()
for (j <- 0 until pages.size()) {
  var obj = pages.get(j)
  jasperPrint.addPage(obj)
}
// Build the output path <working dir>/public/sample.pdf
var outDir: String = java.lang.System.getProperty("user.dir")
var separator = java.io.File.separator
outDir = outDir + separator + "public" + separator + "sample.pdf"
// Export the combined report to PDF and write it to disk
var baos = new java.io.ByteArrayOutputStream()
JasperExportManager.exportReportToPdfStream(jasperPrint, baos)
var outputStream = new java.io.FileOutputStream(outDir)
baos.writeTo(outputStream)
outputStream.close()

Related

Corrupt records from OpenXML Spreadsheet creation

I'm trying to generate a simple XLSX file using OpenXML, but I'm getting an error when I open the file, and the only info in the repairedRecords part of the log file is this:
Repaired Records: Cell information from /xl/worksheets/sheet1.xml part
The strange thing is that all the cells I'm trying to write do have the value I expect them to have. I'm just trying to write a single header row right now, where headers is just an IEnumerable<string>:
using (var doc = SpreadsheetDocument.Create(filename, SpreadsheetDocumentType.Workbook)) {
    var workbookPart = doc.AddWorkbookPart();
    workbookPart.Workbook = new Workbook();
    var worksheetPart = workbookPart.AddNewPart<WorksheetPart>();
    worksheetPart.Worksheet = new Worksheet();
    var sheets = workbookPart.Workbook.AppendChild(new Sheets());
    var sheet = new Sheet {
        Id = workbookPart.GetIdOfPart(worksheetPart),
        SheetId = 1,
        Name = "Sheet 1"
    };
    sheets.Append(sheet);
    workbookPart.Workbook.Save();
    var sheetData = worksheetPart.Worksheet.AppendChild(new SheetData());
    var row = new Row { RowIndex = 1 };
    var column = 1;
    foreach (var header in headers)
        row.AppendChild(new Cell {
            CellReference = GetColumnLetter(column++) + "1",
            DataType = CellValues.SharedString,
            CellValue = new CellValue(header)
        });
    sheetData.Append(row);
    workbookPart.Workbook.Save();
}
If you're inserting a string value, you should be using CellValues.InlineString
foreach (var header in headers)
    row.AppendChild(new Cell(new InlineString(new Text(header))) {
        CellReference = GetColumnLetter(column++) + "1",
        DataType = CellValues.InlineString
    });
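For context, the original code failed because DataType was set to CellValues.SharedString while CellValue held the literal text; with the shared-string type, CellValue must contain the index of an entry in the workbook's shared string table, which is why Excel flagged the cells as corrupt. If you did want to keep SharedString (useful when the same strings repeat a lot), the sketch below shows the idea; the InsertSharedString helper is not part of the original post and is only illustrative.
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Spreadsheet;

// Illustrative helper (not from the original post): returns the index of `text`
// in the workbook's shared string table, adding it if it is missing.
static int InsertSharedString(WorkbookPart workbookPart, string text)
{
    var stringPart = workbookPart.SharedStringTablePart
                     ?? workbookPart.AddNewPart<SharedStringTablePart>();
    if (stringPart.SharedStringTable == null)
        stringPart.SharedStringTable = new SharedStringTable();

    int index = 0;
    foreach (SharedStringItem item in stringPart.SharedStringTable.Elements<SharedStringItem>())
    {
        if (item.InnerText == text)
            return index;          // reuse an existing entry
        index++;
    }

    stringPart.SharedStringTable.AppendChild(new SharedStringItem(new Text(text)));
    stringPart.SharedStringTable.Save();
    return index;
}

// Usage in the header loop: the cell stores the index, not the text itself.
// foreach (var header in headers)
//     row.AppendChild(new Cell {
//         CellReference = GetColumnLetter(column++) + "1",
//         DataType = CellValues.SharedString,
//         CellValue = new CellValue(InsertSharedString(workbookPart, header).ToString())
//     });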

Programmatically create Infopath form in SharePoint form library with dates

I am programmatically creating InfoPath forms in a form library within SharePoint 2010 from data in a CSV file. It all works fine apart from the date fields: the form refuses to open with a format error. I have tried multiple ways of formatting the date but no luck so far. Code below.
If I format the date as 2016-10-10 it does show in the form library view, but I still cannot open the form; it just shows a datatype error.
// Get the data from the CSV file.
string[,] values = LoadCsv("ImportTest.csv");
// Calculate how many columns and rows are in the dataset
int countCols = values.GetUpperBound(1) + 1;
int countRows = values.GetUpperBound(0) + 1;
string rFormSite = "siteurl";
// opens the site
SPWeb webSite = new SPSite(rFormSite).OpenWeb();
// gets the blank file to copy
SPFile BLANK = webSite.Folders["EventSubmissions"].Files["Blank.xml"];
// reads the blank file into an xml document
MemoryStream inStream = new MemoryStream(BLANK.OpenBinary());
XmlTextReader reader = new XmlTextReader(inStream);
XmlDocument xdBlank = new XmlDocument();
xdBlank.Load(reader);
reader.Close();
inStream.Close();
// Get latest ID from the list
int itemID = GetNextID(webSite, "EventSubmissions");
if (itemID == -1) return;
// Iterate each row of the dataset
for (int row = 1; row < countRows; row++)
{
    // display current event name
    Console.WriteLine("Event name - " + values[row, 4]);
    XmlDocument xd = xdBlank;
    XmlElement root = xd.DocumentElement;
    // Cycle through all columns of the document
    for (int col = 0; col < countCols; col++)
    {
        string field = values[0, col];
        string value = values[row, col];
        switch (field)
        {
            case "startDate":
                value = // How do I format the date here? ;
                break;
            case "endDate":
                value = "";
                break;
            case "AutoFormID":
                value = itemID.ToString();
                break;
        }
        XmlNodeList nodes = xd.GetElementsByTagName("my:" + field);
        foreach (XmlNode node in nodes)
        {
            node.InnerText = value;
        }
    }
    // saves the XML document back as a file
    System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
    SPFile newFile = webSite.Folders["EventSubmissions"].Files.Add(itemID.ToString() + ".xml", (encoding.GetBytes(xd.OuterXml)), true);
    itemID++;
}
Console.WriteLine("Complete");
Console.ReadLine();
Thanks
For me, this worked:
DateTime.Now.ToString("yyyy-MM-dd")
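If the dates come from the CSV rather than the current time, the same idea applies: parse whatever format the CSV uses and re-serialize it as yyyy-MM-dd, which is the xs:date form the InfoPath date fields expect. A rough sketch, where the "dd/MM/yyyy" input format is an assumption you would adjust to match the CSV:
using System;
using System.Globalization;

// Sketch only: convert a CSV date string into the yyyy-MM-dd form the InfoPath
// date fields accept. The "dd/MM/yyyy" input format is an assumption.
static string ToInfoPathDate(string csvValue)
{
    DateTime parsed;
    if (DateTime.TryParseExact(csvValue, "dd/MM/yyyy",
            CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed))
    {
        return parsed.ToString("yyyy-MM-dd");
    }
    return string.Empty; // leave the field blank if the value cannot be parsed
}

// In the switch from the question:
// case "startDate":
//     value = ToInfoPathDate(values[row, col]);
//     break;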

Script is clearing data too early, does anyone know why?

So I'm working on a project in Google Sheets, using Apps Script, that will eventually do the following:
Firstly, based on a name in a cell, find the last 9 entries for that person in the form responses.
It then arranges that data the way I need and writes it to a sheet within my spreadsheet.
The last part of the script is not my own work but a script I found online and have tried to adapt for my needs (not quite there yet). It creates a PDF, saves it in Google Drive, then emails it.
This part requires a bit more work, as I want to specify what the PDF is called using the name and date, and I'd also like to specify where it's saved in Google Drive. Lastly, the script only produces one PDF; eventually I'd like to extend it so I can either create one PDF or create them in batches. I will possibly post about these later if I get stuck.
So anyway, that is the overview.
Currently the script works and can query the data I want, write it to a sheet, save it to Drive as a PDF, and email it to a single hard-coded email address. Awesomeness.
But I then tried to add a function called clearRanges which would clear out the template sheet before writing data. I used named ranges to define the three sections to clear. Since introducing it (and I've tried it in various parts of my script), I'm getting blank PDFs in my Drive and by email.
It's like it's not waiting for the PDF to be created or the email to be sent before clearing the data. I've tried putting it at the start of my script too, but the same thing happens. I've got no idea why.
I was playing around with LockService and waitLock as a possible solution, but it didn't seem to help.
If anyone can help out, I'd appreciate it.
function getAgentName() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  Browser.msgBox("Please go to the sheet called PDF Creator, in cell A2, choose the agent you wish to create a PDF for");
  var sheet = ss.getSheetByName("PDF Creator");
  var range = sheet.getRange("A2");
  var value = range.getValue();
  if (value == 0) {
    Browser.msgBox("You need to go to the sheet named PDF Creator and put an agent name in cell A2");
  } else {
    getAgentData(value);
  }
}
function getAgentData(value) {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sourceSheet = ss.getSheetByName("Form responses 1");
  var sourceRange = sourceSheet.getDataRange();
  var sourceValues = sourceRange.getValues();
  var agentData = [];
  var commentsData = [];
  for (var i = 0; i < sourceValues.length; i++) {
    // Defines the data layout for PDF.
    var agentName = sourceValues[i][2];
    var dateTime = sourceValues[i][3];
    var callType = sourceValues[i][7];
    var opening = sourceValues[i][8];
    var rootCause = sourceValues[i][9];
    var rootFix = sourceValues[i][10];
    var process = sourceValues[i][11];
    var consumer = sourceValues[i][12];
    var control = sourceValues[i][13];
    var wrapup = sourceValues[i][14];
    var dpa = sourceValues[i][15];
    var score = sourceValues[i][22];
    var comments = sourceValues[i][16];
    var agentRow = [dateTime, callType, opening, rootCause, rootFix, process, consumer, control, wrapup, dpa, score];
    var commentsRow = [dateTime, comments];
    if (agentName == value && agentData.length < 9) {
      agentData.push(agentRow);
      commentsData.push(commentsRow);
    }
  }
  agentData.sort(function (a, b) {
    return b[0] - a[0];
  });
  commentsData.sort(function (a, b) {
    return b[0] - a[0];
  });
  var destSheet = ss.getSheetByName("AgentPDF");
  destSheet.getRange("A1").setValue(value + "'s Quality Score card");
  var range = destSheet.getRange(6, 1, agentData.length, agentData[0].length);
  range.setValues(agentData);
  var commentRange = destSheet.getRange(18, 1, commentsData.length, commentsData[0].length);
  commentRange.setValues(commentsData);
  emailSpreadsheetAsPDF();
}
/* Send Spreadsheet in an email as PDF, automatically */
function emailSpreadsheetAsPDF() {
  // Send the PDF of the spreadsheet to this email address
  var email = "firstname.lastname@domain.co.uk";
  // Subject of email message
  // The date time string can be formatted in your timezone using Utilities.formatDate method
  var subject = "PDF Reports - " + (new Date()).toString();
  // Get the currently active spreadsheet URL (link)
  // Or use SpreadsheetApp.openByUrl("<<SPREADSHEET URL>>");
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  // Email Body can be HTML too with your logo image - see ctrlq.org/html-mail
  var body = "PDF generated using code at ctrlq.org from sheet " + ss.getName();
  var url = ss.getUrl();
  url = url.replace(/edit$/, '');
  /* Specify PDF export parameters
     From: https://code.google.com/p/google-apps-script-issues/issues/detail?id=3579
     exportFormat = pdf / csv / xls / xlsx
     gridlines = true / false
     printtitle = true (1) / false (0)
     size = legal / letter / A4
     fzr (repeat frozen rows) = true / false
     portrait = true (1) / false (0)
     fitw (fit to page width) = true (1) / false (0)
     add gid if to export a particular sheet - 0, 1, 2, ...
  */
  var url_ext = 'export?exportFormat=pdf&format=pdf' // export as pdf
      + '&size=a4'                           // paper size
      + '&portrait=1'                        // orientation, false for landscape
      + '&fitw=true&source=labnol'           // fit to width, false for actual size
      + '&sheetnames=false&printtitle=false' // hide optional headers and footers
      + '&pagenumbers=false&gridlines=false' // hide page numbers and gridlines
      + '&fzr=false'                         // do not repeat row headers (frozen rows) on each page
      + '&gid=928916939';                    // the sheet's Id
  var token = ScriptApp.getOAuthToken();
  // var sheets = ss.getSheets();
  // make an empty array to hold your fetched blobs
  var blobs = [];
  // for (var i = 0; i < sheets.length; i++) {
  //   Convert individual worksheets to PDF
  //   var response = UrlFetchApp.fetch(url + url_ext + sheets[i].getSheetId(), {
  var response = UrlFetchApp.fetch(url + url_ext, {
    headers: {
      'Authorization': 'Bearer ' + token
    }
  });
  // convert the response to a blob and store in our array
  blobs[0] = response.getBlob().setName("Tester " + '.pdf');
  // }
  // create new blob that is a zip file containing our blob array
  // var zipBlob = Utilities.zip(blobs).setName(ss.getName() + '.zip');
  var test = DriveApp.createFile(blobs[0]);
  // optional: save the file to the root folder of Google Drive
  DriveApp.createFile(test);
  // Define the scope
  Logger.log("Storage Space used: " + DriveApp.getStorageUsed());
  // If allowed to send emails, send the email with the PDF attachment
  if (MailApp.getRemainingDailyQuota() > 0) {
    var lock = LockService.getScriptLock();
    GmailApp.sendEmail(email, subject, body, {
      attachments: [test]
    });
    lock.waitLock(20000);
    lock.releaseLock();
  }
  clearRanges();
}
function clearRanges() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  ss.getRangeByName('Header').clearContent();
  ss.getRangeByName('Scores').clearContent();
  ss.getRangeByName('Comments').clearContent();
}
Can you try adding SpreadsheetApp.flush() around line 60, just before calling emailSpreadsheetAsPDF()?
commentRange.setValues(commentsData);
SpreadsheetApp.flush();
emailSpreadsheetAsPDF();
I've faced a similar problem before and this worked.

File not downloading properly

I am downloading a file from a URL and saving it to a directory on my phone.
The path is: /private/var/mobile/Applications/17E4F0B0-0781-4259-B39D-37057D44B778/Documents/samplefile.txt
When I debug, the file is created and downloaded. But when I run an ad-hoc build, samplefile.txt is created but it's blank.
Code:
String directory = Environment.GetFolderPath (Environment.SpecialFolder.MyDocuments);
var filename = Path.Combine (directory, "samplefile.txt");
if (!File.Exists (filename)) {
    File.Create (filename);
    var webClient = new WebClient ();
    webClient.DownloadStringCompleted += (s, e) => {
        var text = e.Result; // get the downloaded text
        File.WriteAllText (filename, text);
    };
    var url = new Uri (/**myURL**/);
    webClient.Encoding = Encoding.UTF8;
    webClient.DownloadStringAsync (url);
}
I modified your sample slightly and the following works for me.
The StreamReader is only there to re-read the contents of the file and confirm that what was written matches what was downloaded; if you put a breakpoint there, you can also inspect the contents manually:
string directory = System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal);
var filename = Path.Combine(directory, "samplefile.txt");
if (!File.Exists(filename))
{
    var webClient = new WebClient();
    webClient.DownloadStringCompleted += (s, e) =>
    {
        // Write contents of downloaded file to device:-
        var text = e.Result; // get the downloaded text
        StreamWriter sw = new StreamWriter(filename);
        sw.Write(text);
        sw.Flush();
        sw.Close();
        sw = null;
        // Read in contents from device and validate same as downloaded:-
        StreamReader sr = new StreamReader(filename);
        string strFileContentsOnDevice = sr.ReadToEnd();
        System.Diagnostics.Debug.Assert(strFileContentsOnDevice == text);
    };
    var url = new Uri("**url here**", UriKind.Absolute);
    webClient.Encoding = Encoding.UTF8;
    webClient.DownloadStringAsync(url);
}
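A likely explanation for the blank file, by the way, is that File.Create(filename) in the original snippet returns an open FileStream that is never closed, so the later File.WriteAllText call inside the completion handler hits a sharing violation. That is an assumption based on the code shown, not something confirmed in the thread; a minimal sketch that simply drops the File.Create call (the URL is still the question's placeholder):
using System;
using System.IO;
using System.Net;
using System.Text;

string directory = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
var filename = Path.Combine(directory, "samplefile.txt");

if (!File.Exists(filename))
{
    var webClient = new WebClient();
    webClient.DownloadStringCompleted += (s, e) =>
    {
        // Only write the file if the download actually succeeded;
        // File.WriteAllText creates the file itself, so File.Create is unnecessary.
        if (e.Error == null)
            File.WriteAllText(filename, e.Result);
    };
    webClient.Encoding = Encoding.UTF8;
    webClient.DownloadStringAsync(new Uri("**url here**")); // placeholder URL from the question
}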

How to search Multiple Sites using Lucene Search engine API?

Hope that someone can help me as soon as possible :-)
I would like to know how we can search multiple sites using Lucene (all sites are in one index).
I have succeeded in searching one website and in indexing multiple sites; however, I am not able to search all the websites.
Consider this method that I have:
private void PerformSearch()
{
    DateTime start = DateTime.Now;
    // Create the Searcher object
    string strIndexDir = Server.MapPath("index") + @"\" + mstrURL;
    IndexSearcher objSearcher = new IndexSearcher(strIndexDir);
    // Parse the query, "text" is the default field to search
    Query objQuery = QueryParser.Parse(mstrQuery, "text", new StandardAnalyzer());
    // Create the result DataTable
    mobjDTResults.Columns.Add("title", typeof(string));
    mobjDTResults.Columns.Add("path", typeof(string));
    mobjDTResults.Columns.Add("score", typeof(string));
    mobjDTResults.Columns.Add("sample", typeof(string));
    mobjDTResults.Columns.Add("explain", typeof(string));
    // Perform search and get hit count
    Hits objHits = objSearcher.Search(objQuery);
    mintTotal = objHits.Length();
    // Create Highlighter
    QueryHighlightExtractor highlighter = new QueryHighlightExtractor(objQuery, new StandardAnalyzer(), "<B>", "</B>");
    // Initialize "Start At" variable
    mintStartAt = GetStartAt();
    // How many items we should show?
    int intResultsCt = GetSmallerOf(mintTotal, mintMaxResults + mintStartAt);
    // Loop through results and display
    for (int intCt = mintStartAt; intCt < intResultsCt; intCt++)
    {
        // Get the document from results index
        Document doc = objHits.Doc(intCt);
        // Get the document's ID and set the cache location
        string strID = doc.Get("id");
        string strLocation = "";
        if (mstrURL.Substring(0, 3) == "www")
            strLocation = Server.MapPath("cache") + @"\" + mstrURL + @"\" + strID + ".htm";
        else
            strLocation = doc.Get("path") + doc.Get("filename");
        // Load the HTML page from cache
        string strPlainText;
        using (StreamReader sr = new StreamReader(strLocation, System.Text.Encoding.Default))
        {
            strPlainText = ParseHTML(sr.ReadToEnd());
        }
        // Add result to results datagrid
        DataRow row = mobjDTResults.NewRow();
        if (mstrURL.Substring(0, 3) == "www")
            row["title"] = doc.Get("title");
        else
            row["title"] = doc.Get("filename");
        row["path"] = doc.Get("path");
        row["score"] = String.Format("{0:f}", (objHits.Score(intCt) * 100)) + "%";
        row["sample"] = highlighter.GetBestFragments(strPlainText, 200, 2, "...");
        Explanation objExplain = objSearcher.Explain(objQuery, intCt);
        row["explain"] = objExplain.ToHtml();
        mobjDTResults.Rows.Add(row);
    }
    objSearcher.Close();
    // Finalize results information
    mTsDuration = DateTime.Now - start;
    mintFromItem = mintStartAt + 1;
    mintToItem = GetSmallerOf(mintStartAt + mintMaxResults, mintTotal);
}
As you can see, I use the site URL mstrURL when I create the searcher:
string strIndexDir = Server.MapPath("index") + @"\" + mstrURL;
How can I do the same when I want to search multiple sites? (The method above is the code I am actually using.)
Combine each site's IndexSearcher within a MultiSearcher.
See this question for more details:
Multiple Indexes search in Lucene.Net
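A rough sketch of what that could look like with the older Lucene.Net API already used in PerformSearch(); the site folder names are placeholders, and mstrQuery and Server.MapPath are taken from the question:
using Lucene.Net.Analysis.Standard;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;

// Open one IndexSearcher per site index and wrap them all in a MultiSearcher,
// which runs the query against every index as if it were a single index.
// The folder names below are placeholders for your real site indexes.
string[] siteIndexes = { "www.site-one.com", "www.site-two.com" };
Searchable[] searchers = new Searchable[siteIndexes.Length];
for (int i = 0; i < siteIndexes.Length; i++)
{
    string strIndexDir = Server.MapPath("index") + @"\" + siteIndexes[i];
    searchers[i] = new IndexSearcher(strIndexDir);
}

MultiSearcher multiSearcher = new MultiSearcher(searchers);
Query objQuery = QueryParser.Parse(mstrQuery, "text", new StandardAnalyzer());
Hits objHits = multiSearcher.Search(objQuery);  // hits now span all site indexes
// ...iterate over objHits exactly as in PerformSearch(), then:
multiSearcher.Close();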