Converting Email to PDF - ms-word

I have expended a good deal of effort trying to convert emails to PDF.
I am using Delphi 10.4 although that is not necessarily relevant to the question.
I came up with a solution that involves extracting the body from the email in whatever format it is in (HTML, RTF or TXT). I use Indy for this, or Outlook if the email is in MSG format.
I then save the body to a file and open it in MS Word via automation. From there it should be a simple matter of saving the Word document in PDF format.
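Stripped to its core, that automation step is the equivalent of the following C# interop sketch (illustrative only; the paths are made up and my actual code is Delphi, where the OLE calls are analogous):

using Word = Microsoft.Office.Interop.Word;

var word = new Word.Application();
// Open the saved body file and re-save it in PDF format
var doc = word.Documents.Open(@"C:\Temp\body.html");
doc.SaveAs2(@"C:\Temp\body.pdf", Word.WdSaveFormat.wdFormatPDF);
doc.Close(Word.WdSaveOptions.wdDoNotSaveChanges);
word.Quit();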
However, MS Word doesn't seem to read HTML files that well.
From the numerous samples of emails that I have tried, I have come across several issues which were complex to solve.
Examples:
HTML tables expanding beyond the document's page width. I solved this by working out what the page width is, setting the offending table's width to fixed, setting it to the page width, and finally resizing its columns proportionately to the new width.
That worked well until I tried to process an email containing HTML tables with differing numbers of columns/cells per row, which caused a crash. I solved that by handling the exception and iterating through each table by row, working with its cells rather than with columns.
Images within table cells often overlap the cell and the page width. Solved by iterating through all InlineShapes, checking whether they are within a table and, if so, setting their width to the cell width. (A sketch of this kind of fix follows these examples.)
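For illustration, the table-width fix looks roughly like this (a C# interop sketch rather than my actual Delphi code; the property names are the real Word object model ones):

using Word = Microsoft.Office.Interop.Word;

static void FitTablesToPage(Word.Document doc)
{
    // Usable width = page width minus left/right margins (in points)
    float usableWidth = doc.PageSetup.PageWidth
                      - doc.PageSetup.LeftMargin
                      - doc.PageSetup.RightMargin;
    foreach (Word.Table table in doc.Tables)
    {
        table.AllowAutoFit = false;  // fix the width instead of autofitting
        table.PreferredWidthType = Word.WdPreferredWidthType.wdPreferredWidthPoints;
        table.PreferredWidth = usableWidth;
        // Columns can then be resized proportionately to the new width
    }
}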
There have been other issues, but I now have something that seems to work pretty well on a fairly disparate bunch of emails.
But I think it incredibly likely that new issues will crop up from time to time, and since this procedure is designed to deal unsupervised with batches of emails, this is a concern.
So my question is: does anyone know of a better way of dealing with this? For example, is there some simple way of getting Word to "nicely" format the HTML on loading, so that it displays, and saves to PDF, in a readable fashion similar to how the same email looks when opened in Outlook?

Have you tried using the WordEditor property of the Outlook Inspector object? This returns the Microsoft Word Document Object Model of the message and you can export directly to PDF from that.
Here is a basic example...
Private Sub Demo()
    ' Export the currently selected message to PDF via the Word
    ' Document Object Model exposed by Inspector.WordEditor
    Dim MailItem As MailItem
    Dim FileName As String
    FileName = "C:\Users\Sam\Desktop\Email.pdf"
    Set MailItem = ActiveExplorer.Selection.Item(1)
    With MailItem.GetInspector
        .WordEditor.ExportAsFixedFormat FileName, 17 ' 17 = wdExportFormatPDF
        .Close 0 ' 0 = olDiscard
    End With
    MsgBox "Export complete"
End Sub


iTextSharp extracts wrapped cell contents into new lines - how do you identify to which column a given wrapped piece of data belongs now?

I am using iTextSharp to extract data from PDFs.
I stumbled across the following problem, depicted by the scenario below:
I created a sample Excel file to illustrate. Here is what it looks like:
I convert it to a PDF using one of the many free online converters available out there, which generates a PDF looking like this (when I generated the PDF I did not apply the styling to the Excel file):
Now, using iTextSharp to extract the data from the PDF returns the following string as the extracted data:
As you can see, wrapped cell data generates new lines, where each wrapped piece of data is separated by a single white space.
The problem: how does one now identify to which column a given piece of wrapped data belongs? If only iTextSharp preserved as many white spaces as there are columns...
In my example, how can I identify to which column 111 belongs?
Update 1:
A similar problem occurs whenever a field has more than one word (i.e., contains white spaces). For example, considering the 1st line of the sample above:
say it looked like
---A--- ---B--- ---C--- ---D---
aaaaaaa bb b cccc
iText again would generate the extraction for this one as:
aaaaaaa bb b cccc
Same problem here, in having to determine the borders of each column.
Update 2:
A sample of the real PDF file I am working with:
This is what the PDF data looks like.
In addition to Chris' generic answer, some background in iText(Sharp) content parsing...
iText(Sharp) provides a framework for content extraction in the namespace iTextSharp.text.pdf.parser / package com.itextpdf.text.pdf.parser. This framework reads the page content, keeps track of the current graphics state, and forwards information on pieces of content to the IExtRenderListener or IRenderListener / ExtRenderListener or RenderListener the user (i.e. you) provides. In particular, it does not interpret structure into this information.
This render listener may be a text extraction strategy (ITextExtractionStrategy / TextExtractionStrategy), i.e. a special render listener which is predominantly designed to extract a pure text stream without formatting or layout information. And for this special case iText(Sharp) additionally provides two sample implementations, the SimpleTextExtractionStrategy and the LocationTextExtractionStrategy.
For your task you need a more sophisticated render listener which either
exports the text with coordinates (Chris in one of his answers has provided an extended LocationTextExtractionStrategy which can additionally provide positions and bounding boxes of text chunks), allowing you in additional code to analyse tabular structures (a minimal sketch follows this list); or
does the analysis of tabular data itself.
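As a minimal sketch of the first variant (my own illustration against iTextSharp 5.x, not Chris's actual code; grouping lines by rounded y coordinate is a deliberately naive heuristic):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using iTextSharp.text.pdf.parser;

// Records every text chunk together with its baseline start point so that
// tabular structure can be analysed afterwards.
public class PositionalTextStrategy : ITextExtractionStrategy
{
    private readonly List<Tuple<string, float, float>> chunks
        = new List<Tuple<string, float, float>>();

    public void BeginTextBlock() { }
    public void EndTextBlock() { }
    public void RenderImage(ImageRenderInfo renderInfo) { }

    public void RenderText(TextRenderInfo renderInfo)
    {
        Vector start = renderInfo.GetBaseline().GetStartPoint();
        chunks.Add(Tuple.Create(renderInfo.GetText(),
                                start[Vector.I1],    // x
                                start[Vector.I2]));  // y
    }

    public string GetResultantText()
    {
        // Group chunks into lines by (rounded) y, then order each line by x.
        var sb = new StringBuilder();
        foreach (var line in chunks.GroupBy(c => Math.Round(c.Item3))
                                   .OrderByDescending(g => g.Key))
            sb.AppendLine(string.Join("\t",
                line.OrderBy(c => c.Item2).Select(c => c.Item1)));
        return sb.ToString();
    }
}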
I do not have an example for the latter variant because generically recognizing and parsing tables is a whole project in itself. You might want to look into the Tabula project for inspiration; this project is surprisingly good at the task of table extraction.
PS: If you feel more at home trying to extract structured content from a pure string representation of the content which nonetheless tries to reflect the original layout, you might try something like what is proposed in this answer, a variant of the LocationTextExtractionStrategy working similarly to the pdftotext -layout tool; only the changes to be applied to the LocationTextExtractionStrategy are shown there.
PPS: Extraction of data from very specific PDF tables may be much easier; for example, have a look at this answer, which demonstrates that, after some PDF analysis, the specific way a given table is created might give rise to a simple custom render listener for extracting the table data. This can make sense for a single PDF with a table spanning many, many pages, as in the case of that answer, or if you have many PDFs identically created by the same software.
This is why I asked for a representative sample file in a comment to your question.
Concerning your comments
Still with the PDF example above, both with an implementation of ITextExtractionStrategy from scratch and with an extended LocationTextExtractionStrategy, I see that RenderText is called with the following chunks: Fi, el, d, A, Fi, el, d... and so on. Can this be changed?
The chunks of text you get as separate RenderText calls are not separated by accident or some random decision of iText. They are the very strings drawn separately in the page content!
In your sample "Fi", "el", "d", and "A" come in different RenderText calls because the content stream contains operations in which first "Fi" is drawn, then "el", then "d", then "A".
This may sound weird at first. A common cause for such torn up words is that PDF does not use the kerning information from fonts; to apply kerning, therefore, the PDF generating software has to insert tiny forward or backward jumps between characters which should be farther from or nearer to each other than without kerning. Thus, words often are torn apart between kerning pairs.
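For illustration, such kerned output typically appears in the content stream as a single TJ operation whose array interleaves strings with adjustment numbers (the values below are made up):

[(Fi) 12 (el) 9 (d) -5 (A)] TJ

Each string in that array is shown separately, which is why each one arrives in its own RenderText call.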
So this cannot be changed, you will get those pieces, and it is the job of the text extraction strategy to put them together.
By the way, there are worse PDFs; some PDF generators position each and every glyph separately, foremost generators which primarily build GUIs but can, as a feature, automatically export GUI canvases as PDFs.
I would expect that in entering the realm of "adding my own implementation" I would have control over how to determine what is a "chunk" of text.
You can... well, you have to decide which of the incoming pieces belong together and which don't. E.g. do glyphs with the same y coordinate form a single line? Or do they form separate lines in different columns which just happen to be located next to each other?
So yes, you decide which glyphs you interpret as a single word or as content of a single table cell, but your input consists of the groups of glyphs used in the actual PDF content stream.
Not only that, in none of the interface's methods can I "spot" how/where it deals with non-text data/images - so I could intercede with the spacing issue (RenderImage is not called).
RenderImage will be called for embedded bitmap images, JPEGs etc. If you want to be informed about vector graphics, your strategy will also have to implement IExtRenderListener which provides methods ModifyPath, RenderPath and ClipPath.
This isn't really an answer, but I needed a spot to show some things that might help you understand the problem.
First, "conversion" from Excel, Word, PowerPoint, HTML or whatever to PDF is almost always going to be a destructive change. The destructive part is very important, and it happens because you are taking data from a program that has very specific knowledge of what that data represents (Excel) and turning it into drawing commands in a very generic universal format (PDF) that only cares about what the data looks like, not about the data itself. Unless the data is "tagged" (and it almost never is, even these days), there is no context for the drawing commands. There are no paragraphs, there are no sentences, there are no columns, rows, tables, etc. There's literally just "draw this letter at x,y" and "draw this word at a,b".
Second, imagine your Excel file had the following data, and for some reason the last column was narrower than the others when the PDF was made:
Column A | Column B | Column
C
Data #1 Data #2 Data
#3
You and I have context so we know that the second and fourth lines are really just the continuation of the first and third lines. But since iText doesn't have any context during extraction it doesn't think like that and it sees four lines of text. In fact, since it doesn't have context it doesn't even see columns, just the lines themselves.
Third, although it is a very small thing, you need to understand that you don't draw spaces in PDF. Imagine the three-column table below:
Column A | Column B | Column C
Yes
If you extracted that from a PDF you'd get this data:
Column A | Column B | Column C
Yes
Inside the PDF the word "Yes" will be just drawn at a certain x coordinate that you and I consider to be under the third column and it won't have a bunch of spaces in front of it.
As I said at the beginning, this isn't much of an answer, but hopefully it will explain the problem that you are trying to solve. If your PDF is tagged, then it will have context and you can use that context during extraction. Context isn't universal, however, so there usually isn't just a magic "insert context" checkbox. Excel actually does have a checkbox (if I remember correctly) to make a tagged PDF during export, and it ultimately creates a tagged PDF using HTML-like tags for tables. Very primitive, but it works. However, it will be up to you to parse that context.
Leaving here an alternative strategy for extracting the data. It does not solve the problem of how spaces are treated (or can be treated), but it gives you somewhat more control over the extraction by specifying geometric areas you want to extract text from. Taken from here.
public static System.util.RectangleJ GetRectangle(float distanceInPixelsFromLeft,
    float distanceInPixelsFromBottom, float width, float height)
{
    return new System.util.RectangleJ(
        distanceInPixelsFromLeft,
        distanceInPixelsFromBottom,
        width,
        height);
}

public static void Strategy2()
{
    // In this example, I'll declare a pageNumber integer variable to
    // only capture text from the page I'm interested in
    int pageNumber = 1;
    var text = new StringBuilder();
    List<Tuple<string, int>> result = new List<Tuple<string, int>>();

    // The PdfReader object implements IDisposable.Dispose, so you can
    // wrap it in the using keyword to automatically dispose of it
    using (var pdfReader = new PdfReader("D:/Example.pdf"))
    {
        float distanceInPixelsFromLeft = 20;
        //float distanceInPixelsFromBottom = 730;
        float width = 300;
        float height = 10;

        // Scan the page top-down in 10-point-high horizontal strips
        for (int i = 800; i >= 0; i -= 10)
        {
            var rect = GetRectangle(distanceInPixelsFromLeft, i, width, height);
            var filters = new RenderFilter[1];
            filters[0] = new RegionTextRenderFilter(rect);
            ITextExtractionStrategy strategy =
                new FilteredTextRenderListener(
                    new LocationTextExtractionStrategy(),
                    filters);
            var currentText = PdfTextExtractor.GetTextFromPage(
                pdfReader,
                pageNumber,
                strategy);
            currentText =
                Encoding.UTF8.GetString(Encoding.Convert(
                    Encoding.Default,
                    Encoding.UTF8,
                    Encoding.Default.GetBytes(currentText)));
            //text.Append(currentText);
            result.Add(new Tuple<string, int>(currentText, currentText.Length));
        }
    }

    // You'll do something else with it; here I write it to a console window
    //Console.WriteLine(text.ToString());
    foreach (var line in result.Distinct().Where(r => !string.IsNullOrWhiteSpace(r.Item1)))
    {
        Console.WriteLine("Text: [{0}], Length: {1}", line.Item1, line.Item2);
    }
    //Console.WriteLine("", string.Join("\r\n", result.Distinct().Where(r => !string.IsNullOrWhiteSpace(r.Item1))));
}
Outputs:
PS: We are still left with the problem of how to deal with spaces/non-text data.

How to create a block letters form input in LibreOffice Writer

I would like to create a document including an input form.
The printed version of the form should have little boxes for block-letter input ("monospace font") like this:
The form will be printed and filled out manually using pens (but it would be good if the form could also be easily filled out digitally via a PDF form).
Is there any convenient way, apart from creating separate input boxes, tables or other quick fixes, that does not make it inconvenient to fill out the form digitally?
One way could be to use a background image with the required block pattern.
If you only want it printable - create a document and set the image as background.
If you want a computer fillable form for a SEPA banking transaction form - do a search, as there are free PDF forms available.

Merging documents using OpenXml and section breaks causes empty paragraphs

I am stitching a couple of documents together with a requirement that each document should retain its header and footer information in the final document. Using AltChunk instead of raw OpenXml or DocumentBuilder saves a lot of effort with regard to styles, formatting, references, parts, etc.
Unfortunately, after a couple of days I can't seem to get a 100% working version due to a small and frustrating issue and I need some insight.
My code is loosely based on this article.
I modify each sub-document prior to appending it (as an AltChunk) to a working document, by moving the last section properties into the last paragraph in order to retain header and footer references (a rough sketch of that move follows the list below), but Word seems to add a blank paragraph to each of these documents as it renders them in the final document. I end up with:
document 1 with correct header and footer
section properties/break
blank paragraph
document 2 with correct header and footer
section properties/break
blank paragraph
etc.
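Roughly, the move described above looks like this with the Open XML SDK (an illustrative sketch under my assumptions, not my production code):

using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

static void MoveSectionPropertiesIntoLastParagraph(WordprocessingDocument doc)
{
    var body = doc.MainDocumentPart.Document.Body;

    // The body-level sectPr is the last direct child of w:body
    var sectPr = body.Elements<SectionProperties>().LastOrDefault();
    if (sectPr == null) return;
    sectPr.Remove();

    // Re-home it inside the pPr of the last paragraph so the header
    // and footer references survive the AltChunk merge
    var lastPara = body.Elements<Paragraph>().LastOrDefault()
                   ?? body.AppendChild(new Paragraph());
    var pPr = lastPara.GetFirstChild<ParagraphProperties>()
              ?? lastPara.InsertAt(new ParagraphProperties(), 0);
    pPr.AppendChild(sectPr);
}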
I can't remove the blank paragraphs afterwards, as I ideally don't want to use WAS (Word Automation Services) to render the document first.
It seems as if you cannot have a next-page section break without a following paragraph?
After further investigation, it seems there is no way around this for my usage scenario. I would need to place the last section properties in the body element, but due to my way of processing with nested AltChunks, that would not work.
I have changed my approach completely and went back to a more detailed append procedure using OpenXml Power Tools and some LINQ to XML.
I'm using DocumentBuilder and it works perfectly for me!
// tempReportPart1 and tempReportPart2 hold the file paths of the source documents
var sources = new List<OpenXmlPowerTools.Source>();
sources.Add(new OpenXmlPowerTools.Source(new WmlDocument(tempReportPart1)));
sources.Add(new OpenXmlPowerTools.Source(new WmlDocument(tempReportPart2)));
var outputPath = @"C:\Users\xpto\Documents\TestFolder\myNewDocument.docx";
DocumentBuilder.BuildDocument(sources, outputPath);
I had a similar empty paragraph issue while importing HTML files.
My solution is:
After inserting the HTML AltChunk, I add a GUID placeholder. After processing the file, I open it again, locate the GUID and check whether there is an empty paragraph before it; if so, I remove the empty paragraph and the GUID. It seems to work perfectly in my solution.
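A rough sketch of that clean-up pass with the Open XML SDK (the method shape and marker handling are illustrative, not my exact code):

using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

static void RemoveMarkerAndPrecedingEmptyParagraph(string path, string marker)
{
    using (var doc = WordprocessingDocument.Open(path, true))
    {
        var body = doc.MainDocumentPart.Document.Body;

        // Locate the paragraph holding the GUID placeholder text
        var markerPara = body.Descendants<Paragraph>()
                             .FirstOrDefault(p => p.InnerText == marker);
        if (markerPara == null) return;

        // Remove the empty paragraph rendered before it, if there is one
        if (markerPara.PreviousSibling() is Paragraph prev
            && string.IsNullOrEmpty(prev.InnerText))
            prev.Remove();

        markerPara.Remove();
    }
}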
Hope it helps.

How to change the SSRS input parameters position in report

My SSRS report contains 7 input parameters, and while running the report the parameter area (i.e. its length) keeps growing.
One of my input parameters (a drop-down list) may contain 100 characters, so the size is not constant, but I want to place all the parameters in two or three rows.
At the moment it displays two parameters per row.
Please advise.
As gbn indicates, it's not easy to change the built in report server method of presenting the parameters. SSRS likes to always use two parameters per line, presented in the order that they exist in the report (which must match the dependency order.)
So the alternatives that gbn mentions: Both involve building a "Wrapper" application: some custom code or a web page that you can code however you like to get the parameters. Then you call Reporting Services, either in code or by passing a formatted URL with your parameters. The report can be displayed in a frame, new window, or passed as a stream to where ever you'd like.
The URL access is pretty straightforward and reliable: I often use it either by hand (to create "favorites") or in code.
http://msdn.microsoft.com/en-us/library/ms153586.aspx
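For illustration, a parameterised URL-access request looks something like this (server, report path and parameter names are made up):

http://myserver/reportserver?/SalesReports/EmployeeSummary&rs:Command=Render&RegionID=12&Year=2013

Each report parameter is just appended as a name=value pair, so a wrapper page only needs to build that query string and open it in a frame or a new window.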
For what you are looking for, these might be more work than you expected, but they will be extremely flexible for your interface.
Jamie
You can certainly do that. Just right-click the RDL file in Solution Explorer and select View Code, then move the XML tags named <ReportParameter Name="Nameofparameter"> under <ReportParameters> to wherever you want them positioned, and save it. That's it!
The report parameters flow in pairs, so if you have 4 report parameters they will be shown as 1, 2 on one line and 3, 4 on the next. Best of luck!!
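For illustration, the reordering happens inside the <ReportParameters> element of the RDL; the parameter names below are made up, and a parameter must still appear after any parameter it depends on:

<ReportParameters>
  <ReportParameter Name="StartDate"> ... </ReportParameter>
  <ReportParameter Name="EndDate"> ... </ReportParameter>
  <ReportParameter Name="Region"> ... </ReportParameter>
</ReportParameters>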
Use ASP.NET for the parameters and a ReportViewer control or URL access to render. Seriously.
I don't know of any option to present parameters in any way other than the default.
I believe you could try using jQuery. The report parameters are rendered in a table under a div tag with class sqlrv-ParameterContainer. Write a jQuery or JavaScript function that extracts the full innerHTML from this div, i.e. the table content, and then extracts the table row information such as the <label> or <input> tags.
Create your desired table structure with <table><tr><td>{extracted sections}</td><td></td></tr></table> or leave it to your requirement...
Then just append this new HTML structure in place of the original default structure.
In jQuery it will be like
$(".sqlrv-ParameterContainer").html();
which will give you the entire table structure that comes inside the parameter container. Use XML parsing to get at the input controls. Extract these controls as-is; don't change anything.
$(".sqlrv-ParameterContainer table").remove(); // it will remove the SSRS rendered default table from DOM
$(".sqlrv-ParameterContainer table").appendChild('<table><tr>......</tr></table>'); // Append your custom html structure here....
This was something that came to my mind quickly... I would suggest you test it... :)
This doesn't help the OP with SSRS 2008, but in case it helps others: Microsoft has improved this in SSRS 2016 - parameters can now be easily managed via the GUI in Report Builder / Visual Studio:
https://www.intertech.com/ssrs-parameters-2016-update/

Making a PDF output in raster format instead of vector using itextsharp

I have written C# code to save product specifications to a PDF document using iTextSharp, mainly with PdfPTable and Chunks/Paragraphs in the PdfPCells. However, I have been told that the output is unacceptable because the text can be highlighted and copied from the document, and the document storage and retrieval server software they are currently using does not support "vector" PDFs. I'm not exactly certain what the difference is between a raster PDF and a vector PDF. Basically, every page of the PDF document should be an image so that the text cannot be highlighted. Is there any way to do this without using the DirectContent? Below is an image illustrating a portion of the PDF that was created, and how the text can be selected and copied, which is the incorrect functionality.
I would like to avoid directly writing to the canvas, unless there is a way to do this and still have iTextSharp handle my formatting and proper paging.
The Windows application PDF2R works well but doesn't seem to offer any programmatic solutions. I have found libraries that claim to do this sort of conversion, but they cost several thousand dollars. I'd like to stay within my budget and use iTextSharp or something much cheaper.
I would suggest you try to generate an image using the System.Drawing class and then insert that into the PDF document.
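For example, a minimal sketch of that idea (sizes, font and paths are illustrative): the page content is rendered into a System.Drawing bitmap and only the bitmap is embedded, so the PDF contains no selectable text.

using System.Drawing;
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

public static void WriteRasterPage(string outputPath)
{
    using (var bmp = new Bitmap(2480, 3508))  // A4 at roughly 300 DPI
    using (var g = Graphics.FromImage(bmp))
    using (var ms = new MemoryStream())
    {
        g.Clear(Color.White);
        g.DrawString("Product specification ...",
                     new System.Drawing.Font("Arial", 32), Brushes.Black, 100, 100);
        bmp.Save(ms, System.Drawing.Imaging.ImageFormat.Png);

        var doc = new Document(PageSize.A4);
        PdfWriter.GetInstance(doc, new FileStream(outputPath, FileMode.Create));
        doc.Open();
        var img = iTextSharp.text.Image.GetInstance(ms.ToArray());
        img.ScaleToFit(PageSize.A4.Width, PageSize.A4.Height);
        doc.Add(img);  // the page now contains a single bitmap, no text objects
        doc.Close();
    }
}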
Call this code on your PdfWriter object:
writer.SetEncryption(PdfWriter.STRENGTH40BITS, null, null, PdfWriter.AllowPrinting);
This won't prevent users from selecting text, but it will prevent them from copying and pasting it. Give it a try.