I'm generating OOXML documents programmatically with Apache POI, and one of my goals is to generate the table of contents of the document (as when you click References > Table of Contents in Microsoft Office 2007).
I've been reading about this topic and looking at the internal files of a docx document (document.xml). My conclusion scares me.
In order to calculate which page a specific piece of text will land on (text contained in a tag inside a tag inside a tag), it seems I need to know the font used for every text/table before it (and probably properties like bold, italic, ...), the font size, the top and bottom margins of each page, and any other "size-related" information, and then calculate the number of twips the document will occupy between the very first paragraph (or at least the last page break) and the text whose page I'm trying to determine...
As you can see, the process I'm figuring out is a bit awkward (or a lot), so I hope there is a much easier way to generate the table of contents, or at least to calculate which page a specific text will fall on. Does anyone have ideas about how to calculate this?
Call me lazy, but I don't want to calculate all this stuff.
PS: Excuse my poor English; I hope you understand my question.
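For future readers: the usual way around all this arithmetic is not to compute page numbers at all, but to emit a TOC field and let Word do the pagination when the document is opened (POI's XWPFDocument.enforceUpdateFields() sets the flag that makes Word offer to refresh fields on open). A sketch of the underlying OOXML, assuming the common default field switches:

```xml
<!-- document.xml: a TOC field for heading levels 1-3; Word computes the page numbers -->
<w:p>
  <w:fldSimple w:instr=" TOC \o &quot;1-3&quot; \h \z \u ">
    <w:r><w:t>Update this field to build the table of contents</w:t></w:r>
  </w:fldSimple>
</w:p>
<!-- settings.xml: ask Word to update fields when the document is opened -->
<w:updateFields w:val="true"/>
```

The placeholder run inside w:fldSimple is only what is shown before the field is first updated; Word replaces it with the real table of contents.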
Related
We regularly need to perform a handful of relatively simple tests against a bunch of MS Word documents. As these checks are currently done manually, I am striving for a way to automate this. For example:
Check if every page actually has a page number and verify that it is correct.
Verify that a version identifier in the page header is identical across all pages.
Check if the document has a table of contents.
Check if the document has a table of figures.
Check if every figure has a caption.
et cetera. Is this reasonably feasible using PowerShell in conjunction with a Word API?
PowerShell can access Word via its object model/Interop (on Windows, at any rate) and, as I understand it, can also work with the Office Open XML (OOXML) API, so you should be able to write any checks you want on the document content. What is slightly less obvious is how you verify that the document content will result in a particular "printed appearance". I'm going to start with some comments on the details.
Bear in mind that in the following notes I'm just pointing out a few things you might have to deal with. If you're examining documents produced by an organisation where people are already, broadly speaking, following the same standards, it may be easier.
Of the five examples you give, I couldn't say exactly how you would do each one without checking the details, and there could be difficulties with all of them. For example:
Check if every page actually has a page number and verify that it is correct.
Difficult using either OOXML or the object model, because what you would really be checking is that the header for a particular section has a visible { PAGE } field code. Because that field code might be nested inside other field codes that say, in effect, "under this condition, don't display this field code", it's not so easy to be sure that a page number would actually appear.
Which is what I mean by checking the document's "printed appearance" - if, for example, you can use the object model to print to PDF and have some mechanism that lets PS inspect the PDF's content, that might be a better approach.
Verify that a version identifier in the page header is identical across all pages.
Similar problem to the above, IMO. It depends partly on how the version identifier might be inserted. Is it just a piece of text? Could it be constructed from a number of fields? Might it reference Document Properties or Variables, or Custom XML content?
Check if the document has a table of contents.
Perhaps enough to look for a TOC field that does not have certain options, such as a \c option that a Table of Figures would contain.
Check if the document has a table of figures.
Perhaps enough to check for a TOC field that does have a \c option, perhaps with a specific parameter such as "Figure".
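Assuming you have already pulled the field instruction strings (the text inside w:instrText or w:fldSimple/@w:instr) out of the document XML, the two TOC checks above can be sketched as simple pattern tests. The class and method names here are mine, not part of any API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: classify a Word field instruction string as a table of contents
// or a table of figures, based on the presence and argument of the \c switch.
public class FieldCheck {
    private static final Pattern TOC = Pattern.compile("^\\s*TOC\\b");
    private static final Pattern C_SWITCH = Pattern.compile("\\\\c\\s+\"?(\\w+)\"?");

    // A TOC field with no \c switch builds a table of contents.
    static boolean isTableOfContents(String instr) {
        return TOC.matcher(instr).find() && !instr.contains("\\c");
    }

    // A TOC field with \c "Figure" builds a table of figures.
    static boolean isTableOfFigures(String instr) {
        Matcher m = C_SWITCH.matcher(instr);
        return TOC.matcher(instr).find() && m.find()
                && m.group(1).equalsIgnoreCase("Figure");
    }
}
```

This only inspects the instruction text; as noted above, it cannot tell you whether the field is actually visible in the printed output.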
Check if every figure has a caption.
I'm not sure you can tell whether a particular image is "a Figure". But if you mean "verify that every graphic object has a caption", you could probably iterate through the inline and floating graphics in the document and verify that there is something that looks like a standard Word caption paragraph within a certain distance of each object. As far as I know, Word has two standard field code patterns for captions (one where the chapter number is included and one where it isn't), so you could look for those. You could measure the distance between the image and the caption by ensuring they are no more than a predefined number of paragraphs apart, or, in the case of a floating image, that the paragraph anchoring the image is no more than so many paragraphs away from the caption.
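The caption test can be sketched in the same style, again assuming the field instruction strings have been extracted already. Word caption paragraphs contain a SEQ field such as `SEQ Figure \* ARABIC` (optionally preceded by a STYLEREF field when chapter numbers are included); the helper name below is mine:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: does a field instruction look like the SEQ field that Word
// puts inside a caption paragraph, e.g.  SEQ Figure \* ARABIC
public class CaptionCheck {
    private static final Pattern SEQ_FIELD = Pattern.compile("\\bSEQ\\s+(\\w+)");

    static boolean looksLikeCaption(String instr, String label) {
        Matcher m = SEQ_FIELD.matcher(instr);
        return m.find() && m.group(1).equalsIgnoreCase(label);
    }
}
```

You would run this over the paragraphs within your chosen distance of each graphic, using "Figure" (or whatever caption label the documents use) as the expected SEQ identifier.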
A couple of more general problems that you might have to deal with:
- just because a document contains a certain feature, such as a ToC field, does not mean that it is visible. A TOC field might have been formatted as not visible. Even harder to detect, it could have been formatted as colored white.
- change tracking. You might have to use the Word object model to "accept changes" before checking whether any given feature is actually there or not. Unless you can find existing code that would help you do that using the OOXML representation of the document, that's probably a strong case for doing checks via the object model.
Some final observations
for future checks, it's perhaps worth noting that in principle you could create a "Document Inspector" that users could invoke from Word's Backstage view to perform checks on a document. I'm not sure you can force users to run it, or that you could create it in PowerShell, but it could be a useful tool.
longer term, if you are doing a very large number of checks, it's perhaps worth considering whether you could train an ML model to detect problems.
I am trying to create a table in a PDF document with iText 7. But if the content before the table is too big, the table gets split between the current page and the next. I want to insert a -
document.Add(new AreaBreak())
if there is not enough space left in the current page for the table to be inserted completely. However, I have no idea on how to calculate the available space.
Any help or pointers will be highly appreciated.
Given your requirement to avoid a page break inside the table, I assume Table#setKeepTogether(boolean) is exactly what you need.
This property ensures that, if it's possible, elements with this property are pushed to the next area if they are split between areas.
This is not exactly what you asked for, but it seems to be what you want to achieve. Manually checking for this case would be tricky: you would need to look into the renderer mechanism and the inner processing of iText's layout engine in order to get the space left and the space the table needs, and you would also need to handle cases such as a table too big to fit on a single page. Note that #setKeepTogether(boolean) also works when elements are nested inside each other.
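A minimal sketch of that property using the iText 7 Java API (the questioner's snippet is the .NET port, where the call is Table.SetKeepTogether). This requires the iText 7 kernel and layout modules on the classpath, so treat it as illustrative rather than a standalone program:

```java
import com.itextpdf.kernel.pdf.PdfDocument;
import com.itextpdf.kernel.pdf.PdfWriter;
import com.itextpdf.layout.Document;
import com.itextpdf.layout.element.Table;

public class KeepTableTogether {
    public static void main(String[] args) throws Exception {
        PdfDocument pdf = new PdfDocument(new PdfWriter("out.pdf"));
        Document document = new Document(pdf);

        Table table = new Table(2);
        table.addCell("ID").addCell("SRS-1234");
        // Ask the layout engine to keep the table in one piece: if it does
        // not fit in the space left on the current page, it is pushed whole
        // to the next page (when that is possible at all).
        table.setKeepTogether(true);

        document.add(table);
        document.close();
    }
}
```

This delegates the "is there enough space left?" decision to iText's own renderers instead of computing it yourself, which is exactly the calculation the question was trying to avoid.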
We are using Rational Publishing Engine to generate documents from IBM Doors.
I want to create a 2x2 table for each requirement in the Doors database, e.g.:
ID SRS-1234
Req The system shall do some magick
However, if I export multiple Doors objects to MS Word, the 2x2 tables are merged into one big table inside the Word document. This means, for example, that a 10x2 table will be generated if 5 subsequent Doors objects are being exported.
Does anybody know a trick to prevent RPE/Word from merging the tables?
I would like to avoid adding additional paragraphs as paddings between the tables. Therefore, a setting to disable this behaviour would be my preferred solution.
The way to keep two adjacent tables from merging into one is to place a paragraph mark (ANSI 13) between them. Word requires a paragraph mark at the end of a table - it uses that structure to store information about the table, relative to the surrounding text.
If the paragraph mark should be less visible it can be formatted with a small font size (minimum: 1 pt). It might also be necessary to check the paragraph Before/After spacing and possibly the line spacing.
Should you need this a lot in a document it's a good idea to create a Style for this set of formatting and apply the style, as needed.
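In OOXML terms, the separator paragraph described above could look like the fragment below, placed between the two tables in document.xml. Note that w:sz counts half-points, so a value of 2 gives the 1 pt minimum; the zero spacing values assume the document's default style doesn't add its own:

```xml
<!-- Minimal separator paragraph between two tables: 1 pt font,
     no before/after spacing, single line spacing -->
<w:p>
  <w:pPr>
    <w:spacing w:before="0" w:after="0" w:line="240" w:lineRule="auto"/>
    <w:rPr><w:sz w:val="2"/></w:rPr>
  </w:pPr>
</w:p>
```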
TLDR/Question
How can I best assign unique IDs to (ideally all) of the elements in the XML that describes a Word document such that I can read/write those unique IDs from a Word (2013) Add-In?
Additionally, solutions describing ways I can get a good diff of two Word documents might be helpful but this is not the primary question.
Background
I'm creating an application-level add-in for Word (2013) using VSTO. Part of my task involves diffing an original Word document W with a modified W' so that I can then process the diff for another task. While Word clearly has the capability for diffs/merges (available in the "Review" panel in Word 2013), thus far I have not been able to find a way to programmatically extract the diffs.
Therefore, I plan to get the XML for the documents (e.g. using Range.WordOpenXML) and diff them. There are a number of published algorithms for diffing XML documents (i.e. Diff(W.XML, W'.XML)) where the accuracy of the diff is largely dependent on being able to properly match the XML elements from the two documents.
Proposed Solution and Its Problems
Therefore, I'd like to be able to assign a unique ID for every element in the XML of the Word document that I can access from my Add-In. In this case a solution would be something like importing a custom namespace into the package called mynamespace and adding the attribute mynamespace:ID=*** for every element in the DOCX package. The attribute would then be accessible via Range.WordOpenXML.
However, simply using mce:Ignorable, mce:ProcessContent, and mce:PreserveAttributes as detailed at http://openxmldeveloper.org/blog/b/openxmldeveloper/archive/2012/09/21/markup-compatibility-and-extensibility.aspx does not work. The modified Word document loads without any issues, but I cannot find any of the attributes, and saving the document removes all of the added markup.
From http://openxmldeveloper.org/discussions/formats/f/13/p/8078/163573.aspx it appears that this process of using custom xml via the Markup Compatibility and Extensibility (MCE) portion of the Office Open XML standard has become complicated over the years (patent issues, etc.). Therefore I'm guessing that my issues arise because Word's XML processor just removes all of the markup that it cannot natively process (maybe there is a way to hook into Word's XML processor and give it custom commands?).
For future viewers:
1) There is absolutely no way to set any kind of id on most elements that will survive in Word (you can use any custom tags or attributes, but once MS Word opens the document, they're gone).
2) Only two kinds of element can be used as an id: content controls, which have ids, and bookmarks, whose names can serve as ids (it is possible to make a bookmark hidden by adding an underscore before its name; this works only from code).
3) If change tracking is enabled in Word, it is entirely possible to see the diffs in the XML by using Range.WordOpenXML and extracting the actual OpenXML from it, as explained here, for example.
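For illustration, a hidden bookmark carrying an ID might look like this in document.xml (the name and id values are made up; the leading underscore in the name is what marks the bookmark as hidden):

```xml
<w:bookmarkStart w:id="17" w:name="_myns_id_0017"/>
<w:r><w:t>tracked content</w:t></w:r>
<w:bookmarkEnd w:id="17"/>
```

The w:id only has to be unique within the document; the payload you can read back from your add-in is the bookmark name.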
I am generating and storing PDFs in a database.
The PDF data is stored in a text field using Convert.ToBase64String(pdf.ByteArray)
If I generate the exact same PDF that already exists in the database and compare the two base64 strings, they are not the same. A big portion is the same, but about 5-10% of the text differs each time.
What would make 2 pdfs different if both were generated using the same method?
This is a problem because I can't tell if the PDF was modified since it was last saved to the db.
Edit: the two PDFs appear exactly the same when viewed, but the base64 strings of their bytes are different.
Two PDFs that look 100% the same visually can be completely different under the covers. PDF producing programs are free to write the word "hello" as a single word or as five individual letters written in any order. They are also free to draw the lines of a table first followed by the cell contents, or the cell contents first, or any combination of these such as one cell at a time.
If you are actually programmatically creating the PDFs and you create two PDFs using completely identical code, you still won't get files that are 100% identical. There are a couple of reasons for this; the most obvious is that PDFs store creation and modification dates, which will obviously change depending on when the files are created. You can override these (and confuse everyone else, so I don't recommend it) using something like this:
var info = writer.Info; // the PDF's document information dictionary
info.Put(PdfName.CREATIONDATE, new PdfDate(new DateTime(2001,01,01)));
info.Put(PdfName.MODDATE, new PdfDate(new DateTime(2001,01,01)));
However, PDFs also support a unique identifier in the trailer's /ID entry. To the best of my knowledge iText has no support for overriding this parameter. You could duplicate your PDF, change this manually and then calculate your differences and you might get closer to a comparison.
Then there's fonts. When subsetting fonts, producers create a unique internal name based on the original name and an arbitrary selection of six uppercase ASCII letters. So for the font Calibri the font's name could be JLXWHD+Calibri one time and SDGDJT+Calibri another time. iText doesn't support overriding of this because you'd probably do more harm than good. These internal names are used to avoid font subset collisions.
So the short answer is that unless you are comparing two files that are physical duplicates of each other you can't perform a direct comparison on their binary contents. The long answer is that you can tweak some of the PDF entries to remove unique parts for comparison only but you'd probably be doing more work than it would take to just re-store the file in the database.
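If all you need is a "was it modified?" check, one pragmatic option is to blank out the volatile entries listed above before comparing. The sketch below (Java, standard library only; the class name is mine) handles the date and /ID differences by treating the file as Latin-1 text, which is safe for locating these ASCII dictionary entries. It does not handle font subset prefixes or anything inside compressed streams, so treat it as a starting point rather than a complete solution:

```java
import java.nio.charset.StandardCharsets;

// Sketch: neutralize the volatile parts of a PDF (creation/modification
// dates and the /ID pair in the trailer) so that two otherwise-identical
// generated files compare equal.
public class PdfNormalize {
    static String normalize(byte[] pdf) {
        String s = new String(pdf, StandardCharsets.ISO_8859_1);
        // Dates have the form (D:YYYYMMDDHHmmSS...); replace with a constant.
        s = s.replaceAll("/CreationDate\\s*\\(D:[^)]*\\)", "/CreationDate(D:0)");
        s = s.replaceAll("/ModDate\\s*\\(D:[^)]*\\)", "/ModDate(D:0)");
        // The trailer's /ID is a pair of hex strings; replace with a constant.
        s = s.replaceAll("/ID\\s*\\[\\s*<[0-9A-Fa-f]+>\\s*<[0-9A-Fa-f]+>\\s*\\]",
                "/ID[<0><0>]");
        return s;
    }
}
```

You would compare (or hash) the normalized strings instead of the raw base64; any remaining difference then indicates a real content change, subject to the caveats above.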