Can I refresh all the Word document metadata using DocumentFormat.OpenXml?

I have an application that uses the DocumentFormat.OpenXml API to build a Word document from one or more originating documents, inserting and deleting chunks of data as it goes. In other words, the resulting document will be significantly different from the constituent documents.
I have already successfully created things like Custom Document Properties, Document Variables and Core File Properties.
But: is it possible to get the other metadata items (number of pages, words, paragraphs, etc.) refreshed, without actually having to calculate these?

Thank you to @Cindy Meister for the answer.
I was hoping that there might be some method or other in the DocumentFormat.OpenXml SDK that I could call, but it seems that is not possible.
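For reference: the counts Word shows live as plain stored values in docProps/app.xml (the ExtendedFilePropertiesPart), and Word itself recomputes them when it opens and saves the file. A minimal PowerShell sketch (the assembly and document paths are illustrative) showing that the SDK can only read or overwrite the stored values, not recalculate them:

    # A sketch only: the OpenXml SDK exposes the stored counts but cannot recalculate them.
    # (On Windows PowerShell you may also need: Add-Type -AssemblyName WindowsBase)
    Add-Type -Path "C:\libs\DocumentFormat.OpenXml.dll"   # illustrative path to the SDK assembly

    $doc   = [DocumentFormat.OpenXml.Packaging.WordprocessingDocument]::Open("C:\temp\result.docx", $true)
    $props = $doc.ExtendedFilePropertiesPart.Properties

    # These are just stored strings from docProps/app.xml, not live values.
    "Pages=$($props.Pages.Text) Words=$($props.Words.Text) Paragraphs=$($props.Paragraphs.Text)"

    # You can overwrite them, but only Word (or your own counting code) can make them accurate.
    $props.Words.Text = "0"
    $props.Save()
    $doc.Dispose()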

Related

How can I perform automated tests against MS Word documents using PowerShell?

We regularly need to perform a handful of relatively simple tests against a bunch of MS Word documents. As these checks are currently done manually, I am striving for a way to automate this. For example:
Check if every page actually has a page number and verify that it is correct.
Verify that a version identifier in the page header is identical across all pages.
Check if the document has a table of contents.
Check if the document has a table of figures.
Check if every figure has a caption.
et cetera. Is this reasonably feasible using PowerShell in conjunction with a Word API?
PowerShell can access Word via its object model/Interop (on Windows, at any rate) and AIUI can also work with the Office Open XML (OOXML) API, so really you should be able to write any checks you want on the document content. What is slightly less obvious is how you verify that the document content will result in a particular "printed appearance". I'm going to start with some comments on the details.
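For example, driving Word over COM from PowerShell is straightforward (a sketch, assuming Word is installed locally; the document path is illustrative):

    # A sketch: drive Word via COM interop (requires Word on the machine).
    $word = New-Object -ComObject Word.Application
    $word.Visible = $false
    $doc = $word.Documents.Open("C:\docs\report.docx")   # illustrative path
    "Pages: " + $doc.ComputeStatistics(2)                # 2 = wdStatisticPages
    $doc.Close($false)                                   # discard changes
    $word.Quit()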
Just bear in mind that in the following notes I'm only pointing out a few things that you might have to deal with. If you're examining documents produced by an organisation where people already broadly follow the same standards, it may be easier.
Of the five examples you give, I couldn't say exactly how you would do each one without checking the details, and there could be difficulties with all of them. But, for example:
Check if every page actually has a page number and verify that it is correct.
Difficult using either OOXML or the object model, because what you would really be checking is that the header for a particular section has a visible { PAGE } field code. Because that field code might be nested inside other field codes whose logic could suppress it (an IF field, say), it's not so easy to be sure that a page number would actually appear.
Which is what I mean by checking the document's "printed appearance": if, for example, you can use the object model to print to PDF and have some mechanism that lets PowerShell inspect the PDF's content, that might be a better approach.
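As a sketch of that route (ExportAsFixedFormat is the object-model call; the paths are illustrative, and inspecting the resulting PDF is left to whatever PDF tooling you have):

    # A sketch: render the document to PDF for "printed appearance" checks.
    $word = New-Object -ComObject Word.Application
    $doc  = $word.Documents.Open("C:\docs\report.docx")   # illustrative path
    $doc.ExportAsFixedFormat("C:\docs\report.pdf", 17)    # 17 = wdExportFormatPDF
    $doc.Close($false)
    $word.Quit()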
Verify that a version identifier in the page header is identical across all pages.
Similar problem to the above, IMO. It depends partly on how the version identifier might be inserted. Is it just a piece of text? Could it be constructed from a number of fields? Might it reference Document Properties or Variables, or Custom XML content?
Check if the document has a table of contents.
Perhaps enough to look for a TOC field that does not have certain options, such as a \c option that a Table of Figures would contain.
Check if the document has a table of figures.
Perhaps enough to check for a TOC field that does have a \c option, perhaps with a specific parameter such as "Figure".
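A sketch of both checks via the object model, with $doc opened via Documents.Open as in the earlier sketch (the regexes are illustrative and don't cover every way a field code can be written):

    # A sketch: a TOC field without \c is a table of contents; with \c it is a
    # table of figures/tables, optionally naming a caption label such as "Figure".
    $hasToc = $false; $hasTof = $false
    foreach ($field in $doc.Fields) {
        $code = $field.Code.Text
        if ($code -match '^\s*TOC\b') {
            if ($code -match '\\c') { $hasTof = $true } else { $hasToc = $true }
        }
    }
    "Has ToC: $hasToc; has ToF: $hasTof"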
Check if every figure has a caption.
Not sure that you can tell whether a particular image is "a Figure". But if you mean "verify that every graphic object has a caption", you could probably iterate through the inline and floating graphics in the document and verify that there was something that looked like a Word standard caption paragraph within a certain distance of that object. Word has two standard field code patterns for captions AFAIK (one where the chapter number is included and one where it isn't), so you could look for those. You could measure a distance between the image and the caption by ensuring that they were no more than a predefined number of paragraphs apart, or in the case of a floating image, perhaps that the paragraph anchoring the image was no more than so many paragraphs away from the caption.
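A rough sketch of the inline-graphics half of that check (the two-paragraph window and the reliance on the built-in "Caption" style are assumptions on my part; floating shapes would need similar handling via their anchoring paragraphs):

    # A sketch: flag inline pictures with no Caption-styled paragraph nearby.
    foreach ($shape in $doc.InlineShapes) {
        $next  = $shape.Range.Paragraphs.Item(1).Next()
        $found = $false
        for ($i = 0; $i -lt 2 -and $next; $i++) {
            if ($next.Style.NameLocal -eq 'Caption') { $found = $true; break }
            $next = $next.Next()
        }
        if (-not $found) { "Inline shape without a nearby caption." }
    }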
A couple of more general problems that you might have to deal with:
- just because a document contains a certain feature, such as a ToC field, does not mean that it is visible. A TOC field might have been formatted as hidden text; even harder to detect, it could have been formatted in white and so be invisible on the page.
- change tracking. You might have to use the Word object model to "accept changes" before checking whether any given feature is actually there or not (see the sketch below). Unless you can find existing code that would help you do that against the OOXML representation of the document, that's probably a strong case for doing the checks via the object model.
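For that change-tracking point, the object-model call is a one-liner (a sketch, again reusing an open $doc):

    # A sketch: accept all tracked changes before running the content checks.
    $doc.Revisions.AcceptAll()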
Some final observations
- For future checks, it's perhaps worth noting that in principle you could create a "DocumentInspector" that users could call from Word's Backstage view to perform checks on a document. I'm not sure you can force users to run it, or that you could create it in PowerShell, but it could be a useful tool.
- Longer term, if you are doing a very large number of checks, it's perhaps worth considering whether you could train an ML model to try to detect problems.

MarkLogic "XDMP-FRAGTOOLARGE" error while storing 200MB+ File using REST

When I try to store a 200 MB+ XML file into MarkLogic using REST, it gives the following error: "XDMP-FRAGTOOLARGE: Fragment of /testdata/upload/submit.xml too large for in-memory storage".
I have tried the Fragment Roots and Fragment Parents options but still get the same error.
But when I store the file without the '.xml' extension in the URI, it saves the file, but no XQuery operations can be performed on it.
MarkLogic won't be able to derive the MIME type from a URI without an extension. It will then fall back to storing the document as binary.
I think that if you used xdmp:document-load from QConsole, you might be able to load the file correctly, as that does not try to hold the entire document in memory first. It won't help you much, though; you would likely hit the same error elsewhere. The REST API has to pass the document through memory, so it won't work like this.
You could raise the memory settings in the Admin UI, but you are generally better off splitting your input. MarkLogic Content Pump (MLCP) will allow you to do so using the aggregate_record options. That will split the file into smaller pieces based on a particular element and store these as separate documents inside MarkLogic.
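For example (a sketch: host, credentials, URI prefix and the record element name are all illustrative; the aggregate options are MLCP's standard splitting options):

    # A sketch: load one MarkLogic document per <record> element of the big file.
    & mlcp.bat import `
        -host localhost -port 8000 -username admin -password admin `
        -input_file_path C:\data\submit.xml `
        -input_file_type aggregates `
        -aggregate_record_element record `
        -output_uri_prefix /testdata/upload/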
HTH!

Assigning Custom Unique IDs to Word 2013 OpenXML Elements

TLDR/Question
How can I best assign unique IDs to (ideally) all of the elements in the XML that describes a Word document, such that I can read/write those unique IDs from a Word (2013) Add-In?
Additionally, solutions describing ways I can get a good diff of two Word documents might be helpful but this is not the primary question.
Background
I'm creating an application-level add-in for Word (2013) using VSTO. Part of my task involves diffing an original Word document W with a modified W' so that I can then process the diff for another task. While Word clearly has the capability for diffs/merges (available on the "Review" tab in Word 2013), thus far I have not been able to find a way to programmatically extract the diffs.
Therefore, I plan to get the XML for the documents (e.g. using Range.WordOpenXML) and diff them. There are a number of published algorithms for diffing XML documents (i.e. Diff(W.XML, W'.XML)) where the accuracy of the diff is largely dependent on being able to properly match the XML elements from the two documents.
Proposed Solution and Its Problems
Therefore, I'd like to be able to assign a unique ID for every element in the XML of the Word document that I can access from my Add-In. In this case a solution would be something like importing a custom namespace into the package called mynamespace and adding the attribute mynamespace:ID=*** for every element in the DOCX package. The attribute would then be accessible via Range.WordOpenXML.
However, simply using mce:Ignorable, mce:ProcessContent, and mce:PreserveAttributes as detailed at http://openxmldeveloper.org/blog/b/openxmldeveloper/archive/2012/09/21/markup-compatibility-and-extensibility.aspx does not work. The modified Word document loads without any issues; however, I cannot find any of the added attributes, and saving the document removes all of the added markup.
From http://openxmldeveloper.org/discussions/formats/f/13/p/8078/163573.aspx it appears that this process of using custom XML via the Markup Compatibility and Extensibility (MCE) portion of the Office Open XML standard has become complicated over the years (patent issues, etc.). Therefore I'm guessing that my issues arise because Word's XML processor just removes all of the markup that it cannot natively process (maybe there is a way to hook into Word's XML processor and give it custom commands?).
For future viewers:
1) There is no way to set any kind of ID on most elements that will survive Word: you can add custom tags or attributes, but once MS Word opens and re-saves the document, they are gone.
2) Only two element types can be used as IDs: content controls, which have IDs of their own, and bookmarks, whose names can serve as IDs (it is possible to make a hidden bookmark by adding an underscore before its name; this works only from code). See the sketch after this list.
3) If change tracking is enabled in Word, it is absolutely possible to see the diffs in the XML, using Range.WordOpenXML and extracting the actual OpenXML from it, as explained here, for example.
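A minimal sketch of point 2 using the Word object model from PowerShell (the path and the bookmark name are illustrative; the same calls are available to a VSTO add-in in C#):

    # A sketch: the two ID carriers that survive Word -- bookmarks and content controls.
    $word = New-Object -ComObject Word.Application
    $doc  = $word.Documents.Open("C:\temp\W.docx")        # illustrative path

    # A bookmark whose name starts with an underscore is hidden; only code can create it.
    $range = $doc.Paragraphs.Item(1).Range
    $doc.Bookmarks.Add("_myId0001", $range) | Out-Null

    # Content controls carry a built-in ID plus a writable Tag.
    foreach ($cc in $doc.ContentControls) {
        "{0}: Tag={1}" -f $cc.ID, $cc.Tag
    }

    $doc.Close($true)   # $true = save changes
    $word.Quit()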

Add document to document library with additional column data using Powershell

I'm trying to loop through PDF files in a directory and send them to a SharePoint document library. When I send them, I would like to add the customer, invoice, etc. to the list as well. Anyone have recommendations?
Sure, this can be done fairly easily. Here's the article I've used in the past for reference:
http://blogs.technet.com/b/heyscriptingguy/archive/2010/09/23/use-powershell-cmdlets-to-manage-sharepoint-document-libraries.aspx
Setting metadata should be pretty easy as well, but PowerShell can't guess what the customer, invoice, etc. are, so you'll have to have some data source. If the filename contains the data, you could split it (see the sketch below). If the data is in the file itself, there are some methods of getting plain-text strings out of a PDF, but that's going to be a bit harder than the first part of your request.
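As a sketch of the first part (assuming on-prem SharePoint with the Microsoft.SharePoint.PowerShell snap-in; the site URL, library name, column names and the "Customer_Invoice.pdf" filename convention are all illustrative):

    # A sketch: upload PDFs and set list columns parsed from the filename.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $web    = Get-SPWeb "http://sharepoint/sites/finance"
    $folder = $web.GetFolder("Invoices")

    foreach ($pdf in Get-ChildItem "C:\scans\*.pdf") {
        $bytes = [System.IO.File]::ReadAllBytes($pdf.FullName)
        $file  = $folder.Files.Add($pdf.Name, $bytes, $true)   # $true = overwrite

        # Filename convention assumed: "<Customer>_<InvoiceNumber>.pdf"
        $parts = $pdf.BaseName -split '_'
        $item  = $file.Item
        $item["Customer"] = $parts[0]
        $item["Invoice"]  = $parts[1]
        $item.Update()
    }
    $web.Dispose()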
Let me know if I can help further with any specifics.

Recommendations on structure for Mongoid/MongoDB Tree of Tags

I'm looking for some recommendations on how to structure the tags part of this data model:
Here's a simplified version of it:
A Site has many Posts (a relational association [references_many in Mongoid speak]). A Site has a tree of tags.
A Post has an array of tags (a subset of the Site's tags; order doesn't matter).
The use cases I care about are:
Quickly saving & retrieving the Site's tags in tree form (i.e. to be able to display them as a tree in the UI).
Quickly querying which of a Site's posts have a certain tag.
Without the tree structure, http://github.com/wilkerlucio/mongoid_taggable solves my use cases. I've seen some of the acts_as_tree ports for Mongoid, like:
http://github.com/benedikt/mongoid-tree
http://github.com/saks/mongoid_acts_as_tree
http://github.com/ticktricktrack/mongoid_tree
They all seem to take a relational approach to storing the hierarchy, as opposed to an embedded one, which would mean that both of the use cases above would be slow (likely requiring a map/reduce).
Has anyone done anything similar, or have any advice? Ideally I'd love a Mongoid solution, but I'm happy to drop down to the Ruby driver as well.
Do you need to update the structure of the tree (i.e. move a tag to another parent)? If that is possible, the embedded approach becomes difficult, and the relational/normalized approach makes more sense.
I would probably store the tags themselves in the document (embedded), but if there is any chance that I'd need to move tree nodes around online, then I'd store the hierarchy in another document. Queries need not be slow if you first flatten the search query (according to the current tree) and then search for those tags; see the sketch below. This approach probably does not scale too well if the flattened search query ends up having hundreds of tags in it (how tall is your tree?).
If tags cannot be moved to new parents (or only by you, during scheduled maintenance), go ahead and embed the whole hierarchy.
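To illustrate the flattening step (a sketch in PowerShell, language-agnostic in spirit; the $tree shape is illustrative, not a Mongoid schema): expand the searched tag into itself plus all of its descendants, then query posts whose tags array intersects that flattened list.

    # A sketch: flatten "all posts tagged 'sports' or below" into a plain tag list.
    function Get-TagAndDescendants($node, $target, $inSubtree = $false) {
        $match = $inSubtree -or ($node.Name -eq $target)
        if ($match) { $node.Name }
        foreach ($child in $node.Children) {
            Get-TagAndDescendants $child $target $match
        }
    }

    $tree = @{ Name = 'root'; Children = @(
        @{ Name = 'sports'; Children = @(
            @{ Name = 'football'; Children = @() },
            @{ Name = 'tennis';   Children = @() }
        )},
        @{ Name = 'news'; Children = @() }
    )}

    # A search for 'sports' becomes a query on ('sports','football','tennis'),
    # e.g. an $in match against each Post's tags array.
    Get-TagAndDescendants $tree 'sports'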
There are two implemented patterns of MongoDB tree structure.