I have PDF and Word files that need to be used as input for Ruta. I can convert them into text files, but I lose all the tables and formatting if I do that. Is there any way I can use them without losing information?
Thanks!
You need an additional program that can convert PDF (/doc/docx) to HTML. There are mainly two types of PDF converters: those that use absolute positions to generate nice-looking HTML, and those that rely only on HTML elements and CSS. For processing tables, I recommend the latter. I personally use a commercial solution, but there is also a lot of good open source software, e.g., pdf2htmlEX.
Once you have HTML, you can apply the HtmlAnnotator and HtmlConverter to obtain plain text with annotations for the HTML tags, as described in the UIMA Ruta documentation.
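For example, a minimal Perl sketch of that first conversion step, assuming pdf2htmlEX is on your PATH (its basic invocation is pdf2htmlEX input.pdf output.html; the file names here are hypothetical):

#!/usr/bin/perl
use strict;
use warnings;

# Convert every PDF in the current directory to HTML with pdf2htmlEX,
# so the HTML can then be fed to Ruta's HtmlAnnotator/HtmlConverter.
for my $pdf (glob '*.pdf') {
    (my $html = $pdf) =~ s/\.pdf$/.html/;
    system('pdf2htmlEX', $pdf, $html) == 0
        or warn "pdf2htmlEX failed for $pdf: $?\n";
}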
I have a large amount of PDF files in my local filesystem I use as documentation base and I would like to create an index of these files.
I would like to :
Parse the contents of the PDF files to get keywords.
Select the most relevant keywords to make a summary.
Create static HTML pages for some keywords with entries linked to the appropriate files.
My questions are :
Is there an existing tool to perform the whole job?
What is the most appropriate tool to parse the content of PDF files, filter words (by size), and count them?
I am considering using Perl, swish-e, and pdfgrep to make a script. Do you know of other tools that could be useful?
Given that points 2 and 3 seem custom, I'd recommend writing your own script: call a tool from it to parse the PDF, process its output as you please, and write the HTML (perhaps using another tool).
Perl is well suited for that, since it excels at the text processing you'll need and also provides support, via modules, for working with all kinds of file formats.
As for reading PDF, here are some options if your needs aren't too elaborate:
Use the CAM::PDF (and CAM::PDF::PageText) or PDF::API2 modules
Use pdftotext from the poppler library (probably in the poppler-utils package)
Use pdftohtml with the -xml option, and read the generated simple XML file with XML::LibXML or XML::Twig
The last two are external tools which you use via Perl's builtins like system.
The following text processing, to build your summary and design the output, is precisely what languages like Perl are for. The couple of tasks that are mentioned take a few lines of code.
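For instance, here is a minimal sketch of that step, assuming pdftotext is installed (the input file name and the length threshold are arbitrary placeholders):

#!/usr/bin/perl
use strict;
use warnings;

# Extract plain text from a PDF, then count words longer than 4 characters.
my $pdf = 'doc.pdf';
system('pdftotext', $pdf, 'doc.txt') == 0 or die "pdftotext failed: $?";

open my $fh, '<', 'doc.txt' or die "Can't open doc.txt: $!";
my %count;
while (<$fh>) {
    # Split the line on non-word characters; keep words longer than 4 chars.
    $count{ lc $_ }++ for grep { length > 4 } split /\W+/;
}
close $fh;

# Print the 20 most frequent candidate keywords.
my @top = (sort { $count{$b} <=> $count{$a} } keys %count)[0 .. 19];
print "$_\t$count{$_}\n" for grep { defined } @top;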
Then write out HTML, either directly if simple or using a suitable module. Given your purpose, you may want to look into HTML::Template. Also see this post, for example.
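If you do go with HTML::Template, the gist is only a few lines; a sketch (index.tmpl is a hypothetical template file containing the usual TMPL_VAR/TMPL_LOOP placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use HTML::Template;

# Fill a template such as:
#   <ul><TMPL_LOOP NAME=KEYWORDS>
#     <li><a href="<TMPL_VAR NAME=FILE>"><TMPL_VAR NAME=WORD></a></li>
#   </TMPL_LOOP></ul>
my $tmpl = HTML::Template->new(filename => 'index.tmpl');
$tmpl->param(KEYWORDS => [
    { WORD => 'poppler', FILE => 'doc.pdf' },   # hypothetical entries
]);
print $tmpl->output;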
Full parsing of PDF may be infeasible, but if the files aren't too complex it should work.
If your process for selecting keywords and building statistics is fairly common, there are integrated tools for document management (search for bibliography managers). However, I think that most of them resort to external tools to parse pdf so you may still be better off with your own script.
I am using Perl to automate report generation. Reports are generated in HTML, and the same report can be opened in MS Word format; tables generated in HTML look good in Word too.
Problem:
I need to also insert a few graphs in the report. For HTML, I am using the SVG::TT::Graph::Line Perl module to generate the graphs.
The idea here is to keep single HTML file that contains all tables and graphs.
Currently everything looks good in HTML, but when I open the same file in Word, the graphs are replaced by data (because I am using the SVG Perl module).
Just wondering what would be the best way to generate graphs for the Word file without changing my code much.
Any suggestions with the Perl modules to be used would be much appreciated.
I haven't tried this, but the only thing I can think of is to use ImageMagick to convert the SVG to PNG and then use a Data URI to embed the image in the HTML.
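A sketch of that idea in Perl, assuming ImageMagick's convert and the MIME::Base64 core module are available (file names are hypothetical, and I haven't verified how well Word renders data URIs):

#!/usr/bin/perl
use strict;
use warnings;
use MIME::Base64 qw(encode_base64);

# Rasterize the SVG to PNG with ImageMagick.
system('convert', 'graph.svg', 'graph.png') == 0 or die "convert failed: $?";

# Read the PNG bytes and build a data URI to embed in the HTML report.
open my $fh, '<:raw', 'graph.png' or die "Can't open graph.png: $!";
my $png = do { local $/; <$fh> };
close $fh;

my $uri = 'data:image/png;base64,' . encode_base64($png, '');
print qq{<img src="$uri" alt="graph" />\n};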
I need some help with conversion of DocBook files to Microsoft Word files.
Do I need an XSL file for the transformation?
Yes, you do need an XSL file. You can get XSL files for DocBook from the free DocBook XML distribution. Then, you run a free XSLT transformer such as Saxon. If you run Saxon from a command line, you give it the name of your DocBook file, and the name of one of the stylesheets, and it will transform your file according to the rules in the stylesheet.
What you need to do to transform to Word is to pick the right stylesheet.
From DocBook XSL: The Complete Guide, here are three possibilities:
Convert to XSL-FO and then use XMLmind to export to Word. See the XMLmind website for more information.
Use a limited set of tags and then use one of DocBook XML's included stylesheets to output to WordML.
Try to use Jfor to output to RTF, although Jfor no longer appears to be maintained.
And I have one of my own:
As above, use one of DocBook XML's included stylesheets to publish to XSL-FO, then run Apache FOP to convert from XSL-FO to RTF. You will lose the structural information, but you will keep a certain amount of the formatting.
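A sketch of that last pipeline, driven from Perl via system (the Saxon jar name, stylesheet path, and file names are placeholders for whatever your installation uses):

#!/usr/bin/perl
use strict;
use warnings;

# Step 1: DocBook -> XSL-FO with Saxon and the DocBook XSL stylesheets.
system('java', '-jar', 'saxon9he.jar',
       '-s:book.xml', '-xsl:docbook-xsl/fo/docbook.xsl', '-o:book.fo') == 0
    or die "Saxon failed: $?";

# Step 2: XSL-FO -> RTF with Apache FOP.
system('fop', '-fo', 'book.fo', '-rtf', 'book.rtf') == 0
    or die "FOP failed: $?";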
I recently implemented the same feature for our users. They use the Oxygen XML editor, which allows for easy transformations via XSL. I was going to do OOXML but settled on WordML. As a starting point I used the roundtrip XSL, but I had to rewrite lots of templates because of existing bugs or missing functionality. In addition, I made other customizations to serve our purpose or our XML files only.
I would not mind contributing back to the project, but I don't really know how to go about it.
I know this is an 11-year-old question. But now, in 2022, you can use pandoc to convert DocBook to MS Word (docx):
pandoc --from docbook --to docx --output filename.docx filename.docbook
I am using XQuery to transform DocBook into various formats via an XQuery typeswitch library. XQuery uses indexes, so I can transform many documents very quickly.
I am looking for a tool or set of tools to convert between file formats D and M where
D is a format handled by MSWord, in order of preference, docx, doc, rtf
M is a lightweight markup, such as markdown, textile, or txt2tags; it can be an esoteric one
there is a way to generate html from M
conversion is two-way: both from D to M and from M to D
utf-8 encoding is handled properly
the content is simple, paragraphs, some simple formatting like bold and italics, maybe lists
the tools are platform-independent
What I've found so far:
TeX, LaTeX -- too heavyweight
docx2txt -- too lightweight, it supports no formatting at all
html -- MSWord produces bloated html
a few one-way conversions, like doc to mediawiki
UPDATE:
The use case is a document workflow between technical and non-technical people
I, the technical guy, edit a document in plain text, put it into version control, etc.
I send it to my manager or other non-technical people
They add comments, make changes to it using their Word, then they send it back to me
I want to simply grok their changes, make my own changes, and put it back into version control, without having to use Word
I think that Pandoc much more than meets all the requirements.
http://pandoc.org
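For the workflow in the update, the round trip is two commands; a sketch from Perl (file names are hypothetical):

#!/usr/bin/perl
use strict;
use warnings;

# Word -> Markdown, for editing and version control on the technical side.
system('pandoc', '--from', 'docx', '--to', 'markdown',
       '--output', 'doc.md', 'doc.docx') == 0 or die "pandoc failed: $?";

# Markdown -> Word, to send back to the non-technical side.
system('pandoc', '--from', 'markdown', '--to', 'docx',
       '--output', 'doc.docx', 'doc.md') == 0 or die "pandoc failed: $?";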
Adam, I've used docx4j to convert docx to HTML, edited the HTML in CKEditor, and then used docx4j to convert the HTML back to docx. My process made some assumptions about the CSS (i.e., it was designed to handle docx4j's clean HTML and editing in CKEditor).
You don't say whether there is a way to generate M from HTML.
This is probably hard to do two-way, since you will have impedance mismatches between the various formats.
The best world I can think of would be a sort of Wiki / Word hybrid: Maybe you can get Google Wave to do that for you?
Another solution that might work is a CMS like Plone (did they ever add WYSIWYG capability? I stopped caring after version 1). Keep your documents there. Let the system handle changes, annotations, etc. You can automate retrieval of the source (which should be reStructuredText) and commit that to your source control if you have to.
This script I wrote might help you in your workflow:
https://github.com/matb33/docx2md
It is a command-line PHP script that works only with .docx files. It extracts the XML, runs some XSL transformations, and provides the result in Markdown format.
I encourage you to send me .docx files that don't convert accurately. I'd love to make this script as robust and reliable as possible.
I am using the MS Word API to generate .docx files containing data fetched from a DB, to which I am applying the appropriate styles, fonts, symbols, etc. If the data fetched from the DB is quite large, then there is a problem displaying that data in the .docx file. I found that MS Word 2007 internally writes some content through tags that may not be needed to display the data. Hence I am figuring out which MS Word tags are necessary when converting to an .xml file, so that I can avoid unnecessary tags and build only those needed to display the data. I am therefore planning to write my own .xml with just the MS Word tags that are needed, rather than generating an .xml from a .docx file.
My queries are:-
1) Is it right that MS Word generates some tags that may not be needed during the conversion of .docx to document.xml, and that this makes the file heavy? If so, what are those tags, so that I can avoid them when writing my own .xml file?
2) Please share links that explain the MS Word tags and their purposes: which tags are needed and which are not?
3) Is my approach of writing a new .xml similar to document.xml (from the .docx conversion) worth pursuing, so that I can build the .xml with only the tags I need and improve the performance of the data display?
Please shed some light on this. Thanks in advance.
Thanks,
Rithu
You'll want to learn WordprocessingML in much more detail to do this. It certainly isn't impossible, but it is quite a learning curve to start with. Probably the best place to start is with this eBook. If you go the manual route, you'll need a zip technology. If you're in Visual Studio, you can make the writing of all of this easier by using the Open XML SDK.
As to your questions on 'unnecessary tags', it's hard to believe that there would be much at all in the file that is unnecessary. But that depends on what you consider not needed; for example, if a word is caught as misspelled, there will be a "dirty=1" attribute on the Run tag. If you're okay with displaying misspelled words, then that could be considered unnecessary. It really depends on what you're displaying, and in what.
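For orientation, the required core of a document.xml is quite small. A sketch that prints a minimal one-paragraph WordprocessingML part from Perl (the namespace is the standard WordprocessingML one; note that a real .docx also needs the zip container, [Content_Types].xml, and the package relationships):

#!/usr/bin/perl
use strict;
use warnings;

# Minimal document.xml: one paragraph (w:p), one run (w:r), one text (w:t).
print <<'XML';
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:body>
    <w:p>
      <w:r>
        <w:t>Data fetched from the DB goes here</w:t>
      </w:r>
    </w:p>
  </w:body>
</w:document>
XML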