How to read XML file in Xtext generator.xtend - eclipse

I am new to Xtext.
I want to produce output in generator.xtend based on an XML configuration file.
My goal is to keep the grammar the same, but produce different generator output depending on the XML configuration file.
Can I get some advice?

Related

Set filename to ditamap title in DITA-OT Command Line PDF transformation

I have a script that builds a system on a regular schedule, and as part of that system, I need to convert several documents from dita to PDF.
I can run the following command line from my script fine:
dita --input=<file location> --output=<output location> --format=pdf
But due to naming conventions and other restrictions, the names of the ditamap files are not always well-formed or human-readable (and I am not able to change them). I'm aware of the outputBase.file parameter that I can pass on the command line, but I would like dita to scan/read the file and substitute the document title as the filename, something along the lines of:
dita --input=<file> --output=<output> --format=pdf --outputBase.file=$title
Is this even possible?
You don't have to change the dita command line. Instead, you can rename the output PDF to the document title with the following steps:
At the start of your PDF plug-in's processing, read the main map's title (bookmap or map) using an XSLT task and write an XML file that contains the title.
Load that title into a property of your choice (such as document.title). The <xmlproperty> task in an Ant script is useful for this.
After generating the PDF, rename the file in <output location> to ${document.title}.pdf in the last phase of the build.
In my experience, one user wanted to output a PDF authored as a bookmap, and this technique worked fine for them.
Hope this helps.
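If you would rather do the rename outside of Ant, the same idea fits in a few lines of Python. This is only a sketch under stated assumptions: it assumes a plain <map> with a <title> child (a bookmap keeps its title in <booktitle>/<mainbooktitle> instead), and it leaves the dita invocation itself out of scope.

```python
import os
import xml.etree.ElementTree as ET

def rename_pdf_to_map_title(map_path, pdf_path):
    """Read the ditamap's <title> and rename the generated PDF to match it."""
    root = ET.parse(map_path).getroot()
    title_el = root.find("title")  # assumption: plain <map>; bookmaps differ
    # the title may contain inline markup, so join all of its text
    title = "".join(title_el.itertext()).strip()
    target = os.path.join(os.path.dirname(pdf_path), title + ".pdf")
    os.rename(pdf_path, target)
    return target
```

You would run this as the last step of the scheduled build, after the dita command finishes, passing the input map and the PDF it produced.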

Postgres documentation in GNU Info format

Postgres is an open-source project and it uses DocBook as the default format for its documentation. At first glance it looks like a tree of *.sgml files in the doc directory of the repository.
There are several pre-defined conversion output formats, but unfortunately Emacs' native Info format is not among them.
Is it possible to get the Postgres documentation as a postgres.info.gz file?
That's basically nothing more than a text conversion problem. I believe the right solution here would be to write an XSLT that converts the XML in your SGML files to Texinfo source, but failing that, the next best thing:
pandoc is a parser for different textual document formats. It has a reader for DocBook and a writer for Texinfo. That should get you started.
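Assuming the SGML sources have first been assembled into a single DocBook XML document (the file names below are placeholders), the pandoc route might look like:

```shell
# postgres.xml is a placeholder for the assembled DocBook XML document
pandoc --from docbook --to texinfo --output postgres.texi postgres.xml
makeinfo postgres.texi   # produces postgres.info
gzip postgres.info       # produces postgres.info.gz
```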

Reading data in dymola from chx file

I want to import data into Dymola from a .chx file, which is generated as the output of another program, and then run a simulation with those outputs as parameters.
The file has parameters of the form:
<tubedata>
<nrows>28</nrows>
<ncolumns>3</ncolumns>
</tubedata>
I want to import this file into Dymola, insert all the variables into a record file, and then run the simulation.
I'm not sure if .chx files are simply XML-formatted files, but if they are, then there is a rather new library that allows you to read data from XML files (and XLS, JSON, and INI files for that matter):
https://github.com/tbeu/ExternData
You could write an XSLT transformation on the .chx file to put the data in Modelica table format. See for example https://build.openmodelica.org/Documentation/Modelica.Blocks.Tables.CombiTable1D.html
on how to format the table. Then use the table to set the parameters.
Alternatively, I think you can load a .mos script file in Dymola with the format (not 100% sure):
x1 := value1
x2 := value2
for the parameters.
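If the .chx file really is plain XML like the fragment in the question, generating such a script can be automated. A minimal Python sketch, with the caveat that the tag names and the `name := value` assignment syntax are taken straight from this thread and are assumptions, not a verified Dymola format:

```python
import xml.etree.ElementTree as ET

def chx_to_mos(chx_text):
    """Turn a flat <tubedata> fragment into name := value lines for a script."""
    root = ET.fromstring(chx_text)
    return "\n".join(f"{child.tag} := {child.text.strip()}" for child in root)

sample = """<tubedata>
<nrows>28</nrows>
<ncolumns>3</ncolumns>
</tubedata>"""

print(chx_to_mos(sample))
```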

DocBook to Word Conversion?

I need some help with conversion of DocBook files to Microsoft Word files.
Do I need an XSL file for the transformation?
Yes, you do need an XSL file. You can get XSL files for DocBook from the free DocBook XML distribution. Then, you run a free XSLT transformer such as Saxon. If you run Saxon from a command line, you give it the name of your DocBook file, and the name of one of the stylesheets, and it will transform your file according to the rules in the stylesheet.
What you need to do to transform to Word, is to pick the right stylesheet.
From DocBook XSL: The Complete Guide, here are three possibilities:
Convert to XSL-FO and then use XMLmind to export to Word. See the XMLmind website for more information.
Use a limited set of tags and then use one of DocBook XML's included stylesheets to output to WordML.
Try to use Jfor to output to RTF, although Jfor no longer appears to be maintained.
And I have one of my own:
As above, use one of DocBook XML's included stylesheets to publish to XSL-FO, then run Apache FOP to convert from XSL-FO to RTF. You will lose the structural information, but you will keep a certain amount of the formatting.
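That last pipeline can be sketched as two commands. This is a sketch only: saxon.jar, the stylesheet path, and the file names are placeholders that depend on where your Saxon and DocBook XSL distributions live.

```shell
# DocBook -> XSL-FO using the DocBook XSL-FO stylesheet (paths are examples)
java -jar saxon.jar -s:book.xml -xsl:docbook-xsl/fo/docbook.xsl -o:book.fo
# XSL-FO -> RTF with Apache FOP
fop -fo book.fo -rtf book.rtf
```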
I recently implemented the same feature for our users. They use the Oxygen XML editor, which allows for easy transformations via XSL. I was going to target OOXML but settled on WordML. As a starting point I used the roundtrip XSL, but I had to rewrite a lot of templates because of existing bugs or missing functionality. In addition, I made other customizations to serve our purpose or to suit our XML files specifically.
I would not mind contributing back to the project, but I don't really know how to go about it.
I know this is an 11-year-old question. But now, in 2022, you can use pandoc to convert DocBook to MS Word (docx):
pandoc --from docbook --to docx --output filename.docx filename.docbook
I transform DocBook into various formats with XQuery, using the typeswitch approach. XQuery uses indexes, so I can transform many documents very quickly.

How to get list of translatable messages

I know how to translate a natural language message into the user's language using gettext.
But now I am wondering how to obtain a list of all translatable messages in a given domain.
I have obtained a raw result with something like this:
strings /usr/share/locale/${LANG:0:2}/LC_MESSAGES/$DOMAIN.mo
but I am looking for a neater solution.
The xgettext program extracts translatable strings from program source files and writes them to .po files, which in turn are compiled into the .mo files found in /usr/share/locale. These .po files should be included in the source distribution of the package whose messages you want to translate.
If you need to work with .mo files, you can translate them back to .po with msgunfmt.
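If only the compiled .mo is at hand, Python's gettext module can also enumerate its messages. The sketch below fabricates a tiny .mo image in memory just to stay self-contained; note that _catalog is an internal attribute of GNUTranslations rather than a documented API, so treat this as an illustration, not a stable interface.

```python
import gettext
import io
import struct

def make_mo(catalog):
    """Pack a {msgid: msgstr} dict into a minimal little-endian .mo image."""
    keys = sorted(catalog)
    ids = strs = b""
    entries = []
    for k in keys:
        kb, vb = k.encode(), catalog[k].encode()
        entries.append((len(ids), len(kb), len(strs), len(vb)))
        ids += kb + b"\0"
        strs += vb + b"\0"
    n = len(keys)
    keystart = 7 * 4 + 16 * n        # strings begin after header + both tables
    valuestart = keystart + len(ids)
    # header: magic, revision, count, msgid table offset, msgstr table offset,
    # hash table size, hash table offset
    out = struct.pack("<7I", 0x950412DE, 0, n, 7 * 4, 7 * 4 + 8 * n, 0, 0)
    for koff, klen, voff, vlen in entries:   # msgid table: (length, offset)
        out += struct.pack("<2I", klen, koff + keystart)
    for koff, klen, voff, vlen in entries:   # msgstr table: (length, offset)
        out += struct.pack("<2I", vlen, voff + valuestart)
    return out + ids + strs

mo = make_mo({
    "": "Content-Type: text/plain; charset=UTF-8\n",  # metadata entry
    "Hello": "Bonjour",
    "Goodbye": "Au revoir",
})
t = gettext.GNUTranslations(io.BytesIO(mo))
# _catalog is internal, but it is the simplest way to list every
# translatable message in a compiled catalog.
messages = sorted(k for k in t._catalog if k)  # drop the "" metadata entry
print(messages)
```

In practice you would open /usr/share/locale/.../LC_MESSAGES/$DOMAIN.mo in binary mode and pass that file object to GNUTranslations instead of building one by hand.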