DITA OT Preprocessing

I have started working with DITA-OT and have gone through its process of transforming topics into XHTML.
While reading the documentation I understood that it uses two stages:
Preprocess
Transform
Then I started looking into the Java code for the preprocessing part. I understood the process, but I am not able to follow the flow, specifically how it creates “job.xml” in the temp directory.
Can anyone help me understand this part from a programming perspective?

The processing structure is very well explained in the docs. You can change various aspects of the processing by using extension points and overriding mechanisms.
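If you just want to see what the preprocess stage actually records, you can inspect the job.xml it writes into the temp directory. Below is a minimal Scala sketch; the temp path is an assumption (point it at your own build's temp directory), and the code makes no assumptions about the element names, it simply dumps what is there:

```scala
// Inspect the job.xml written by the DITA-OT preprocess stage.
// The path "temp/job.xml" is a placeholder; adjust it to your temp directory.
import scala.xml.{Elem, XML}

object InspectJob {
  def main(args: Array[String]): Unit = {
    val job = XML.loadFile("temp/job.xml") // hypothetical temp-dir location
    println(s"root element: ${job.label}")
    // Count the direct child elements by name to get a quick overview
    // of what the preprocess stage recorded.
    job.child.collect { case e: Elem => e.label }
      .groupBy(identity)
      .foreach { case (label, occurrences) => println(f"$label%-20s x ${occurrences.size}") }
  }
}
```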

Writing Custom Extensions in Druid

I am new to Druid.
Problem Statement
We currently push raw event data to Druid. I have a requirement to apply certain calculations to the data (say, certain statistical techniques) which are not supported by Druid or the extensions it provides out of the box.
I have two questions:
What would be a better way to achieve this? (Have some external script that reads data from Druid, computes the calculations, and writes the results back to Druid?)
Can I take the route of writing custom extensions for Druid? I could not find any good documentation on how to go about writing and testing Druid extensions.
These links do not provide much in-depth information:
http://druid.io/docs/latest/development/modules.html
https://github.com/apache/incubator-druid (Druid repo that has some core and community contrib extensions)
Appreciate any help on this. Thank you.
You can achieve this both ways; it's up to you how comfortable you are with writing an extension yourself and then maintaining it. That is certainly time-consuming compared to the other way.
If you read the data from Druid, perform your calculations, and then write the data back to Druid, you will end up writing to a separate table. If you are not storage-bound on the Druid cluster, you can certainly take this path, and it is less time-consuming.
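If you go the external-computation route, a rough sketch of the read side could look like the Scala snippet below. The broker host/port, the /druid/v2/sql SQL endpoint, and the datasource/column names (raw_events, value) are assumptions for illustration; swap in your own setup and JSON parsing, then re-ingest the computed results into a separate datasource.

```scala
// Sketch: query Druid over its SQL HTTP endpoint, so an external Scala job can
// apply custom statistics and write results back via a normal ingestion path.
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets
import scala.io.Source

object DruidExternalCalc {
  def main(args: Array[String]): Unit = {
    val url = new URL("http://localhost:8082/druid/v2/sql") // Broker host/port are assumptions
    val conn = url.openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.setDoOutput(true)

    // Hypothetical datasource "raw_events" and column "value".
    val query = """{"query": "SELECT __time, \"value\" FROM raw_events LIMIT 1000"}"""
    val out = conn.getOutputStream
    out.write(query.getBytes(StandardCharsets.UTF_8))
    out.close()

    val body = Source.fromInputStream(conn.getInputStream, "UTF-8").mkString
    conn.disconnect()

    // The response is a JSON array of rows; parse it with your JSON library of
    // choice, apply the statistical technique here, then push the results back
    // into a separate datasource (batch ingestion, Kafka indexing, etc.).
    println(body.take(500))
  }
}
```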
Yes, writing an extension is the recommended way to perform any custom computation on the data, and you can certainly write a simple extension easily. Here's an example GitHub repo that helps with writing a custom Druid extension: https://github.com/implydata/druid-example-extension

How to build Scala report projects

Is there a common standard to follow for building a Scala-based report engine from scratch? Data will be sourced from HDFS, filtered, formatted, and emailed. Please share any experience or hurdles to expect.
I used to produce such reports as PDF, HTML and XLSX.
We used Elasticsearch, but here was the general workflow (a rough Scala sketch of these steps follows below):
get the filtered data from storage into Scala (no real trouble, just make sure your filters are well tested)
fill the holes so you have consistent data: think about missing points, awkward time zones...
format it (we used an XSLT processor to produce the e-mail HTML; it is really specific, and e-mail size is limited, so aim for ~15 MB as an absolute maximum)
if the file is too big, store it somewhere and send a link instead
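Here is a rough Scala sketch of that workflow, assuming a reachable HDFS setup and a hypothetical CSV layout (timestamp,value). The path, the column layout, and the 15 MB guard are illustrative only, and the e-mail/upload step is left as a hand-off to whatever mailer you use.

```scala
// Sketch of the report pipeline: read from HDFS, clean, format as HTML,
// then either hand off to a mailer or fall back to uploading a link.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}
import scala.io.Source

object DailyReport {
  def main(args: Array[String]): Unit = {
    val fs = FileSystem.get(new Configuration()) // picks up core-site.xml / hdfs-site.xml
    val in = fs.open(new Path("/data/events/2020-01-01.csv")) // hypothetical path

    // 1. Get filtered data from storage (keep the filters small and well tested).
    val rows = Source.fromInputStream(in, "UTF-8").getLines()
      .map(_.split(","))
      .collect { case Array(ts, metric) => ts -> metric.toDouble }
      .toVector

    // 2. Fill the holes: here we only drop NaNs; in practice interpolate
    //    missing points and normalise time zones.
    val clean = rows.filterNot { case (_, v) => v.isNaN }
    val maxVal = if (clean.isEmpty) 0.0 else clean.map(_._2).max

    // 3. Format: build the HTML body (an XSLT processor or template engine
    //    would go here instead of string interpolation).
    val html =
      s"""<html><body><h1>Daily report</h1>
         |<p>${clean.size} data points, max = $maxVal</p>
         |</body></html>""".stripMargin

    // 4. Size guard: e-mail bodies should stay well under ~15 MB; otherwise
    //    store the report and send a link instead.
    val bytes = html.getBytes(StandardCharsets.UTF_8)
    if (bytes.length < 15 * 1024 * 1024)
      Files.write(Paths.get("report.html"), bytes) // hand this off to your mailer
    else
      println("Report too large: upload it and e-mail the link")
  }
}
```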

Coupling Lua and MATLAB

I am in the situation where I have a part of the codebase written in MATLAB and another part in Lua (which is used for scripting of a 3rd party program). As of now the exchange of data between them is makeshift, using the file I/O system. This evolved to be a substantial part of the code, even though that wasn't really planned.
The program is structured in a way, that some Lua scripts are run, then some MATLAB evaluation is done based on which some more Lua is run and so on. It handles simulations and evaluations (scientific code) and creates new simulations based on that. It handles thousands of files and sims.
To streamline the process, I started looking into ways to change the data I/O and to make it easy to call one side from the other.
I wanted to hear some opinions on how to solve this problem. The optimal solution would be one where I could call everything from MATLAB or Lua and organize the large datasets in a more consistent and accessible way.
Solutions:
1. Use the Lua C API to create bindings for the Lua modules, and add these to MATLAB as a C library. This way I should hopefully be able to achieve my goals and reduce the system complexity.
2. A smarter data format for the exchange of datasets (HDF?), plus some functions which read the needed workspace variables. This way the parts of the program remain independent, but the data exchange gets solved.
3. Create wrappers for the Lua/MATLAB functions, so they can be called more easily. Data exchange could be done through the return parameters of the functions.
Suggestions?
I would suggest 1, or, if you aren't averse to spending a lot of money, use MATLAB Coder to generate C functions from the MATLAB side of the analysis, compile the generated code as a shared library, import the library with the LuaJIT FFI, and run everything from Lua. With this solution you would not have to change any of the MATLAB code, and not much of the Lua code, thanks to LuaJIT's semantics regarding array indexing. Solution 1 is free, but it is not as efficient because of the constant marshalling between the two languages' data structures, and it would also be a lot of work to write the interface. Either solution, however, would be more efficient than file I/O.
As an easy performance boost, have you tried keeping the files in memory using a RAM disk or tmpfs?

DATASTAGE capabilities

I'm a Linux programmer.
I used to write code in order to get things done: Java, Perl, PHP, C.
I need to start working with DataStage.
All I see is that DataStage works on table/CSV-style data and processes it line by line.
I want to know whether DataStage can work on files that are not table/CSV-like. Can it load data into data structures and run functions on them, or is it limited to working on only one line at a time?
Thank you for any information you can give on the capabilities of DataStage.
IBM (formerly Ascential) DataStage is an ETL platform that, indeed, works on data sets by applying various transformations.
This does not necessarily mean that you are constrained to applying only single-line transformations (you can also aggregate, join, split, etc.). Also, DataStage has its own programming language, BASIC, that allows you to modify the design of your jobs as needed.
Lastly, you are still free to call external scripts from within DataStage (using the DSExecute function, the Before Job property, the After Job property, or the Command stage).
Please check the IBM Information Center for a comprehensive documentation on BASIC Programming.
You could also check the DSXchange forums for DataStage specific topics.
Yes it can. As Razvan said, you can join, aggregate, and split. It can use loops and external scripts, and it can also handle XML.
My advice is that if you have large quantities of data to work on, then DataStage is your friend; if the data you have to load is not very big, it will be easier to use Java, C, or any programming language that you already know.
You can use all kinds of functions and conversions and manipulate the data. DataStage is mainly used for its ease of use when you are handling huge amounts of data from a data mart or data warehouse.
The main process of DataStage is ETL: Extraction, Transformation, Loading.
Where a programmer might use 100 lines of code to connect to a database, here we can do it with one click.
Almost anything can be done here, even C or C++ code in a routine activity.
If you are talking about hierarchical files, like XML or JSON, the answer is yes.
If you are talking about complex files, such as those produced by COBOL programs, the answer is yes.
All of this uses built-in functionality (e.g. the Hierarchical Data stage and the Complex Flat File stage). Review the DataStage palette to find other examples.

MapReduce implementation in Scala

I'd like to find a good and robust MapReduce framework that can be used from Scala.
To add to the answer on Hadoop: there are at least two Scala wrappers that make working with Hadoop more palatable.
Scala Map Reduce (SMR): http://scala-blogs.org/2008/09/scalable-language-and-scalable.html
SHadoop: http://jonhnny-weslley.blogspot.com/2008/05/shadoop.html
Update (5 Oct 2011):
There is also the Scoobi framework, which has awesome expressiveness.
http://hadoop.apache.org/ is language agnostic.
Personally, I've become a big fan of Spark
http://spark-project.org/
You have the ability to do in-memory cluster computing, significantly reducing the overhead you would experience from disk-intensive MapReduce operations.
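For a flavour of the API, here is a minimal word count using Spark's RDD operations; the local[*] master and the input path are placeholders for illustration:

```scala
// Minimal word count with Spark's RDD API.
import org.apache.spark.{SparkConf, SparkContext}

object SparkWordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("wordcount").setMaster("local[*]"))
    val counts = sc.textFile("hdfs:///data/input.txt")   // hypothetical input path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)                                // the "reduce" step, done in memory
    counts.take(10).foreach(println)
    sc.stop()
  }
}
```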
You may be interested in scouchdb, a Scala interface to using CouchDB.
Another idea is to use GridGain. ScalaDudes have an example of using GridGain with Scala. And here is another example.
A while back, I ran into exactly this problem and ended up writing a little infrastructure to make it easy to use Hadoop from Scala. I used it on my own for a while, but I finally got around to putting it on the web. It's named (very originally) ScalaHadoop.
For a Scala API on top of Hadoop, check out Scoobi; it is still in heavy development but shows a lot of promise. There is also an effort to implement distributed collections on top of Hadoop in the Scala incubator, but that effort is not usable yet.
There is also a new scala wrapper for cascading from Twitter, called Scalding.
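For a flavour of Scalding's field-based API, a word-count job looks roughly like this (input and output locations are passed as command-line arguments when the job is run on Hadoop):

```scala
import com.twitter.scalding._

// A word-count job in Scalding's field-based API: read lines, split into
// words, group by word, count, and write tab-separated results.
class WordCountJob(args: Args) extends Job(args) {
  TextLine(args("input"))
    .flatMap('line -> 'word) { line: String => line.split("\\s+") }
    .groupBy('word) { _.size }
    .write(Tsv(args("output")))
}
```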
After looking very briefly over the documentation for Scalding, it seems that while it makes the integration with Cascading smoother, it still does not solve what I see as the main problem with Cascading: type safety. Every operation in Cascading operates on Cascading's tuples (basically a list of field values with or without a separate schema), which means that type errors, such as joining a key as a String with a key as a Long, lead to run-time failures.
To further jshen's point:
Hadoop Streaming simply uses Unix streams: your code (in any language) just has to read from stdin and write tab-delimited records to stdout. Implement a mapper and, if needed, a reducer (and, if relevant, configure it as the combiner).
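As a sketch of what that looks like in Scala, here is a minimal Streaming-style word-count mapper and reducer reading from stdin and writing tab-delimited records to stdout; how you package and launch them (jar plus a scala wrapper script, assembly, etc.) is up to you and not shown.

```scala
// Mapper: emit "word<TAB>1" for every word on stdin.
object StreamingMapper {
  def main(args: Array[String]): Unit =
    for (line <- scala.io.Source.stdin.getLines(); word <- line.split("\\s+") if word.nonEmpty)
      println(s"$word\t1")
}

// Reducer: sum counts for consecutive identical keys
// (Hadoop Streaming delivers reducer input sorted by key).
object StreamingReducer {
  def main(args: Array[String]): Unit = {
    var current: Option[String] = None
    var count = 0L
    for (line <- scala.io.Source.stdin.getLines()) {
      val Array(key, value) = line.split("\t", 2)
      if (current.contains(key)) count += value.toLong
      else {
        current.foreach(k => println(s"$k\t$count"))
        current = Some(key)
        count = value.toLong
      }
    }
    current.foreach(k => println(s"$k\t$count"))
  }
}
```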
I've added a MapReduce implementation using Hadoop, with a few test cases, on GitHub: https://github.com/sauravsahu02/MapReduceUsingScala.
Hope that helps. Note that the application is already tested.