How to skip header and footer when processing a flat file using Spring batch - spring-batch

I have my file as following,
CustomerDetails~NorthSector~Detail~Alex~12~Mark~55~Helen~33~Andrew~67~Footer~page1
I need to capture the data between Detail and Footer and put it in my database. The detail tokens in between represent the customer name and age. How can I use FlatFileItemReader to achieve this? Or do I have to write a custom reader?

This file format is very specific. I see no obvious way to read it with the FlatFileItemReader.
A custom ItemReader is probably a better option for this use case.
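A custom reader for this format might look like the following rough sketch: tokenise the single line on `~`, keep only the tokens between the "Detail" and "Footer" markers, and emit one item per name/age pair. All class, field, and file names below are illustrative assumptions, not from the question.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical POJO for one detail record.
class Customer {
    final String name;
    final int age;
    Customer(String name, int age) { this.name = name; this.age = age; }
}

// Minimal custom reader sketch. In a real job this class would declare
// `implements org.springframework.batch.item.ItemReader<Customer>`;
// it is kept dependency-free here, but read() follows the same contract.
class CustomerDetailReader {

    private final String resourcePath;
    private Deque<Customer> buffer;

    CustomerDetailReader(String resourcePath) {
        this.resourcePath = resourcePath;
    }

    // Same contract as ItemReader.read(): one item per call, null when done.
    Customer read() throws IOException {
        if (buffer == null) {
            buffer = parse(Files.readString(Paths.get(resourcePath)).trim());
        }
        return buffer.poll();
    }

    private static Deque<Customer> parse(String line) {
        String[] tokens = line.split("~");
        int start = -1, end = -1;
        for (int i = 0; i < tokens.length; i++) {
            if ("Detail".equals(tokens[i])) start = i + 1;
            if ("Footer".equals(tokens[i])) end = i;
        }
        Deque<Customer> items = new ArrayDeque<>();
        // name/age pairs sit between the Detail and Footer markers
        for (int i = start; start > 0 && i + 1 < end; i += 2) {
            items.add(new Customer(tokens[i], Integer.parseInt(tokens[i + 1])));
        }
        return items;
    }
}
```

Returning null when the buffer is exhausted is what tells Spring Batch the input is finished, so the step ends cleanly after the last pair.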

Related

Load multiple multischema delimited file from same directories

Is there any way in Talend to load multiple multi-schema delimited files stored in the same directory?
I have tried the tFileInputMSDelimited component before, but I was unable to link it with the tFileList component to loop through the files inside the directory.
Does anyone have an idea how to solve this problem?
To make it clearer: each file contains only one batch line, but multiple header lines, each followed by a bunch of transaction lines, as shown in the sample data below.
The component tFileOutputMSDelimited should suit your needs.
You will need multiple flows going into it.
You can either keep the files and read them or use tHashInput/tHashOutput to get the data directly.
Then you direct all the flows to the tFileOutputMSDelimited (example with tFixedFlowInput; adapt with your flows):
In it, you can configure which flow is the parent flow containing your ID.
Then you can add the children flows and define the parent and the ID used to recognize the rows in the parent flow:

Two different mappings to ONE XML output file

I'm working on a Talend job where I have an Excel file and a couple of database fields that get mapped to an XML file.
The working job looks like this:
Problem: with the same Excel file and database fields as input, I want to make another mapping that outputs to the same working XML file mentioned earlier. So I will have ONE XML file with TWO different mappings. How can I achieve this?
Update
I have done this mapping:
which in the end gets exported like this:
but I'm unsure how to use this mapping in the tAdvancedFileOutputXML
If I understood correctly, you want to have a single XML file containing two different XMLs (the second one appended to the first one). In the shown Job, add an OnSubJobOk link pointing to a duplicate of your document flow that has a different mapping. In the second flow, rather than using the tFileOutputXML component to write the XML file, use tAdvancedFileOutputXML with the Append Source XML File option checked, so it adds to the file generated by the first flow. Also make sure to configure the XML tree. Check the following link for further information: https://help.talend.com/reader/~hSvVkqNtFWjDbBHy0iO_w/h3wZegFH1_1XfusiUGtsPg
Hope this helps.

How to skip the header while processing a message in Mirth

I am using Mirth 3.0. I have a file with thousands of records. The txt file has a 3-line header, which I have to skip. How can I do this?
I am not supposed to use the batch file option.
Thanks.
The answer is a really simple setting change.
I assume your input source data type is delimited text.
Go to your channel -> Summary tab -> Set data type -> Source 1 Inbound Properties -> and set Number of header records to 3.
Mirth will then skip the first 3 lines of the file, as they will be considered headers.
If there is some method of identifying the header records in the file, you can add a source filter that uses a regular expression to identify and ignore those records.
Such a result can be achieved using the Attachment script on the Summary tab. There you deal with the message in its raw format. Thus, if your file contains three lines of comments and the first message then starts with the MSH segment, you may use regular JavaScript functions to strip everything up to MSH. The same is true of the Preprocessor script, and it's even more logical to do such a transformation there. The difference is that Mirth does not store the message before it hits the Attachment handler, but it does store it before the Preprocessor handles the message.
Contrary to that, the source filter deals with the message serialized to an E4X XML object, where the serialization process may fail because of the header (depending on the inbound message data type settings).
As a further reading I would recommend the "Unofficial Mirth Connect Developer's Guide".
(Disclaimer: I'm the author of this book.)
In my implementation the header content stays the same, so I know in advance how many lines the header will take; inside the source filter I am using the following code:
delete msg["row"][1];
delete msg["row"][1];
return true;
I am using the delete statement twice because after executing the first delete statement, msg will have one less row; if the header occupies more than a single row, the second delete statement is required.

Exporting a log from iPhone application

One of the features in my application is a log where a user can add log entries. I want to make it possible for the user to export this data. However, I do not know which format I should use. The data looks like this:
A date, distance, duration, and up to four category names. I want to make it possible to send it by mail or open it with Dropbox using the URL scheme, if the user has Dropbox.
I have read about the CSV format, but I don't know if that is a good file format. My main concern is that the user does not have a fixed number of categories (it could be anywhere between 1 and 4).
Seeing as the columns of data to be exported will be dynamic, depending on what the user selects, there's nothing wrong with this.
I think .csv is fine for this purpose as well - but you need to ask yourself what the user will be doing with the data. You could either offer multiple export formats or whatever single best-for-purpose format suits your average user.
CSV (comma-separated values) is simple (and adds very little overhead - just the commas), but not terribly flexible. This is good for importing into Microsoft Excel, for instance.
You should consider using XML (the same underlying format used for plists), which is very flexible (future-proof, should you wish to add additional columns) and a well-supported format.
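The variable-column concern is easy to handle in CSV by emitting only the categories an entry actually has and quoting any field that contains a comma or quote. The logic is language-agnostic; here is a rough sketch (shown in Java, with illustrative field names - the original question is about an iPhone app, so this is format logic only, not platform code):

```java
import java.util.List;

// Minimal CSV row builder. Columns are illustrative: date, distance,
// duration, then 1-4 category names. Fields containing commas, quotes,
// or newlines are wrapped in double quotes, with inner quotes doubled
// (the usual CSV quoting convention).
class CsvExport {

    static String escape(String field) {
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    static String row(String date, String distance, String duration,
                      List<String> categories) {
        StringBuilder sb = new StringBuilder();
        sb.append(escape(date)).append(',')
          .append(escape(distance)).append(',')
          .append(escape(duration));
        for (String category : categories) { // only the categories that exist
            sb.append(',').append(escape(category));
        }
        return sb.toString();
    }
}
```

A consumer such as Excel will simply see shorter rows for entries with fewer categories, which most spreadsheet tools accept without complaint.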

Keeping track of refreshes in Crystal Reports 2008

I am curious to know if there is a way to tell whether a report has been printed or run. For example, the user enters an inspection number, hits apply, and then prints the report. Can I know if the report has been printed? Is there a way to use local variables to track that, some sort of loop?
I've never tested this, but here's a theory you can try.
In your Database Expert, go to your Current Connections and Add Command. Use this to write a SQL query that saves the usage data to a table in your data source. (If your data source is read-only, just add a delimited text file as an additional data source and output your usage data to that instead.)
The best example I have of this is http://www.scribd.com/doc/2190438/20-Secrets-of-Crystal-Reports. On page 39, you'll see a technique for creating a table of contents that more or less uses this approach.