How to use the HL7 encoding transformer connector in Mule?

I am new to Mule Anypoint Studio; can anyone help with the usage of the HL7 encoding transformer connector? I have created an application using the HL7 encoding transformer, but it just transfers the file from one directory to another without transforming it to XML.
The flow I have made is as follows:
FILE - logger - hl7 encoding transformer connector - logger - file

Did you check the MuleSoft documentation? You can see some examples there:
https://docs.mulesoft.com/healthcare-toolkit/v/3.0/hl7-edi#hl7-and-mllp-dataweave-example
https://docs.mulesoft.com/healthcare-toolkit/v/3.0/hl7-edi#use-hl7-edi-inside-mule-flows

Related

Compress to tar file in Azure Data Factory

My source is a SQL DB and my sink is Blob storage. I need to create a tar file on the sink side (Blob storage).
I have chosen tar as the compression type and optimal as the compression level, but it throws the error shown below.
(error screenshot)
But when I tried ZipDeflate it works; the requirement is to compress to tar in the output. Can anyone help me?
As per the official documentation, only the following file formats and compression codecs are supported by the Copy activity in Azure Data Factory:
Avro format
Binary format
Delimited text format
Excel format
JSON format
ORC format
Parquet format
XML format
Regarding .tar compression, refer to this Stack Overflow answer by DraganB.
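Since the Copy activity itself cannot write a .tar sink, the archive generally has to be built outside it, for example in an Azure Function or a Custom activity that runs after the copy. Below is a minimal, hedged Java sketch using Apache Commons Compress, assuming the copied files are already available on local disk; the class and method names are illustrative and not part of ADF:
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

public class TarBuilder {

    // Package already-downloaded files into a single .tar archive.
    public static void buildTar(File outputTar, File... inputs) throws IOException {
        try (TarArchiveOutputStream tar =
                     new TarArchiveOutputStream(new FileOutputStream(outputTar))) {
            for (File input : inputs) {
                // Entry name inside the archive = the original file name
                TarArchiveEntry entry = new TarArchiveEntry(input, input.getName());
                tar.putArchiveEntry(entry);
                Files.copy(input.toPath(), tar); // stream the file's bytes into the archive
                tar.closeArchiveEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Illustrative file names only
        buildTar(new File("output.tar"), new File("exported_data.csv"));
    }
}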

Apache NiFi - Move table content from Oracle to Mongo DB

I am very new to Apache NiFi. I am trying to migrate data from Oracle to MongoDB as per the screenshot in Apache NiFi. I am failing with the reported error. Please help.
Up to PutFile I think it's working fine, as I can see the JSON-format file below in my local directory.
A simple setup goes directly from the Oracle database to MongoDB without SSL or a username and password (not recommended for production).
Just keep tinkering with the PutMongoRecord processor until you resolve all outstanding issues and the exclamation mark is cleared.
I first use an ExecuteSQL processor, which returns the dataset in Avro; I need the final data in JSON. For the DB connection pooling service, you need to create a controller service with the credentials of your Oracle database. After that I use Split Avro and then Transform XML to convert it into JSON; in Transform XML you need to supply an XSLT file. Finally, I use the PutMongo processor to ingest the JSON, which gets automatically converted to BSON.

When using the Spring Cloud Data Flow SFTP source starter app, the file_name header is not found

The Spring Cloud Data Flow SFTP source starter app states that the file name should be in the headers (mode=contents). However, when I connect this source to a log sink, I see a few headers (like Content-Type) but not the file_name header. I want to use this header to upload the file to S3 with the same name.
spring server: Spring Cloud Data Flow Local Server (v1.2.3.RELEASE)
my apps are all imported from here
stream definition:
stream create --definition "sftp --remote-dir=/incoming --username=myuser --password=mypwd --host=myftp.company.io --mode=contents --filename-pattern=preloaded_file_2017_ --allow-unknown-keys=true | log" --name test_sftp_log
Configuring the log application with --expression=#root --level=debug doesn't make any difference. Also, when writing my own sink that tries to access the file_name header, I get an error message that no such header exists.
Log snippets from the source and sink are in this gist.
Please follow the link below. You need to code your own source and populate such a header manually downstream, after the FileReadingMessageSource, and only then send the message with the content and the appropriate header to the target destination.
https://github.com/spring-cloud-stream-app-starters/file/issues/9
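For reference, here is a minimal, hedged sketch of such a custom source using the Spring Integration Java DSL (current package names); the directories, poller interval, and bean names are illustrative, and the sftpSessionFactory bean is assumed to be defined elsewhere:
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

import com.jcraft.jsch.ChannelSftp.LsEntry;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.FileHeaders;
import org.springframework.integration.file.remote.session.SessionFactory;
import org.springframework.integration.sftp.dsl.Sftp;

@EnableBinding(Source.class)
@SpringBootApplication
public class SftpWithFileNameSourceApplication {

    public static void main(String[] args) {
        SpringApplication.run(SftpWithFileNameSourceApplication.class, args);
    }

    @Bean
    public IntegrationFlow sftpSourceFlow(SessionFactory<LsEntry> sftpSessionFactory) {
        return IntegrationFlows
                .from(Sftp.inboundAdapter(sftpSessionFactory)
                              .remoteDirectory("/incoming")                 // as in the stream definition
                              .localDirectory(new File("/tmp/sftp-local")), // illustrative local sync dir
                      e -> e.poller(Pollers.fixedDelay(5000)))
                // the payload here is the synced local java.io.File;
                // expose its name as the file_name header before converting to contents
                .enrichHeaders(h -> h.headerExpression(FileHeaders.FILENAME, "payload.name"))
                // mirror mode=contents: emit the file body while keeping the header
                .transform(File.class, f -> {
                    try {
                        return new String(Files.readAllBytes(f.toPath()), StandardCharsets.UTF_8);
                    } catch (IOException ex) {
                        throw new UncheckedIOException(ex);
                    }
                })
                .channel(Source.OUTPUT)
                .get();
    }
}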

IBM Integration Bus: The PIF data could not be found for the specified application

I'm using IBM Integration Bus v10 (previously called IBM Message Broker) to expose COBOL routines as SOAP Web Services.
COBOL routines are integrated into IIB through MQ queues.
We have imported some COBOL copybooks as DFDL schemas in IIB, and the mapping between SOAP messages and DFDL messages is working fine.
However, when the message reaches a node where a serialization of the message tree has to take place (for example, a FileOutput node or an MQ request), it fails with the following error:
"The PIF data could not be found for the specified application"
This is the last part of the stack trace of the exception:
RecoverableException
File:CHARACTER:F:\build\slot1\S000_P\src\DataFlowEngine\TemplateNodes\ImbOutputTemplateNode.cpp
Line:INTEGER:303
Function:CHARACTER:ImbOutputTemplateNode::processMessageAssemblyToFailure
Type:CHARACTER:ComIbmFileOutputNode
Name:CHARACTER:MyCustomFlow#FCMComposite_1_5
Label:CHARACTER:MyCustomFlow.File Output
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:2230
Text:CHARACTER:Caught exception and rethrowing
Insert
Type:INTEGER:14
Text:CHARACTER:Kcilmw20Flow.File Output
ParserException
File:CHARACTER:F:\build\slot1\S000_P\src\MTI\MTIforBroker\DfdlParser\ImbDFDLWriter.cpp
Line:INTEGER:315
Function:CHARACTER:ImbDFDLWriter::getDFDLSerializer
Type:CHARACTER:ComIbmSOAPInputNode
Name:CHARACTER:MyCustomFlow#FCMComposite_1_7
Label:CHARACTER:MyCustomFlow.SOAP Input
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:5828
Text:CHARACTER:The PIF data could not be found for the specified application
Insert
Type:INTEGER:5
Text:CHARACTER:MyCustomProject
It seems like something is missing in my deployable BAR file. It's important to say that my application has the message flow and it depends on a shared library that has all the .xsd files (DFDLs).
I suppose that the schemas are OK, as I've generated them using the Toolkit wizard, and the message parsing works well. The problem is only with serialization.
Does anybody know what may be missing here?
OutputRoot.Properties.MessageType must contain the name of the message in the DFDL schema. Additionally when the DFDL schema is in a shared library, OutputRoot.Properties.MessageSet must contain the name of the library.
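As an illustration, these two fields can be set explicitly just before the output node, either in ESQL (SET OutputRoot.Properties.MessageSet / MessageType) or in a JavaCompute node. A minimal, hedged JavaCompute sketch, where the library and message names are placeholders for your own shared library and DFDL message:
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;

public class SetDfdlProperties extends MbJavaComputeNode {

    @Override
    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        // Copy the incoming message so the Properties folder can be modified
        MbMessage outMessage = new MbMessage(inAssembly.getMessage());
        MbMessageAssembly outAssembly = new MbMessageAssembly(inAssembly, outMessage);

        MbElement properties = outMessage.getRootElement().getFirstElementByPath("Properties");
        // Placeholders: the shared library that contains the DFDL schema,
        // and the name of the message defined in that schema
        properties.getFirstElementByPath("MessageSet").setValue("MySharedLibrary");
        properties.getFirstElementByPath("MessageType").setValue("MyDfdlMessage");

        getOutputTerminal("out").propagate(outAssembly);
    }
}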
Sounds as if OutputRoot.Properties is not pointing at the shared library. I cannot remember which subfield does that job - it is either OutputRoot.Properties.MessageType or OutputRoot.Properties.MessageSet.
You can easily check - just inspect the contents of InputRoot.Properties after an input node that has used the same shared library.
I faced a similar problem. In my case, a message flow with an HTTPRequest node using a DFDL domain parser/format to parse an HTTP response from the remote system threw this error ("The PIF data could not be found for the specified application"). Re-selecting the same parser domain and message type on the node, followed by a build and redeploy, solved the problem. It seemed to be a project-reference-related issue within the IIB Toolkit.
You need to create static libraries and reference them from the application.
In the Compute node, your coding is based on the DFDL body.

How to get the Talend connector output in Bonita

I'm using Bonita and I would like to know how I can get the Talend connector output and store it.
My case is very simple: I have just a tJava component in my Talend job and the code is:
String foo = "bar";
System.out.println(foo);
Now, how can I get this output in Bonita, please?
The output parameter of this connector can be retrieved on the last page ("Output operations") of the connector's configuration wizard.
FYI, the connector output is of type:
java.lang.String[][]
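So, for example, if you map that output to a process variable on the "Output operations" page with a small script, the returned table can be unwrapped like this. This is a hedged, script-style fragment: jobOutput is an illustrative name for the connector's output parameter, so use whatever name the wizard actually shows:
// jobOutput stands for the connector's java.lang.String[][] output parameter
String[][] jobOutput = /* connector output parameter */ new String[][] { { "bar" } };

// Take the first cell of the returned table, guarding against an empty result
String firstValue = (jobOutput != null && jobOutput.length > 0 && jobOutput[0].length > 0)
        ? jobOutput[0][0]
        : null;

System.out.println("Talend job returned: " + firstValue);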
If you wish to learn more about the connector, you can have a look at its implementation here:
https://github.com/bonitasoft/bonita-connector-talend/blob/master/bonita-connector-talend-joblauncher-impl/src/main/java/org/bonitasoft/connectors/talend/JobLauncherConnector.java
Hope this helps.
POZ