Mirth: How to send to a destination multiple times

I would like to read a HL7 message containing multiple orders (ORC segments). My destination is a web service sender which can only handle one order at a time.
How can I iterate through the input HL7 message and send to a destination each time?
Thanks for any help.

As stated above, there are several ways to do this.
I normally did this in a JavaScript transformer step. I basically built a small state engine that iterated over the incoming (raw) message by splitting it on '\r' characters. It built the outbound message as a string by identifying the "header" section (the part that does not change) and storing it in one string, say Header, and the order section (the part that does change) in another string, say Order, then concatenating them together whenever it reached the next order or the end of the message and sending the result to another channel with
router.routeMessage('channelName', Header + '\r' + Order);
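A minimal sketch of that state engine, assuming Mirth 3.x (raw data available via connectorMessage.getRawData()), that each order group starts with an ORC segment, and a hypothetical receiving channel named 'OrderChannel':

// Split the raw HL7 message into segments on carriage returns.
var segments = connectorMessage.getRawData().split('\r');
var header = '';  // the part repeated in every outbound message (MSH, PID, ...)
var order = '';   // the current order group (ORC and its children)

for (var i = 0; i < segments.length; i++) {
    var segment = segments[i];
    if (segment.indexOf('ORC') == 0) {
        // A new order begins: flush the previous one, if any.
        if (order.length > 0) {
            router.routeMessage('OrderChannel', header + '\r' + order);
        }
        order = segment;
    } else if (order.length > 0) {
        // Segment belongs to the current order group (OBR, OBX, ...).
        order += '\r' + segment;
    } else {
        // Still in the header section.
        header += (header.length > 0 ? '\r' : '') + segment;
    }
}
// Flush the last order once the end of the message is reached.
if (order.length > 0) {
    router.routeMessage('OrderChannel', header + '\r' + order);
}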

You may create another channel that communicates with your web service and route the ORC portion to that channel.
For additional information you may read the "Unofficial Mirth Connect Developer's Guide" available at mirthconnect.shamilpublishing.com
(Disclaimer: I'm the author of this guide so any comments or suggestions are welcome.)


What REST method should be used to implement a simple inter-process communication?

This is more a theoretical question than a practical one.
We have a backend application that uploads CSV files to a frontend application; then, and only then, the backend sends an empty POST request to tell the frontend to start processing those files to update its database.
For this question it doesn't matter whether this is a good design (I think it isn't), what those files are, or what the database is: I only want to understand the REST "syntax" better.
I'm referring to Wikipedia and restfulapi.net, but I'm not convinced by any alternative, because:
GET: the request sender doesn't receive data;
POST (the one currently used): the request sender doesn't want to insert data carried in the request body (the data come from external files, if any, and may result in inserts/updates/deletes);
PUT: sounds good, but again, the data are not in the request body;
PATCH: sounds best, but the data are not in the body (also, am I wrong, or is it deprecated/unused?);
DELETE: doesn't always need to delete.
I know it is common practice to use POST requests to let machines yell "go!" at each other, but I never thought it was right.
What do you think - in theory - would be the proper method?
The authoritative reference for the semantics of the HTTP methods is RFC 7231, not the sites you referenced in your question.
POST is a catch all method and requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
4.3.3. POST
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. For example, POST is used for the following functions (among others):
- Providing a block of data, such as the fields entered into an HTML form, to a data-handling process;
- Posting a message to a bulletin board, newsgroup, mailing list, blog, or similar group of articles;
- Creating a new resource that has yet to be identified by the origin server; and
- Appending data to a resource's existing representation(s).
[...]
Responses to POST requests are only cacheable when they include explicit freshness information. However, POST caching is not widely implemented.
In these scenarios, the receiving application knows where the CSV files will be and monitors that location. When it finds one, it processes it and then deletes or archives it. The application will likely have its own criteria for considering itself ready to process, e.g., time of day, file size, etc.
If the data load on the front end takes a long time, you could "partition" the updates based on "importance". How you define importance would be up to your business rules. You could then POST a list of CSV filenames/locations to the front end, ordered by importance, and the front end could update its database in that order, scheduling less important data for a more appropriate time of day. A sketch of such a request is shown below.
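A minimal sketch of that trigger request in JavaScript; the endpoint URL, field names, and file paths are all hypothetical:

// Backend notifies the frontend which CSV files to process, ordered by
// importance. (Assumes an async context for await.)
const response = await fetch('https://frontend.example.com/imports', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        files: [
            { location: '/uploads/new_users.csv', importance: 1 },
            { location: '/uploads/updated_emails.csv', importance: 2 }
        ]
    })
});
if (!response.ok) throw new Error('Import trigger failed: ' + response.status);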
If the backend knows the difference between new users and updated users, you could use PUT and POST. The front end could assign higher priority to PUT requests, as they relate to new users, perhaps assigning lower priority and staggered syncing to CSV filenames in POST requests.

NiFi email: ConsumeIMAP filter by subject, from, to, and date

Using ConsumeIMAP to read emails from an inbox, I am trying to select only emails that:
- have an attachment to download
- were sent "From" xyz@yahoo.com
- were sent "To" abc@gmail.com
- have "Daily" in their subject
- arrived at 8 am EST
Please let me know if this can be set up in any component. I tried to use EvaluateJsonPath, ExtractEmailHeaders, and RouteOnAttribute, but no luck yet.
It sounds like you have been exploring the correct path. You should be able to achieve this using a flow consisting of:
ConsumeIMAP >> ExtractEmailHeaders >> RouteOnAttribute
ConsumeIMAP will download messages from the email server and create a single FlowFile for each message, storing the email message raw bytes in the FlowFile contents.
ExtractEmailHeaders attempts to parse a FlowFile's contents as email (must be RFC-2822 compliant), extract email headers, and write each header field to a FlowFile attribute, including:
email.headers.from.*
email.headers.to.*
email.headers.subject
email.headers.sent_date
Note that ExtractEmailHeaders does not do any filtering; it just populates FlowFile attributes based on the FlowFile content, making the FlowFiles more easily routable downstream in the flow. Start by creating a flow with just these two processors and verify that the output of the ExtractEmailHeaders processor meets these expectations. If not, it's possible the email messages are malformed or not RFC-2822 compliant.
After you have successfully sent email FlowFiles through ExtractEmailHeaders, you can do the filtering using one or more RouteOnAttribute processors using the NiFi Expression Language to define your match conditions, e.g.:
${email.headers.subject:contains("Daily")}
If you have verified that your flow is working correctly through ExtractEmailHeaders, but the filtering in RouteOnAttribute is not working as expected, make sure your attribute expressions and assumptions about email header values (e.g., capitalization, datetime format) are correct. Consult the Apache NiFi Expression Language Guide and if you have specific questions relating to the expression language itself, search here or post another question on that specifically.
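For example, a single routing property that combines the subject and sender conditions from the question might look like this (a sketch; the .0 index assumes the first "from" header holds the sender address):

${email.headers.subject:contains("Daily"):and(${email.headers.from.0:contains("xyz@yahoo.com")})}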
I hope this helps!

CDA HL7V3 acknowledgement

Using Mirth, I created a channel that receives CDA messages in HL7v3 format.
I'm able to parse the message and extract all the data I need.
My question is: how do I create an acknowledgement to send back to the sender?
I found out that there is a message called MCCI_MT000200UV01 that I need to implement, but I can't find a good explanation and/or examples.
I have been working with HL7v2 for a long time, and there the acknowledgement is very simple.
I can't find a way to implement this in HL7v3 format.
Thanks in advance for your help
I guess you are talking about a generic Accept Acknowledgement message, which is MCCI_IN000002UV02 (according to the HL7v3 NE2014). If I were you, the first thing I'd do is download the HL7v3 Normative Edition that matches the year of the inbound message used to transport the CDA document (unless it's HL7v2). Then I'd go to HL7v3NE > Specification Infrastructure > Messaging > Transmission Infrastructure > Generic Message Transmission and find the Accept Ack interaction. There is a related XML Schema that allows you to build an XML template for the v3 ACK (a tool like XMLSpy does that by default).
Since ACKGenerator does not support HL7v3, the next step is to create a code template function that builds the v3 ACK from the template you acquired in the previous step, as sketched below.
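A minimal sketch of such a code template function; the element subset and the function name are illustrative, not the full normative structure, and UUIDGenerator and DateUtil are Mirth's built-in userutil helpers:

// Builds a skeletal MCCI_IN000002UV02 accept acknowledgement as an XML string.
// A real ACK also carries sender/receiver blocks and an interactionId, which
// you would copy from your schema-derived template.
function buildV3AcceptAck(inboundMessageIdRoot) {
    return '<MCCI_IN000002UV02 xmlns="urn:hl7-org:v3">' +
        '<id root="' + UUIDGenerator.getUUID() + '"/>' +
        '<creationTime value="' + DateUtil.getCurrentDate('yyyyMMddHHmmss') + '"/>' +
        '<acknowledgement typeCode="CA">' +  // CA = accept
        '<targetMessage><id root="' + inboundMessageIdRoot + '"/></targetMessage>' +
        '</acknowledgement>' +
        '</MCCI_IN000002UV02>';
}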
(PS. The whole procedure with samples is explained in an "Unofficial Mirth Connect Developer's Guide" available at mirthconnect.shamilpublishing.com)

What does Template:${message.encodedData} mean in Mirth?

I am trying to learn a Mirth system with a channel that pulls from a database for its source and outputs HL7 messages to its destination(s). The SQL query pulls the correct data from the source, but Mirth does not output all of the data in the right spots in the HL7 message. The destinations show that they are outputting Template:${message.encodedData}. What does that mean? Where can I see the template it is using? The destinations don't have any filters or transformers, so I am confused.
message.encodedData is the fully transformed message - after any transformation steps.
The transformer is also where you can specify the output template for how you want the data to look. Simply load a sample template message into the output template of the transformer (the message template tab in the transformer) and then create a series of message builder steps. Your output message will be in the variable tmp, and your SQL results will be in the variable msg.
So, if your first column is patientID (SELECT patientID AS patientID ...), you would create a message builder step along the lines of
mapped segment: tmp['PID']['PID.3']['PID.3.2']
mapping: msg['patientID'];
I don't have exact syntax in front of me right now, but that's the basic idea.
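In a plain JavaScript transformer step, that same mapping would be a one-line sketch (segment and column names follow the example above):

// Copy the patientID column from the SQL result (msg) into PID.3.2 of the
// output template (tmp).
tmp['PID']['PID.3']['PID.3.2'] = msg['patientID'].toString();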
I think "transformed" is the status of the message right after the transformers are executed and "encoded" message is the status after the message that comes from the transformers is encoded into the specified channel outbound datatype. In some cases those messages will be the same but not in all the cases.
Also, is very difficult to find updated and comprehensive Mirth documentation.

Mirth losing data in mapper variables

I have a database reader channel set up that reads the database at 10-second intervals and sends to a web service just fine. We get a valid response from the WSDL.
However, I need to update the database record so that it is flagged as having been processed; in this case we are simply changing a field from 100 to 101. But when I try to update the field OR send an email containing ANY data that has been stored in mapper variables, I get nothing. The database does not update, and the emails arrive with blank fields.
When I go into the channel messages for processed messages I can see good data in the Raw Message and Encoded Message tabs. There are no values in the Mappings tab.
Any suggestions on troubleshooting?
The run-on-update statement does not have access to the channel map, as it runs after message encoding (and even after the post-processor, I believe).
It DOES have access to the globalChannelMap and the responseMap. Put your new ID in the globalChannelMap and you should be good to go; a sketch follows.
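A minimal sketch, assuming a hypothetical orders table with an id column; the on-update SQL reads the map value through the usual ${...} variable replacement:

// In a transformer step: stash the record ID where the update statement can see it.
globalChannelMap.put('lastProcessedId', msg['id'].toString());

-- In the Database Reader's run-on-update SQL:
UPDATE orders SET status = 101 WHERE id = '${lastProcessedId}'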
If you also want to send an email, I would recommend instead adding an SMTP Writer destination, which will have access to any channelMap variables created in 'Destination 1', as well as the globalChannelMap.