A FIX message is broken into two parts. How can the two parts be linked to rebuild the complete FIX message? - fix-protocol

Let's say we get FIX messages from an upstream system in the form of XML messages, but each FIX message is broken into two XML messages. All of these messages are stored, in XML format, in a table.
We receive thousands of these every day. How can we link each pair of messages to form one complete order message/XML?
For example, the XMLs are loaded into the table as:
Row#1
Fixtag8=FIX 4.2, Fixtag9=811,…other FIX tags…, Fixtag123456=C1=
Row#2
|116=Eclipse,…other FIX tags…Fixtag10=122
Similarly, we have thousands of rows of XML data in the table, but we have not been able to work out how to link each pair of broken FIX messages.
Is there a way to do this?
Let me know if I need to furnish more details.
Thanks
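One way to pair the halves, assuming the rows land in the table in arrival order: treat a row whose payload starts with tag 8 (BeginString) as a first half, and the next row as its continuation, since a complete FIX message always begins with tag 8 and ends with the tag 10 (CheckSum) field. A sketch in Java (the class name is mine; adapt the startsWith test to however tag 8 actually appears inside your XML wrapper):

import java.util.ArrayList;
import java.util.List;

public class FixRowPairer {
    // Stitches half-messages read in insertion order. Assumes a first half
    // starts with tag 8 (BeginString) and a second half carries the trailing
    // tag 10 (CheckSum) field, as in the Row#1/Row#2 example above.
    public static List<String> pair(List<String> rowsInInsertOrder) {
        List<String> complete = new ArrayList<>();
        String firstHalf = null;
        for (String row : rowsInInsertOrder) {
            if (row.startsWith("8=")) {        // a new message begins
                firstHalf = row;
            } else if (firstHalf != null) {    // continuation row: stitch and emit
                complete.add(firstHalf + row);
                firstHalf = null;
            }
            // rows matching neither case need manual inspection
        }
        return complete;
    }
}

Tag 9 (BodyLength, 811 in the example) and tag 10 also let you verify a stitched pair: recompute the body length and checksum of the joined message and compare them against the stored values.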

Is there any way to check for a duplicate message control id (MSH:10) in the MSH segment using Mirth Connect?

MSH|^~&|sss|xxx|INSTANCE2|KKLIU 0063/2021|20190905162034||ADT^A28^ADT_A05|Zx20190905162034|P|2.4|||NE|NE|||||
Whenever a message enters, it needs to be validated whether a message with control id Zx20190905162034 has already been processed.
Mirth will not do this for you, but you can write your own JavaScript transformer to check a database or your own set of previously encountered control ids.
Your JavaScript can make use of any appropriate Java classes.
The database check (you can implement this using a code template) is the easier way out. You might want to designate the column storing MSH:10 values as a primary key, or define an index on it, since queries against unique entries are faster. Alternatives include reading all MSH:10 values already in the database into a global map variable on (periodic) channel redeploys, or maintaining an API that you can make a GET request to while processing each message. Which option fits best depends on the number of records involved.
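A minimal sketch of the database check as plain JDBC, which the Mirth JavaScript step can call since Rhino scripts use Java classes directly; the processed_messages table and control_id column are hypothetical names:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

public class ControlIdCheck {
    // Insert-first: the primary key on control_id does the duplicate detection.
    // Most JDBC drivers signal a key violation with this exception subclass;
    // some throw a plain SQLException, so check your driver.
    public static boolean isDuplicate(Connection conn, String controlId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO processed_messages (control_id) VALUES (?)")) {
            ps.setString(1, controlId);
            ps.executeUpdate();
            return false; // first time this control id is seen
        } catch (SQLIntegrityConstraintViolationException e) {
            return true;  // key violation: already processed
        }
    }
}

In the transformer you would then skip or reject the message when isDuplicate(...) returns true, passing in the control id pulled from the MSH:10 field.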

Validating QuickFix/N Repeating Group Where First Two Fields Swap Order

I am implementing a client to connect to a server which, as far as I can tell, uses a hybrid of FIX 4.2 and FIX 4.4.
The server sends group 453 (NoPartyIDs) with fields in a non-standard order when some events occur.
According to the specification document, the first field should be PartyID (448). With certain messages, the first field in the group is PartyIDSource (447) and the message is rejected. PartyIDSource is the second field in the group as per the specification.
I get the following error:
<event> Message 140 Rejected: Group 453's first entry does not start with delimiter 448 (Field=453)
From the documentation and trial and error, I cannot find a way through this issue. Amongst a few guesses, I have tried adding field 447 as the first (non-required) field in the group definition in the data-dictionary. I have also set ValidateFieldsOutOfOrder to N in the config.
Is there something I can do to not reject and process the message?
Relevant documentation:
Groups are a little more nuanced than other parts of the Data Dictionary.
A group is defined within a message, with the group tag. The first child element of the group tag is the group-counter tag, followed by the other fields in the group in the order in which they should appear in the message.
ValidateFieldsOutOfOrder is not relevant here, so you can take that out.
If I understand you correctly, you're saying that
sometimes 447 comes before 448
but at other times 448 comes before 447
If this is true, then unfortunately your counterparty is being really stupid. Per the FIX spec, fields in repeating groups are supposed to appear in a consistent order. (Also, the first field of each group-sequence is always required to be present.) If they're flip-flopping fields, they're violating FIX.
If the order was consistent, you would just edit your DD to change the order, and it sounds like you tried that. But if your counterparty is flip-flopping, then your DD will always be wrong part of the time.
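For reference, "edit your DD to change the order" would look roughly like this hypothetical excerpt (in QuickFIX dictionaries the group element is named after the counter tag, here NoPartyIDs/453, and its first child field is treated as the delimiter):

<!-- hypothetical data-dictionary excerpt with 447 promoted to delimiter -->
<group name="NoPartyIDs" required="N">
  <field name="PartyIDSource" required="Y"/>
  <field name="PartyID" required="N"/>
  <field name="PartyRole" required="N"/>
</group>

But, as above, if the counterparty alternates between 447-first and 448-first, no single ordering will validate both variants.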
I don't have a good answer for you. QF/n is not designed to handle all the ways that counterparties do FIX wrong (nor should it be).
Your counterparty's implementation is sloppy. Try contacting their support and seeing if they'll fix it?

Java IMAP, performance issues, fetching all mails

I will use the JavaMail API to handle mail like Thunderbird etc. I have to fetch folders holding 1000s of messages. My design will be: when the user performs a sync on a folder, I will get all UIDs of the messages in the folder:
Message[] msgs = ufolder.getMessagesByUID(1, UIDFolder.LASTUID);
// Use a suitable FetchProfile so envelopes and flags are downloaded in bulk
FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.ENVELOPE);
fp.add(FetchProfile.Item.FLAGS);
ufolder.fetch(msgs, fp); // without this call the profile is never applied
I will then compare the list of UIDs with the list stored in my DB.
For the deleted ones (a message that is not in the folder but is in the DB), I will mark the message as deleted.
For the new ones (a message that is in the folder but not in the DB), I will mark it as possibly new. But because message UIDs are not safe (the mail server can change them in some cases), for the new mails I will additionally use a custom MD5 hash built from the Message-ID header + subject + received date. Only for the possibly new mails will I use this hash to catch the truly new ones.
For moved messages, the UIDs change in the new folder, so a message will be flagged as deleted in the first folder and as new in the second, but it will keep the same custom hash value, because the Message-ID header and the other properties stay the same during the move.
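A sketch of such a fingerprint in JavaMail terms (the class and method names are mine):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.mail.Message;

public class MessageFingerprint {
    // MD5 over headers that survive a move between folders, unlike the UID.
    public static String of(Message msg) throws Exception {
        String[] ids = msg.getHeader("Message-ID");             // may be null
        String messageId = (ids != null && ids.length > 0) ? ids[0] : "";
        String subject = msg.getSubject() != null ? msg.getSubject() : "";
        String received = msg.getReceivedDate() != null ? msg.getReceivedDate().toString() : "";
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest((messageId + "|" + subject + "|" + received).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}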
The performance question: on each click on a folder (folder sync), I will compare all UIDs in the folder against the local UID list stored in the DB to detect the deleted ones. I could not find a better way to accomplish this. As you know, Thunderbird catches a deleted message immediately without a re-login, even if the folder is large and the deleted message is very old (5 years). I think Thunderbird also compares all message UIDs in the folder with a list stored locally.
How can I implement a better-performing sync mechanism? Does Thunderbird apply a different approach? How does it accomplish this so quickly?
If we were interested only in new messages, I could keep the last stored UID and compare only messages newer than that; but for the deleted ones I still have to compare the full folder. Additionally, the UIDNEXT value is always -1 on my mail server, and even if it were set correctly it would not help find the deleted ones. A full compare seems to be a must. Am I wrong?
Note: I cannot use message listeners, because the application is client-server based, the mail handling task sits on the server side, and we do not support threads, listeners, etc. Events are triggered from the client, the request is processed on the server, a response is returned, and the client renders it in the GUI.
What you want is called CONDSTORE or QRESYNC (quick resync), RFC 7162 in both cases. That's what Thunderbird uses.
That's a pair of extensions that support commands like "give me all the UIDs that have changed since I last connected", "tell me what's been deleted", and so on.
If you can't use threads to listen for these events from the mail server, your options are very limited. Probably the best thing you can do is limit the resynchronization to the messages that are visible to the client.
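If your server does advertise QRESYNC, JavaMail (1.5.1+) exposes it through com.sun.mail.imap. A sketch, assuming the UIDVALIDITY and HIGHESTMODSEQ values saved from the previous sync live in your DB:

import java.util.List;
import javax.mail.Folder;
import javax.mail.event.MailEvent;
import com.sun.mail.imap.IMAPFolder;
import com.sun.mail.imap.MessageVanishedEvent;
import com.sun.mail.imap.ResyncData;

public class QresyncSketch {
    // uidValidity and highestModSeq come from the previous sync of this folder.
    static void resync(IMAPFolder folder, long uidValidity, long highestModSeq) throws Exception {
        // Opening with ResyncData issues QRESYNC: the server reports only
        // what changed since highestModSeq - no full-folder compare needed.
        List<MailEvent> events = folder.open(Folder.READ_ONLY,
                new ResyncData(uidValidity, highestModSeq));
        if (events == null) return; // nothing reported (or QRESYNC unavailable)
        for (MailEvent e : events) {
            if (e instanceof MessageVanishedEvent) {
                long[] vanishedUids = ((MessageVanishedEvent) e).getUIDs();
                // ... mark these UIDs as deleted in the DB ...
            }
        }
    }
}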

Mirth: How to send to a destination multiple times

I would like to read an HL7 message containing multiple orders (ORC segments). My destination is a web service sender which can only handle one order at a time.
How can I iterate through the input HL7 message and send to a destination each time?
Thanks for any help.
As stated above, there are several ways to do this.
I normally did this in a JavaScript transformer step. I basically built a small state engine that iterates over the incoming (raw) message by splitting it on '\r' characters. It builds the outbound message as a string by identifying the "header" section (the part that does not change) and storing it in one string, say Header, and the order section (the part that does change) in another string, say Order. On reaching the next order or the end of the message, it concatenates the two and sends the result to another channel with
router.routeMessage('channelName', Header + '\r' + Order);
You may create another channel that communicates with your web service and route each ORC portion to that channel; the splitting logic is sketched below.
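The state engine itself is only a few lines. Here it is sketched in Java for clarity; in Mirth it would live in the JavaScript transformer step and end in the router.routeMessage call shown above. Treating everything before the first ORC as the fixed header is my assumption:

import java.util.ArrayList;
import java.util.List;

public class OrderSplitter {
    // Returns one complete outbound message (Header + one Order) per ORC.
    public static List<String> split(String raw) {
        List<String> outbound = new ArrayList<>();
        StringBuilder header = new StringBuilder();
        StringBuilder order = new StringBuilder();
        for (String segment : raw.split("\r")) {
            if (segment.startsWith("ORC")) {
                // A new order starts: flush the previous one, if any.
                if (order.length() > 0) outbound.add(header + order.toString());
                order = new StringBuilder(segment + "\r");
            } else if (order.length() == 0) {
                header.append(segment).append("\r"); // still in the fixed header
            } else {
                order.append(segment).append("\r");  // OBR/NTE etc. of this order
            }
        }
        if (order.length() > 0) outbound.add(header + order.toString());
        return outbound; // each entry is ready for router.routeMessage
    }
}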
For additional information you may read the "Unofficial Mirth Connect Developer's Guide" available at mirthconnect.shamilpublishing.com
(Disclaimer: I'm the author of this guide so any comments or suggestions are welcome.)

What does Template:${message.encodedData} mean in Mirth?

I am trying to learn a Mirth system with a channel that pulls from a database for its source and outputs HL7 messages for its destination(s). The SQL query pulls the correct data from the source, but Mirth does not output all of the data in the right spots in the HL7 message. The destinations show that the output is Template:${message.encodedData}. What does that mean? Where can I see the template it is using? The destinations don't have any filters or transformers, so I am confused.
message.encodedData is the fully transformed message - after any transformation steps.
The transformer is also where you can specify the output template for how you want the data to look. Simply load up a sample template message in the output template of the transformer (message template tab in the transformer) and then create a series of message builder steps. Your output message will be in the variable tmp, and your sql results will be in the variable msg.
So, if your first column is patientID (Select patientID as patientID ...), you would create a message builder step along the lines of
mapped segment: tmp['PID']['PID.3']['PID.3.2']
mapping: msg['patientID'];
I don't have exact syntax in front of me right now, but that's the basic idea.
I think "transformed" is the status of the message right after the transformers are executed and "encoded" message is the status after the message that comes from the transformers is encoded into the specified channel outbound datatype. In some cases those messages will be the same but not in all the cases.
Also, is very difficult to find updated and comprehensive Mirth documentation.