Validating sequence of tags in FIX message - fix-protocol

I need to validate the sequence/order in which the fields come in a FIX message.
Is it possible to validate the sequence of tags in a FIX message using QuickFIX/J or QuickFIX/n?
In the messages I'm getting from the server, certain tags arrive in an order which is not expected: I expect them to come after certain repeating groups, but they come before the repeating groups. Hence, I need to write a script which will check the tag sequence of incoming messages from the server and compare it against the standard message definitions.
Can someone suggest any good open source libraries to achieve this?

The order of FIX tags is not constrained unless they are inside repeating groups. Your application should not be throwing any errors based on tag order.
If you are using QuickFIX/J you should switch on message validation in your QuickFIX/J settings, which will check that mandatory tags are present and that tags within repeating groups are in order; but I doubt any FIX engine will check overall tag order, as this is not part of the protocol.
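If my memory of the setting names is right, the relevant session settings would look something like the sketch below (the dictionary file name is just an example; point it at your own, possibly customised, dictionary):

    [DEFAULT]
    UseDataDictionary=Y
    DataDictionary=FIX44.xml
    ValidateFieldsOutOfOrder=Y
    ValidateUnorderedGroupFields=Y
    ValidateFieldsHaveValues=Y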

Further to user1717259's answer, it sounds to me like you're trying to configure your data dictionary in a particular way, and you may want to have a look at something like this or this
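If you would rather run a standalone check against the dictionary (closer to the "script" you describe) than rely on the engine's session-level validation, a minimal, untested sketch with QuickFIX/J could look like this; the FIX44.xml file name and the single command-line argument are assumptions made for the example:

    import quickfix.DataDictionary;
    import quickfix.Message;

    public class FixOrderCheck {
        public static void main(String[] args) throws Exception {
            // Load the message definitions you treat as "standard"
            // (FIX44.xml ships with QuickFIX/J; use your customised copy if you have one).
            DataDictionary dictionary = new DataDictionary("FIX44.xml");

            String raw = args[0]; // one raw, SOH-delimited FIX message

            // Parsing with the dictionary and doValidation=true already rejects fields
            // that appear out of order inside repeating groups; validate() additionally
            // checks required fields and field values.
            Message message = new Message();
            message.fromString(raw, dictionary, true);
            dictionary.validate(message);

            System.out.println("Message is structurally valid");
        }
    }

If parsing or validation fails, the exception message should point at the offending tag.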

Related

Why is my ctp not getting any data for one of the tables I am subscribing to?

I have a few ctps subscribing to a tp.
The subscription is established with no problems, but data doesn't seem to hit one of my ctps.
I have 2 ctps subscribing to the same table; one is getting data, the other isn't.
I checked .u.w and I can see the handles open for the said table, but when I check the upd on my ctp it receives all other tables except this one.
The upd on my ctp is a simple insert. I cannot see any data at all for the table; the t parameter is never set to the name of the table I am interested in. I don't know what else to check; any suggestions would be greatly appreciated. The pub logic is the default pub logic.
There are no errors in the tp.
UPDATE 1: I can send other messages and I receive data from the tp for other tables. The issue doesn't seem to persist in DR, just prod. I cannot debug much in prod.
Without seeing more of your code it's hard to give a good answer.
A couple of things you could try:
Check if you can send a generic message (e.g. h(+;1;2)) from your tp to the ctp via the handle in .u.w; this will make sure the connection is ok.
If you can send a message, then you can check whether the issue is in your ctp. You can see exactly what is being sent by adding some logging to your upd function, or, if you think the message isn't getting that far, to your .z.ps message handler, e.g. .z.ps:{0N!x;value x} will perform some very basic client-side logging.
If you can't send a message down the handle in the tp then it's possible there's other network issues at play (although I expect you would be seeing errors in your tp if that was the case). You could check .z.W in your ctp in this case to see if the corresponding handle for the tp is present there.
You can also send a test update to your tickerplant and add logging at each step along the way if you really want to see the chain of events, but this could be quite invasive.

Reset TFDMemTable to default (no field definitions) at Runtime

I have a raw TFDMemTable at design time. I only activate it at runtime and simultaneously display its data through a grid component. The fields will be defined at runtime depending on the data source (a REST API) and the use case.
At runtime, I need to reset the TFDMemTable to its default state, meaning remove all the field definitions so it can accept fresh data and new field definitions.
Currently, the fields set by the first run at runtime are fixed, and the table does not accept any new field definitions. I am contemplating creating the TFDMemTable at runtime, but I still have to figure that out. I am hoping there is a better way.
Real quick question: how can I reset the TFDMemTable to its default (no field definitions) at runtime?
UPDATE 1:
TFDMemTable will receive JSON data from an API. This API returns data with an unknown number of columns/fields, meaning the fields can only be determined once the JSON data is received. Hence, each time I call the API, the TFDMemTable should redefine all its fields to be able to capture the API fields.
So, from my understanding, if I could reset the TFDMemTable I could avoid this issue.
You can use TFDMemTable.ClearFields to remove all of the existing field definitions.

How can I know what component failed?

When using the On SubJob Error trigger, I would like to know what component failed inside the subjob. I have read that you can check the error message of each component and select the one that is not null. But I feel this practice is bad. Is there any variable that stores the identity of the component that failed?
I may be wrong, but I'm afraid there isn't. This is because globalVar elements are component-scoped (i.e. they are get/set by the components themselves), not subjob-scoped (which would mean being set by Talend itself, or something like that). When the subjobError signal is triggered, you lose any component-based data coming from tFileInputDelimited. For this design reason, I don't think you will be able to solve your problem without iterating inside the globalMap, searching for the error strings here and there.
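If you do go the globalMap route, here is a rough, untested sketch of that scan (for example in a tJava placed in the branch triggered by On SubJob Error; it assumes the usual <component>_ERROR_MESSAGE keys are what you are after):

    // Scan globalMap for any component that has published an error message.
    for (Object keyObj : globalMap.keySet()) {
        String key = String.valueOf(keyObj);
        Object value = globalMap.get(key);
        if (key.endsWith("_ERROR_MESSAGE") && value != null) {
            // Strip the suffix to recover the component name, e.g. "tFileInputDelimited_1".
            String failedComponent = key.substring(0, key.length() - "_ERROR_MESSAGE".length());
            System.err.println("Failed component: " + failedComponent + " -> " + value);
        }
    }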
Alternatively you can use tLogCatcher, which has an 'origin' column, to spot the offending component and then route to different recovery subjobs depending on which component threw the exception. This is not a design I trust too much, actually, because tLogCatcher is job-scoped, while OnSubjobError is directly linked to a specific subjob only. But it could work in simple cases.

How to skip the header while processing a message in Mirth

I am using Mirth 3.0. I have a file containing thousands of records. The txt file has a 3-line header. I have to skip this header. How can I do this?
I am not supposed to use the batch file option.
Thanks.
The answer is a real simple setting change which you need to make.
I think your input source data type is delimited text.
Go to your channel -> Summary tab -> Set data type -> Source 1 Inbound Properties -> Number of header records, and set it to 3.
What Mirth will do is skip the first 3 records of the file, as they will be considered headers.
If there is some method of identifying the header records in the file, you can add a source filter that uses a regular expression to identify and ignore those records.
Such a result can be achieved using the Attachment script on the Summary tab. There you deal with the message in its raw format; thus, if your file contains three lines of comments and then the first message starts with the MSH segment, you may use regular JavaScript functions to strip everything up to the MSH. The same is true of the Preprocessor script, and it's even more logical to do such a transformation there. The difference is that Mirth does not store the message before it hits the Attachment handler, but it does store it before the Preprocessor handles the message.
Contrary to that, the source filter deals with the message serialized to the E4X XML object, where the serialization process may fail because of the header (it depends on the inbound message data type settings).
As a further reading I would recommend the "Unofficial Mirth Connect Developer's Guide".
(Disclaimer: I'm the author of this book.)
In my implementation the header content remains the same, so I know in advance how many lines the header is going to take; hence, inside the source filter I am using the following code.
delete msg["row"][1];
delete msg["row"][1];
return true;
I am using the delete statement twice because, after executing the first delete statement, msg will have one less row, and if the header occupies more than a single row then a second delete statement is required.

Workflow to support multiple scenarios

I am building a base workflow that will support around 25 customers.
All customers match one basic workflow, but each one has slightly different requirements; let's say one customer wants to send an email and another one doesn't.
What I am thinking of doing:
1- Make one workflow and, wherever requirements differ, add a switch that checks who the user is and branches to that user's requirements.
(Advantages) This approach is easy to maintain, and any common requirements are easy to add.
(Disadvantages) The number of customers may grow to something like 100, each one different, so we would have 100 users on the same workflow, each with slightly different requirements.
2- Make a different workflow for each customer, which means I would have 100 workflows in the future, and at declaration instantiate the object from the specific workflow related to the current user.
(Advantages) Each workflow is separate.
(Disadvantages) It is hard to add a simple feature; it means writing the same thing 100 times, which is not professional.
So what do I need?
I want to know whether these are the only approaches available in this situation, or whether I am missing another technique.
One way would be to break out your workflow into smaller parts, each which do a specific thing. You could organize a layout like the following, to be able to support multiple variations of the inbound request.
Customer1-Activity.xaml
- Common-Activity1.xaml
- Common-Activity2.xaml
Customer2-Activity.xaml
- Common-Activity1.xaml
- Common-Activity2.xaml
For any new customers you have, you only need to create a root XAML activity, with each having the slight changes required for your incoming request parameters.
Option #2: Pass in a dictionary to your activity
I thought of a better idea: you could have your workflow take a Dictionary<string, object> as an input argument. The dictionary can contain the parameter/argument set that was given to your workflow. Your workflow could then query the parameter set to initialize itself with that info.