FIX client using QuickFIXN rejecting Quote Cancel messages from server - required tag missing 295 NoQuoteEntries - FIX 4.2 - quickfix

I am currently working on our FIX client to change the StreamingQuoteDuration on our quote requests to 2 minutes in order to work around a max stream limit imposed by our counterparty. I have encountered an issue with the Quote Cancel message that is received after 2 minutes. QuickFIX/n, the FIX library that our client uses, rejects the message stating that it is missing a required field - NoQuoteEntries (tag 295).
Our counterparty claims this is not a required field in their Rules Of Engagement document but I am unable to prevent QuickFIX from rejecting the message. Does anybody know how I can achieve this? I've asked the counterparty to include that tag but they are not able or willing to do so.
We are using the FIX 4.2 protocol. Here are the FIX logs from our quote messages log:
8=FIX.4.2|9=118|35=Z|34=31|49=[Redacted]|56=[Redacted]|52=20210510-10:43:16.428|117=*|298=1|131=EUR-GBP-EUR-1-20210512|10=065
8=FIX.4.2|9=129|35=3|34=15|49=[Redacted]|52=20210510-10:43:16.792|56=[Redacted]|45=31|58=Required tag missing|371=295|372=Z|373=1|10=063

You need to customize your FIX42.xml file (the DataDictionary) to match your counterparty's published Rules of Engagement.
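Concretely: take a copy of the FIX42.xml that ships with QuickFIX/n, flip the required flag on the NoQuoteEntries group under the QuoteCancel (35=Z) message, and point your session configuration at the copy. A sketch of the idea (group contents trimmed; verify the element names and required flags against your actual dictionary):

```xml
<!-- in FIX42-custom.xml, inside <messages> -->
<message name="QuoteCancel" msgcat="app" msgtype="Z">
  <field name="QuoteReqID" required="N"/>
  <field name="QuoteID" required="Y"/>
  <field name="QuoteCancelType" required="Y"/>
  <field name="QuoteResponseLevel" required="N"/>
  <field name="TradingSessionID" required="N"/>
  <!-- was required="Y"; relaxed so a QuoteCancel without tag 295 validates -->
  <group name="NoQuoteEntries" required="N">
    <field name="Symbol" required="Y"/>
    <!-- ... remaining group fields unchanged ... -->
  </group>
</message>
```

Then make sure the session settings reference the customized file, e.g. UseDataDictionary=Y and DataDictionary=FIX42-custom.xml in your QuickFIX/n session .cfg.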

Related

Getting many welcome messages from the same user

I am getting many welcome messages from the same user. Is it some kind of monitoring system run by Google?
How can I ignore those requests?
Yes, Google periodically issues a health check against your Action, usually about every 5-10 minutes. Your Action should respond to it normally so Google knows whether something is wrong. If there is, you will receive an email saying that your Action is unavailable because it is unhealthy. They will continue to monitor it and, once it is healthy again, will restore it.
You don't need to ignore those requests, but you may wish to, either to save on resources or to avoid logging them all the time.
A library such as multivocal detects the health check and responds automatically - there is nothing you need to do. For other libraries, you will need to examine the raw input sent in the body of your webhook request.
If you are using the Actions SDK, examine the inputs array to see if there is an entry with an argument named "is_health_check". If you are using Dialogflow, look under originalDetectIntentRequest.data.inputs.
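A minimal sketch of that check for an Actions SDK webhook body, in plain Python. The payload shape here (an "inputs" array whose entries carry an "arguments" list) follows the description above; treat the exact field layout as an assumption to verify against your own request logs:

```python
import json

def is_health_check(body: str) -> bool:
    """Return True if a webhook body looks like a Google health ping.

    Google marks its periodic ping with an argument named "is_health_check"
    inside one of the entries of the top-level "inputs" array.
    """
    payload = json.loads(body)
    for inp in payload.get("inputs", []):
        for arg in inp.get("arguments", []):
            if arg.get("name") == "is_health_check":
                return True
    return False

# Example: a stripped-down ping body
ping = '{"inputs": [{"arguments": [{"name": "is_health_check", "boolValue": true}]}]}'
```

You would call this early in your handler and short-circuit with a trivial 200 response (and skip logging) when it returns True.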

OpenPop and Web Beacons error with GetMessages

Has anyone working with OpenPop gotten errors when the emails being processed have web beacons in them? I have two services that process inboxes, extract attachments, and create blobs for processing, but whenever an email contains a web beacon (code to signal back to the mothership), OpenPop dies on GetMessages. If I forward the message right back to the same mailbox, the forward removes the web beacon and all is well.
We had to set up an OWA rule that detects messages, for example from quickbooks@notification.intuit.com, and forwards them right back to the same inbox. This automatically cleans out the web beacon, but the sender is no longer known, so we cannot notify them to let them know we received their invoice.
Not sure how to get rid of the web beacons but retain the sender.
Any help appreciated.
Here is where it dies, and what the error is:
Errors I trap
1/3/2017 7:47 PM: ProcessAllMessages - GetAllMessages Exception - Length cannot be less than zero.
Parameter name: length
1/3/2017 7:47 PM: ProcessAllMessages - Retrieved 0 out of 1 email(s) successfully.
We had to move off of OpenPop, as there appear to be some core issues handling certain MIME types that come in by email. Since no one can control the devices or email clients of senders, we needed a more robust solution that handled exceptions rather than quitting on them.
We migrated/rewrote using Exchange Web Services:
https://msdn.microsoft.com/en-us/library/office/dn567668.aspx
It was pretty easy to migrate the code, as we only had to change connections and a few basic objects. The majority of the framework we had written was not changed at all.
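If you are stuck with the forwarding rule in the meantime, one hedged workaround for losing the sender is to record the From header before the message is re-forwarded, by fetching only the headers (which don't contain the beacon-bearing body) and parsing them with a standard MIME parser. A sketch in Python's stdlib; the same idea should carry over to C# with a headers-only fetch or a raw POP3 TOP command (check which headers-only API your OpenPop version actually exposes):

```python
from email.parser import HeaderParser

def sender_from_raw_headers(raw_headers: str) -> str:
    """Parse just the RFC 2822 header block and return the From address."""
    headers = HeaderParser().parsestr(raw_headers)
    return headers.get("From", "")

# Example header block as returned by a headers-only fetch (hypothetical message):
raw = (
    "From: QuickBooks <quickbooks@notification.intuit.com>\r\n"
    "To: inbox@example.com\r\n"
    "Subject: Invoice 123\r\n"
    "\r\n"
)
```

Logging this value keyed by Message-ID before the forward would let you match the cleaned, re-forwarded copy back to its original sender.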

Why does my Github webhook keep timing out?

We couldn’t deliver this payload: Service Timeout
I was successfully sending webhooks to my server 5 minutes ago, and now I just keep getting timeouts. I tried deleting the webhook and re-adding it, and changing the URL it points to, but nothing works.
Am I flooding it with too many pushes, or is GitHub's webhook service just down?
It also turns out that GitHub has a 10-second timeout set on their webhooks. That is what I ran into. See the documentation here.
Unless there is some kind of error on the GitHub side (which doesn't seem to be the case at the moment, given their "System Status" history), you might check the program receiving the payload of that webhook.
See a similar problem in Supybot-plugins 225:
I contacted GitHub support and one of the employees has been troubleshooting this for me. Here is part of what he had to say about the issue:
I just tried making a request manually from one of our machines, and that went through with no error (see curl -v output below).
However, I did notice that it took extremely long for the request to be processed -- over 15 seconds (for 2 bytes of data).
Decoupling the listening and reception of the payload from its processing is generally the right approach, as I recommended in "Perl Script slow over Tomcat 6.0 and generates service time out".
The first part should be as fast as possible.
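That decoupling can be as simple as acknowledging the delivery immediately and handing the payload to a worker queue. A language-agnostic sketch in Python (stdlib only; `handle_webhook` and `process` are hypothetical names standing in for your HTTP handler and your real, slow payload processing):

```python
import queue
import threading

payloads: queue.Queue = queue.Queue()
results = []

def process(body: bytes) -> None:
    """Stand-in for the real, possibly slow payload handling."""
    results.append(body)

def handle_webhook(body: bytes) -> int:
    """Called on the request path: enqueue and return 200 immediately,
    staying well inside GitHub's 10-second delivery timeout."""
    payloads.put(body)
    return 200

def worker() -> None:
    """Drain the queue off the request path."""
    while True:
        process(payloads.get())
        payloads.task_done()

threading.Thread(target=worker, daemon=True).start()
```

The request thread never blocks on processing, so delivery latency stays constant no matter how slow the downstream work gets.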

Ejabberd server keeps logging me off and back on constantly

I'm building an iOS app, but the problem exists on all clients: iChat, Messages, Psi, etc. Because it exists on all clients, I'm going to assume it's a server issue.
Has anyone ever experienced something like this? If so, what did you do to fix it? I'm sure it's some silly config setting or something but I simply can't figure this out. This is the only thing that looks like it might be related in ejabberd.log:
=ERROR REPORT==== 2012-09-05 12:07:12 ===
Mnesia(ejabberd@localhost): ** WARNING ** Mnesia is overloaded: {dump_log,
time_threshold}
Thanks in advance for any tips/pointers.
https://github.com/processone/ejabberd/blob/master/src/ejabberd_c2s.erl#L936 seems to have already been patched. The config variable is called resource_conflict and the value you want is setresource.
The above warning is probably not related to the issue you are facing. These Mnesia events usually happen when the transaction log needs to be dumped but the previous transaction log dump hasn't finished yet.
The problem you are facing needs to be debugged; for that you can set {log_level, 5} inside ejabberd.cfg, which enables debug logging for ejabberd. Then look into the logs for any hints on why this is happening. Also, come back and paste your log file details here; we will probably be able to help you further. I have never faced such nonsensical issues with ejabberd.
Update after log file attachment:
As Joe wrote below, this is indeed happening because of a resource conflict. Two of your clients are trying to log in with the same resource value. But in an ideal world this shouldn't matter: Jabber servers SHOULD take care of it by appending or prepending a custom value to the resource value requested by the client.
For example, here is what gtalk (even facebook chat) servers will do:
SENT <iq xmlns="jabber:client" type="set" id="1"><bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"><resource>jaxl#resource</resource></bind></iq>
RCVD <iq id="1" type="result"><bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"><jid>jabberxmpplibrary@gmail.com/jaxl#resou27F46704</jid></bind></iq>
As you can see my client requested to bind with resource value jaxl#resource but gtalk server actually bound my session with resource value jaxl#resou27F46704. In short, this is not a bug in your client but a bug in ejabberd.
To fix this you can do two things:
The resource value is probably hardcoded somewhere in your client configuration. Simply remove that. A good client will automatically take care of this by generating a random resource value on its end.
Patch ejabberd to behave the way the gtalk server does (as shown above). The relevant section is inside the ejabberd_c2s.erl source, which needs some tweaking. Also search for Replaced by new connection inside the c2s source file and you will understand what's going on.
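For the client-side fix (the first option), generating a unique resource per connection is a one-liner in any language. A sketch in Python, mirroring the random-suffix idea the gtalk trace above shows (the "myapp" base name is just a placeholder):

```python
import secrets

def make_resource(base: str = "myapp") -> str:
    """Append a random hex suffix so two instances of the same client
    never collide on the same resource value."""
    return f"{base}.{secrets.token_hex(4)}"
```

Bind with this value in your resource-binding IQ instead of a hardcoded string, and the "dueling resources" loop cannot start.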
This sounds like the "dueling resources" bug in your client. You may have two copies of your client running simultaneously using the same resource, and doing faulty auto-reconnect logic. When the second client logs in, the first client is booted offline with a conflict error. The first client logs back in, causing a conflict error on the second client. Loop.
Evidence for this is in your logfile, on line 3480:
D(<0.373.0>:ejabberd_c2s:1553) : Send XML on stream =
<<"<stream:error><conflict xmlns='urn:ietf:params:xml:ns:xmpp-streams'/>
<text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-streams'>
Replaced by new connection
</text>
</stream:error>">>

QuickFix acceptance test

Could someone please explain what exactly an acceptance test for QuickFIX has to test for?
Right now I have done several tests that measure the latency and throughput of messages, but I have no idea what an acceptance test for QuickFIX should test.
I have searched the net for this but didn't manage to find an answer. So if someone knows what I have to test for, or has done such tests, please write it here so that I and others like me can see it. Thanks for all the help in advance.
By 'acceptance test' I'm assuming you are referring to some kind of conformance test? If so, it depends on the business scenario you are trying to test and how the FIX connection supports it. For example, your FIX connection might be a pricing feed, in which case conformance testing might cover:
FIX session-level tests (i.e. checking both sides are conforming to the FIX protocol)
Testing subscription to symbols and that prices are being received
However, if your FIX session were an order feed, then tests would include order-related scenarios, e.g. testing that you can submit orders and receive order updates (fills, rejections, cancellations etc.), and testing the behaviour of orders if you get disconnected (i.e. do your GTC orders get pulled if you lose connection to the exchange?).
An STP conformance test would hopefully result in answers to questions like:
How do I guarantee that I have received all the deals?
How can I replay deals that might have been done while I've been disconnected?
How do I uniquely identify a trade? (i.e. which FIX tags or combination of tags do I need?)
Whether you are conformance testing an STP, pricing, or orders FIX session, you will always want to do the basic FIX session-level tests.
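One concrete session-level check that is easy to automate is validating the CheckSum (tag 10) on every message, since the rule is fixed by the FIX spec: the checksum is the sum of every byte from the start of the message up to and including the SOH delimiter that precedes tag 10, modulo 256, zero-padded to three digits. A minimal sketch (the sample message is constructed for illustration):

```python
SOH = b"\x01"

def fix_checksum(msg: bytes) -> str:
    """Tag 10 value: sum of every byte up to and including the SOH that
    precedes the CheckSum field, modulo 256, zero-padded to 3 digits."""
    return f"{sum(msg) % 256:03d}"

# A bare heartbeat (35=0) message, up to the SOH before tag 10; note that
# BodyLength (9=5) counts exactly the 5 bytes of "35=0" plus its SOH.
msg = b"8=FIX.4.2" + SOH + b"9=5" + SOH + b"35=0" + SOH
```

An acceptance suite can assert this for every inbound message (and likewise recompute BodyLength) to catch framing bugs independently of any business-level scenario.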
Does this help?