How do I handle communication between FIX engines of different versions? - fix-protocol

I am developing a trading engine and I have to use FIX engines. If I use a FIX engine of a higher version, can it communicate with a FIX engine of a lower version?
Are there any FIX engines capable of automatically converting the request to a lower version in case they are communicating with a lower-version FIX engine?
Which version of FIX should I use?

I am developing a trading engine and I have to use FIX engines. If I use a FIX engine of a higher version, can it communicate with a FIX engine of a lower version?
Yes. Many financial institutions still use FIX 4.2 and 4.4, while many stock exchanges use FIX 5.0, so engines maintain backward compatibility with the older versions unless and until everybody moves to the same one.
Are there any FIX engines capable of automatically converting the request to a lower version in case they are communicating with a lower-version FIX engine?
Automatically, no. It doesn't happen that you input a FIX 5.0 message and get a FIX 4.2 message out. You have to accept the FIX 5.0 message, parse it, and convert it to a FIX 4.2 message yourself. QuickFIX is one open-source library that can help; Cameron is another engine, but it is not open source. You have to do this for all messages, or more specifically for the messages you want to support. Some message structures are still the same between versions, so it shouldn't be a big bummer.
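To make the parse-and-convert idea concrete, here is a minimal Python sketch of downgrading a FIX 5.0 (FIXT.1.1) message to FIX 4.2. The tag numbers are real FIX tags (8 = BeginString, 1128 = ApplVerID), but the mapping rules are illustrative only; a real conversion must also recompute BodyLength (tag 9) and CheckSum (tag 10) and handle many more per-message-type differences.

```python
# Illustrative sketch only: downgrade a FIX 5.0 message to FIX 4.2 by
# rewriting the session identifier and dropping 5.0-only fields.
SOH = "\x01"  # standard FIX field delimiter

def parse_fix(raw):
    """Split a raw FIX message into an ordered list of (tag, value) pairs."""
    return [tuple(f.split("=", 1)) for f in raw.strip(SOH).split(SOH)]

def downgrade_to_42(pairs):
    """Rewrite BeginString and drop fields FIX 4.2 does not know about."""
    out = []
    for tag, value in pairs:
        if tag == "8":                 # BeginString: FIXT.1.1 -> FIX.4.2
            out.append(("8", "FIX.4.2"))
        elif tag in {"1128", "1137"}:  # ApplVerID / DefaultApplVerID: 5.0-only
            continue
        else:
            out.append((tag, value))
    return out

def serialize_fix(pairs):
    # NOTE: a real engine would recompute BodyLength (9) and CheckSum (10) here.
    return SOH.join(f"{t}={v}" for t, v in pairs) + SOH

raw50 = SOH.join(["8=FIXT.1.1", "35=D", "1128=9", "55=IBM", "54=1"]) + SOH
raw42 = serialize_fix(downgrade_to_42(parse_fix(raw50)))
```

In practice you would let a library such as QuickFIX parse and validate the message against a data dictionary rather than splitting the string by hand; the sketch just shows why the conversion is a field-by-field mapping and not automatic.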
The version you need to use depends on the clients you have to exchange messages with. Ask them which version they use, or which one they intend to migrate to.

Related

Are open protocols compatible with old MAP versions

I am interested in using the new open protocols in acoustic telemetry and wanted to know if they are compatible with older MAP versions.
The Open Protocols (OPi and OPs) are currently not compatible with older MAP versions.
As mentioned by Kim B., some older protocols such as R64K are compatible up to MAP 114.
The overview of which protocols are compatible across manufacturers can help to explain: https://www.europeantrackingnetwork.org/en/compatibility-tag-protocols
The new open protocols are compatible with older MAP versions (up to MAP114 on the R64K codeset), but NOT with the newer version (MAP115), which is encrypted. To find out more, you can visit the European Tracking Network website (https://www.europeantrackingnetwork.org/en/open-protocol); they have some pretty good explanations of which codesets are compatible with each other!
There are also issues with using the older protocols that are cross-compatible with MAP114 (InnovaSea). We have used S256 tags in an array with multiple manufacturers' receivers and got tons of false detections that make the animal movements extremely challenging to resolve. The results from OPi and OPs testing suggest that this issue can be resolved with the new protocols to improve interoperability!
But R64K and the two new protocols, OPi and OPs, are all compatible with other manufacturers' equipment. I suggest you look those up (Thelma, Lotek and Sonotronics).

Upgrade and Downgrade design

I am developing an embedded Linux based product. The Linux user space runs multiple processes that are part of the product. I want to implement a clever design for upgrading (and downgrading) the firmware version. So, for example, if I change a structure in a newer version, the newer process can know how to read the old data (which is stored on flash) and build the new structure from it. But when downgrading, the older process won't understand the new structure that has been saved to flash.
So what is the best design for handling the upgrading (and downgrading) of a whole firmware version?
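One common pattern for the upgrade direction is to version-tag every record stored on flash and run it through a chain of migration functions until it matches the layout the running firmware expects. Here is a minimal Python sketch of that idea; the field names ("version", "timeout", "timeout_ms") are made up for illustration, and JSON stands in for whatever on-flash encoding the product actually uses.

```python
# Hypothetical sketch: each stored record carries a version number, and the
# process migrates it step by step up to the layout it understands.
import json

CURRENT_VERSION = 2

def migrate_1_to_2(rec):
    """v1 stored a timeout in seconds; v2 renames the field to milliseconds."""
    rec = dict(rec)
    rec["timeout_ms"] = rec.pop("timeout") * 1000
    rec["version"] = 2
    return rec

MIGRATIONS = {1: migrate_1_to_2}  # old version -> step to the next version

def load(raw):
    """Read a stored record and migrate it to the current layout."""
    rec = json.loads(raw)
    while rec["version"] < CURRENT_VERSION:
        rec = MIGRATIONS[rec["version"]](rec)
    return rec

old_record = json.dumps({"version": 1, "timeout": 5})
print(load(old_record))  # {'version': 2, 'timeout_ms': 5000}
```

The downgrade direction is harder, since old firmware cannot know about future layouts. Two approaches that are often combined: have old readers ignore unknown fields instead of rejecting the record, and have the new firmware keep writing the old fields alongside the new ones (or keep a last-known-good copy in the old format) so a rolled-back process still finds data it can read.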

Using WebSockets in Objective-C

I have tried, unsuccessfully, to use WebSockets in Objective-C with the following two libraries:
http://code.google.com/p/unitt/wiki/UnittWebSocketClient
https://github.com/zootreeves/iOS-WebSockets
In both cases I was unable to establish even a basic connection to a server (running on localhost). I was wondering if someone could please provide or point me in the direction of some code that will simply connect to a server via a websocket and/or perform a handshake.
Ideally, it would be nice if the code could use one of the above libraries, but at this point I'm open to anything that'd work.
I've posted about some issues with UnitT before but I haven't received any feedback, so I'm not sure exactly which step I'm messing up on. Appreciate any help, thanks!
We just released SocketRocket. It supports the latest standard, RFC 6455, which neither ZTWebSocket nor UnitT does. No CocoaAsyncSocket needed, to boot.
Also, libPusher is experimenting with SocketRocket too.
The libPusher library uses the ZTWebSocket object. And we have a number of clients who've developed iOS applications against our service. So, the ZTWebSocket object works and should be a good starting point. The libPusher library would be a good reference for usage.
The key to making UnitT work is to find out what version of the specification your server is running, then use the matching version in the client. The latest revision of the specification (rev17) finally allows for multiple versions and for the server to send back an appropriate response, but none of the prior versions do. Therefore, you may not receive a meaningful error from the server, just a disconnection. Do you know what version your server is running?
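A version mismatch like the one described above usually shows up as a silent disconnect during the opening handshake. Under RFC 6455 the client sends `Sec-WebSocket-Version: 13` plus a random `Sec-WebSocket-Key`, and the server proves it speaks the same revision by echoing a derived `Sec-WebSocket-Accept` value. A quick Python sketch (language chosen for brevity, not Objective-C) of that derivation is handy for checking a server's handshake by hand:

```python
# Compute the Sec-WebSocket-Accept value defined in RFC 6455: SHA-1 of the
# client key concatenated with a fixed GUID, then base64-encoded.
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def accept_key(client_key):
    """Derive the Sec-WebSocket-Accept header the server must send back."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Key/accept pair taken from the worked example in RFC 6455, section 1.3.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

If the server's handshake response is missing this header, or the connection drops before any response arrives, the server is likely speaking an older draft (hixie-76, hybi-10, etc.) than the client library.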

Is it a good idea to prefer NSNumberFormatterBehavior10_4 over NSNumberFormatterBehaviorDefault?

I wonder if it would be more secure to rely on NSNumberFormatterBehavior10_4 instead of the default, because the default may change arbitrarily some day in the future, and suddenly the app looks ugly. Or am I wrong about that?
On OS X, the two operation modes owe their existence to the introduction in 10.4 of more useful behaviour based on standard open-source libraries. For the purposes of binary compatibility, if you don't do anything special, NSNumberFormatters are created with pre-10.4 behaviour.
iOS postdates the launch of OS X 10.4, so only the 10.4 behaviour is implemented and is the new default. With no legacy apps, there is no reason for pre-10.4 behaviour ever to be implemented.
Based on the approach taken on the desktop — specifically that the change was designed explicitly not to break backwards compatibility — I'd conclude that there's no benefit to stating that you want 10.4 rather than default behaviour.

When to upgrade libraries

I work with a lot of open source libraries in my daily tasks (Java, FYI). By the time a project comes close to maturing, many of the libraries have released newer versions. Do people generally upgrade just to upgrade, or wait until they see a specific bug? Often the release notes say things like "improved performance and memory management", so I'm not sure it's worth potentially breaking something. But on the other hand, most libraries strive and claim not to break anything when releasing new versions.
What is your stance on this? I'll admit, I am kind of addicted to upgrading the libraries. Often times it really does help with performance and making things easier.
The rule for us is to stay up to date before integration testing but to permit no changes to any library once we're past this point. Of course, if integration testing reveals flaws that are due to an issue with a library that has been fixed, then we'd go back and update. Fortunately, I cannot remember a situation where that has happened.
Update: I understand Philuminati's point about not upgrading until you have a reason. However, I think of it this way: I continuously pursue improvements in my own code and I use libraries built by people that I believe think the same way. Would I fail to use my own improved code? Well, no. So why would I fail to use the code that others have improved?
I keep what works until there is a reason to upgrade.
If information pertaining to the old version appears on Secunia or SecurityFocus...
Otherwise - if new functionality is needed (better performance is also a 'functionality').
I'm with the lazy crowd - I can't remember ever formulating a different strategy than "upgrade when there is a reason to" - but now that I consider the question, there is something to be said about proactive upgrades.
Upgrading does make it easier for you to report a bug in the lib, should you find one. If you find a bug and have not upgraded, it's the first thing you're going to have to do before you get any help or support. You might as well do that proactively.
Especially if you have a good test suite, upgrading proactively will flush out problems early, and that is always a smart move.
It depends a lot on your deployments. If you are supporting multiple platforms, then the very latest libraries may not be available on all of them at any given moment. I've been frustrated by trying to install something that requires the very latest version of some lib, and it's not available as a package yet.
If you deploy to customers you want to develop against libraries that are stable and widely available.