quickfixengine: possible to restrict logging? - quickfix

In quickfixengine, is there a setting to specify the log level and restrict the number of messages logged? It seems that we are logging a lot of data, so we would like to restrict it a bit. I assume that logging too many messages should affect performance (I don't have any hard data for or against).

You don't say which language you're using but I believe that this should work with both the C++ and Java APIs.
You will need to implement your own LogFactory and Log classes (the former is responsible for creating instances of the latter). Then you'll pass an instance of your custom LogFactory to your Initiator or Acceptor instance. Your Log class is where you will do the message filtering.
Understand that Log receives messages in string form, so you'll need to do the filtering either with string-matching operations or by converting the strings back to Messages and then filtering on tags, though this may end up slowing you down more than just allowing all messages to be logged.
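For example (QuickFIX/J, a minimal sketch): a filtering Log can simply wrap another Log and drop whatever it doesn't want. The Heartbeat filter rule and the FilteringLog / FilteringLogFactory names below are made up for illustration; they are not part of the QuickFIX API.

    import quickfix.Log;
    import quickfix.LogFactory;
    import quickfix.SessionID;

    // Sketch: drops Heartbeat (35=0) messages and forwards everything else to a wrapped Log.
    class FilteringLog implements Log {
        private final Log delegate;

        FilteringLog(Log delegate) { this.delegate = delegate; }

        // Crude string match on the MsgType field; replace with your own filtering rules.
        private boolean shouldLog(String message) {
            return !message.contains("\u000135=0\u0001");
        }

        public void onIncoming(String message) { if (shouldLog(message)) delegate.onIncoming(message); }
        public void onOutgoing(String message) { if (shouldLog(message)) delegate.onOutgoing(message); }
        public void onEvent(String text)       { delegate.onEvent(text); }
        public void onErrorEvent(String text)  { delegate.onErrorEvent(text); }
        public void clear()                    { delegate.clear(); }
    }

    // Factory that wraps another LogFactory (e.g. FileLogFactory) and hands out filtering logs.
    // Note: older QuickFIX/J versions also declare a deprecated no-arg create() you may need to implement.
    class FilteringLogFactory implements LogFactory {
        private final LogFactory delegate;

        FilteringLogFactory(LogFactory delegate) { this.delegate = delegate; }

        public Log create(SessionID sessionID) {
            return new FilteringLog(delegate.create(sessionID));
        }
    }

You would then pass something like new FilteringLogFactory(new FileLogFactory(settings)) to your SocketInitiator or SocketAcceptor constructor instead of the plain log factory.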

Related

Can QuickFixJ store only messages using JdbcStore

We are currently incorporating a FIX engine (using QuickFixJ) in our application. We will be the initiator and use trade capture reports to get informed on all trades happening on the platform.
When persisting the trade capture reports, we first buffer them for performance reasons and later insert them all at once into the database in a single transaction. We are using the JdbcStore to persist the sent FIX messages in the database, as we cannot rely on the hard disk. However, we do not want the JdbcStore to persist the session information (target sequence number, sender sequence number, etc.) because this would open a new transaction for every single message we receive (which we want to avoid for performance reasons). Instead, we manually save the last seen and sent sequence numbers.
I have not found a configuration of QuickFixJ which would allow this. If we create a JdbcStore using the JdbcStoreFactory, it expects a table on the database to store session information. Is there any way to configure QuickFixJ to only persist the sent messages, but not the session information?
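For reference, a typical JdbcStore configuration (illustrative values; the table names shown are the QuickFIX/J defaults) ties both concerns to the same store factory, which is exactly the coupling described above:

    # QuickFIX/J session settings, JDBC store section (values illustrative)
    [DEFAULT]
    JdbcDriver=org.h2.Driver
    JdbcURL=jdbc:h2:mem:fix
    JdbcUser=sa
    JdbcPassword=
    JdbcStoreMessagesTableName=messages
    JdbcStoreSessionsTableName=sessions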

Kafka Streams - Define Custom Relational/Non_Key_Value StateStore With Fault Tolerance

I am trying to implement event sourcing using Kafka.
My vision for the stream processor application is a typical 3-layer Spring application in which:
The "presentation" layer is replaced by (implemented by?) the Kafka Streams API.
The business-logic layer is implemented with the Processor API in the topology.
Also, the DB is a relational H2 in-memory database which is accessed via Spring Data JPA repositories. The repositories also implement the necessary interfaces for them to be registered as Kafka state stores, to get the benefits (restoration & fault tolerance).
But I'm wondering how should I implement the custom state store part?
I have been searching, and:
There are some interfaces such as StateStore & StoreBuilder. StoreBuilder has a withLoggingEnabled() method; but if I enable it, when does the actual update & change logging happen? Usually the examples are all key-value stores, even the custom ones. What if I don't want key-value? The example in the interactive queries section of the Kafka documentation just doesn't cut it.
I am aware of interactive queries. But they seem to be good for queries & not updates, as the name suggests.
In a key-value store the records that are sent to the changelog are straightforward. But if I don't use key-value, when & how do I inform Kafka that my state has changed?
You will need to implement StateStore for the actual store engine you want to use. This interface does not dictate anything about the store, and you can do whatever you want.
You also need to implement a StoreBuilder that acts as a factory to create instances of your custom store.
    class MyCustomStore implements StateStore {
        // define any interface you want to present to the user of the store
    }

    class MyCustomStoreBuilder implements StoreBuilder<MyCustomStore> {
        public MyCustomStore build() {
            // create a new instance of MyCustomStore and return it
        }
        // all other methods (except `name()`) are optional
        // eg, you can do a dummy implementation that only returns `this`
    }
Compare: https://docs.confluent.io/current/streams/developer-guide/processor-api.html#implementing-custom-state-stores
But if I don't use key-value, when & how do I inform Kafka that my state has changed?
If you want to implement withLoggingEnabled() (similar for caching), you will need to implement this logging (or caching) as part of your store. Because Kafka Streams does not know how your store works, it cannot provide an implementation for this. Thus, it's your design decision whether your store supports logging into a changelog topic or not. And if you want to support logging, you need to come up with a design that maps store updates to key-value pairs (you can also write multiple per update) that you can write into a changelog topic and that allows you to recreate the state when reading those records from the changelog topic.
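As a very rough sketch of the restore side (the changelog-record format, the applyUpdate() helper and the store internals below are assumptions, not Kafka Streams API): the store registers a StateRestoreCallback in init(), and Kafka Streams replays the changelog into it on restart. How the updates get written to the changelog in the first place (your own producer, internal wrappers, ...) is part of your design.

    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.processor.StateStore;

    // Sketch only: a custom store that can be restored from a changelog topic.
    class MyCustomStore implements StateStore {
        private final String name;
        private volatile boolean open = false;

        MyCustomStore(String name) { this.name = name; }

        @Override
        public void init(ProcessorContext context, StateStore root) {
            // Kafka Streams replays changelog records through this callback on restore.
            context.register(root, this::applyUpdate);
            open = true;
        }

        // Hypothetical helper: deserialize one key-value changelog record and apply it to the state.
        private void applyUpdate(byte[] key, byte[] value) { /* ... */ }

        @Override public String name() { return name; }
        @Override public void flush() { /* write out pending state / changelog records */ }
        @Override public void close() { open = false; }
        @Override public boolean persistent() { return true; }
        @Override public boolean isOpen() { return open; }
    }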
Getting a fault-tolerant store is not only possible via change logging. For example, you could also plug in a remote store that does replication etc. internally, and thus rely on the store's fault-tolerance capabilities instead of using change logging. Of course, using a remote store implies other challenges compared to using a local store.
For the Kafka Streams default stores, logging and caching are implemented as wrappers around the actual store, making them easily pluggable. But you can implement this in any way that fits your store best. You might want to check out the following classes for the key-value store as a comparison:
https://github.com/apache/kafka/blob/2.0/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java
https://github.com/apache/kafka/blob/2.0/streams/src/main/java/org/apache/kafka/streams/state/internals/ChangeLoggingKeyValueBytesStore.java
https://github.com/apache/kafka/blob/2.0/streams/src/main/java/org/apache/kafka/streams/state/internals/CachingKeyValueStore.java
For interactive queries, you implement a corresponding QueryableStoreType to integrate your custom store. Cf. https://docs.confluent.io/current/streams/developer-guide/interactive-queries.html#querying-local-custom-state-stores
You are right that Interactive Queries is a read-only interface for the existing stores, because the Processors should be responsible for maintaining the stores. However, nothing prevents you from opening up your custom store for writes, too. Note that this will make your application inherently non-deterministic, because if you rewind an input topic and reprocess it, it might compute a different result depending on what "external store writes" are performed. You should consider doing any writes to the store via the input topics. But it's your decision. If you allow "external writes", you will need to make sure that they get logged, too, in case you want to implement logging.
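Roughly, such a store type could look like the sketch below; MyReadableCustomStore (the read-only view you expose) and MyCustomStoreTypeWrapper (a facade over all local store instances) are placeholders you would define yourself, following the pattern in the linked guide.

    import org.apache.kafka.streams.processor.StateStore;
    import org.apache.kafka.streams.state.QueryableStoreType;
    import org.apache.kafka.streams.state.internals.StateStoreProvider;

    // Sketch: lets callers fetch the custom store via KafkaStreams#store(storeName, new MyCustomStoreType()).
    class MyCustomStoreType implements QueryableStoreType<MyReadableCustomStore> {

        @Override
        public boolean accepts(StateStore stateStore) {
            // only accept state stores of the custom type
            return stateStore instanceof MyCustomStore;
        }

        @Override
        public MyReadableCustomStore create(StateStoreProvider storeProvider, String storeName) {
            // wrap all local instances of the store behind one read-only facade
            return new MyCustomStoreTypeWrapper(storeProvider, storeName);
        }
    }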

What is the relationship between the FIX Protocol's OrdID, ClOrdID, OrigClOrdID?

I'm pretty new to the FIX protocol and was hoping someone could help clarify some terms.
In particular could someone explain (perhaps with an example) the flow of NewOrderSingle, ExecutionReport, CancelReplaceRequest and how the fields ClOrdID, OrdID, OrigClOrdID are used within those messages?
A quick note about usage of fields. My experience is that many who implement FIX do it slightly differently. So be aware that though I am trying to explain correct usage, you may find that there are differences between implementations. When I connect to a new broker I get a FIX specification which details exactly how they use the protocol, and I have to be very careful to note where they have deviated from other implementations.
That said I will give you a rundown of what you have asked for.
There are more complicated orders, but NewOrderSingle is the one most used. It allows you to create a trade for any asset. You will need to create a new order using this object / msg type. Then you will send it through your session using the method sendToTarget(). You can modify the message after this point through the toApp() method, assuming your application implements the quickfix.Application interface.
The broker (or whoever you are connected to) will send you a reply in the form of an ExecutionReport. Using quickfix, that reply will enter your application through the fromApp() callback. From there the best thing to do is to implement your app inheriting from the MessageCracker class (or implement it elsewhere); using the crack() method from MessageCracker, it will then call back the relevant onMessage() method. You will need to implement a number of these onMessage() methods (it depends on specifically what you are doing as to which methods you will need), the main one being onMessage(ExecutionReport msg, SessionID session). This method will be called by the message cracker when you receive an ExecutionReport from the broker. This is the standard reply to a new order.
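A minimal QuickFIX/J sketch of that flow (FIX 4.4 message classes assumed; the class name and the empty callback bodies are placeholders):

    import quickfix.*;
    import quickfix.fix44.ExecutionReport;

    // Sketch: crack incoming application messages into typed onMessage() callbacks.
    public class MyFixApplication extends MessageCracker implements Application {

        @Override
        public void fromApp(Message message, SessionID sessionID)
                throws FieldNotFound, IncorrectDataFormat, IncorrectTagValue, UnsupportedMessageType {
            crack(message, sessionID);   // dispatches to the matching onMessage() overload
        }

        // Called for every ExecutionReport the counterparty sends back.
        public void onMessage(ExecutionReport report, SessionID sessionID) throws FieldNotFound {
            // handle fills, rejects, order status, ...
        }

        // Remaining Application callbacks, left empty here for brevity.
        @Override public void onCreate(SessionID sessionID) {}
        @Override public void onLogon(SessionID sessionID) {}
        @Override public void onLogout(SessionID sessionID) {}
        @Override public void toAdmin(Message message, SessionID sessionID) {}
        @Override public void fromAdmin(Message message, SessionID sessionID) {}
        @Override public void toApp(Message message, SessionID sessionID) {}
    }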
From there you handle the reply as required.
Some orders do not get filled immediately, like limit orders. They can be changed. For that you will need the CancelReplaceRequest. Your broker will give you details of how to do this specifically for them (again there are differences and not everyone does it the same). You will have to have done a NewOrderSingle first, and then you will use this MsgType to update it.
ClOrdID is an ID that the client uses to identify the order. It is sent with the NewOrderSingle and returned in the ExecutionReport. The OrdID (OrderID) tag is in the ExecutionReport message; it is the ID that the broker will use to identify the order. OrigClOrdID is usually used to identify the original order when you do an update (using CancelReplaceRequest); it is supposed to contain the ClOrdID of the original order. Some brokers want the original order only, others want the ClOrdID of the last update, so the first OrigClOrdID will be the ClOrdID of the NewOrderSingle, and if there are subsequent updates to the same order then it will be the ClOrdID from the last CancelReplaceRequest. Some brokers want the last OrderID and not the ClOrdID. Note that the CancelReplaceRequest will require a ClOrdID as well.
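To make the ID chaining concrete, here is a hedged QuickFIX/J sketch using FIX 4.4 message classes (the symbol, quantities, and the choice of which ClOrdID to put into OrigClOrdID are illustrative; follow your broker's spec):

    import quickfix.Session;
    import quickfix.SessionID;
    import quickfix.SessionNotFound;
    import quickfix.field.*;
    import quickfix.fix44.NewOrderSingle;
    import quickfix.fix44.OrderCancelReplaceRequest;

    class OrderIdFlowSketch {
        void placeAndAmend(SessionID sessionID) throws SessionNotFound {
            // New order: the client chooses ClOrdID "A1". The broker's ExecutionReport
            // will echo ClOrdID=A1 and carry the broker-assigned OrderID.
            NewOrderSingle order = new NewOrderSingle(
                    new ClOrdID("A1"), new Side(Side.BUY), new TransactTime(), new OrdType(OrdType.LIMIT));
            order.set(new Symbol("IBM"));
            order.set(new OrderQty(100));
            order.set(new Price(123.45));
            Session.sendToTarget(order, sessionID);

            // Amend: new ClOrdID "A2", OrigClOrdID names the order being replaced ("A1").
            // For a further amendment many brokers expect OrigClOrdID="A2" (the last accepted
            // ClOrdID), but some always want the original "A1" -- check their spec.
            OrderCancelReplaceRequest replace = new OrderCancelReplaceRequest(
                    new OrigClOrdID("A1"), new ClOrdID("A2"),
                    new Side(Side.BUY), new TransactTime(), new OrdType(OrdType.LIMIT));
            replace.set(new Symbol("IBM"));
            replace.set(new OrderQty(150));
            replace.set(new Price(123.40));
            Session.sendToTarget(replace, sessionID);
        }
    }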

Remove read data for authenticated user?

In DDS, my requirement is this: I have many subscribers but only a single publisher. Each subscriber reads data from DDS and checks whether the message is meant for that particular subscriber. Only if that check succeeds does it take the data and remove it from DDS. The message must remain in DDS until the authenticated subscriber takes its data. How can I achieve this using DDS (in a Java environment)?
First of all, you should be aware that with DDS, a Subscriber is never able to remove data from the global data space. Every Subscriber has its own cached copy of the distributed data and can only act on that copy. If one Subscriber takes data, then other Subscribers for the same Topic will not be influenced by that in any way. Only Publishers can remove data globally for every Subscriber. From your question, it is not clear whether you know this.
Independent of that, it seems like the use of a ContentFilteredTopic (CFT) is suitable here. According to the description, the Subscriber knows the file name that it is looking for. With a CFT, the Subscriber can indicate that it is only interested in samples that have a particular value for the file_name attribute. The infrastructure will take care of the filtering process and will ensure that the Subscriber will not receive any data with a different value for the attribute file_name. As a consequence, any take() action done on the DataReader will contain relevant information and there is no need to check the data first and then take it.
The API documentation should contain more detailed information about how to use a ContentFilteredTopic.
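As a rough sketch in the classic DCPS Java API (exact package names, QoS constants and method spellings differ between DDS vendors; the participant, topic and subscriber objects and the file_name field are assumed to exist already):

    // Create a ContentFilteredTopic so the middleware only delivers this subscriber's samples.
    ContentFilteredTopic filteredTopic = participant.create_contentfilteredtopic(
            "MyTopic_ForSubscriberA",              // name of the new, filtered topic
            topic,                                 // the existing Topic it is based on
            "file_name = %0",                      // SQL-like filter expression
            new String[] { "subscriber_A.dat" });  // expression parameters filled into %0

    // Create the DataReader from filteredTopic instead of the plain Topic; a take() on
    // that reader then only returns samples matching the filter, so there is no need
    // to inspect and re-insert data that belongs to other subscribers.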

MSMQ querying for a specific message

I have a question regarding MSMQ...
I designed an async architecture like this:
Client -> WCF Service (hosted in a Windows Service) -> MSMQ
So basically the WCF service takes the requests, processes them, adds them to an INPUT queue and returns a GUID. The same WCF service (through a listener) takes the first message from the queue (does some stuff...) and then puts it into another queue (OUTPUT).
The problem is how I can retrieve the result from the OUTPUT queue when a client requests it... because MSMQ does not allow random access to its messages and the only solution would be to iterate through all messages and push them back in until I find the exact one I need. I do not want to use a DB for this OUTPUT queue, because of some limitations imposed by the client...
You can look in your OUTPUT queue for your message by using

    var mq = new MessageQueue(outputQueueName);
    mq.PeekById(yourId);

Receiving by Id:

    mq.ReceiveById(yourId);
A queue is inherently a "first-in-first-out" kind of data structure, while what you want is a "random access" data structure. It's just not designed for what you're trying to achieve here, so there isn't any "clean" way of doing this. Even if there was a way, it would be a hack.
If you elaborate on the limitations imposed by the client perhaps there might be other alternatives. Why don't you want to use a DB? Can you use a local SQLite DB, perhaps, or even an in-memory one?
Edit: If you have a client dictating implementation details to their own detriment then there are really only three ways you can go:
Work around them. In this case, that could involve using a SQLite DB - it's just a file and the client probably wouldn't even think of it as a "database".
Probe deeper and find out just what the underlying issue is, i.e. why don't they want to use a DB? What are their real concerns and underlying assumptions?
Accept a poor solution and explain to the client that this is due to their own restriction. This is never nice and never easy, so it's really a last resort.
You could use CorrelationId and set it when you send the message. Then, to receive the same message, you can pick the specific message with ReceiveByCorrelationId as follows:

    message = queue.ReceiveByCorrelationId(correlationId);
Moreover, CorrelationId is a string with the following format:
Guid()\\Number