How to compose OrderCancelRequest in QuickFIX/J

I am trying to create an OrderCancelRequest using FIX.4.2, but I am confused by OrderID, OrigClOrdID and ClOrdID. I searched the web but it was not clear to me. Please explain those fields and, if possible, provide a snippet of code for an OrderCancelRequest.
Thanks in advance.

You wish to cancel an order you created with quickfix.fix42.NewOrderSingle. To send that message you had to assign it a unique quickfix.field.ClOrdID. For instance:
String instructionId = createNewInstructionId();
quickfix.Message fixMessage = new quickfix.fix42.NewOrderSingle(
        new ClOrdID(instructionId),
        new HandlInst(HandlInst.AUTOMATED_EXECUTION_ORDER_PUBLIC),
        new Symbol(symbol),
        new Side(Side.BUY),
        new TransactTime(),
        new OrdType(OrdType.LIMIT)
);
// ...
You need to store this instructionId so you can reference it in later messages.
If the counterparty accepts the instruction, it does so with an ExecutionReport message (OrdStatus.NEW). This execution report will contain a quickfix.field.OrderID field, which is a unique identifier for the order as assigned by the broker (unique within a single trading day, or across days for multi-day orders). Store this OrderID for use in later instructions (orderIdBroker).
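A minimal sketch of capturing it, assuming your application extends quickfix.fix42.MessageCracker and that orderIdsByInstructionId is a map you maintain yourself (both are assumptions, not shown in the question):
// Called by MessageCracker.crack(...) from your Application's fromApp(...).
public void onMessage(quickfix.fix42.ExecutionReport report, SessionID sessionID)
        throws FieldNotFound {
    if (report.getOrdStatus().getValue() == OrdStatus.NEW) {
        String instructionId = report.getClOrdID().getValue();      // our ClOrdID
        String orderIdBroker = report.getOrderID().getValue();      // broker's OrderID
        orderIdsByInstructionId.put(instructionId, orderIdBroker);  // hypothetical map
    }
}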
If you wish to cancel the order, you need to reference the instruction that created it. Here, OrigClOrdID is the ClOrdID of the NewOrderSingle instruction that created the order, while ClOrdID is a new unique identifier you assign to the cancel request itself. If you wish (or the broker requires it), you can also supply the OrderID you received from the broker:
String orderInstructionId = getOrderInstructionId();
String cancelInstructionId = createNewInstructionId();
quickfix.Message fixMessage = new quickfix.fix42.OrderCancelRequest(
        new OrigClOrdID(orderInstructionId),
        new ClOrdID(cancelInstructionId),
        new Symbol(symbol),
        new Side(Side.BUY),
        new TransactTime()
);
// If required, set the OrderID as assigned by the broker:
String orderIdBroker = getOrderIdBroker();
fixMessage.setField(new OrderID(orderIdBroker));
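To actually transmit either message, you hand it to the session layer; a sketch, assuming sessionID identifies your established FIX.4.2 session:
// Queue the message for delivery on the given session.
// Throws quickfix.SessionNotFound if the session does not exist.
boolean sent = Session.sendToTarget(fixMessage, sessionID);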

ClOrdID is the id of the cancel message you're going to send.
OrigClOrdID is the id of the order message you already sent.
OrderID is the broker's internal id of the order (which may or may not mean anything to the receiver).
How you construct the cancel message depends on who you're sending it to. Here's some code (a QuoteCancel rather than an OrderCancelRequest, but the field-setting pattern is the same):
QuoteCancel qc = new QuoteCancel();
qc.setField(new StringField(131, "RFQ123")); // tag 131 = QuoteReqID
qc.setField(new QuoteCancelType(1));         // 1 = cancel for symbol(s)
Have a look at Fiximate QuoteCancel for more. Here's the Fiximate front page.

Related

KStream-KStream leftJoin not consistently emitting after window expiry

We have a service where people can order a battery with their solar panels. As part of provisioning we try to fetch some details about the battery product; this sometimes fails to return any data, but we still want to send the order through to our CRM system.
To achieve this we are using the latest version of Kafka Streams leftJoin:
We receive an event on the order-received topic.
We filter out orders that do not contain a battery product.
We then wait up to 30 minutes for an event to come through on the order-battery-details topic.
If we don't receive that event, we want to send a new event to the battery-order topic with the data we do have.
This seems to work fine when we receive both events; however, it is inconsistent when we only receive the first event. Sometimes the order comes through immediately after the 30-minute window, sometimes it takes several hours.
My question is: if the window has expired (i.e. we failed to receive the right side of the join), what determines when the event will be sent? And what could be causing the long delay?
Here's a high level example of our service:
@Component
class BatteryOrderProducer {
    @Autowired
    fun buildPipeline(streamsBuilder: StreamsBuilder) {
        // listen for new orders and filter out everything except orders with a battery
        val orderReceivedStream = streamsBuilder.stream(
            "order-received",
            Consumed.with(Serdes.String(), JsonSerde<OrderReceivedEvent>())
        ).filter { _, order ->
            // check if the order contains a battery product
        }.peek { key, order ->
            log.info("Received order with a battery product: $key", order)
        }

        // listen for battery details events
        val batteryDetailsStream = streamsBuilder.stream(
            "order-battery-details",
            Consumed.with(Serdes.String(), JsonSerde<BatteryDetailsEvent>())
        ).peek { key, order ->
            log.info("Received battery details: $key", order)
        }

        val valueJoiner: ValueJoiner<OrderReceivedEvent, BatteryDetailsEvent, BatteryOrder> =
            ValueJoiner { orderReceived: OrderReceivedEvent, batteryDetails: BatteryDetailsEvent? ->
                // new BatteryOrder
                if (batteryDetails != null) {
                    // add battery details to the order if we get them
                }
                // return the BatteryOrder
            }

        // we always want to send through the battery order, even if we don't get the 2nd event.
        orderReceivedStream.leftJoin(
            batteryDetailsStream,
            valueJoiner,
            JoinWindows.ofTimeDifferenceAndGrace(
                Duration.ofMinutes(30),
                Duration.ofMinutes(1)
            ),
            StreamJoined.with(
                Serdes.String(),
                JsonSerde<OrderReceivedEvent>(),
                JsonSerde<BatteryDetailsEvent>()
            ).withStoreName("battery-store")
        ).peek { key, value ->
            log.info("Merged BatteryOrder", value)
        }.to(
            "battery-order",
            Produced.with(
                Serdes.String(),
                JsonSerde<BatteryOrder>()
            )
        )
    }
}
The leftJoin will not trigger as long as there are no new records. So if you have an order-received record with key A at time t, and then no new record arrives (on either side of the join) for the next 5 hours, there will be no output for the join during those 5 hours, because stream time only advances when records arrive. In particular, the leftJoin needs to receive a record with a timestamp > t + 30m before a null result can be sent.
I think to satisfy your requirements, you need to work with the lower-level Processor API: https://kafka.apache.org/documentation/streams/developer-guide/processor-api.html
In a Processor, you can define a Punctuator that runs regularly, checks whether an order has been waiting more than half an hour for its details, and sends off the null record accordingly.
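A rough Java sketch of that idea, not a drop-in replacement for the topology above: the store name "pending-orders-store" (which must also be registered with the topology), the toBatteryOrder mapping, and the one-minute punctuation interval are all assumptions.
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.ValueAndTimestamp;

public class PendingBatteryOrderProcessor
        implements Processor<String, OrderReceivedEvent, String, BatteryOrder> {

    private ProcessorContext<String, BatteryOrder> context;
    private KeyValueStore<String, ValueAndTimestamp<OrderReceivedEvent>> pending;

    @Override
    public void init(ProcessorContext<String, BatteryOrder> context) {
        this.context = context;
        this.pending = context.getStateStore("pending-orders-store");
        // Wall-clock punctuation fires even when no records arrive, which is
        // exactly what the stream-time-driven join cannot do.
        context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME, this::emitExpired);
    }

    @Override
    public void process(Record<String, OrderReceivedEvent> record) {
        // Park the order until its details arrive or it times out. A second
        // processor on order-battery-details would look the key up here,
        // forward the enriched BatteryOrder immediately, and delete the entry.
        pending.put(record.key(), ValueAndTimestamp.make(record.value(), record.timestamp()));
    }

    private void emitExpired(long nowMillis) {
        long cutoff = nowMillis - Duration.ofMinutes(30).toMillis();
        try (KeyValueIterator<String, ValueAndTimestamp<OrderReceivedEvent>> it = pending.all()) {
            while (it.hasNext()) {
                KeyValue<String, ValueAndTimestamp<OrderReceivedEvent>> entry = it.next();
                if (entry.value.timestamp() <= cutoff) {
                    // No details arrived in time: send the order with what we have.
                    context.forward(new Record<>(entry.key, toBatteryOrder(entry.value.value()), nowMillis));
                    pending.delete(entry.key);
                }
            }
        }
    }

    private BatteryOrder toBatteryOrder(OrderReceivedEvent order) {
        // Mapping is domain-specific; assumed constructor for illustration.
        return new BatteryOrder(order);
    }
}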

Interpreting server response for events correctly

I would like to store the values of event properties received from the server in a database. My problems are that in the event consumer:
I can't figure out which event type my client received.
I don't know how to map variant indexes to properties without knowing the EventType.
Events come with the property "EventType", which would solve my first problem. But since I am receiving many different event types, I do not know at which variant index it is located. Should I always place "EventType" at index 0 in the select clause whenever creating a new EventFilter?
For the second problem, item.getMonitoringFilter().decode(client.getSerializationContext()) offers a view on the property structure, but I am not sure how to use it for mapping variants to properties. Does anybody know how to solve these problems?
Here is the event consumer code that I use. It is taken from the Milo client examples.
for (UaMonitoredItem monitoredItem : mItems) {
    monitoredItem.setEventConsumer((item, vs) -> {
        LOGGER.info("Event received from: {}", item.getReadValueId().getNodeId());
        LOGGER.info("getMonitoredItemId: {}", item.getMonitoredItemId());
        LOGGER.info("getMonitoringFilter: {}", item.getMonitoringFilter().decode(client.getSerializationContext()));
        for (int i = 0; i < vs.length; i++) {
            LOGGER.info("variant[{}]: datatype={}, value={}", i, vs[i].getDataType(), vs[i].getValue());
        }
    });
}
Thank you in advance.
Update:
Seems I have figured it out by typecasting to EventFilter. Further information, such as the QualifiedName of event properties or the event type NodeId, can then be derived:
ExtensionObject eObject = item.getMonitoringFilter();
EventFilter eFilter = (EventFilter) eObject.decode(client.getSerializationContext());
QualifiedName qName = eFilter.getSelectClauses()[0].getBrowsePath()[0];
LiteralOperand literalOperand = (LiteralOperand) eFilter.getWhereClause().getElements()[0]
        .getFilterOperands()[1].decode(client.getSerializationContext());
NodeId eventTypeNodeId = (NodeId) literalOperand.getValue().getValue();
Didn't you supply the filter in the first place when you created the MonitoredItem? Why do you need to "reverse engineer" the filter result to get back to what you did in the first place?
The properties you receive in the event data and the order they come in are defined by the select clause you used when creating the MonitoredItem. If you choose to select the EventId field then it will always be at the same corresponding index.
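To make that concrete, here is a minimal sketch of building the select clause yourself, modelled on Milo's event subscription example (the three selected fields are just illustrative):
import org.eclipse.milo.opcua.stack.core.AttributeId;
import org.eclipse.milo.opcua.stack.core.Identifiers;
import org.eclipse.milo.opcua.stack.core.types.builtin.QualifiedName;
import org.eclipse.milo.opcua.stack.core.types.structured.ContentFilter;
import org.eclipse.milo.opcua.stack.core.types.structured.EventFilter;
import org.eclipse.milo.opcua.stack.core.types.structured.SimpleAttributeOperand;

// Because we build the select clause ourselves, each field's position in the
// received Variant[] is fixed by construction: vs[0] is EventType, vs[1] is
// EventId, vs[2] is Message. No reverse engineering of the filter is needed.
EventFilter eventFilter = new EventFilter(
    new SimpleAttributeOperand[]{
        new SimpleAttributeOperand(
            Identifiers.BaseEventType,
            new QualifiedName[]{new QualifiedName(0, "EventType")},
            AttributeId.Value.uid(),
            null),
        new SimpleAttributeOperand(
            Identifiers.BaseEventType,
            new QualifiedName[]{new QualifiedName(0, "EventId")},
            AttributeId.Value.uid(),
            null),
        new SimpleAttributeOperand(
            Identifiers.BaseEventType,
            new QualifiedName[]{new QualifiedName(0, "Message")},
            AttributeId.Value.uid(),
            null)
    },
    new ContentFilter(null)
);
Pass this filter in the MonitoringParameters when creating the monitored item, and the variant indexes in your event consumer will always match the order above.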

UaSerializationException: request exceeds remote max message size: 2434140 > 2097152

I am a rookie. I tried to use the following code for bulk subscription, but something went wrong. How can I solve this problem?
OpcUaSubscriptionManager subscriptionManager = opcUaClient.getSubscriptionManager();
UaSubscription subscription = subscriptionManager.createSubscription(publishInterval).get();

List<MonitoredItemCreateRequest> itemsToCreate = new ArrayList<>();
for (Tag tag : tagList) {
    NodeId nodeId = new NodeId(nameSpace, tag.getPath());
    ReadValueId readValueId = new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);
    MonitoringParameters parameters = new MonitoringParameters(
            subscription.nextClientHandle(),
            publishInterval,
            null,                         // filter, null means use default
            UInteger.valueOf(queueSize),  // queue size
            true                          // discard oldest
    );
    MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(
            readValueId, MonitoringMode.Reporting, parameters);
    itemsToCreate.add(request);
}

BiConsumer<UaMonitoredItem, Integer> consumer = (item, id) ->
        item.setValueConsumer(this::onSubscriptionValue);

List<UaMonitoredItem> items = subscription.createMonitoredItems(
        TimestampsToReturn.Both,
        itemsToCreate,
        consumer
).get();

for (UaMonitoredItem item : items) {
    if (!item.getStatusCode().isGood()) {
        log.error("failed to create item for nodeId={} (status={})",
                item.getReadValueId().getNodeId(), item.getStatusCode());
    }
}
How many items are you trying to create?
It seems that the resulting message exceeds the limits set by the server you are connecting to. You may need to break your list up and create the items in smaller chunks.
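For example, a sketch of chunking with the same API as in the question; the batch size of 1000 is an assumption, tune it to the server's negotiated limits:
// Create the monitored items in batches so no single CreateMonitoredItems
// request exceeds the server's maximum message size.
int batchSize = 1000;
for (int i = 0; i < itemsToCreate.size(); i += batchSize) {
    List<MonitoredItemCreateRequest> batch =
            itemsToCreate.subList(i, Math.min(i + batchSize, itemsToCreate.size()));
    List<UaMonitoredItem> created = subscription.createMonitoredItems(
            TimestampsToReturn.Both,
            batch,
            consumer
    ).get();
    for (UaMonitoredItem item : created) {
        if (!item.getStatusCode().isGood()) {
            log.error("failed to create item for nodeId={} (status={})",
                    item.getReadValueId().getNodeId(), item.getStatusCode());
        }
    }
}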
I do not know the library that you use, but one of the first steps for an OPC UA client connecting to a server is to negotiate the maximum buffer size, the maximum total message size, and the maximum number of chunks a message can be split into; the OPC UA documentation calls this process the "Handshake".
If your request is too long, it should be split and sent in several chunks according to the limits previously negotiated with the server.
The server will probably also reply in several chunks; all of that has to be handled in the programming of an OPC UA client.

How do I collect from a flux without closing the stream

My use case is to create a reactive endpoint like this:
public Flux<ServerEvent> getEventFlux(Long forId) {
    ServicePoller poller = new ServicePollerImpl();
    Map<String, Object> params = new HashMap<>();
    params.put("id", forId);
    Flux<Long> interval = Flux.interval(Duration.ofMillis(pollDuration));
    Flux<ServerEvent> serverEventFlux = Flux.fromStream(
            poller.getEventStream(url, params) // poll a given endpoint after a fixed duration
    );
    Flux<ServerEvent> sourceFlux = Flux.zip(interval, serverEventFlux)
            .map(Tuple2::getT2); // zip the two streams
    /* Here I want to store data from sourceFlux into a collection whenever some
       data arrives, without disturbing the downstream processing in Spring, so
       that I can access the collection later on without polling again. */
This sends the data back to the front end as soon as it is available. However, my second use case is to pool that data as it arrives into a separate collection, so that if a similar request arrives later on, I can serve the whole data set from the pool without hitting the service again.
I tried to subscribe to the flux, and to buffer, cache and collect it before returning from the controller, but all of those seem to close the stream, so Spring can't process it.
What are my options for tapping into the flux and storing values into a collection as they arrive, without closing the flux stream?
Exception encountered:
java.lang.IllegalStateException: stream has already been operated upon or closed
    at java.util.stream.AbstractPipeline.spliterator(AbstractPipeline.java:343) ~[na:1.8.0_171]
    at java.util.stream.ReferencePipeline.iterator(ReferencePipeline.java:139) ~[na:1.8.0_171]
    at reactor.core.publisher.FluxStream.subscribe(FluxStream.java:57) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
    at reactor.core.publisher.Flux.subscribe(Flux.java:6873) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
    at reactor.core.publisher.FluxZip$ZipCoordinator.subscribe(FluxZip.java:573) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
    at reactor.core.publisher.FluxZip.handleBoth(FluxZip.java:326) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
poller.getEventStream returns a Java 8 stream that can be consumed only once. You can either convert the stream to a collection first or defer the execution of poller.getEventStream by using a supplier:
Flux.fromStream(
() -> poller.getEventStream(url, params)
);
Solution that worked for me, as suggested by @a better oliver:
public Flux<ServerEvent> getEventFlux(Long forId) {
    ServicePoller poller = new ServicePollerImpl();
    Map<String, Object> params = new HashMap<>();
    params.put("id", forId);
    Flux<Long> interval = Flux.interval(Duration.ofMillis(pollDuration));
    Flux<ServerEvent> serverEventFlux = Flux.fromStream(
            () -> poller.getEventStream(url, params)
                        .peek(se -> reactSink.addtoSink(forId, se))
    );
    Flux<ServerEvent> sourceFlux = Flux.zip(interval, serverEventFlux)
            .map(Tuple2::getT2);
    return sourceFlux;
}
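A variant of the same tap, working on the Flux itself rather than the underlying Java stream, is doOnNext; eventCache here is a hypothetical thread-safe map, not part of the original code:
// Assumed cache: Map<Long, List<ServerEvent>> eventCache = new ConcurrentHashMap<>();
// doOnNext is a side-effect operator, so it observes each element without
// consuming or closing the stream.
Flux<ServerEvent> sourceFlux = Flux.zip(interval, serverEventFlux)
        .map(Tuple2::getT2)
        .doOnNext(se -> eventCache
                .computeIfAbsent(forId, id -> new CopyOnWriteArrayList<>())
                .add(se));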

State Machine Persistence WorkFlow

Hey all, I have created a WinForms application to handle persistence activities using Windows Workflow Foundation. I'm using .NET 3.0 with SQL persistence and VS2005 as the IDE, with C# as the code language. The environment is mandated to me by corporate policy for development, so until the dinosaurs decide to upgrade, I'm stuck with VS2005.
My problem is this: I'm able to work with one workflow at a time, and I'd like to be able to handle multiple workflows. That is, when I click the Submit button on my form, I'd like to create a new workflow instance.
I created the runtime and added all the appropriate services. I hook in persistence, and when I click Submit I start an instance of the workflow. I'm relatively new to Workflow Foundation, and the MSDN links have provided little to no help. If anyone could point me in the right direction within my source code, that would be helpful.
I have attached a link to the source for my project.
Click Here for the Source
Thanks in Advance!
I had a look, and it appears that you are creating a new workflow each time you click Submit. I get a new instance id, which is a good sign :) PopulatePSUP(string instanceID) captures the instance id for the dropdown. But you are only storing one instance id at a time in Guid _instanceID, and this form-level variable is then used for all the button events. You could instead use cboPSUPItems.Text.
Something like:
private void btnPSUPApprove_Click(object sender, EventArgs e)
{
    string instanceId = this.cboPSUPItems.Text;
    if (instanceId.Length > 0)
    {
        myArgs.Approved = true;
        approved = "Yes";
        this.resumeHistory[instanceId].Clear();
        this.resumeHistory[instanceId].Add("Name: " + applicantName);
        this.resumeHistory[instanceId].Add("Email:" + applicantEmail);
        this.resumeHistory[instanceId].Add("Text:" + applicantText);
        this.resumeHistory[instanceId].Add("Approved:" + approved);
        this.resumeHistory[instanceId].Add("Denied:" + denied);
        this.resumeHistory[instanceId].Add("PD Approval Requested:" + pDRequest);
        resumeService.RaisePSUPApprovedEvent(new Guid(instanceId), myArgs);
        this.cboPSUPItems.Items.Remove(this.cboPSUPItems.SelectedItem);
        txtPSUPNotes.Clear();
    }
}
You might also want to think about using a collection/list to store the instance ids, for any workflow-wide logic.
Something like:
List<Guid> _instanceIds = new List<Guid>();
// ...
_instanceIds.Add(instance.InstanceId);