Why are repeating groups required when requesting market data over FIX?

Can anyone tell me why we need to use repeating groups in a market data request? What response/reply should we receive from the acceptor in answer to a market data request? And how can we receive the market data request on the acceptor side?
Sending Market Data request
public void sendMarketDataRequest(SessionID sessionId, String request, int ord) { // request is "new" or "old"
    String bankName = "HBL";
    String mdReqCcyPair = "EURUSD";
    String mkdreqId = "010qwerty";
    SubscriptionRequestType type = new SubscriptionRequestType('1'); // '1' = snapshot + updates
    if (request.equals("new")) {
        reqId.put(mkdreqId, mkdreqId); // reqId is a Map field tracking outstanding request ids (declared elsewhere)
    } else {
        type.setValue('2'); // '2' = disable previous snapshot + updates
    }
    quickfix.fix44.MarketDataRequest mdRequest =
            new quickfix.fix44.MarketDataRequest(new MDReqID(mkdreqId), type, new MarketDepth(1));
    mdRequest.setField(new quickfix.field.Symbol(mdReqCcyPair));
    mdRequest.setField(new Product(2));
    mdRequest.setField(new NoRelatedSym(1));
    mdRequest.setField(new MDUpdateType(0));
    mdRequest.setField(new NoMDEntryTypes(3));
    mdRequest.setField(new StringField(582, "1")); // tag 582 = CustOrderCapacity
    // Note: these NoMDEntries groups belong to the MarketDataSnapshotFullRefresh
    // response; a MarketDataRequest defines its own NoMDEntryTypes and NoRelatedSym
    // repeating groups.
    quickfix.fix44.MarketDataSnapshotFullRefresh.NoMDEntries group =
            new quickfix.fix44.MarketDataSnapshotFullRefresh.NoMDEntries();
    group.set(new MDEntryType('0')); // '0' = bid
    group.set(new MDEntryPx(12.32));
    group.set(new MDEntrySize(10));
    group.set(new OrderID("OrderId"));
    mdRequest.addGroup(group);
    group.set(new MDEntryType('1')); // '1' = offer; addGroup copies the group, so it can be reused
    group.set(new MDEntryPx(12.32));
    group.set(new MDEntrySize(10));
    group.set(new OrderID("OrderId"));
    mdRequest.addGroup(group);
    String mdReqDealtCcy = mdReqCcyPair.substring(0, 3); // dealt currency, e.g. "EUR"
    mdRequest.setField(new Currency(mdReqDealtCcy));
    mdRequest.setField(new NoPartyIDs(1));
    mdRequest.setField(new PartyID(bankName));
    try {
        boolean re = Session.sendToTarget(mdRequest, sessionId);
        System.out.println(mdRequest);
        System.out.println(re);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Receiving End Code
public void onMessage(quickfix.fix44.MarketDataRequest message, SessionID sessionID)
        throws FieldNotFound, UnsupportedMessageType, IncorrectTagValue {
    System.out.println("On Message: " + message);
}

Market data requests are not normally used for a single instrument; you normally want market data for a set of instruments, and each group in the repeating group set represents one instrument you want data for.

The response depends on your counterparty and on when you last had a full market data refresh (usually daily). On your initial request, and then on a fixed schedule thereafter, you will receive a full market data refresh message. If your counterparty supports an intraday update model, you will then receive incremental refresh messages, which are partial data refreshes: each one carries only the market data that has changed since the last refresh (full or partial), and is intended to be a smaller message with, hopefully, lower latency. Not all counterparties support partial refresh.

If you are on the acceptor side receiving market data requests (normally the sell side), you should first provide a full market data refresh covering all of the requested instrument details. Whether you support incremental updates after that is a business decision.
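As a minimal sketch of that layout (QuickFIX/J, FIX 4.4, assuming an established session; the symbols and request id are illustrative), a multi-instrument request carries one NoMDEntryTypes group per entry type and one NoRelatedSym group per instrument:

import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;
import quickfix.field.*;

public class MarketDataRequests {
    public static void sendMultiInstrumentRequest(SessionID sessionId) throws SessionNotFound {
        quickfix.fix44.MarketDataRequest req = new quickfix.fix44.MarketDataRequest(
                new MDReqID("req-1"),
                new SubscriptionRequestType(SubscriptionRequestType.SNAPSHOT_PLUS_UPDATES),
                new MarketDepth(1)); // 1 = top of book
        req.set(new MDUpdateType(MDUpdateType.FULL_REFRESH));

        // One NoMDEntryTypes group per requested entry type (bid and offer here).
        quickfix.fix44.MarketDataRequest.NoMDEntryTypes entryType =
                new quickfix.fix44.MarketDataRequest.NoMDEntryTypes();
        entryType.set(new MDEntryType(MDEntryType.BID));
        req.addGroup(entryType);
        entryType.set(new MDEntryType(MDEntryType.OFFER)); // addGroup copies, so reuse is safe
        req.addGroup(entryType);

        // One NoRelatedSym group per instrument -- this is why the group repeats.
        quickfix.fix44.MarketDataRequest.NoRelatedSym sym =
                new quickfix.fix44.MarketDataRequest.NoRelatedSym();
        for (String ccyPair : new String[] {"EURUSD", "GBPUSD", "USDJPY"}) {
            sym.set(new Symbol(ccyPair));
            req.addGroup(sym);
        }
        Session.sendToTarget(req, sessionId);
    }
}

The acceptor answering such a request would reply with a MarketDataSnapshotFullRefresh whose NoMDEntries groups carry the actual prices.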

Related

KStream-KStream leftJoin not consistently emitting after window expiry

We have a service where people can order a battery with their solar panels. As part of provisioning we try to fetch some details about the battery product; this sometimes fails to return any data, but we still want to send the order through to our CRM system.
To achieve this we are using the latest version of Kafka Streams' leftJoin:
We receive an event on the order-received topic.
We filter out orders that do not contain a battery product.
We then wait up to 30 minutes for an event to come through on the order-battery-details topic.
If we don't receive that event, we want to send a new event to the battery-order topic with the data we do have.
This works fine when we receive both events, but it is inconsistent when we only receive the first event. Sometimes the order comes through immediately after the 30-minute window; sometimes it takes several hours.
My question is: if the window has expired (i.e. we failed to receive the right side of the join), what determines when the event will be sent? And what could be causing the long delay?
Here's a high level example of our service:
@Component
class BatteryOrderProducer {
    @Autowired
    fun buildPipeline(streamsBuilder: StreamsBuilder) {
        // listen for new orders and filter out everything except orders with a battery
        val orderReceivedStream = streamsBuilder.stream(
            "order-received",
            Consumed.with(Serdes.String(), JsonSerde<OrderReceivedEvent>())
        ).filter { _, order ->
            // check if the order contains a battery product
        }.peek { key, order ->
            log.info("Received order with a battery product: $key", order)
        }
        // listen for battery details events
        val batteryDetailsStream = streamsBuilder.stream(
            "order-battery-details",
            Consumed.with(Serdes.String(), JsonSerde<BatteryDetailsEvent>())
        ).peek { key, order ->
            log.info("Received battery details: $key", order)
        }
        val valueJoiner: ValueJoiner<OrderReceivedEvent, BatteryDetailsEvent, BatteryOrder> =
            ValueJoiner { orderReceived: OrderReceivedEvent, batteryDetails: BatteryDetailsEvent? ->
                // new BatteryOrder
                if (batteryDetails != null) {
                    // add battery details to the order if we get them
                }
                // return the BatteryOrder
            }
        // we always want to send the battery order through, even if we don't get the 2nd event
        orderReceivedStream.leftJoin(
            batteryDetailsStream,
            valueJoiner,
            JoinWindows.ofTimeDifferenceAndGrace(
                Duration.ofMinutes(30),
                Duration.ofMinutes(1)
            ),
            StreamJoined.with(
                Serdes.String(),
                JsonSerde<OrderReceivedEvent>(),
                JsonSerde<BatteryDetailsEvent>()
            ).withStoreName("battery-store")
        ).peek { key, value ->
            log.info("Merged BatteryOrder", value)
        }.to(
            "battery-order",
            Produced.with(
                Serdes.String(),
                JsonSerde<BatteryOrder>()
            )
        )
    }
}
The leftJoin will not fire as long as there are no new records. So if you have an order-received record with key A at time t, and then no new record arrives (on either side of the join) for the next 5 hours, there will be no output for the join during those 5 hours, because the leftJoin is never triggered. In particular, the leftJoin needs to receive a record with a timestamp > t + 30m before the null-joined result for A can be emitted.
I think that to satisfy your requirements you need to work with the lower-level Processor API: https://kafka.apache.org/documentation/streams/developer-guide/processor-api.html
In a Processor, you can define a Punctuator that runs regularly, checks whether an order has been waiting more than half an hour for its details, and sends off the null-joined record accordingly, as sketched below.
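A minimal sketch of that idea in Java (PendingOrderProcessor, the "pending-orders" store name, the getReceivedAt() accessor, and the BatteryOrder constructor are hypothetical, not from the original code):

import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class PendingOrderProcessor implements Processor<String, OrderReceivedEvent, String, BatteryOrder> {
    private ProcessorContext<String, BatteryOrder> context;
    private KeyValueStore<String, OrderReceivedEvent> pending;

    @Override
    public void init(ProcessorContext<String, BatteryOrder> context) {
        this.context = context;
        // the "pending-orders" store must be registered on the topology beforehand
        this.pending = context.getStateStore("pending-orders");
        // WALL_CLOCK_TIME punctuations fire even when no new records arrive,
        // which is exactly what the windowed leftJoin cannot do
        context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME, this::emitExpired);
    }

    @Override
    public void process(Record<String, OrderReceivedEvent> record) {
        // park the order until its battery details arrive or the deadline passes;
        // a matching details record would delete the entry and forward immediately
        pending.put(record.key(), record.value());
    }

    private void emitExpired(long nowMs) {
        try (KeyValueIterator<String, OrderReceivedEvent> it = pending.all()) {
            while (it.hasNext()) {
                KeyValue<String, OrderReceivedEvent> entry = it.next();
                // getReceivedAt() is a hypothetical accessor for the order's arrival time
                if (nowMs - entry.value.getReceivedAt() >= Duration.ofMinutes(30).toMillis()) {
                    context.forward(new Record<>(entry.key, new BatteryOrder(entry.value), nowMs));
                    pending.delete(entry.key);
                }
            }
        }
    }
}

Wired into the topology via StreamsBuilder#addStateStore plus KStream#process, this emits the order after roughly 30 minutes of wall-clock time regardless of topic traffic.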

UaSerializationException: request exceeds remote max message size: 2434140 > 2097152

I am a rookie. I tried to use the following code for a bulk subscription, but something went wrong. How can I solve this problem?
OpcUaSubscriptionManager subscriptionManager = opcUaClient.getSubscriptionManager();
UaSubscription subscription = subscriptionManager.createSubscription(publishInterval).get();
List<MonitoredItemCreateRequest> itemsToCreate = new ArrayList<>();
for (Tag tag : tagList) {
    NodeId nodeId = new NodeId(nameSpace, tag.getPath());
    ReadValueId readValueId = new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);
    MonitoringParameters parameters = new MonitoringParameters(
            subscription.nextClientHandle(),
            publishInterval,
            null,                        // filter, null means use default
            UInteger.valueOf(queueSize), // queue size
            true                         // discard oldest
    );
    MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(
            readValueId, MonitoringMode.Reporting, parameters);
    itemsToCreate.add(request);
}
BiConsumer<UaMonitoredItem, Integer> consumer = (item, id) ->
        item.setValueConsumer(this::onSubscriptionValue);
List<UaMonitoredItem> items = subscription.createMonitoredItems(
        TimestampsToReturn.Both,
        itemsToCreate,
        consumer
).get();
for (UaMonitoredItem item : items) {
    if (!item.getStatusCode().isGood()) {
        log.error("failed to create item for nodeId={} (status={})",
                item.getReadValueId().getNodeId(), item.getStatusCode());
    }
}
How many items are you trying to create?
It seems the resulting message exceeds the limits set by the server you are connecting to. You may need to break your list up and create the items in smaller chunks, as sketched below.
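A minimal sketch of that chunking, reusing the itemsToCreate, consumer, and subscription variables from the question (the batch size of 500 is an assumption; tune it against the server's negotiated message size):

// create the monitored items in batches so no single CreateMonitoredItems
// request exceeds the server's maximum message size
int batchSize = 500; // assumption: adjust to stay under the negotiated limit
List<UaMonitoredItem> allItems = new ArrayList<>();
for (int i = 0; i < itemsToCreate.size(); i += batchSize) {
    List<MonitoredItemCreateRequest> batch =
            itemsToCreate.subList(i, Math.min(i + batchSize, itemsToCreate.size()));
    allItems.addAll(subscription.createMonitoredItems(
            TimestampsToReturn.Both, batch, consumer).get());
}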
I do not know the library you are using, but one of the earlier steps an OPC UA client performs when connecting to a server is negotiating the maximum buffer size, the maximum total message size, and the maximum number of chunks a message can be sent in; the OPC UA specification calls this process the "Handshake".
If your request is too long, it should be split and sent in several chunks according to the limits previously negotiated with the server.
The server will probably also reply in several chunks; all of that has to be handled in the programming of an OPC UA client.

How do I collect from a flux without closing the stream

My use case is to create a reactive endpoint like this:
public Flux<ServerEvent> getEventFlux(Long forId) {
    ServicePoller poller = new ServicePollerImpl();
    Map<String, Object> params = new HashMap<>();
    params.put("id", forId);
    Flux<Long> interval = Flux.interval(Duration.ofMillis(pollDuration));
    Flux<ServerEvent> serverEventFlux = Flux.fromStream(
            poller.getEventStream(url, params) // poll a given endpoint after a fixed duration
    );
    Flux<ServerEvent> sourceFlux = Flux.zip(interval, serverEventFlux)
            .map(Tuple2::getT2); // zip the two streams
    /* Here I want to store data from sourceFlux into a collection whenever some
       data arrives, without disturbing the downstream processing in Spring, so
       that I can access the collection later on without polling again. */
    return sourceFlux;
}
This sends the data back to the front end as soon as it is available. My second use case, however, is to pool that data as it arrives into a separate collection, so that if a similar request arrives later on I can serve the whole data set from the pool without hitting the service again.
I tried subscribing to the flux, and buffering, caching, and collecting it before returning the original flux from the controller, but all of those seem to close the stream, so Spring can't process it.
What are my options for tapping into the flux and storing values into a collection as they arrive, without closing the stream?
Exception encountered:
java.lang.IllegalStateException: stream has already been operated upon or closed
    at java.util.stream.AbstractPipeline.spliterator(AbstractPipeline.java:343) ~[na:1.8.0_171]
    at java.util.stream.ReferencePipeline.iterator(ReferencePipeline.java:139) ~[na:1.8.0_171]
    at reactor.core.publisher.FluxStream.subscribe(FluxStream.java:57) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
    at reactor.core.publisher.Flux.subscribe(Flux.java:6873) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
    at reactor.core.publisher.FluxZip$ZipCoordinator.subscribe(FluxZip.java:573) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
    at reactor.core.publisher.FluxZip.handleBoth(FluxZip.java:326) ~[reactor-core-3.1.7.RELEASE.jar:3.1.7.RELEASE]
poller.getEventStream returns a Java 8 stream, which can be consumed only once. You can either convert the stream to a collection first, or defer the execution of poller.getEventStream by using a supplier:
Flux.fromStream(
    () -> poller.getEventStream(url, params)
);
Solution that worked for me, as suggested by @a better oliver:
public Flux<ServerEvent> getEventFlux(Long forId) {
    ServicePoller poller = new ServicePollerImpl();
    Map<String, Object> params = new HashMap<>();
    params.put("id", forId);
    Flux<Long> interval = Flux.interval(Duration.ofMillis(pollDuration));
    Flux<ServerEvent> serverEventFlux = Flux.fromStream(
            () -> poller.getEventStream(url, params)
                    .peek((se) -> reactSink.addtoSink(forId, se))
    );
    Flux<ServerEvent> sourceFlux = Flux.zip(interval, serverEventFlux)
            .map(Tuple2::getT2);
    return sourceFlux;
}
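An equivalent way to tap in at the Flux level, rather than at the underlying Java stream, is doOnNext, which observes each element as a side effect without adding a second subscription; reactSink here is the same hypothetical collector used above:

Flux<ServerEvent> sourceFlux = Flux.zip(interval, serverEventFlux)
        .map(Tuple2::getT2)
        .doOnNext(se -> reactSink.addtoSink(forId, se)); // store each event; the stream stays open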

cometd bayeux can't send message to a specific client

// StockPriceEmitter is a thread that loops forever generating data and invokes
// StockPriceService.onUpdates() to send it.
@Service
public class StockPriceService implements StockPriceEmitter.Listener
{
    @Inject
    private BayeuxServer bayeuxServer;
    @Session
    private LocalSession sender;

    public void onUpdates(List<StockPriceEmitter.Update> updates)
    {
        for (StockPriceEmitter.Update update : updates)
        {
            // Create the channel name using the stock symbol
            String channelName = "/stock/" + update.getSymbol().toLowerCase(Locale.ENGLISH);
            // Initialize the channel, making it persistent and lazy
            bayeuxServer.createIfAbsent(channelName, new ConfigurableServerChannel.Initializer()
            {
                public void configureChannel(ConfigurableServerChannel channel)
                {
                    channel.setPersistent(true);
                    channel.setLazy(true);
                }
            });
            // Convert the Update business object to a CometD-friendly format
            Map<String, Object> data = new HashMap<String, Object>(4);
            data.put("symbol", update.getSymbol());
            data.put("oldValue", update.getOldValue());
            data.put("newValue", update.getNewValue());
            // Publish to all subscribers
            ServerChannel channel = bayeuxServer.getChannel(channelName);
            channel.publish(sender, data, null); // this code works fine
            //this.sender.getServerSession().deliver(sender, channel.getId(), data, null); // this code does not work
        }
    }
}
The line channel.publish(sender, data, null); works fine. Now I don't want the channel to publish the message to all clients subscribed to it; I want to send it to one specific client. So I wrote this.sender.getServerSession().deliver(sender, channel.getId(), data, null);, but it does not work: the browser never gets the message.
Thanks in advance.
I strongly recommend that you spend some time reading the CometD concepts page, in particular the section about sessions.
Your code does not work because you are sending the message to the sender, not to the recipient.
You need to pick which remote ServerSession you want to send the message to among the many that may be connected to your server, and call serverSession.deliver(...) on that remote ServerSession.
How to pick the remote ServerSession depends on your application.
For example:
for (ServerSession session : bayeuxServer.getSessions())
{
    if (isAdminUser(session))
        session.deliver(sender, channel.getId(), data, null);
}
You have to provide an implementation of isAdminUser(ServerSession) with your own logic, of course.
Note that you don't need to iterate over the sessions: if you happen to know the session id to deliver to, you can do:
bayeuxServer.getSession(sessionId).deliver(sender, channel.getId(), data, null);
Also refer to the CometD chat demo shipped with the CometD distribution, which contains a full-fledged example of how to send a message to a particular session.

How to resend a message from the JBoss 4.2.2 message queue after retry expired

Is there a way to resend expired messages in a JBoss 4.2.2 message queue? The issue is that they exceeded their retry limits, but the underlying problem is now fixed, so is there a way to resend them?
In JBoss 3 they were just text files that you could move around. Now that they are stored in a database, how can you do it?
Have a look at Hermes JMS. It's an open source tool for browsing JMS queues and topics. It can replay messages that end up on the broker's undeliverable queue.
This is what I ended up doing:
Hashtable t = new Hashtable();
t.put(Context.PROVIDER_URL, "localhost:1099");
t.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
Context ctx = new InitialContext(t);
Queue q = (Queue) ctx.lookup("/queue/DLQ");

ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/ConnectionFactory");
Connection connection = cf.createConnection();
Session session = connection.createSession(true, 0); // transacted session

MessageConsumer consumer = session.createConsumer(q);
connection.start();
SpyObjectMessage m;
Queue originalDestination = null;
// There can only be one in my case, but really you have to look it up every time.
MessageProducer producer = null;
while ((m = (SpyObjectMessage) consumer.receive(5000)) != null) {
    Object o = m.getObject();
    Date messageDate = new Date(m.getJMSTimestamp());
    String originalQueue = m.getStringProperty("JBOSS_ORIG_DESTINATION");
    if (originalDestination == null) {
        originalDestination = (Queue) ctx.lookup("/queue/" +
                originalQueue.substring(originalQueue.indexOf('.') + 1));
        producer = session.createProducer(originalDestination);
    }
    producer.send(session.createObjectMessage((Serializable) o));
    m.acknowledge();
}
//session.commit(); // Uncomment to make this real.
connection.close();
ctx.close();
Note: I work for CodeStreet
Our "ReplayService for JMS" product is built exactly for this use case: searching for and retrieving previously published messages (n-times delivery), whereas JMS is really designed for one-time delivery.
With ReplayService for JMS, you would configure a WebLogic recording to capture all messages published to your topic or queue. Through a web-based GUI, you can then search for individual messages (by substring, XPath, or JMS selector) and replay them to the original JMS destination.
See http://www.codestreet.com/marketdata/jms/jms_details.php for further details.