Nearby API - queuing of payloads - google-nearby

I have a use case where I am sending control data from an Android Things device to an Android mobile phone: a periodic voltage reading taken 10 times a second, i.e. every 100 ms.
But the Nearby API documentation says this about sending payloads:
Senders use the sendPayload() method to send a Payload. This method can be invoked multiple times, but since we guarantee in-order delivery, the second Payload onwards will be queued for sending until the first Payload is done.
What happens in practice is that, because the transmission speed varies, the readings arrive on the phone with an ever-increasing delay; the queue simply keeps growing.
Any ideas how to overcome this? I basically don't need in-order delivery.
My first idea is to have some sort of delivery confirmation and only send the next payload after the previous one has been acknowledged by the recipient.
Thanks for any ideas.
UPDATE:
The STREAM payload type is a perfect solution. If the InputStream has delivered more than one set of readings (a reading includes voltage, maxvoltage etc., altogether 32 bytes of data), I use the skip method to jump to the very latest reading.
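For anyone curious, a minimal sketch of that skip trick; it assumes each reading is exactly 32 bytes and that the stream's available() reflects the buffered bytes, and readLatest is a hypothetical helper, not part of the Nearby API:
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Called with payload.asStream().asInputStream() on the receiving side.
static byte[] readLatest(InputStream in) throws IOException {
    final int READING_SIZE = 32; // voltage, maxvoltage etc., per the update above
    int stale = (in.available() / READING_SIZE - 1) * READING_SIZE;
    if (stale > 0) {
        in.skip(stale); // drop every buffered reading except the newest
    }
    byte[] latest = new byte[READING_SIZE];
    new DataInputStream(in).readFully(latest); // block until one full reading arrives
    return latest;
}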

For your use-case, I would recommend using a STREAM Payload, and then you can keep streaming your control data over that single Payload -- this is exactly one of the use-cases for which we created STREAM. :)
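For reference, a rough sender-side sketch of that approach, assuming connectionsClient and endpointId come from your existing connection setup; a pipe backs the single STREAM payload:
// Open one STREAM payload once; pipe[0] is the read end handed to Nearby.
ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
connectionsClient.sendPayload(endpointId, Payload.fromStream(pipe[0]));
OutputStream out = new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]);

// Then, every 100 ms, write the 32-byte reading into the same stream:
out.write(reading); // voltage, maxvoltage etc.
out.flush();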

Yup, your first idea sounds like the right thing to do. There's no way to turn off in-order delivery of payloads inside Nearby Connections, so you'll have to handle dropping payloads yourself.
I would build a class that caches the 'most recent voltage'. As new readings come in, each one overwrites the cached value.
private volatile Long mostRecentVoltage;

public void updateVoltage(long voltage) {
    if (mostRecentVoltage != null) {
        // The previous reading was never sent; the new one supersedes it.
        Log.d(TAG, String.format("Dropping voltage %d due to poor network latency", mostRecentVoltage));
    }
    mostRecentVoltage = voltage;
}
Then add another piece of logic that'll grab the cached value every time the previous payload was successfully sent.
@Nullable
public Long getVoltage() {
    try {
        return mostRecentVoltage;
    } finally {
        // Clear the cache so each reading is sent at most once.
        mostRecentVoltage = null;
    }
}
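To close the loop, a hypothetical wiring of the two methods above: in the PayloadCallback you registered with acceptConnection(), send the cached value as soon as the previous payload reports success (connectionsClient is again assumed from your setup):
@Override
public void onPayloadTransferUpdate(String endpointId, PayloadTransferUpdate update) {
    if (update.getStatus() == PayloadTransferUpdate.Status.SUCCESS) {
        Long voltage = getVoltage(); // the method above, draining the cache
        if (voltage != null) {
            byte[] bytes = ByteBuffer.allocate(Long.BYTES).putLong(voltage).array();
            connectionsClient.sendPayload(endpointId, Payload.fromBytes(bytes));
        }
    }
}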

Related

What's the best way to subsample a ZMQ PUB SUB connection?

I have a ZMQ_PUB socket sending messages out at ~50Hz. One destination needs to react to each message, so it has a standard ZMQ_SUB socket with a while(true) loop checking for new messages. A second destination should only react once a second to the "most recent" message. That is, my second destination needs to subsample.
For the second destination, I believe I'd want to have a time-based loop that is called at my desired rate (1Hz) and recv() the latest message, dropping the rest. I believe this is done via a ZMQ_HWM on the subscriber. Is there another option that needs to be set somewhere?
Do I need to worry about the different subscribers having different HWMs? Will the publisher become angry? It's a shame ZMQ_RATE only applies to multicast sockets.
Is there a best way to accomplish what I'm attempting?
zmq v3.2.4
The HighWaterMark will not be a fantastic solution for your problem. Setting it on the subscriber to, say, 10 and reading one message per second will just give you the old messages first, slowly, and throw away all the new ones once the limit is reached.
You could use a topic on your publisher that lets you pick out every 50th message, e.g. by making the topic messageCount % 50 and subscribing to 0.
Otherwise, maybe you shouldn't use zmq's pub/sub, but instead build your own look-alike with router/dealer that allows you to subscribe to sampled messages.
Lastly, you could also just send them all and only use every 50th message on the subscriber; 50 msg/s is hardly anything in zmq (as long as the messages aren't heavy on data, like megabytes).
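If it helps, a hedged sketch of the topic idea in Java with JeroMQ (bindings differ across languages; readSensor() is a placeholder for your actual data source):
ZMQ.Context ctx = ZMQ.context(1);
ZMQ.Socket pub = ctx.socket(ZMQ.PUB);
pub.bind("tcp://*:5556");
long count = 0;
while (true) {
    // Every 50th message goes out under topic "0", the rest under "1".
    String topic = (count++ % 50 == 0) ? "0" : "1";
    pub.send(topic + " " + readSensor());
}

// Meanwhile, the 1 Hz consumer subscribes to "0" only and never sees the other 49:
ZMQ.Socket sub = ctx.socket(ZMQ.SUB);
sub.connect("tcp://localhost:5556");
sub.subscribe("0".getBytes());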

How to query large numbers of Akka actors and store results in a database?

I am building a securities trading simulator in Scala/Akka. Each TraderActor has a var wealth that fluctuates over time as the actor trades via the market.
At various time intervals, I would like to query all of the TraderActors to get the current value of their respective 'wealth' and store all of the results in a database for later analysis. How might I accomplish this?
Querying millions of actors to retrieve their values is not a good idea, because by the time you have the entire aggregated result, the values will already be stale. You cannot get a real-time report that way.
So you need some kind of distributed eventing system, like Kafka, to which the actors push their value upon any change. You can then define a Kafka consumer that subscribes to those events and aggregates or visualises them, etc.
This way you have a live reporting system without setting up any cron job that periodically walks the actors and retrieves their state.
I would send a StoreMessage that would tell the TraderActors to send their wealth value to a StoreController actor ref through some StoreData message.
The StoreController would then receive the StoreData messages and either store their content as it arrives, route the messages to a StoreWorker that stores them (making StoreController a router), batch them before writing, or follow any other strategy that suits your needs.
The way you want the StoreController to handle the received wealth mostly depend on your database, the number of TraderActors, how often you would like to store the values, etc.
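A rough sketch of that shape using Akka's Java API (the names StoreData and StoreController mirror the description above; persistence is left as a stub):
import akka.actor.AbstractActor;

// Immutable message carrying one trader's wealth snapshot.
class StoreData {
    final String traderId;
    final double wealth;
    StoreData(String traderId, double wealth) {
        this.traderId = traderId;
        this.wealth = wealth;
    }
}

class StoreController extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(StoreData.class, data -> {
                // Write (or batch) data.traderId / data.wealth to the database here.
            })
            .build();
    }
}

// Each TraderActor handles StoreMessage with something like:
//   storeController.tell(new StoreData(id, wealth), getSelf());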
I think the event bus implementation that comes with Akka is there for this very purpose.

Streaming Of ZeroMQ Events Back To Client

I have a use case whereby I wish to have a ZeroMQ Request/Reply socket 'stream' back results. Is this possible with multipart messages (i.e. the Reply socket streams the frames back before HasMore = false), or am I approaching this incorrectly?
The situation:
1) Client makes a query (Request) for some records
2) Server looks up the database for results and responds with the current, large set of records (Reply) split into frames
3) Server must wait until a server-side event is generated before the final frame is sent (HasMore = false)
4) Client won't get the previous frames until the final event has been generated and HasMore = false
Thanks for your help.
As far as I understand what you're aiming for, it sounds like what you have will work the way you expect. See here for more discussion on message frames. The salient points:
As you say, all of the frames will be sent to the client at one time; they will be stored on the server until HasMore is set to false.
One important thing to remember here: if it's a truly large amount of data, the entire data set must fit into memory, because it will be stored in server memory until the message with all of its frames is complete, and it will then be received into memory on the client side before it's processed.
I assume primarily what you're looking for is a way to iteratively build up a message before you send it? And perhaps to be able to deal with the data on the client iteratively as well? You also get a guarantee that you won't lose part of the data in the middle: you either get the whole message or lose the whole message (as opposed to sending each frame as a separate message). This is one of the primary use cases for frames, so you've done well.
The only thing I object to is using the word "stream", as that implies that the data is being sent to the client continuously as it's being processed on the server, and that's explicitly not what you're trying to do (nor is it possible with ZMQ message frames).
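To make the buffering explicit, a hypothetical JeroMQ-flavoured sketch (rep, records and finalFrame stand in for your reply socket and data):
rep.send("header".getBytes(), ZMQ.SNDMORE); // buffered locally, HasMore = true
for (byte[] record : records) {
    rep.send(record, ZMQ.SNDMORE);          // still buffered, nothing on the wire yet
}
rep.send(finalFrame, 0);                    // HasMore = false: the whole message ships at once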

Bloomberg Java API - bond yield in real time subscription

Goal:
I use the Bloomberg Java API's subscription service to monitor bond prices in real time (subscribing to the ASK/BID real-time fields). However, in the response messages, Bloomberg does not provide the associated yield for the given price. I need a way to calculate the yields.
Attempt:
Here's what I've tried:
Within the code that processes events coming back from a real-time subscription, when I get a BID or ASK response I extract the price from the message element and then initiate a new synchronous reference data request, using overrides to get YAS_BOND_YLD by providing YAS_BOND_PX and setting the override flag.
Problem:
This seems very slow and cumbersome. Is there a better way other than having to calculate yields myself?
In my code, I seem to be able to process real-time prices if they are being sent to me slowly. If a few bonds' prices were updated at the same time (say, in MSG1 pricing), I seem to capture only one of these updates; it feels like I'm missing the other events. Is this because I cannot use a synchronous reference data request while the subscription is still alive?
Thanks.
bloomberg does not provide the associated yield for the given price
Have you tried retrieving the ASK_YIELD and BID_YIELD fields? They may be what you are looking for.
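For example, a hedged sketch with the Bloomberg Java API (the security identifier is a placeholder; session is your existing subscription session):
SubscriptionList subscriptions = new SubscriptionList();
subscriptions.add(new Subscription(
        "EXAMPLE_BOND Corp",             // placeholder security identifier
        "BID,ASK,BID_YIELD,ASK_YIELD")); // yields arrive with the price ticks
session.subscribe(subscriptions);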
Problem: This seems very slow and cumbersome.
Synchronous one-off requests are slower than a real-time subscription. Unless you need real-time data on the yield, you could queue the requests and send them all at once every x seconds, for example. The time to get 100 yields is probably not that different from the time to get 1, and certainly not 100 times slower.
In my code, I seem to be able to process real-time prices if they are being sent to me slowly. If a few bonds' prices were updated at the same time (say, in MSG1 pricing), I seem to capture only one of these updates; it feels like I'm missing the other events. Is this because I cannot use a synchronous reference data request while the subscription is still alive?
You should not miss items just because you are sending a synchronous request. You may get a "Slow consumer warning" but that's about it. It's difficult to say more without seeing your code. However, if you want to make sure your real time data is not delayed by your synchronous requests, you should use two separate Sessions.

nServiceBus with large XML messages

I have read about true messaging, where instead of sending the payload on the bus you send an identifier. In our case, we have a lot of legacy apps/services that were designed to receive the message payload (XML) directly, at close to 4 MB (near the MSMQ limit). Is there a way for NServiceBus to handle large payloads and persist messages automatically, or another workaround, so that the publisher/subscriber services don't have to worry about either the payload size or how to de/re-hydrate the payload?
Thank you in advance.
You could use the Message Sequence pattern. In NServiceBus, you would split the payload in the sender, wrap the chunks in a custom 'Sequence' IMessage, and then implement a saga at the other end to extract the chunks & reassemble. You would need to put some effort into error handling & timeouts.
You can always use the quick "fix" of compressing the messages.
A POCO serialized with the binary serializer can be compressed down by a large margin. We saw messages of 20 MB compress down to 3.1 MB.
So if your messages are hovering around 4 MB, it might be simplest to just write an IMessageSerializer that automatically compresses the message while it is on the wire.
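The serializer itself is C# territory, but the core idea is just gzip over the already-serialized bytes; a minimal illustration of that idea in Java (not NServiceBus code):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Gzip the serialized message bytes before they hit the wire.
static byte[] compress(byte[] serializedMessage) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
        gzip.write(serializedMessage);
    }
    return bos.toByteArray();
}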
I'm not aware of any internal NServiceBus capability to associate extra data with a message out of band.
I think you're right on the mark - if the entire payload can't fit within the limit, it's better to persist it elsewhere on your own and then pass an ID.
However, it may be possible for you to design a message structure such that a message could implement an IHasPayload interface (which would perhaps incorporate an ID and a Type?), and then your application logic could have a common method for getting the payload given an IHasPayload message.