According to Android's docs, a Cronet request's priority is sent to the server:
The library allows you to set a priority tag for the requests. The server can use the priority tag to determine the order in which to handle the requests.
(In other libraries, priority only affects the order in which requests are sent when they start to queue up.)
For an HTTP/2 request, I assume "priority tag" (from the quote above) refers to stream priorities, i.e. the weight assigned to the stream the request is sent over. Is there any documentation on how Cronet's request priorities (idle, lowest, low, medium, highest) relate to HTTP/2 stream weights (1-256)?
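For reference, the only priority control the API exposes is the builder method in this sketch; the URL is a placeholder, and the engine/callback/executor setup is elided:

import java.util.concurrent.Executor;
import org.chromium.net.CronetEngine;
import org.chromium.net.UrlRequest;

// The five REQUEST_PRIORITY_* constants are the "priority tag" in question;
// how they map onto HTTP/2 stream weights is exactly what is unclear.
static UrlRequest buildPrioritizedRequest(
        CronetEngine engine, UrlRequest.Callback callback, Executor executor) {
    return engine
        .newUrlRequestBuilder("https://example.com/data", callback, executor)
        .setPriority(UrlRequest.Builder.REQUEST_PRIORITY_HIGHEST) // or IDLE, LOWEST, LOW, MEDIUM
        .build();
}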
As per the documentation, sc_http_req_rate will return the rate of HTTP requests from the stick table, but only for the currently tracked counters.
From testing, this means that if you are incrementing your counter using http-response rather than http-request, the field is unavailable during request processing. It cannot be used for rate limiting, or sent on to the backends.
A down-to-earth consequence of this is that I cannot limit bots generating too many 404 requests.
How can I load and use the stick-table HTTP request rate during the request, if it's only tracked in the response?
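For concreteness, a minimal sketch of the setup I'm describing; all names are invented:

# The 404 counter can only be bumped once the response status is known,
# i.e. on the response side.
backend be_app
    stick-table type ip size 100k expire 10m store http_req_rate(10s)
    http-response track-sc0 src if { status 404 }
    # At request time this sc0 entry is not yet being tracked, so
    # sc_http_req_rate(0) is unavailable to http-request rules.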
Thank you :)
The scenario:
I have thousands of requests I need to issue each day.
I know the number at the beginning of the day, and ideally I want to send all the data about the requests to Pub/Sub, one message per request.
I want to make the requests at a constant rate: for example, if I have 172800 requests, I want to process 2 each second.
The ultimate way would involve Pub/Sub push and Cloud Run.
Using pull with long-running instances is also an option.
Any other options are also welcome.
I want to avoid running in a loop and fetching records from a database with a limit.
This is how I am doing it today.
You can use batching and flow control settings to fine-tune Pub/Sub performance, which will help in processing messages at a constant rate.
Batching
A batch, within the context of Cloud Pub/Sub, refers to a group of one or more messages published to a topic by a publisher in a single publish request. Batching is done by default in the client library or explicitly by the user. The purpose of this feature is to allow a higher throughput of messages while also providing a more efficient way for messages to travel through the various layers of the service. Adjusting the batch size (i.e. how many messages or bytes are sent in a publish request) can be used to achieve the desired level of throughput.
Features specific to batching on the publisher side include setElementCountThreshold(), setRequestByteThreshold(), and setDelayThreshold() as part of setBatchSettings() on a publisher client (the naming varies slightly in the different client libraries). These features can be used to finely tune the behavior of batching to find a better balance among cost, latency, and throughput.
Note: The maximum number of messages that can be published in a single batch is 1000 messages or 10 MB.
An example of these batching properties can be found in the Publish with batching settings documentation.
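For illustration, a minimal sketch of those settings with the Java client library; the project/topic IDs and threshold values are placeholders to tune against your own throughput target:

import com.google.api.gax.batching.BatchingSettings;
import com.google.cloud.pubsub.v1.Publisher;
import com.google.pubsub.v1.TopicName;
import org.threeten.bp.Duration;

// A batch is published as soon as any one threshold is crossed:
// 100 messages, 10 KB of data, or 100 ms since the first queued message.
BatchingSettings batchingSettings =
    BatchingSettings.newBuilder()
        .setElementCountThreshold(100L)
        .setRequestByteThreshold(10000L)
        .setDelayThreshold(Duration.ofMillis(100))
        .build();
Publisher publisher =
    Publisher.newBuilder(TopicName.of("my-project", "my-topic"))
        .setBatchingSettings(batchingSettings)
        .build();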
Flow Control
Flow control features on the subscriber side can help guard against unhealthy behavior in the pipeline by allowing the subscriber to regulate the rate at which messages are ingested. These features provide the added functionality to adjust how sensitive the service is to sudden spikes or drops in published throughput.
Some features that are helpful for adjusting flow control and other settings on the subscriber are setMaxOutstandingElementCount(), setMaxOutstandingRequestBytes(), and setMaxAckExtensionPeriod().
Examples of these settings being used can be found in the Subscribe with flow control documentation.
If you have long-running instances as subscribers, then you will need to set the relevant FlowControlSettings, for example .setMaxOutstandingElementCount(1000L).
Once you have set it to the desired number (for example 1000), it controls the maximum number of messages the subscriber receives before pausing the message stream, as explained in the code below from the documentation:
// The subscriber will pause the message stream and stop receiving more messages from the
// server if any one of the conditions is met.
FlowControlSettings flowControlSettings =
    FlowControlSettings.newBuilder()
        // 1,000 outstanding messages. Must be >0. It controls the maximum number of messages
        // the subscriber receives before pausing the message stream.
        .setMaxOutstandingElementCount(1000L)
        // 100 MiB. Must be >0. It controls the maximum size of messages the subscriber
        // receives before pausing the message stream.
        .setMaxOutstandingRequestBytes(100L * 1024L * 1024L)
        .build();
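To complete the sketch, the settings are attached to the subscriber like this; the subscription name and the ack-everything receiver are placeholders, not part of the quoted documentation:

import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;

// Placeholder receiver that simply acks every message.
MessageReceiver receiver = (message, consumer) -> consumer.ack();
Subscriber subscriber =
    Subscriber.newBuilder(
            ProjectSubscriptionName.of("my-project", "my-subscription"), receiver)
        .setFlowControlSettings(flowControlSettings)
        .build();
subscriber.startAsync().awaitRunning();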
There is the throttle function on Source (https://doc.akka.io/docs/akka/current/stream/operators/Source-or-Flow/throttle.html), but this only works in a local context (one server). If I wanted to share a rate limit for 3rd-party API calls with other servers (say I have 2 servers instead of 1 for redundancy), I'd like the rate limit to be spread efficiently across the 2 servers: if one server dies from running out of memory, the other server should pick up the freed-up rate allowance until the dead server restarts.
Is this possible somehow through Akka's Source, assuming I have something like Redis that returns whether an action is allowed or disallowed, plus the time until an action will be allowed?
Off the top of my head, you can dispense with Redis and use Akka Cluster to deal with failure detection: set up an actor that subscribes to the cluster events (member joined, member left/downed) and updates the local throttle.
Local dynamic throttling can be implemented via a custom graph stage (materializing as a handle through which to change the throttle), or via an actor (in which case an ask stage is nice). In the latter case, you can go further and have the throttling actors coordinate among themselves to reallocate unused request capacity between nodes.
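For example, a rough sketch of the cluster-events approach with the classic Java API; the class name, the global rate parameter, and the rebalance hook are all invented for illustration:

import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent;

// Recomputes this node's share of a global rate limit whenever cluster
// membership changes; a dead node's share flows back to the survivors.
public class ThrottleCoordinator extends AbstractActor {
    private final int globalRatePerSecond;
    private final Cluster cluster = Cluster.get(getContext().getSystem());
    private int memberCount = 0;

    public ThrottleCoordinator(int globalRatePerSecond) {
        this.globalRatePerSecond = globalRatePerSecond;
    }

    @Override
    public void preStart() {
        // initialStateAsEvents() replays current members as MemberUp events,
        // so memberCount converges to the true cluster size.
        cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(),
            ClusterEvent.MemberUp.class, ClusterEvent.MemberRemoved.class);
    }

    @Override
    public void postStop() {
        cluster.unsubscribe(getSelf());
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(ClusterEvent.MemberUp.class, m -> { memberCount++; rebalance(); })
            .match(ClusterEvent.MemberRemoved.class, m -> { memberCount--; rebalance(); })
            .build();
    }

    private void rebalance() {
        // Each live node takes an equal share of the global budget.
        int localRate = globalRatePerSecond / Math.max(1, memberCount);
        // Feed localRate into the local stream's dynamic throttle (not shown).
    }
}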
I use akka-streams' ActorPublisher actor as a streaming per-connection Source of data being sent to an incoming WebSocket or HTTP connection.
ActorPublisher's contract is to regularly request data by signalling demand: the number of elements that can be accepted downstream. I am not supposed to send more elements when the demand is 0. I observe that if I buffer elements while the consumer is slow, the buffer size fluctuates between 1 and 60, but stays mostly near 40-50.
To stream I use akka-http's ability to set WebSocket output and HttpResponse data to a Source of Messages (or ByteStrings).
I wonder how the back-pressure works in this case, when I'm streaming data to a client over the network. How exactly are these numbers calculated? Does it check what's happening at the network level?
The closest I could find for your question "how the back-pressure works in this case" is from the documentation:
Akka HTTP is streaming all the way through, which means that the back-pressure mechanisms enabled by Akka Streams are exposed through all layers - from the TCP layer, through the HTTP server, all the way up to the user-facing HttpRequest and HttpResponse and their HttpEntity APIs.
As to "how these numbers are calculated", I believe that is specified in the configuration settings.
I heard that WebSocket messages are received in order, because WebSocket runs over TCP.
Then what is the purpose of 'sequence number'?
This is the explanation of the sequence number in WebSocket.
But I'm wondering why that sequence number is needed if messages are received in order.
The sequence number allows you to map your requests to responses even if the responses don't come in the order you make them.
HTTP and other relevant protocols support pipelining, and there is no need for responses to be sent back to you in any specific order. Each one may be processed according to its individual cost, or dispatched across a server farm and reassembled in an order that is not predetermined. Either way, if they arrive out of order, you will need a key to map each response back to its request.
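A minimal sketch of that mapping in Java; the "seq:payload" framing and every name here are invented for illustration, not from any particular WebSocket library:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class SequenceCorrelator {
    // Minimal stand-in for a real WebSocket client API.
    public interface WebSocketLike {
        void sendText(String text);
    }

    private final AtomicLong nextSeq = new AtomicLong();
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Attach a fresh sequence number to each outgoing request and remember
    // which future its response should complete.
    public CompletableFuture<String> send(WebSocketLike socket, String payload) {
        long seq = nextSeq.incrementAndGet();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(seq, reply);
        socket.sendText(seq + ":" + payload); // naive "seq:payload" framing
        return reply;
    }

    // Inbound messages may answer any outstanding request; the sequence
    // number routes each one back to its caller regardless of arrival order.
    public void onMessage(String message) {
        int sep = message.indexOf(':');
        long seq = Long.parseLong(message.substring(0, sep));
        CompletableFuture<String> reply = pending.remove(seq);
        if (reply != null) {
            reply.complete(message.substring(sep + 1));
        }
    }
}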