HAProxy stats-related clarifications

Attached is my stats page screenshot.
Here are my questions / the clarifications I need:
Under "session rate", Cur is 13. Does that mean 13 web requests are being processed?
Under "sessions", Cur is 250, Max is 250, Limit is 250. What does this mean?
I observed that most of the time the "sessions" Cur value stays above 200.
Can someone please clarify these?
Thank you.

"Cur" means the current rate per second of incoming connections. Whether they're being processed yet is shown in the drilldown metrics when you hover over that metric.
Under "sessions", the values are concurrent sessions rather than a rate: 250 sessions are currently open, the historical peak is 250, and the configured limit is 250. Since Cur has reached Limit, new connections are being queued. That limit usually comes from maxconn, but check whether you are also using "rate-limit sessions".
Here is more information about what happens when the limit is reached: https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#rate-limit
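If you want to watch these numbers programmatically rather than on the web page, the stats page can also be exported as CSV (append ";csv" to its URL) and parsed. Below is a minimal sketch; the sample row is made up for illustration, and only a few of the real CSV columns (scur/smax/slim for concurrent sessions, rate/rate_max/rate_lim for session rate) are shown:

```python
import csv
import io

def parse_haproxy_stats(csv_text):
    """Parse the CSV export of the HAProxy stats page.

    Returns a list of row dicts keyed by column name. Columns such as
    scur/smax/slim (concurrent sessions) and rate/rate_max/rate_lim
    (sessions per second) mirror the stats page columns.
    """
    # The header line starts with "# "; strip that prefix so the
    # column names line up with the data rows.
    text = csv_text.lstrip()
    if text.startswith("# "):
        text = text[2:]
    return list(csv.DictReader(io.StringIO(text)))

# Illustrative sample resembling one backend row (values made up):
sample = (
    "# pxname,svname,scur,smax,slim,rate,rate_max,rate_lim\n"
    "www,BACKEND,250,250,250,13,45,0\n"
)
for row in parse_haproxy_stats(sample):
    print(row["pxname"], "concurrent:", row["scur"], "rate/s:", row["rate"])
```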

GitHub's REST API for events not reporting all 300 public events although they are within 90 days

According to GitHub's REST API events documentation (https://docs.github.com/en/rest/activity/events), I should be able to get the events performed by a user in the past 90 days (up to 300 events). But for some usernames, I am not able to get all 300 events even though they are within 90 days.
A minimum working example is as follows:
https://api.github.com/users/github-actions[bot]/events?per_page=100&page=1 - gives 100 events
https://api.github.com/users/github-actions[bot]/events?per_page=100&page=2 - gives 100 events
https://api.github.com/users/github-actions[bot]/events?per_page=100&page=3 - gives 85 to 95 events (rarely 100)
The time difference between the first event on page 1 and the last event on page 3 is less than 5 minutes. At this rate of account activity, I should be able to get the latest 300 events, but I am not getting them.
Kindly let me know if anyone knows a reason for this and/or a workaround to get all the events.
Thank you.
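One thing worth noting with an account as active as github-actions[bot]: new events arrive between your page fetches, so the pages shift underneath you, and items can be duplicated or dropped at page boundaries. A minimal sketch of fetching the pages and then deduplicating events by their "id" field follows; the function names and error handling are my own, only the endpoint is taken from the question:

```python
import json
import urllib.request

def dedupe_events(pages):
    """Merge pages of events, keeping the first occurrence of each id.

    GitHub event objects carry a string "id"; when pages shift between
    requests, the same event can appear on two pages, and deduplicating
    by id avoids double counting.
    """
    seen = set()
    merged = []
    for page in pages:
        for event in page:
            if event["id"] not in seen:
                seen.add(event["id"])
                merged.append(event)
    return merged

def fetch_user_events(username, pages=3, per_page=100):
    """Fetch up to pages * per_page events for a user (needs network)."""
    results = []
    for page in range(1, pages + 1):
        url = (f"https://api.github.com/users/{username}/events"
               f"?per_page={per_page}&page={page}")
        with urllib.request.urlopen(url) as resp:
            results.append(json.load(resp))
    return dedupe_events(results)
```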

Is the Firestore pricing example for bandwidth/network egress calculated correctly?

Issue
I am trying to understand Firestore pricing by recalculating their billing example (https://cloud.google.com/firestore/docs/billing-example#see-chats). I am not able to follow their network egress calculation per task.
From my point of view, they split the network egress bill into two tasks:
1. See lists: 5 app openings * 10 updated chats * 0.5 KB group document size in transit = 25 KB
2. Read messages: 30 new messages * 0.25 KB message document size in transit = 7.5 KB
3. Total: 25 KB + 7.5 KB = 32.5 KB
But in their overview, the documentation says:
Network Egress: (50 * 0.25KB) + (30 * 0.25KB) = 20KB / user / day
Questions
Did I understand the calculation correctly? Why is there a difference? What am I doing wrong, or is the documentation slightly wrong?
Please provide a calculation example if possible.
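To make the discrepancy concrete, here is the arithmetic from both readings side by side. All numbers come from the question and the linked example; which interpretation Google intends is exactly what is being asked:

```python
# The question's reading: list views transfer the 0.5 KB group document.
see_lists_mine = 5 * 10 * 0.5  # 5 openings * 10 chats * 0.5 KB = 25.0 KB
read_messages = 30 * 0.25      # 30 messages * 0.25 KB = 7.5 KB
total_mine = see_lists_mine + read_messages  # 32.5 KB

# The documentation's stated formula: (50 * 0.25 KB) + (30 * 0.25 KB).
see_lists_docs = 50 * 0.25     # 12.5 KB
total_docs = see_lists_docs + read_messages  # 20.0 KB

# Since 50 = 5 * 10, the whole difference comes from the per-document
# size used for the list reads: 0.5 KB (question) vs 0.25 KB (docs).
print(total_mine, total_docs)  # 32.5 20.0
```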

How does pagination affect the rate limit?

I am looking at http://stocktwits.com/developers/docs/parameters and am wondering if anyone has used pagination before.
The doc says there is a limit of 800 messages; how does that interact with the request limit? Could I, in theory, query 200 different stock tickers every hour and get back (up to) 800 messages each?
If so, that sounds like a great way to get around the 30-message limit.
The documentation is unclear on this and we are rolling out new documentation that explains this more clearly.
Every stream request has a default and maximum limit of 30 messages per response, regardless of whether the cursor params are present. So you could query 200 different stock streams every hour and get up to 6,000 messages, or 12,000 if you send your access token along with the request: 200 requests per hour for unauthenticated requests and 400 for authenticated requests.
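The arithmetic behind those caps can be written down directly. This is a best-case upper bound, assuming every request hits a different stream and every response comes back full:

```python
MESSAGES_PER_RESPONSE = 30  # default and maximum per stream response

def max_messages_per_hour(requests_per_hour):
    """Upper bound on messages retrievable in one hour, assuming each
    request targets a different stream and returns a full page."""
    return requests_per_hour * MESSAGES_PER_RESPONSE

print(max_messages_per_hour(200))  # unauthenticated: 6000
print(max_messages_per_hour(400))  # authenticated: 12000
```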

What do we mean by "top percentile" or TP-based latency?

When we discuss the performance of a distributed system, we use terms like tp50, tp90, and tp99.99.
Could someone explain what we mean by those?
tp90 is the maximum time under which 90% of requests have been served.
Imagine you have times:
10s
1000s
100s
2s
Calculating TP is very simple:
1. Sort all times in ascending order: [2s, 10s, 100s, 1000s]
2. Find the last item in the portion you need. For TP50 it is ceil(4*0.5)=2, so you need the 2nd request. For TP90 it is ceil(4*0.9)=4, so you need the 4th request.
3. Read off the time for that item: TP50=10s, TP90=1000s
Say we are referring to the performance of an API: TP90 is the maximum time under which 90% of requests have been served.
TPx: the maximum response time taken by the fastest x% of requests.
Time taken by 10 requests in ms: [2,1,3,4,5,6,7,8,9,10] (there are 10 response times)
TP100 = 10
TP90 = 9
TP50 = 5
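The procedure described above (sort ascending, take the ceil(n * percentile)-th value) can be sketched in a few lines; the function name is my own:

```python
import math

def tp(times, percentile):
    """Latency at the given top percentile: sort ascending and take
    the ceil(n * percentile)-th value (1-based position)."""
    ordered = sorted(times)
    index = math.ceil(len(ordered) * percentile)  # 1-based position
    return ordered[index - 1]

print(tp([10, 1000, 100, 2], 0.50))  # 10
print(tp([10, 1000, 100, 2], 0.90))  # 1000
samples = [2, 1, 3, 4, 5, 6, 7, 8, 9, 10]
print(tp(samples, 0.50), tp(samples, 0.90), tp(samples, 1.00))  # 5 9 10
```

Both worked examples from the answers above fall out of the same rule.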

What's the batch request limit for Facebook's Graph API?

Does anyone know what the limit is for batch requests when using FB's Graph API?
From their documentation:
Limit
We currently limit the number of batch requests to 20.
Source
That's not clear. Is that 20 per 600 seconds? Per day? Total for one app ever?
It is 50 now: 50 requests per batch.
It means that 20 individual requests can be combined into a single batched request, which saves you from sending 20 individual HTTP requests at the same time.
If you have more than twenty (20), you can build an array, break it into groups of 20 or fewer, and loop through them in your PHP in one "session". We have one app that runs 600-700 OG requests, but it is slow, sometimes up to 300 seconds, depending on FB.
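The "break them into groups" approach is just list chunking. A minimal sketch (in Python rather than PHP; the request dicts are hypothetical placeholders, and the batch size is taken from the answers above):

```python
def chunk(requests, batch_size=50):
    """Split a list of request descriptions into batches no larger
    than batch_size (the Graph API batch limit: 20 historically,
    50 per the later answer)."""
    return [requests[i:i + batch_size]
            for i in range(0, len(requests), batch_size)]

# 120 hypothetical requests -> 3 batches of sizes 50, 50, 20,
# each of which would be sent as one batched Graph API call.
batches = chunk([{"method": "GET", "relative_url": f"me?item={i}"}
                 for i in range(120)])
print([len(b) for b in batches])  # [50, 50, 20]
```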