wrk --latency: the meaning of the latency distribution

I use wrk to test my service:
wrk -t2 -c10 -d20s --latency http://192.168.0.105:8102/get
The output:
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   525.29ms  210.25ms   1.73s    82.12%
    Req/Sec     11.21      7.05    40.00     65.48%
  Latency Distribution
     50%  489.47ms
     75%  570.62ms
     90%  710.66ms
     99%    1.56s
  377 requests in 20.08s, 4.54MB read
  Socket errors: connect 0, read 0, write 0, timeout 1
Requests/sec:     18.77
Transfer/sec:    231.74KB
But I do not understand the meaning of the Latency Distribution section:
Latency Distribution
50% 489.47ms
75% 570.62ms
90% 710.66ms
99% 1.56s

I got it: these values are percentiles of the latency distribution (the standard percentile calculation).
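Each line of the distribution is a percentile: "90% 710.66ms" means that 90% of the sampled requests completed in 710.66 ms or less. A minimal sketch of the nearest-rank percentile calculation, using made-up latency samples rather than the actual wrk data:

```python
# Illustrative latency samples in milliseconds (made up, not the wrk data).
latencies_ms = [120, 200, 350, 480, 490, 500, 560, 700, 710, 1560]

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample such that at least
    pct percent of all samples are <= it."""
    ordered = sorted(samples)
    rank = -(-pct * len(ordered) // 100)  # ceil(pct/100 * N) via ceiling division
    return ordered[max(0, int(rank) - 1)]

print(percentile(latencies_ms, 50))  # 490 -- half the requests were this fast or faster
print(percentile(latencies_ms, 90))  # 710
print(percentile(latencies_ms, 99))  # 1560
```

wrk itself records latencies in a histogram and reads the percentiles off that, but the interpretation of each reported value is the same as in this sketch.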


Troubleshooting Latency Increase for Lambda to EFS Reads

The Gist
We've got a Lambda job running that reads data from EFS (elastic throughput) for up to 200 TPS of read requests.
PercentIOLimit is well below 20%.
Latency goes from about 20 ms to about 400 ms during traffic spikes.
Are there any steps I can take to get more granularity into where the latency for the reads is coming from?
Additional Info:
At low TPS (~5), reads take about 10-20 ms.
At higher TPS (~50), p90 can take 300-400 ms.
I'd really like to narrow down what limit is causing these latency spikes, especially when the IOPercent usage is around 60%.

iOS: how to debug 9 seconds CPU time over 36 seconds (25% CPU average), exceeding limit of 15% CPU over 60 seconds

The high CPU usage threshold looks to have been lowered on iOS 15, possibly from 80% over 60 s to 15% over 60 s. I have noticed that my app does NOT run correctly on iOS 15: background operations such as location updates and some timers seem to stop. I have a lot of @Published properties being updated while in the background. Would this cause the background operations to terminate after ~40 seconds in the background? If so, how would I go about updating my UI while keeping the constant updates to the published properties?
I am getting the message:
Event: cpu usage
Action taken: none
CPU: 9 seconds cpu time over 36 seconds (25% cpu average), exceeding limit of 15% cpu over 60 seconds
CPU limit: 9s
Limit duration: 60s
CPU used: 9s
CPU duration: 36s
Duration: 35.85s
Duration Sampled: 25.92s
Steps: 5

Locust 95th percentile is higher than max

Sometimes when I run Locust, for some scenarios the 95th percentile value is higher than the max. As far as I understand, the 95th percentile means that 95% of requests took less time than this value. So how can the max be less than the 95th percentile? Am I doing something wrong here?
I also found that this only happens when there is a very small number of requests, like 15 or fewer.
Percentiles are approximated in Locust.
This is done for performance reasons, as calculating an exact percentile would need to consider every sample (and doing this continuously for large runs would just not work).
Min, max and average (mean) are accurate though.
And in longer runs (more than those 15 requests) the 95th percentile should not exceed your max.
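To see how an approximated percentile can land above the true max on a tiny sample, here is a sketch of a bucket-based estimator. This is only in the spirit of what histogram-backed tools do, not Locust's actual implementation: response times are rounded into coarse buckets, and the percentile is read off the rounded values.

```python
def approx_percentile(samples, pct, bucket_ms=100):
    # Round each sample to the nearest bucket boundary, then take the
    # nearest-rank percentile over the rounded values.
    rounded = sorted(round(s / bucket_ms) * bucket_ms for s in samples)
    rank = max(1, -(-pct * len(rounded) // 100))  # ceiling division
    return rounded[int(rank) - 1]

samples = [130, 140, 150, 160, 170]    # true max is 170 ms
print(max(samples))                    # 170
print(approx_percentile(samples, 95))  # 200 -- reported "p95" above the true max
```

With only five samples, three of them round up to the 200 ms bucket, so the estimated 95th percentile is 200 ms even though no request actually took longer than 170 ms. With many samples the rounding error becomes negligible relative to the distribution.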

CPU utilization calculation

I've read in many places that a simple and decent way to get the % of CPU utilization is by this formula:
CPU utilization = 1 - p^n
where:
p - fraction of time a process is blocked (waiting for I/O)
n - number of processes
But I can't find an explanation for it. It seems to have to do with statistics, but I can't wrap my head around it.
My starting point is: if I have 2 processes, each with 50% wait time, the formula yields 1 - 1/4 = 75% CPU utilization. But my broken logic begs the question: if one process is blocked on I/O and the other is swapped in to run while the first is blocked (whatever the burst is), then while one waits the other runs, and their wait times overlap. Isn't that 100% CPU utilization? I think that would be true only if one process were guaranteed to be runnable whenever the other needs I/O.
Question is: how does the formula take every other possibility into account?
You need to think in terms of probabilities. If the probability of each process being idle (waiting for I/O) is 0.5, then the probability that the CPU is idle is the probability of all processes being idle at the same time. That is 0.5 * 0.5 = 0.25, and so the probability that the CPU is doing work is 1 - 0.25 = 0.75 = 75%.
CPU utilisation is given as 1 minus the probability of the CPU being in the idle state, and the CPU is idle only when every process loaded in main memory is blocked on I/O. So if each of n processes has a wait time of 50%, the probability that all of them are in the blocked (I/O) state at once is 0.5^n, and the utilisation is 1 - 0.5^n.
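A quick way to convince yourself of the formula is a Monte Carlo check. This sketch assumes each process is independently blocked with probability p at any sampled instant, which is the independence assumption the formula rests on:

```python
import random

def utilization(p, n):
    # Utilization = 1 - P(all n processes blocked at the same instant)
    return 1 - p ** n

# Simulate: at each sampled instant, each of n processes is independently
# blocked with probability p; the CPU idles only if all are blocked.
random.seed(1)
p, n, trials = 0.5, 2, 100_000
idle = sum(all(random.random() < p for _ in range(n)) for _ in range(trials))
simulated = 1 - idle / trials

print(utilization(p, n))    # 0.75
print(round(simulated, 2))  # close to 0.75
```

The simulation converges to the closed-form value; the overlap the question worries about is already priced in, because the CPU counts as busy whenever at least one process is runnable.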

Calculate CPU Utilization

I have a task to calculate CPU utilization. There are 4 processes:
P1 waits for I/O 30% of its time.
P2 waits for I/O 40% of its time.
P3 waits for I/O 20% of its time.
P4 waits for I/O 50% of its time.
My result is 0.99999993..., which seems unreasonable to me.
The probability that all processes are waiting for I/O (and therefore the CPU is idle) is:
0.3 * 0.4 * 0.2 * 0.5 = 0.012
The CPU is therefore busy with a probability of: (1 - 0.012) = 0.988, i.e. CPU utilization = 98.8%.
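The same calculation in code, with the CPU idle only when all four processes are blocked on I/O at once:

```python
# Fraction of time each process spends blocked on I/O.
wait_fractions = [0.3, 0.4, 0.2, 0.5]

p_idle = 1.0
for p in wait_fractions:
    p_idle *= p  # probability that all four are blocked simultaneously

utilization = 1 - p_idle
print(round(p_idle, 3))       # 0.012
print(round(utilization, 3))  # 0.988
```

A result like 0.99999993 typically comes from mixing up the inputs, e.g. multiplying the busy fractions (0.7 * 0.6 * 0.8 * 0.5) into the wrong place or raising to an extra power; the idle probability must be the product of the wait fractions themselves.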