Google Chrome DevTools: issue with "Started at" & "Queued at" timing

In Chrome DevTools the 1st request is queued at 0.0 ms and started at 1 ms, but the 2nd request, sent immediately after, is queued at 15 s and started at 10 s. I am not able to understand how the 2nd request can start at 10 s when it was queued at 15 s. Where are the missing 5 seconds? Please help.

Related

Send mail with a delay time in Odoo 11

I have to send a mail when the stage changes, with a delay of some specific time. For example, in my case I have to send the mail 10 minutes after the stage changes, and the time should be configurable.
I have tried to achieve this by setting the force_send parameter to False, as below:
self.env['mail.template'].browse(template.id).send_mail(self.id, force_send=False)
After that I changed the time interval of the "Mail: Email Queue Manager" scheduled action as per my requirement. The problem that arises is: when I send 2 mails, say the first at 11:30 and the second at 11:33, and the scheduled action runs at 11:35, it sends both mails at 11:35, instead of sending them at 11:35 and 11:38 respectively, i.e. 5 minutes after each stage change.
So how can I achieve this?
You could run the send-mail function in a new thread whose execution is delayed by 5 minutes.
Imagine that the Odoo project is running in a main thread while you send the mail from a second thread:
import threading
import time

def odoo_project():
    # Stands in for the main Odoo process: prints 0..10, one number per second.
    for x in range(0, 11):
        print(x)
        time.sleep(1)

def mail_sender():
    # Stands in for the mail thread: prints a number every 5 seconds.
    for x in range(100, 103):
        print(x)
        time.sleep(5)

t1 = threading.Thread(target=odoo_project)
t2 = threading.Thread(target=mail_sender)
t1.start()
t2.start()
This will output
0
100
1
2
3
4
101
5
6
7
8
9
102
10
This means the Odoo project keeps working while the mail thread sends the mail after 5 minutes (replace 5 with 5 * 60 in the example).
It would also be a better design to store the delay value in a config field instead of hard-coding the 5 minutes, in case you want to change it later.
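A minimal sketch of how this could look, assuming template is a mail.template record and delay_seconds comes from your config field (both names are illustrative, not from the original answer; note that in a real Odoo deployment the worker thread would also need its own database cursor/environment):

import threading
import time

def send_mail_later(template, res_id, delay_seconds):
    # Wait out the configured delay in a background thread, then send.
    def worker():
        time.sleep(delay_seconds)                    # e.g. 5 * 60 for 5 minutes
        template.send_mail(res_id, force_send=True)  # send once the delay elapses
    # daemon=True so a sleeping thread never blocks server shutdown
    threading.Thread(target=worker, daemon=True).start()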

Gatling Scenario Response time

I am doing a load test for an API whose average response time is 5 seconds.
In my script I set constantUsersPerSec to 2 with a duration of 150 seconds:
.inject(constantUsersPerSec(2) during (150 seconds)),
Will it generate 2 requests per second, or fewer, because each request takes 5 seconds to complete?
constantUsersPerSec(2) will start a new user executing the scenario every 0.5 seconds or so. For this sort of injection profile Gatling doesn't take into consideration how long a request takes to complete: it is an open workload model, so new users keep arriving at the configured rate regardless of response time.
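In other words, with 2 users arriving per second and each request taking about 5 s, roughly 10 users will be in flight at any moment (Little's law), while the request rate stays at 2/s. A quick back-of-the-envelope check in Python, using the numbers from the question:

# Little's law: concurrency L = arrival rate (lambda) * time in system (W)
arrival_rate = 2.0    # users injected per second: constantUsersPerSec(2)
response_time = 5.0   # average seconds per request, from the question

print(f"requests started per second: {arrival_rate:.0f}")                    # -> 2
print(f"steady-state concurrent users: {arrival_rate * response_time:.0f}")  # -> 10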

JMeter Throughput Shaping Timer questions

So, I am using JMeter's Throughput Shaping Timer to test the performance of our REST server, and I noticed a few things I did not expect.
First of all, my setup details:
1) JMeter version: 3.0 r1743807
2) JMX file: DropBox link
Now, my questions:
1) The Throughput Shaping Timer is configured to run for 60 seconds (100 rps for 30 seconds, 200 rps for the next 30 seconds), but the actual test runs for only 3 seconds, as shown below. Why?
2) As per the plan, the number of requests per second should go from 100 to 200, but here it seems to decrease, as above.
3) As per this plugin's documentation, the number of threads = desired requests per second * server response time / 1000. Is this because of how the plugin works internally, or is it simple logic I am missing?
The issue is with the Thread Group settings.
You have only 1 iteration and ramp up 300 users in 1 second. So if JMeter can send all 300 requests and get the responses, it will finish the test immediately; the timer settings apply only while the test is running.
If you need the test to run for some duration (say 60 seconds), set the loop count to Forever and the duration to 60. Regarding the formula in question 3, it is just Little's law applied to the thread pool: for example, to sustain 200 requests per second when each response takes 500 ms, you need at least 200 * 500 / 1000 = 100 threads.

Real time process missing deadline with SCHED_RR

I have the following configs on an ARMv7 embedded OMAP system.
sched_rt_period_us = 1000000 = 1 sec
sched_rt_runtime_us = 950000 = 0.95 sec
And I have 4 real-time processes running with SCHED_RR and priority 1,
and sched_rr_get_interval() returned 93750000 ns, i.e. 0.09375 s, on this system.
I have added a new process with SCHED_RR, a priority of 1, and the same default rr_interval of 0.09375 s.
According to these configs:
Every second, the 5 RT processes should each execute 2 times (0.09375 * 10 = 0.9375 s), and
the rest of the 1 s interval is left for non-RT tasks,
i.e. 1.0 - 0.9375 = 0.0625 s.
But what I see in practice is that the newly added 5th task misses this timeline: it executes only sporadically, producing output every 1 second or at indeterminate intervals. Please help me make this new process deterministic, so that it executes twice per second as per the configs above.
I tried a static priority of 2 and also SCHED_FIFO, but got the same results.
Or is there anything I am missing in these calculations?
I am using :
Linux xxxx 2.6.33 #2 PREEMPT Tue Aug 14 16:13:05 CEST 2012 armv7l GNU/Linux
Are you sure the scheduler isn't failing simply because it cannot honor the scheduling requests? I mean, that the fifth task misses its deadline because the system is too heavily loaded?
As far as I know, sched_setscheduler has no way to signal that the system load is too heavy. To know whether the system can actually meet the request, you need another scheduling algorithm, such as EDF (earliest deadline first). You may want to check its implementation for Linux.
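If it helps with experimenting, here is a minimal sketch using Python's os module (Linux-only, Python 3.3+; setting a real-time policy requires root/CAP_SYS_NICE) that mirrors the question's setup:

import os

pid = 0  # 0 means "the calling process"

# Put this process under SCHED_RR with static priority 1, as in the question.
os.sched_setscheduler(pid, os.SCHED_RR, os.sched_param(1))

# Query the round-robin quantum the kernel actually grants; the question
# reports 93750000 ns (0.09375 s) on the OMAP board.
quantum = os.sched_rr_get_interval(pid)  # returned as seconds (float)
print(f"SCHED_RR quantum: {quantum:.6f} s")

# Switch to SCHED_FIFO with priority 2 for comparison, as the asker tried.
os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(2))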

TCP, HTTP and the Multi-Threading Sweet Spot

I'm trying to understand the performance numbers I'm getting and how to determine the optimal number of threads.
See the bottom of this post for my results
I wrote an experimental multi-threaded web client in perl which downloads a page, grabs the source for each image tag and downloads the image - discarding the data.
It uses a non-blocking connect with an initial per file timeout of 10 seconds which doubles after each timeout and retry. It also caches IP addresses so each thread only has to do a DNS lookup once.
The total amount of data downloaded is 2271122 bytes in 1316 files via 2.5Mbit connection from http://hubblesite.org/gallery/album/entire/npp/all/hires/true/ . The thumbnail images are hosted by a company which claims to specialize in low latency for high bandwidth applications.
Wall times are:
1 Thread takes 4:48 -- 0 timeouts
2 Threads takes 2:38 -- 0 timeouts
5 Threads takes 2:22 -- 20 timeouts
10 Threads take 2:27 -- 40 timeouts
50 Threads take 2:27 -- 170 timeouts
In the worst case ( 50 threads ) less than 2 seconds of CPU time are consumed by the client.
avg file size 1.7k
avg rtt 100 ms ( as measured by ping )
avg cli cpu/img 1 ms
The fastest average download speed is 5 threads at about 15 KB / sec overall.
The server actually does seem to have pretty low latency as it takes only 218 ms to get each image meaning it takes only 18 ms on average for the server to process each request:
  0 ms  cli sends SYN
 50 ms  srv receives SYN
 50 ms  srv sends SYN+ACK
100 ms  cli connection established / cli sends GET
150 ms  srv receives GET
168 ms  srv reads file, sends data, calls close
218 ms  cli receives HTTP headers + complete file in 2 segments (MSS == 1448)
I can see that the per file average download speed is low because of the small file sizes and the relatively high cost per file of the connection setup.
What I don't understand is why I see virtually no improvement in performance beyond 2 threads. The server seems to be sufficiently fast, but already starts timing out connections at 5 threads.
The timeouts seem to start after about 900 - 1000 successful connections whether it's 5 or 50 threads, which I assume is probably some kind of throttling threshold on the server, but I would expect 10 threads to still be significantly faster than 2.
Am I missing something here?
EDIT-1
Just for comparison's sake I installed the DownThemAll Firefox extension and downloaded the images using it. I set it to 4 simultaneous connections with a 10 second timeout. DTM took about 3 minutes to download all the files and write them to disk, and it also started experiencing timeouts after about 900 connections.
I'm going to run tcpdump to try and get a better picture what's going on at the tcp protocol level.
I also cleared Firefox's cache and hit reload. 40 Seconds to reload the page and all the images. That seemed way too fast - maybe Firefox kept them in a memory cache which wasn't cleared? So I opened Opera and it also took about 40 seconds. I assume they're so much faster because they must be using HTTP/1.1 pipelining?
And the Answer Is!??
So after a little more testing and writing code to reuse the sockets via pipelining I found out some interesting info.
When running at 5 threads the non-pipelined version retrieves the first 1026 images in 77 seconds but takes a further 65 seconds to retrieve the remaining 290 images. This pretty much confirms MattH's theory about my client getting hit by a SYN FLOOD event, causing the server to stop responding to my connection attempts for a short period of time. However, that is only part of the problem, since 77 seconds is still very slow for 5 threads to get 1026 images; if you remove the SYN FLOOD issue it would still take about 99 seconds to retrieve all the files. So based on a little research and some tcpdump traces, it seems the other part of the issue is latency and connection setup overhead.
Here's where we get back to the issue of finding the "Sweet Spot" or the optimal number of threads. I modified the client to implement HTTP/1.1 Pipelining and found that the optimal number of threads in this case is between 15 and 20. For example:
1 Thread takes 2:37 -- 0 timeouts
2 Threads takes 1:22 -- 0 timeouts
5 Threads takes 0:34 -- 0 timeouts
10 Threads take 0:20 -- 0 timeouts
11 Threads take 0:19 -- 0 timeouts
15 Threads take 0:16 -- 0 timeouts
There are four factors which affect this: latency/RTT, maximum end-to-end bandwidth, receive buffer size, and the size of the image files being downloaded. See this site for a discussion of how receive buffer size and RTT latency affect available bandwidth.
In addition to the above, average file size affects the maximum per-connection transfer rate. Every time you issue a GET request you create an empty gap in your transfer pipe the size of the connection RTT. For example, if your Maximum Possible Transfer Rate (recv buffer size / RTT) is 2.5 Mbit and your RTT is 100 ms, then every GET request incurs a minimum 32 kB gap in your pipe. For a large average image size of 320 kB that amounts to roughly 10% overhead per file, effectively reducing your available bandwidth to about 2.25 Mbit. However, for a small average file size of 3.2 kB the overhead jumps to 1000% and available bandwidth is reduced to 232 kbit/second - about 29 kB/s.
So to find the optimal number of threads:
Gap Size = MPTR * RTT
Threads = MPTR / (MPTR * AVG file size / (Gap Size + AVG file size))
which simplifies to (Gap Size + AVG file size) / AVG file size.
For my above scenario this gives me an optimum thread count of 11 threads, which is extremely close to my real-world results.
If the actual connection speed is slower than the theoretical MPTR, then the actual speed should be used in the calculation instead.
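As a sanity check, here is the arithmetic from the paragraphs above in Python, treating the 2.5 Mbit MPTR as 320 kB/s and the RTT as 100 ms (the figures given in this answer):

MPTR = 320.0  # maximum possible transfer rate in kB/s (~2.5 Mbit)
RTT = 0.100   # round-trip time in seconds

gap = MPTR * RTT  # 32 kB of pipe left empty per GET request
for avg_file in (320.0, 3.2):  # large vs. small average file size, in kB
    per_conn = MPTR * avg_file / (gap + avg_file)  # per-connection rate, kB/s
    threads = MPTR / per_conn                      # = (gap + avg_file) / avg_file
    print(f"{avg_file:5.1f} kB files -> {per_conn:5.1f} kB/s per connection, "
          f"~{threads:.0f} threads to fill the pipe")

With the 3.2 kB average this reproduces the ~29 kB/s (232 kbit) figure and the optimum of about 11 threads quoted above.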
Please correct me if this summary is incorrect:
Your multi-threaded client will start a thread that connects to the server and issues just one HTTP GET then that thread closes.
When you say 1, 2, 5, 10, 50 threads, you're just referring to how many concurrent threads you allow, each thread itself only handles one request
Your client takes between 2 and 5 minutes to download over 1000 images
Firefox and Opera will download an equivalent data set in 40 seconds
I suggest that the server rate-limits HTTP connections, either in the webserver daemon itself, a server-local firewall, or most likely a dedicated firewall.
You are actually abusing the web service by not re-using the HTTP connections for more than one request, and the timeouts you experience occur because your SYN flood is being clamped.
Firefox and Opera are probably using between 4 and 8 connections to download all of the files.
If you redesign your code to re-use the connections you should achieve similar performance.
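A minimal sketch of that redesign in Python, assuming the URL list has already been scraped (this relies on HTTP keep-alive via requests.Session rather than pipelining, so each thread pays the connection setup RTT once instead of once per image):

import concurrent.futures
import requests

def download_all(urls, n_threads=5):
    """Fetch every URL, reusing one keep-alive connection per thread."""
    def worker(chunk):
        # One Session per thread: after the first request, each GET to the
        # same host reuses the already-open TCP connection.
        with requests.Session() as session:
            for url in chunk:
                session.get(url, timeout=10).content  # fetch and discard
    chunks = [urls[i::n_threads] for i in range(n_threads)]
    with concurrent.futures.ThreadPoolExecutor(n_threads) as pool:
        list(pool.map(worker, chunks))

# Usage (image_urls is a hypothetical list of the scraped image URLs):
# download_all(image_urls, n_threads=5)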