I know with reachability you can check if you are connected to the internet. But is there a way to determine the speed of that connection?
I am trying to calculate upload speed as well as download speed separately.
How can I determine the speed of the internet connection programmatically?
If you use NSURLConnection to grab a large file (say, 1 MB or greater), you can use a delegate to track intermediate download progress.
Specifically: If you measure the difference in bytes downloaded and the difference in time between calls to the delegate, then you can calculate the ongoing speed in bytes per second (or other time unit).
Step 1: Take a downloadable file URL and create a task for it with NSURLSession's dataTaskWithURL: method.
Step 2: Implement the NSURLSessionDelegate and NSURLSessionDataDelegate methods in your controller.
Step 3: Take two CFAbsoluteTime variables: assign CFAbsoluteTimeGetCurrent() to startTime when the task starts, and to stopTime inside the didReceiveData: delegate method.
Step 4: Calculate the speed like this:
CFAbsoluteTime elapsedTime = stopTime - startTime;
// data is the accumulated response data; the result is in MB/s
float speedOfConnection = elapsedTime != 0 ? [data length] / elapsedTime / 1024.0 / 1024.0 : -1;
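For reference, here is a minimal sketch of the whole delegate approach. The SpeedTester class name and the file URL are placeholders (substitute any reasonably large file of your own):

#import <Foundation/Foundation.h>

// Accumulates received bytes and logs the running average download speed.
@interface SpeedTester : NSObject <NSURLSessionDataDelegate>
@property (nonatomic) CFAbsoluteTime startTime;
@property (nonatomic) long long bytesReceived;
@end

@implementation SpeedTester

- (void)start {
    NSURLSession *session = [NSURLSession sessionWithConfiguration:
                                 [NSURLSessionConfiguration defaultSessionConfiguration]
                                                           delegate:self
                                                      delegateQueue:nil];
    NSURL *url = [NSURL URLWithString:@"https://example.com/largefile.bin"]; // placeholder URL
    self.startTime = CFAbsoluteTimeGetCurrent();
    self.bytesReceived = 0;
    [[session dataTaskWithURL:url] resume];
}

- (void)URLSession:(NSURLSession *)session
          dataTask:(NSURLSessionDataTask *)dataTask
    didReceiveData:(NSData *)data {
    self.bytesReceived += data.length;
    CFAbsoluteTime elapsed = CFAbsoluteTimeGetCurrent() - self.startTime;
    if (elapsed > 0) {
        double mbPerSecond = self.bytesReceived / elapsed / 1024.0 / 1024.0;
        NSLog(@"Average download speed so far: %.2f MB/s", mbPerSecond);
    }
}

@end

This logs a running average since the download started; sampling the byte and time deltas between successive didReceiveData: calls instead would give a more instantaneous rate, as described above.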
There are 2 main ways to calculate download/upload speed.
Passive testing - This is done by using iOS APIs that give you the number of bytes transferred so far. You poll this frequently and calculate speed as bytes transferred divided by elapsed time (a rough sketch follows after these two approaches). This method gives you a speed that is closer to the actual user experience. However, it will not provide a capacity measurement, i.e. the expected capacity of the connection. For example, for fixed networks ISPs usually sell packages based on speed, e.g. a 100 Mbit package or a 1 Gbit package. If you want to see whether the ISP is delivering that speed, the passive approach is not the way: the measured speeds will be much lower because users are not using the full capacity all the time.
Active testing - This method requires downloading and uploading data to a remote server to get the download/upload speed and latency. This is what previous commenters here suggested. The important thing is to decide whether you want to test using a single thread or multiple threads. A single thread will not saturate the internet connection and will not show you the maximum capacity of the connection.
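Here is a rough sketch of the passive approach mentioned above, using the BSD getifaddrs() interface counters available on iOS (the exact header providing struct if_data can vary by SDK, so treat the includes as an assumption). Call it twice, e.g. one second apart, and divide the byte deltas by the interval to get the current rate.

#include <stdint.h>
#include <sys/socket.h>
#include <ifaddrs.h>
#include <net/if.h>
#include <net/if_var.h>   // struct if_data; on some SDKs <net/if.h> is enough

// Sums the received/sent byte counters across all network interfaces.
static void copyTrafficCounters(uint64_t *inBytes, uint64_t *outBytes) {
    *inBytes = 0;
    *outBytes = 0;
    struct ifaddrs *addrs = NULL;
    if (getifaddrs(&addrs) != 0) return;
    for (struct ifaddrs *cursor = addrs; cursor != NULL; cursor = cursor->ifa_next) {
        if (cursor->ifa_addr == NULL || cursor->ifa_addr->sa_family != AF_LINK) continue;
        const struct if_data *stats = (const struct if_data *)cursor->ifa_data;
        if (stats == NULL) continue;
        *inBytes  += stats->ifi_ibytes;   // total bytes received on this interface
        *outBytes += stats->ifi_obytes;   // total bytes sent on this interface
    }
    freeifaddrs(addrs);
}

Because these counters cover everything the device is doing, the resulting numbers reflect actual usage rather than link capacity, which is the trade-off described above.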
There are many methodologies for testing "internet speed"; there is no single "true" speed. You can check the following standards to get a better idea of the recommended approach:
https://itu.int/en/ITU-T/C-I/Pages/IM/Internet-speed.aspx
https://itu.int/itu-t/recommendations/rec.aspx?rec=q.3960
https://itu.int/ITU-T/recommendations/rec.aspx?rec=14125
https://tools.ietf.org/pdf/rfc6349.pdf
You can also use our iOS SDK, which should do what you need and is compliant with the ITU standard:
https://github.com/speedchecker/speedchecker-sdk-ios
I have tried a couple of suggestions mentioned on other sites on how to configure/limit 100 requests per minute for a given REST endpoint for a single user, but it's not working!
Can someone please guide me on how to set up a limit of 100 requests per minute for a given REST endpoint?
Thank you in advance!
The easiest way is adding a Constant Throughput Timer; however, be aware that it is only precise at the minute level, so you will have to let your test run for at least a minute before you start seeing the rate limiting. For 100 requests per minute, set its "Target throughput (in samples per minute)" field to 100. If your test throughput is higher during the first minute, consider playing with the ramp-up.
If you have only 1 user and your test runs for a minute or less you will have to consider the following options:
Precise Throughput Timer
Throughput Shaping Timer
The latter is extremely easy to use and provides a visual way of defining the target throughput.
As far as I know, the ramp-up function has been removed from Locust.
Just wondering if the hatching process is the same as, or similar to, ramp-up? Or is there any way to simulate that behaviour?
I'm not sure which ramp-up function you are talking about. There are plenty of options for controlling ramp-up in Locust, including the new step-load mode:
--step-load Enable Step Load mode to monitor how performance
metrics varies when user load increases. Requires
--step-clients and --step-time to be specified.
--step-clients STEP_CLIENTS
Client count to increase by step in Step Load mode.
Only used together with --step-load
--step-time STEP_TIME
Step duration in Step Load mode, e.g. (300s, 20m, 3h,
1h30m, etc.). Only used together with --step-load
I just found a workaround using Taurus to schedule the load.
I would like to calculate the latency of a running audio/video call.
Given these parameters of the RTCStatsReport object, how can I retrieve the delay?
latency = packet size / bandwidth + delay
I think what you want is the RTT (round-trip time), which is available as "googRtt" in Chrome. You can see it if you go to chrome://webrtc-internals or you can get it programmatically via the stats interface: https://webrtc.github.io/samples/src/content/peerconnection/constraints/ (click the capture and connect button and then scroll down to the statistics).
Note that two of the reports should have googRtt in them: one is for audio and the other for video.
If the throughput increases, how will the response and request times change?
What if I have the data (requests/min)?
JMeter's definition of throughput can be seen here: https://jmeter.apache.org/usermanual/glossary.html
Basically it's a measure of how many requests JMeter was able to send to your test site/application in one second, or, in other words, the number of requests that your test site/application was able to receive from JMeter in one second. An increase in throughput means your site/application was able to receive more requests per second, while a decrease means a reduction in the number of requests it handled per second.
The relationship between throughput and response/request time totally depends, as ysth stated. I typically use this number to gauge the load on the server, but I run the test several times (30x minimum) and take the average.
There's not necessarily a relationship. Can you tell us anything more about why you want to know this, what you plan to do with the information, etc.? It may help get you an answer better suited to your needs.
After completing project development, as developers we are responsible for testing the performance of the application.
As part of performance testing, we have to check:
1) Response time of the application
2) Bottlenecks of the application
3) Throughput of the application
Throughput of the application:
In general, the request capacity of the application in a given time.
As per the Apache JMeter documentation:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
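For example, if a test plan sends 1,200 requests and the time from the start of the first sample to the end of the last sample is 60 seconds, the throughput is 1200 / 60 = 20 requests/second (equivalently, 1,200 requests/minute).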