Calculate average Round Trip Time? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I ran the traceroute command against my Amazon instance. This is the result I got back:
traceroute to 10.250.19.146 (10.250.19.146), 30 hops max, 60 byte packets
1 ip-10-8-145-1.us-west-2.compute.internal (10.8.145.1) 0.996 ms 1.234 ms 3.698 ms
2 100.70.166.213 (100.70.166.213) 0.855 ms 1.179 ms 100.70.166.117 (100.70.166.117) 0.860 ms
3 100.70.175.66 (100.70.175.66) 0.925 ms 100.70.175.174 (100.70.175.174) 0.771 ms 100.70.175.238 (100.70.175.238) 0.811 ms
4 100.70.173.157 (100.70.173.157) 0.811 ms 100.70.172.193 (100.70.172.193) 0.866 ms 100.70.173.69 (100.70.173.69) 0.849 ms
5 100.70.164.46 (100.70.164.46) 4.411 ms 100.70.163.206 (100.70.163.206) 4.655 ms 4.915 ms
6 ip-10-250-19-146.us-west-2.compute.internal (10.250.19.146) 0.563 ms 0.267 ms 0.267 ms
Using this data, how can I calculate the average RTT?

You shouldn't need traceroute for the average RTT to a specific destination; ping does that. If that is what you are trying to do, just average the 'time' field in the ping output over several packets.
If you are actually trying to get the average RTT to each hop along the path, you can average the three times shown on each line (traceroute sends three probes per hop by default and reports the RTT for each). That gives you the average RTT per hop. It's a small sample, obviously; you would want to tell traceroute to send more probes per hop (the -q option) for a more meaningful average.
EDIT: If for some reason you need to use traceroute and are only interested in the final destination, just average the times on the last line of the output. That's the average RTT to the final destination.
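A quick sketch of the per-hop averaging in Python; the parsing assumes the Linux traceroute output format shown in the question:

```python
import re

def per_hop_averages(traceroute_output):
    """Average the RTT samples on each hop line of traceroute output.

    Returns {hop_number: average RTT in ms}.
    """
    averages = {}
    for line in traceroute_output.splitlines():
        parts = line.split()
        if not parts or not parts[0].isdigit():
            continue  # skip the "traceroute to ..." header line
        hop = int(parts[0])
        # pull every "<number> ms" sample, ignoring hostnames and IPs
        samples = [float(ms) for ms in re.findall(r"([\d.]+) ms", line)]
        if samples:
            averages[hop] = sum(samples) / len(samples)
    return averages

output = """traceroute to 10.250.19.146 (10.250.19.146), 30 hops max, 60 byte packets
 1  ip-10-8-145-1.us-west-2.compute.internal (10.8.145.1)  0.996 ms  1.234 ms  3.698 ms
 6  ip-10-250-19-146.us-west-2.compute.internal (10.250.19.146)  0.563 ms  0.267 ms  0.267 ms"""
print(per_hop_averages(output))  # hop 1 -> 1.976 ms, hop 6 -> ~0.366 ms
```

The last hop's average is the average RTT to the final destination.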

Related

How does iperf calculate throughput

I am trying to calculate Ethernet throughput in Python (by creating a UDP socket). I got throughput in the 10-15 MBps range. When I cross-verified using iperf, iperf showed the throughput as 35 MBps.
What logic does iperf use to calculate throughput?
Does it use the UDP or TCP protocol?
For iperf2 and UDP (-u), the value is the number of packets * UDP payload / time. If -i is used, interval reports are produced per that interval; the final report covers the full -t duration. For TCP, it's the bytes read / time.
Bob
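The UDP formula above can be sketched in a few lines; the 1470-byte figure in the example is iperf2's default UDP payload size, used here only as an illustration:

```python
def iperf2_udp_throughput_mbps(num_packets, payload_bytes, seconds):
    """iperf2's UDP figure as described above:
    packets * payload / time, converted to megabits per second."""
    return num_packets * payload_bytes * 8 / seconds / 1e6

# 10,000 datagrams of 1470 bytes in 10 seconds:
print(iperf2_udp_throughput_mbps(10000, 1470, 10.0))  # 11.76 (Mbit/s)
```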

Need help in modelling a delay element in Simulink [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I need to generate a pulse that steps from 0 to 1 after an initial predetermined time has elapsed. When the next predetermined time arrives, the pulse should step back from 1 to 0, then step from 0 to 1 again after that time has elapsed. This model has to be implemented in Simulink.
Thanks.
I'm assuming that the times at which the on/off behaviours are to be performed are available before the model simulation begins. Let's say that it's 2 seconds of value 0 and then 3 seconds of value 1.
Use the Pulse Generator block from the Sources library in Simulink. The trick is starting with a zero. To do this, set the Amplitude to 1, the Period to 5 s, the Pulse Width to 60% and the Phase Delay to 2 s.
The output stays at 0 for the first 2 s, then repeats the pattern of 3 s at 1 followed by 2 s at 0.
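If it helps to check the block semantics, here is a small Python sketch of the same pulse train under those settings (not Simulink code, just the arithmetic the Pulse Generator performs):

```python
def pulse(t, phase_delay=2.0, period=5.0, width=0.6):
    """Value of the pulse train at time t (seconds).

    Mirrors the Pulse Generator settings above: 0 until phase_delay,
    then a repeating pattern that is 1 for width*period of each period.
    """
    if t < phase_delay:
        return 0
    return 1 if (t - phase_delay) % period < width * period else 0

# 0 for the first 2 s, then 3 s high / 2 s low, repeating:
print([pulse(t) for t in range(10)])  # [0, 0, 1, 1, 1, 0, 0, 1, 1, 1]
```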

Data transfer rates for iOS

We are developing an application that transfers data from iOS to a server.
In our latest test the upload speed at both the beginning and end of the transfer was 0.46 Mbps, and the amount of data transferred was 14.5 MB. That should take about 4 minutes according to the math, but it took 6 minutes and 19 seconds. Is that a normal amount of time for that much data to be transferred, or is this an issue with our code?
You can lose 3-15% to TCP overhead, and that's before you account for any retransmission due to errors or packet loss. Your actual transmission time is enough to indicate that you are probably experiencing delay related to more than just TCP overhead. http://www.w3.org/Protocols/HTTP/Performance/Nagle/summary.html is one good reference for detailed metrics on TCP overhead.
You should run a tcpdump on the server side to get a more accurate picture of what's occurring.
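To see why overhead alone doesn't explain it, here is the arithmetic from the question with an overhead fraction factored in (the 15% figure is just the top of the range mentioned above):

```python
def transfer_seconds(megabytes, mbps, overhead=0.0):
    """Ideal transfer time for a payload, optionally discounting the
    usable bandwidth by a protocol-overhead fraction."""
    bits = megabytes * 8 * 1e6               # MB -> bits (decimal units)
    return bits / (mbps * 1e6 * (1 - overhead))

print(round(transfer_seconds(14.5, 0.46)))        # 252 s, the "about 4 minutes"
print(round(transfer_seconds(14.5, 0.46, 0.15)))  # 297 s with 15% TCP overhead
```

Even the 15%-overhead estimate (~5 minutes) falls well short of the observed 6:19, which supports looking beyond plain TCP overhead.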

iPhone 4S - BLE data transfer speed

I've been tinkering with the BLE (Bluetooth Low Energy) connectivity classes quite a bit lately and haven't been able to transfer data any faster than 1 KB per 5 seconds. I believe the documentation says the max speed is 60 bytes per 20 milliseconds. Counting the ACK transfer after each set of packets, I believe we should be able to go as fast as 1.5 KB per second. So my code is around 7-8 times slower than it should be.
I'm just wondering if anyone has been able to do data transfer in BLE as fast as the documentation says it should be able to do. What sort of speed are you getting if faster than mine?
Thanks a lot
Look at Apple's guidelines and you will see that a connection parameter update request is required to speed up your connection.
https://developer.apple.com/hardwaredrivers/BluetoothDesignGuidelines.pdf
I use min = 20 ms, max = 40 ms for the connection interval.
I hope I could help,
Roman
If you are able to use a higher MTU size (negotiated by the iOS side), you can increase the bandwidth even more, because the 4-byte L2CAP header and the 3-byte ATT header are then transmitted only once per MTU rather than once per packet.
If you can transmit 6 packets per connection interval, that lets you fit 35 extra bytes per connection interval (the 7-byte header is still there for the first packet). The MTU payload can also be split over several connection intervals, gaining 7 more bytes per connection interval (it just takes longer to reassemble the packet). The maximum MTU size allowed by ATT is 515 bytes (a 512-byte maximum attribute value plus a 3-byte header for opcode and handle).
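The throughput numbers in this thread can be reproduced with a small back-of-the-envelope function; the 20-byte payload assumes the default ATT MTU of 23 minus the 3-byte ATT header:

```python
def ble_throughput_bytes_per_s(interval_ms, packets_per_interval, payload=20):
    """Rough application-level BLE throughput: packets_per_interval
    notifications of `payload` bytes each, sent every connection interval."""
    return packets_per_interval * payload / (interval_ms / 1000.0)

print(ble_throughput_bytes_per_s(20, 3))  # 3000.0 B/s, the "60 bytes per 20 ms"
print(ble_throughput_bytes_per_s(20, 6))  # 6000.0 B/s with six packets per interval
```

This is a sketch under the stated assumptions; real throughput also depends on the negotiated connection interval and how many packets the controller actually fits into each one.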

how a 32bit processor can address 4 gigabytes of memory [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
I didn't understand this, because 2^32 is 4 gigabits, not bytes, right? Since it's 2^2 * 1024 * 1024 * 1024 bits, right? Am I wrong?
The smallest individually addressable unit of memory is a byte. Bits don't have addresses. You have to read a byte or more and then do bit masking and such to get at the individual bits.
As far as I can recall from my college days, this is how it goes:
If 32 is the size of the address bus, then the total number of memory addresses that can be formed is 2^32 = 4294967296.
However, these are 4294967296 addresses of memory locations, and since each memory location holds 1 byte, this gives us 4294967296 bytes that can be addressed.
Hence 4 GB of memory can be addressed.
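The arithmetic above can be checked directly:

```python
# Each of the 2^32 addresses names one byte, so the range is 4 GiB, not 4 gigabits.
addresses = 2 ** 32
total_bytes = addresses * 1          # one byte per address
print(total_bytes)                   # 4294967296
print(total_bytes // (1024 ** 3))    # 4 (GiB)
print(total_bytes * 8)               # 34359738368 bits -- far more than 4 gigabits
```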
No, it is gigabytes. A byte has 8 bits, so you would multiply the resulting number by 8 to get bits. As John said in his answer, you can't address individual bits; you have to do bit shifting and masking to get at them.
In the old console days, SNES and Megadrive games were measured in megabits, because by definition an 8-megabit game sounds bigger than a 1-megabyte game. In the end most people just said "8 megs", so the confusion gave the impression of 8 megabytes. I'm not sure if Brett is talking about SNES or Megadrive programming, but remember: 8 megabits = 1 megabyte.
The answers above solve it. If you wish to address more than 4 GB, you can use an offset memory register, which can help you address a wider range.