Set burst for bandwidth limit for a pod - kubernetes

As far as I know, there are two ways to limit bandwidth with k8s.
First, configure the CNI with:
{
  "type": "bandwidth",
  "capabilities": {"bandwidth": true},
  "ingressRate": 10000000,
  "ingressBurst": 10000000,
  "egressRate": 10000000,
  "egressBurst": 10000000
}
And second, annotate a pod with:
annotations:
  kubernetes.io/ingress-bandwidth: 8M
  kubernetes.io/egress-bandwidth: 8M
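For reference, in a full pod manifest these annotations sit under metadata (the pod and container names below are just placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-limited
  annotations:
    kubernetes.io/ingress-bandwidth: 8M
    kubernetes.io/egress-bandwidth: 8M
spec:
  containers:
  - name: app
    image: nginx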
I read through the documentation but didn't find any way to configure the burst for a pod.
But the default burst is so large that it makes the limit almost useless:
qdisc tbf 1: dev calia2333445823 root refcnt 2 rate 8Mbit burst 256Mb lat 25.0ms
And it seems that when the CNI is configured with a bandwidth limit, the pod annotations do not override the CNI's configuration and take effect.
So how can I set a burst for just a pod?
iperf output to illustrate the burst:
server:
Accepted connection from 192.168.0.34, port 49470
[ 5] local 192.168.203.129 port 1234 connected to 192.168.0.34 port 49472
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 246 MBytes 2060541 Kbits/sec (omitted)
[ 5] 1.00-2.00 sec 935 KBytes 7657 Kbits/sec (omitted)
[ 5] 2.00-3.00 sec 933 KBytes 7645 Kbits/sec (omitted)
[ 5] 0.00-1.00 sec 935 KBytes 7655 Kbits/sec
[ 5] 1.00-2.00 sec 935 KBytes 7659 Kbits/sec
[ 5] 2.00-3.00 sec 935 KBytes 7655 Kbits/sec
[ 5] 3.00-4.00 sec 933 KBytes 7645 Kbits/sec
[ 5] 4.00-5.00 sec 932 KBytes 7637 Kbits/sec
[ 5] 5.00-6.00 sec 935 KBytes 7657 Kbits/sec
[ 5] 6.00-7.00 sec 936 KBytes 7667 Kbits/sec
[ 5] 7.00-8.00 sec 932 KBytes 7632 Kbits/sec
[ 5] 8.00-9.00 sec 935 KBytes 7659 Kbits/sec
[ 5] 9.00-10.00 sec 933 KBytes 7647 Kbits/sec
[ 5] 10.00-11.00 sec 935 KBytes 7654 Kbits/sec
[ 5] 11.00-11.15 sec 141 KBytes 7582 Kbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-11.15 sec 0.00 Bytes 0.00 Kbits/sec sender
[ 5] 0.00-11.15 sec 10.2 MBytes 7650 Kbits/sec receiver
client:
[ 4] local 192.168.0.34 port 49472 connected to 192.168.203.129 port 1234
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 247 MBytes 2073202 Kbits/sec 0 324 KBytes (omitted)
[ 4] 1.00-2.00 sec 700 KBytes 5732 Kbits/sec 0 366 KBytes (omitted)
[ 4] 2.00-3.00 sec 1.55 MBytes 13037 Kbits/sec 0 407 KBytes (omitted)
[ 4] 0.00-1.00 sec 891 KBytes 7291 Kbits/sec 0 448 KBytes
[ 4] 1.00-2.00 sec 954 KBytes 7821 Kbits/sec 0 491 KBytes
[ 4] 2.00-3.00 sec 1018 KBytes 8344 Kbits/sec 0 532 KBytes
[ 4] 3.00-4.00 sec 1.06 MBytes 8858 Kbits/sec 0 573 KBytes
[ 4] 4.00-5.00 sec 1.18 MBytes 9897 Kbits/sec 0 615 KBytes
[ 4] 5.00-6.00 sec 1.24 MBytes 10433 Kbits/sec 0 656 KBytes
[ 4] 6.00-7.00 sec 1.25 MBytes 10487 Kbits/sec 0 697 KBytes
[ 4] 7.00-8.00 sec 1.25 MBytes 10488 Kbits/sec 0 766 KBytes
[ 4] 8.00-9.00 sec 0.00 Bytes 0.00 Kbits/sec 0 899 KBytes
[ 4] 9.00-10.00 sec 1.25 MBytes 10485 Kbits/sec 0 1.03 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 10.0 MBytes 8410 Kbits/sec 0 sender
[ 4] 0.00-10.00 sec 10.2 MBytes 8532 Kbits/sec receiver
Environment:
Kubernetes 1.15.7
Calico v3.11.1
bandwidth plugin v0.8.0
tc iproute 4.11.0-14.el7

Unfortunately, the current implementation of bandwidth control does not support limiting burst for a pod. I had the same results when testing this. I also looked at the CNI code on the Kubernetes GitHub and found that the only annotations recognized are the ones for ingress bandwidth and egress bandwidth:
bandwidthAnnotation := make(map[string]string)
bandwidthAnnotation["kubernetes.io/ingress-bandwidth"] = "1M"
bandwidthAnnotation["kubernetes.io/egress-bandwidth"] = "1M"
Since network shaping is still in the alpha stage, you could raise a feature request on GitHub and ask for this functionality.
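In the meantime, a possible stopgap (entirely outside Kubernetes, so it is not persisted and disappears whenever the pod's interface is recreated) is to adjust the tbf qdisc on the node by hand. The interface name below is the one from your tc output, and the burst value is only illustrative:
tc qdisc show dev calia2333445823
tc qdisc change dev calia2333445823 root tbf rate 8mbit burst 32kb latency 25ms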

How do I use Get-Content to retrieve specific line and column from text file

An example from the iperf3.txt file that is created:
[SUM] 0.00-30.00 sec 1.09 GBytes 312 Mbits/sec sender
[SUM] 0.00-30.00 sec 1.09 GBytes 312 Mbits/sec receiver
I would like to be able to extract just the "312 Mbits/sec" text and leave the rest alone.
Previously I've used this line of code to present the numbers from the iPerf speed test after the PS script completed, but the entire line is displayed, like the example I put in above.
Get-Content -Path $iperf3.txt | Where-Object {$_ -like '*[SUM]*0.00-15.00*'}
I've been looking at this post Get filtered content from text file with Get-Content and wondered if that is what is needed to do what I want.
I would like the end result to be something like this:
Your speedtest result is 123 Mbits/sec for download.
Your speedtest result is 321 Mbits/sec for upload.
Any input or feedback would be appreciated. Thanks!
EDITED/UPDATED
More text from the text file:
[ ID] Interval Transfer Bandwidth Retr
[ 5] 0.00-30.00 sec 116 MBytes 32.5 Mbits/sec 91 sender
[ 5] 0.00-30.00 sec 116 MBytes 32.3 Mbits/sec receiver
[ 7] 0.00-30.00 sec 116 MBytes 32.6 Mbits/sec 94 sender
[ 7] 0.00-30.00 sec 116 MBytes 32.4 Mbits/sec receiver
[ 9] 0.00-30.00 sec 109 MBytes 30.5 Mbits/sec 107 sender
[ 9] 0.00-30.00 sec 108 MBytes 30.2 Mbits/sec receiver
[ 11] 0.00-30.00 sec 120 MBytes 33.5 Mbits/sec 98 sender
[ 11] 0.00-30.00 sec 119 MBytes 33.3 Mbits/sec receiver
[ 13] 0.00-30.00 sec 108 MBytes 30.2 Mbits/sec 101 sender
[ 13] 0.00-30.00 sec 107 MBytes 29.9 Mbits/sec receiver
[ 15] 0.00-30.00 sec 123 MBytes 34.3 Mbits/sec 104 sender
[ 15] 0.00-30.00 sec 122 MBytes 34.0 Mbits/sec receiver
[ 17] 0.00-30.00 sec 102 MBytes 28.5 Mbits/sec 104 sender
[ 17] 0.00-30.00 sec 101 MBytes 28.3 Mbits/sec receiver
[ 19] 0.00-30.00 sec 108 MBytes 30.2 Mbits/sec 108 sender
[ 19] 0.00-30.00 sec 107 MBytes 30.0 Mbits/sec receiver
[ 21] 0.00-30.00 sec 103 MBytes 28.8 Mbits/sec 105 sender
[ 21] 0.00-30.00 sec 102 MBytes 28.6 Mbits/sec receiver
[ 23] 0.00-30.00 sec 125 MBytes 34.9 Mbits/sec 96 sender
[ 23] 0.00-30.00 sec 124 MBytes 34.6 Mbits/sec receiver
[SUM] 0.00-30.00 sec 1.10 GBytes 316 Mbits/sec 1008 sender
[SUM] 0.00-30.00 sec 1.10 GBytes 314 Mbits/sec receiver
For each speed test, iPerf will append the above content to the text file, so in theory it could be added two or three times. Hope this helps.
Not knowing if there is more text in that file or how large it may be, here are two options for you:
Get-Content -Path 'D:\Test\iperf3.txt' |
    Where-Object { $_ -match '\[SUM].*\s(\d+ Mbits/sec) (\w+)$' } |
    ForEach-Object {
        # define the text. if 'sender' then 'upload', otherwise 'download'
        $updown = if ($matches[2] -eq 'sender') { 'upload' } else { 'download' }
        'Your speedtest result is {0} for {1}' -f $matches[1], $updown
    }
If the file is huge, this works faster and consumes less memory:
switch -Regex -File 'D:\Test\iperf3.txt' {
    '\[SUM].*\s(\d+ Mbits/sec) (\w+)$' {
        # define the text. if 'sender' then 'upload', otherwise 'download'
        $updown = if ($matches[2] -eq 'sender') { 'upload' } else { 'download' }
        'Your speedtest result is {0} for {1}' -f $matches[1], $updown
    }
}
Regex details:
\[ Match the character “[” literally
SUM] Match the characters “SUM]” literally
. Match any single character that is not a line break character
* Between zero and unlimited times, as many times as possible, giving back as needed (greedy)
\s Match a single character that is a “whitespace character” (spaces, tabs, line breaks, etc.)
( Match the regular expression below and capture its match into backreference number 1
\d Match a single digit 0..9
+ Between one and unlimited times, as many times as possible, giving back as needed (greedy)
Mbits/sec Match the characters “ Mbits/sec” literally (note the leading space)
)
(space) Match the character “ ” literally
( Match the regular expression below and capture its match into backreference number 2
\w Match a single character that is a “word character” (letters, digits, etc.)
+ Between one and unlimited times, as many times as possible, giving back as needed (greedy)
)
$ Assert position at the end of the string (or before the line break at the end of the string, if any)
Now that you have shown more of the file, I noticed that the sender lines all have an extra numeric value in front of the word sender, which wasn't there in your earlier example.
Because of that, we need to adjust the regex.
This new code should then work:
switch -Regex -File 'D:\Test\iperf3.txt' {
    '\[SUM].*\s(\d+ Mbits/sec)\s(?:[^\s]+\s)?(\w+)$' {
        # define the text. if 'sender' then 'upload', otherwise 'download'
        $updown = if ($matches[2] -eq 'sender') { 'upload' } else { 'download' }
        'Your speedtest result is {0} for {1}' -f $matches[1], $updown
    }
}
Regex details:
\[ Match the character “[” literally
SUM] Match the characters “SUM]” literally
. Match any single character that is not a line break character
* Between zero and unlimited times, as many times as possible, giving back as needed (greedy)
\s Match a single character that is a “whitespace character” (spaces, tabs, line breaks, etc.)
( Match the regular expression below and capture its match into backreference number 1
\d Match a single digit 0..9
+ Between one and unlimited times, as many times as possible, giving back as needed (greedy)
Mbits/sec Match the characters “ Mbits/sec” literally (note the leading space)
)
\s Match a single character that is a “whitespace character” (spaces, tabs, line breaks, etc.)
(?: Match the regular expression below, but do not capture its match (non-capturing group)
[^\s] Match any character that is NOT a “A whitespace character (spaces, tabs, line breaks, etc.)”
+ Between one and unlimited times, as many times as possible, giving back as needed (greedy)
\s Match a single character that is a “whitespace character” (spaces, tabs, line breaks, etc.)
)? Between zero and one times, as many times as possible, giving back as needed (greedy)
( Match the regular expression below and capture its match into backreference number 2
\w Match a single character that is a “word character” (letters, digits, etc.)
+ Between one and unlimited times, as many times as possible, giving back as needed (greedy)
)
$ Assert position at the end of the string (or before the line break at the end of the string, if any)
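If you want to reuse the two result lines later in the script (for logging, or to build an e-mail body), you can simply capture the output of the switch. The file paths below are only examples:
$results = switch -Regex -File 'D:\Test\iperf3.txt' {
    '\[SUM].*\s(\d+ Mbits/sec)\s(?:[^\s]+\s)?(\w+)$' {
        $updown = if ($matches[2] -eq 'sender') { 'upload' } else { 'download' }
        'Your speedtest result is {0} for {1}' -f $matches[1], $updown
    }
}
$results                                                      # show on screen
$results | Set-Content -Path 'D:\Test\speedtest_summary.txt'  # or save for later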

Trace and Watch (wt) on breakpoint in WinDbg

I'd like to get a trace of function calls inside comctl32.dll beginning when the left mouse button is pressed on a tree control item and while the mouse button is held down.
I can set a breakpoint on comctl32!TV_ButtonDown and then use wt when the breakpoint is hit, but this requires me to release the mouse button and interact with WinDbg. When I try to use a command string for my breakpoint like this: bp comctl32!TV_ButtonDown "wt -m comctl32", the tracing stops immediately after starting when the breakpoint is hit:
Tracing COMCTL32!TV_ButtonDown to return address 00007ffd`57a48f1d
0 instructions were executed in 0 events (0 from other threads)
Function Name Invocations MinInst MaxInst AvgInst
0 system calls were executed
COMCTL32!TV_ButtonDown+0x5:
00007ffd`57b03bd9 48896c2418 mov qword ptr [rsp+18h],rbp ss:000000b7`746f8b00=0000000000000201
Is what I am attempting possible? Are there any alternatives?
This trace was done on 32-bit, not 64-bit, but the idea is the same: supply an end address to wt (the return address on top of the stack is what I give it via #$ra) and don't release the mouse.
It is not mandatory to give #$ra, but you should be sure the end address will eventually be reached without releasing the left mouse button.
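Applied to your 64-bit case, the breakpoint would be something along these lines (I'm assuming the @$ra pseudo-register here, which holds the return address currently on the stack when the function is entered, so wt gets an explicit end address):
bp comctl32!TV_ButtonDown "wt -m comctl32 @$ra"
The 32-bit session below shows the same approach: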
0:000> bl
0 e Disable Clear 6e57a2ee 0001 (0001) 0:**** COMCTL32!TV_ButtonDown "wt -m comctl32 #$ra"
0:000> g
17 0 [ 0] COMCTL32!TV_ButtonDown
10 0 [ 1] COMCTL32!GetMessagePosClient
3 0 [ 2] USER32!GetMessagePos
18 3 [ 1] COMCTL32!GetMessagePosClient
17 0 [ 2] USER32!ScreenToClient
25 20 [ 1] COMCTL32!GetMessagePosClient
20 45 [ 0] COMCTL32!TV_ButtonDown
22 0 [ 1] COMCTL32!TV_DismissEdit
14 0 [ 2] USER32!IsWindowVisible
26 14 [ 1] COMCTL32!TV_DismissEdit
10 0 [ 2] USER32!GetDlgCtrlID
33 24 [ 1] COMCTL32!TV_DismissEdit
10 0 [ 2] USER32!SetWindowLongW
48 34 [ 1] COMCTL32!TV_DismissEdit
16 0 [ 2] COMCTL32!TV_InvalidateItem
40 0 [ 3] COMCTL32!TV_GetItemRect
24 40 [ 2] COMCTL32!TV_InvalidateItem
4 0 [ 3] USER32!NtUserRedrawWindow
27 44 [ 2] COMCTL32!TV_InvalidateItem
52 105 [ 1] COMCTL32!TV_DismissEdit
4 0 [ 2] USER32!NtUserShowWindow
58 109 [ 1] COMCTL32!TV_DismissEdit
34 0 [ 2] COMCTL32!CCSendNotify
25 0 [ 3] USER32!GetParent
40 25 [ 2] COMCTL32!CCSendNotify
18 0 [ 3] USER32!GetWindow
44 43 [ 2] COMCTL32!CCSendNotify
10 0 [ 3] USER32!GetDlgCtrlID
57 53 [ 2] COMCTL32!CCSendNotify
24 0 [ 3] USER32!GetWindowThreadProcessId
60 77 [ 2] COMCTL32!CCSendNotify
1 0 [ 3] kernel32!GetCurrentProcessIdStub
1 0 [ 3] kernel32!GetCurrentProcessId
3 0 [ 3] KERNELBASE!GetCurrentProcessId
87 82 [ 2] COMCTL32!CCSendNotify
24 0 [ 3] USER32!SendMessageW
109 106 [ 2] COMCTL32!CCSendNotify
16 0 [ 3] COMCTL32!InOutAtoW
118 122 [ 2] COMCTL32!CCSendNotify
3 0 [ 3] COMCTL32!__security_check_cookie
120 125 [ 2] COMCTL32!CCSendNotify
67 354 [ 1] COMCTL32!TV_DismissEdit
4 0 [ 2] USER32!NtUserDestroyWindow
75 358 [ 1] COMCTL32!TV_DismissEdit
3 0 [ 2] COMCTL32!__security_check_cookie
77 361 [ 1] COMCTL32!TV_DismissEdit
27 483 [ 0] COMCTL32!TV_ButtonDown
3 0 [ 1] COMCTL32!__security_check_cookie
29 486 [ 0] COMCTL32!TV_ButtonDown
515 instructions were executed in 514 events (0 from other threads)
Function Name Invocations MinInst MaxInst AvgInst
COMCTL32!CCSendNotify 1 120 120 120
COMCTL32!GetMessagePosClient 1 25 25 25
COMCTL32!InOutAtoW 1 16 16 16
COMCTL32!TV_ButtonDown 1 29 29 29
COMCTL32!TV_DismissEdit 1 77 77 77
COMCTL32!TV_GetItemRect 1 40 40 40
COMCTL32!TV_InvalidateItem 1 27 27 27
COMCTL32!__security_check_cookie 3 3 3 3
KERNELBASE!GetCurrentProcessId 1 3 3 3
USER32!GetDlgCtrlID 2 10 10 10
USER32!GetMessagePos 1 3 3 3
USER32!GetParent 1 25 25 25
USER32!GetWindow 1 18 18 18
USER32!GetWindowThreadProcessId 1 24 24 24
USER32!IsWindowVisible 1 14 14 14
USER32!NtUserDestroyWindow 1 4 4 4
USER32!NtUserRedrawWindow 1 4 4 4
USER32!NtUserShowWindow 1 4 4 4
USER32!ScreenToClient 1 17 17 17
USER32!SendMessageW 1 24 24 24
USER32!SetWindowLongW 1 10 10 10
kernel32!GetCurrentProcessId 1 1 1 1
kernel32!GetCurrentProcessIdStub 1 1 1 1
0 system calls were executed
eax=00000000 ebx=00000201 ecx=422f0fd7 edx=77a370f4 esi=002d9590 edi=00000200
eip=6e542888 esp=0012fcc4 ebp=0012fd00 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
COMCTL32!TV_WndProc+0x577:
6e542888 e90a060000 jmp COMCTL32!TV_WndProc+0x5de (6e542e97)

Iperf3 Jitter Value Way Too High

I was running a UDP test and noticed that the jitter value was way too high. Is something not initialized properly in the iperf3 source code? The connection between client and server is very good.
Maybe the reason for the high jitter is that prev_transit is not initialized to zero, but I am not sure.
How jitter should work:
http://toncar.cz/Tutorials/VoIP/VoIP_Basics_Jitter.html
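For reference, the estimator described there (and in RFC 3550, which iperf follows) smooths the absolute difference in transit time between consecutive packets. A rough Python sketch with made-up timestamps, not iperf3's actual code:
# Rough sketch of the RFC 3550 jitter estimator; not iperf3's actual code.
# send_times / recv_times are made-up example timestamps in seconds.
def jitter(send_times, recv_times):
    j = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0  # exponential smoothing with gain 1/16
        prev_transit = transit
    return j

print(jitter([0.0, 1.0, 2.0], [0.050, 1.052, 2.049]))  # about 0.0003 s here
With only a handful of datagrams, a single large transit-time difference (or a bogus initial prev_transit) dominates the smoothed value for a long time.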
Client:
[ 4] local 10.131.136.133 port 49402 connected to 10.131.138.232 port 5201
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-1.00 sec 16.0 KBytes 131 Kbits/sec 2
[ 4] 1.00-2.00 sec 8.00 KBytes 65.5 Kbits/sec 1
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-2.00 sec 24.0 KBytes 98.2 Kbits/sec 63.064 ms 0/3 (0%)
[ 4] Sent 3 datagrams
iperf Done.
Server:
Starting Test: protocol: UDP, 1 streams, 8192 byte blocks, omitting 0 seconds, 2 second test
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-1.00 sec 16.0 KBytes 131 Kbits/sec 67.261 ms 0/2 (0%)
[ 5] 1.00-2.00 sec 8.00 KBytes 65.5 Kbits/sec 63.064 ms 0/1 (0%)
[ 5] 2.00-2.04 sec 0.00 Bytes 0.00 bits/sec 63.064 ms 0/0 (-nan%)
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 5] 0.00-2.04 sec 0.00 Bytes 0.00 bits/sec 63.064 ms 0/3 (0%)
CPU Utilization: local/receiver 0.0% (0.0%u/0.0%s), remote/sender 1.9% (0.3%u/1.8%s)
iperf 3.1
I'm guessing you are the same person who filed this issue in the iperf3 issue tracker because the wording of this question and the one in the issue tracker are almost identical:
https://github.com/esnet/iperf/issues/672
I answered there that you had too few packets per measurement interval to compute the jitter in a meaningful way, and I suggested sending at a higher bitrate to get more data points. Also, you should use iperf3 3.2 or newer because of improvements in the timing of sending packets.
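For example (server address is a placeholder), sending UDP at a few Mbits/sec instead of the roughly 100 Kbits/sec in your test gives the estimator far more samples per interval:
iperf3 -c <server> -u -b 5M -t 30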

20 seconds difference between ntp-synced servers

I've got several CentOS 6 servers synced to the pool.ntp.org time servers.
But sometimes the time on them drifts out of sync by 20-30 seconds, which causes errors in my app.
What can be the cause of this, and where should I look for it?
Config
tinker panic 1000 allan 1500 dispersion 15 step 0.128 stepout 900
statsdir /var/log/ntpstats/
leapfile /etc/ntp.leapseconds
driftfile /var/lib/ntp/ntp.drift
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
disable monitor
server 0.pool.ntp.org iburst minpoll 6 maxpoll 10
restrict 0.pool.ntp.org nomodify notrap noquery
server 1.pool.ntp.org iburst minpoll 6 maxpoll 10
restrict 1.pool.ntp.org nomodify notrap noquery
server 2.pool.ntp.org iburst minpoll 6 maxpoll 10
restrict 2.pool.ntp.org nomodify notrap noquery
server 3.pool.ntp.org iburst minpoll 6 maxpoll 10
restrict 3.pool.ntp.org nomodify notrap noquery
restrict default kod notrap nomodify nopeer noquery
restrict 127.0.0.1 nomodify
restrict -6 default kod notrap nomodify nopeer noquery
restrict -6 ::1 nomodify
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
srv1
remote refid st t when poll reach delay offset jitter
==============================================================================
server01.coloce .STEP. 16 u 11d 1024 0 0.000 0.000 0.000
+mt4.raqxs.net 193.190.230.66 2 u 510 1024 377 6.367 5.984 7.433
+16-164-ftth.ons 193.79.237.14 2 u 217 1024 375 11.339 -0.028 4.564
*services.freshd 213.136.0.252 2 u 419 1024 377 6.735 2.048 4.321
LOCAL(0) .LOCL. 10 l - 64 0 0.000 0.000 0.000
srv2
remote refid st t when poll reach delay offset jitter
==============================================================================
+ntp2.edutel.nl 80.94.65.10 2 u 527 1024 377 11.924 1.469 0.753
-95.211.224.12 193.67.79.202 2 u 364 1024 377 12.989 4.930 0.628
+app.kingsquare. 193.79.237.14 2 u 339 1024 377 5.485 0.493 0.591
*ntp.bserved.nl 193.67.79.202 2 u 206 1024 377 7.007 0.539 0.420
LOCAL(0) .LOCL. 10 l - 64 0 0.000 0.000 0.000

What read throughput should be expected out of google cloud storage from a compute engine instance?

I am trying to get a feel for what I should expect in terms of performance from cloud storage.
I just ran gsutil perfdiag from a Compute Engine instance in the same location (US) and the same project as my Cloud Storage bucket.
For Nearline storage I get 25 Mibit/s read and 353 Mibit/s write. Is that low, high, or average, and why is there such a discrepancy between read and write?
==============================================================================
DIAGNOSTIC RESULTS
==============================================================================
------------------------------------------------------------------------------
Latency
------------------------------------------------------------------------------
Operation Size Trials Mean (ms) Std Dev (ms) Median (ms) 90th % (ms)
========= ========= ====== ========= ============ =========== ===========
Delete 0 B 5 112.0 52.9 78.2 173.6
Delete 1 KiB 5 94.1 17.5 90.8 115.0
Delete 100 KiB 5 80.4 2.5 79.9 83.4
Delete 1 MiB 5 86.7 3.7 88.2 90.4
Download 0 B 5 58.1 3.8 57.8 62.2
Download 1 KiB 5 2892.4 1071.5 2589.1 4111.9
Download 100 KiB 5 1955.0 711.3 1764.9 2814.3
Download 1 MiB 5 2679.4 976.2 2216.2 3869.9
Metadata 0 B 5 69.1 57.0 42.8 129.3
Metadata 1 KiB 5 37.4 1.5 37.1 39.0
Metadata 100 KiB 5 64.2 47.7 40.9 113.0
Metadata 1 MiB 5 45.7 9.1 49.4 55.1
Upload 0 B 5 138.3 21.0 122.5 164.8
Upload 1 KiB 5 170.6 61.5 139.4 242.0
Upload 100 KiB 5 387.2 294.5 245.8 706.1
Upload 1 MiB 5 257.4 51.3 228.4 319.7
------------------------------------------------------------------------------
Write Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Write throughput: 353.13 Mibit/s.
------------------------------------------------------------------------------
Read Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Read throughput: 25.16 Mibit/s.
------------------------------------------------------------------------------
System Information
------------------------------------------------------------------------------
IP Address:
##.###.###.##
Temporary Directory:
/tmp
Bucket URI:
gs://pl_twitter/
gsutil Version:
4.12
boto Version:
2.30.0
Measurement time:
2015-05-11 07:03:26 PM
Google Server:
Google Server IP Addresses:
##.###.###.###
Google Server Hostnames:
Google DNS thinks your IP is:
CPU Count:
4
CPU Load Average:
[0.16, 0.05, 0.06]
Total Memory:
14.38 GiB
Free Memory:
11.34 GiB
TCP segments sent during test:
5592296
TCP segments received during test:
2417850
TCP segments retransmit during test:
3794
Disk Counter Deltas:
disk reads writes rbytes wbytes rtime wtime
sda1 31 5775 126976 1091674112 856 1603544
TCP /proc values:
wmem_default = 212992
wmem_max = 212992
rmem_default = 212992
tcp_timestamps = 1
tcp_window_scaling = 1
tcp_sack = 1
rmem_max = 212992
Boto HTTPS Enabled:
True
Requests routed through proxy:
False
Latency of the DNS lookup for Google Storage server (ms):
2.5
Latencies connecting to Google Storage server IPs (ms):
##.###.###.### = 1.1
------------------------------------------------------------------------------
In-Process HTTP Statistics
------------------------------------------------------------------------------
Total HTTP requests made: 94
HTTP 5xx errors: 0
HTTP connections broken: 0
Availability: 100%
For standard storage I get:
==============================================================================
DIAGNOSTIC RESULTS
==============================================================================
------------------------------------------------------------------------------
Latency
------------------------------------------------------------------------------
Operation Size Trials Mean (ms) Std Dev (ms) Median (ms) 90th % (ms)
========= ========= ====== ========= ============ =========== ===========
Delete 0 B 5 121.9 34.8 105.1 158.9
Delete 1 KiB 5 159.3 58.2 126.0 232.3
Delete 100 KiB 5 106.8 17.0 103.3 125.7
Delete 1 MiB 5 167.0 77.3 145.1 251.0
Download 0 B 5 87.2 10.3 81.1 100.0
Download 1 KiB 5 95.5 18.0 92.4 115.6
Download 100 KiB 5 156.7 20.5 155.8 179.6
Download 1 MiB 5 219.6 11.7 213.4 232.6
Metadata 0 B 5 59.7 4.5 57.8 64.4
Metadata 1 KiB 5 61.0 21.8 49.6 85.4
Metadata 100 KiB 5 55.3 10.4 50.7 67.7
Metadata 1 MiB 5 75.6 27.8 67.4 109.0
Upload 0 B 5 162.7 37.0 139.0 207.7
Upload 1 KiB 5 165.2 23.6 152.3 194.1
Upload 100 KiB 5 392.1 235.0 268.7 643.0
Upload 1 MiB 5 387.0 79.5 340.9 486.1
------------------------------------------------------------------------------
Write Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Write throughput: 515.63 Mibit/s.
------------------------------------------------------------------------------
Read Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Read throughput: 123.14 Mibit/s.
------------------------------------------------------------------------------
System Information
------------------------------------------------------------------------------
IP Address:
10.240.133.190
Temporary Directory:
/tmp
Bucket URI:
gs://test_throughput_standard/
gsutil Version:
4.12
boto Version:
2.30.0
Measurement time:
2015-05-21 11:08:50 AM
Google Server:
Google Server IP Addresses:
##.###.##.###
Google Server Hostnames:
Google DNS thinks your IP is:
CPU Count:
8
CPU Load Average:
[0.28, 0.18, 0.08]
Total Memory:
49.91 GiB
Free Memory:
47.9 GiB
TCP segments sent during test:
5165461
TCP segments received during test:
1881727
TCP segments retransmit during test:
3423
Disk Counter Deltas:
disk reads writes rbytes wbytes rtime wtime
dm-0 0 0 0 0 0 0
loop0 0 0 0 0 0 0
loop1 0 0 0 0 0 0
sda1 0 4229 0 1080618496 0 1605286
TCP /proc values:
wmem_default = 212992
wmem_max = 212992
rmem_default = 212992
tcp_timestamps = 1
tcp_window_scaling = 1
tcp_sack = 1
rmem_max = 212992
Boto HTTPS Enabled:
True
Requests routed through proxy:
False
Latency of the DNS lookup for Google Storage server (ms):
1.2
Latencies connecting to Google Storage server IPs (ms):
##.###.##.### = 1.3
------------------------------------------------------------------------------
In-Process HTTP Statistics
------------------------------------------------------------------------------
Total HTTP requests made: 94
HTTP 5xx errors: 0
HTTP connections broken: 0
Availability: 100%
==============================================================================
DIAGNOSTIC RESULTS
==============================================================================
------------------------------------------------------------------------------
Latency
------------------------------------------------------------------------------
Operation Size Trials Mean (ms) Std Dev (ms) Median (ms) 90th % (ms)
========= ========= ====== ========= ============ =========== ===========
Delete 0 B 5 145.1 59.4 117.8 215.2
Delete 1 KiB 5 178.0 51.4 190.6 224.3
Delete 100 KiB 5 98.3 5.0 96.6 104.3
Delete 1 MiB 5 117.7 19.2 112.0 140.2
Download 0 B 5 109.4 38.9 91.9 156.5
Download 1 KiB 5 149.5 41.0 141.9 192.5
Download 100 KiB 5 106.9 20.3 108.6 127.8
Download 1 MiB 5 121.1 16.0 112.2 140.9
Metadata 0 B 5 70.0 10.8 76.8 79.9
Metadata 1 KiB 5 113.8 36.6 124.0 148.7
Metadata 100 KiB 5 63.1 20.2 55.7 86.5
Metadata 1 MiB 5 59.2 4.9 61.3 62.9
Upload 0 B 5 127.5 22.6 117.4 153.6
Upload 1 KiB 5 215.2 54.8 221.4 270.4
Upload 100 KiB 5 229.8 79.2 171.6 329.8
Upload 1 MiB 5 489.8 412.3 295.3 915.4
------------------------------------------------------------------------------
Write Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Write throughput: 503 Mibit/s.
------------------------------------------------------------------------------
Read Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Read throughput: 1.05 Gibit/s.
------------------------------------------------------------------------------
System Information
------------------------------------------------------------------------------
IP Address:
################
Temporary Directory:
/tmp
Bucket URI:
gs://test_throughput_standard/
gsutil Version:
4.12
boto Version:
2.30.0
Measurement time:
2015-05-21 06:20:49 PM
Google Server:
Google Server IP Addresses:
#############
Google Server Hostnames:
Google DNS thinks your IP is:
CPU Count:
8
CPU Load Average:
[0.08, 0.03, 0.05]
Total Memory:
49.91 GiB
Free Memory:
47.95 GiB
TCP segments sent during test:
4958020
TCP segments received during test:
2326124
TCP segments retransmit during test:
2163
Disk Counter Deltas:
disk reads writes rbytes wbytes rtime wtime
dm-0 0 0 0 0 0 0
loop0 0 0 0 0 0 0
loop1 0 0 0 0 0 0
sda1 0 4202 0 1080475136 0 1610000
TCP /proc values:
wmem_default = 212992
wmem_max = 212992
rmem_default = 212992
tcp_timestamps = 1
tcp_window_scaling = 1
tcp_sack = 1
rmem_max = 212992
Boto HTTPS Enabled:
True
Requests routed through proxy:
False
Latency of the DNS lookup for Google Storage server (ms):
1.6
Latencies connecting to Google Storage server IPs (ms):
############ = 1.3
2nd Run:
==============================================================================
DIAGNOSTIC RESULTS
==============================================================================
------------------------------------------------------------------------------
Latency
------------------------------------------------------------------------------
Operation Size Trials Mean (ms) Std Dev (ms) Median (ms) 90th % (ms)
========= ========= ====== ========= ============ =========== ===========
Delete 0 B 5 91.5 14.0 85.1 106.0
Delete 1 KiB 5 125.4 76.2 91.7 203.3
Delete 100 KiB 5 104.4 15.9 99.0 123.2
Delete 1 MiB 5 128.2 36.0 116.4 170.7
Download 0 B 5 60.2 8.3 63.0 68.7
Download 1 KiB 5 62.6 11.3 61.6 74.8
Download 100 KiB 5 103.2 21.3 110.7 123.8
Download 1 MiB 5 137.1 18.5 130.3 159.8
Metadata 0 B 5 73.4 35.9 62.3 114.2
Metadata 1 KiB 5 55.9 18.1 55.3 75.6
Metadata 100 KiB 5 45.7 11.0 42.5 59.1
Metadata 1 MiB 5 49.9 7.9 49.2 58.8
Upload 0 B 5 128.2 24.6 115.5 158.8
Upload 1 KiB 5 153.5 44.1 132.4 206.4
Upload 100 KiB 5 176.8 26.8 165.1 209.7
Upload 1 MiB 5 277.9 80.2 214.7 378.5
------------------------------------------------------------------------------
Write Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Write throughput: 463.76 Mibit/s.
------------------------------------------------------------------------------
Read Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Read throughput: 184.96 Mibit/s.
------------------------------------------------------------------------------
System Information
------------------------------------------------------------------------------
IP Address:
#################
Temporary Directory:
/tmp
Bucket URI:
gs://test_throughput_standard/
gsutil Version:
4.12
boto Version:
2.30.0
Measurement time:
2015-05-21 06:24:31 PM
Google Server:
Google Server IP Addresses:
####################
Google Server Hostnames:
Google DNS thinks your IP is:
CPU Count:
8
CPU Load Average:
[0.19, 0.17, 0.11]
Total Memory:
49.91 GiB
Free Memory:
47.9 GiB
TCP segments sent during test:
5180256
TCP segments received during test:
2034323
TCP segments retransmit during test:
2883
Disk Counter Deltas:
disk reads writes rbytes wbytes rtime wtime
dm-0 0 0 0 0 0 0
loop0 0 0 0 0 0 0
loop1 0 0 0 0 0 0
sda1 0 4209 0 1080480768 0 1604066
TCP /proc values:
wmem_default = 212992
wmem_max = 212992
rmem_default = 212992
tcp_timestamps = 1
tcp_window_scaling = 1
tcp_sack = 1
rmem_max = 212992
Boto HTTPS Enabled:
True
Requests routed through proxy:
False
Latency of the DNS lookup for Google Storage server (ms):
3.5
Latencies connecting to Google Storage server IPs (ms):
################ = 1.1
------------------------------------------------------------------------------
In-Process HTTP Statistics
------------------------------------------------------------------------------
Total HTTP requests made: 94
HTTP 5xx errors: 0
HTTP connections broken: 0
Availability: 100%
3rd run
==============================================================================
DIAGNOSTIC RESULTS
==============================================================================
------------------------------------------------------------------------------
Latency
------------------------------------------------------------------------------
Operation Size Trials Mean (ms) Std Dev (ms) Median (ms) 90th % (ms)
========= ========= ====== ========= ============ =========== ===========
Delete 0 B 5 157.0 78.3 101.5 254.9
Delete 1 KiB 5 153.5 49.1 178.3 202.5
Delete 100 KiB 5 152.9 47.5 168.0 202.6
Delete 1 MiB 5 110.6 20.4 105.7 134.5
Download 0 B 5 104.4 50.5 66.8 167.6
Download 1 KiB 5 68.1 11.1 68.7 79.2
Download 100 KiB 5 85.5 5.8 86.0 90.8
Download 1 MiB 5 126.6 40.1 100.5 175.0
Metadata 0 B 5 67.9 16.2 61.0 86.6
Metadata 1 KiB 5 49.3 8.6 44.9 59.5
Metadata 100 KiB 5 66.6 35.4 44.2 107.8
Metadata 1 MiB 5 53.9 13.2 52.1 69.4
Upload 0 B 5 136.7 37.1 114.4 183.5
Upload 1 KiB 5 145.5 58.3 116.8 208.2
Upload 100 KiB 5 227.3 37.6 233.3 259.3
Upload 1 MiB 5 274.8 45.2 261.8 328.5
------------------------------------------------------------------------------
Write Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Write throughput: 407.03 Mibit/s.
------------------------------------------------------------------------------
Read Throughput
------------------------------------------------------------------------------
Copied a 1 GiB file 5 times for a total transfer size of 5 GiB.
Read throughput: 629.07 Mibit/s.
------------------------------------------------------------------------------
System Information
------------------------------------------------------------------------------
IP Address:
###############
Temporary Directory:
/tmp
Bucket URI:
gs://test_throughput_standard/
gsutil Version:
4.12
boto Version:
2.30.0
Measurement time:
2015-05-21 06:32:48 PM
Google Server:
Google Server IP Addresses:
################
Google Server Hostnames:
Google DNS thinks your IP is:
CPU Count:
8
CPU Load Average:
[0.11, 0.13, 0.13]
Total Memory:
49.91 GiB
Free Memory:
47.94 GiB
TCP segments sent during test:
5603925
TCP segments received during test:
2438425
TCP segments retransmit during test:
4586
Disk Counter Deltas:
disk reads writes rbytes wbytes rtime wtime
dm-0 0 0 0 0 0 0
loop0 0 0 0 0 0 0
loop1 0 0 0 0 0 0
sda1 0 4185 0 1080353792 0 1603851
TCP /proc values:
wmem_default = 212992
wmem_max = 212992
rmem_default = 212992
tcp_timestamps = 1
tcp_window_scaling = 1
tcp_sack = 1
rmem_max = 212992
Boto HTTPS Enabled:
True
Requests routed through proxy:
False
Latency of the DNS lookup for Google Storage server (ms):
2.2
Latencies connecting to Google Storage server IPs (ms):
############## = 1.6
All things being equal, write performance is generally higher for modern storage systems because of the presence of a caching layer between the application and the disks. That said, what you are seeing is within the expected range for "Nearline" storage.
I have observed far better throughput when using "Standard" storage buckets, though latency did not improve much. Consider using a "Standard" bucket if your application requires high throughput. If your application is sensitive to latency, using local storage as a cache (or scratch space) may be the only option.
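For what it's worth, a gsutil perfdiag command roughly like this one (bucket name is a placeholder, and exact flag support varies a little between gsutil versions) limits the run to the read and write throughput tests with five 1 GiB objects:
gsutil perfdiag -t wthru,rthru -n 5 -s 1G gs://your-bucket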
Here is a snippet from one of my experiments on "Standard" buckets:
------------------------------------------------------------------------------
Latency
------------------------------------------------------------------------------
Operation Size Trials Mean (ms) Std Dev (ms) Median (ms) 90th % (ms)
========= ========= ====== ========= ============ =========== ===========
Delete 0 B 10 91.5 12.4 89.0 98.5
Delete 1 KiB 10 96.4 9.1 95.6 105.6
Delete 100 KiB 10 92.9 22.8 85.3 102.4
Delete 1 MiB 10 86.4 9.1 84.1 93.2
Download 0 B 10 54.2 5.1 55.4 58.8
Download 1 KiB 10 83.3 18.7 78.4 94.9
Download 100 KiB 10 75.2 14.5 68.6 92.6
Download 1 MiB 10 95.0 19.7 86.3 126.7
Metadata 0 B 10 33.5 7.9 31.1 44.8
Metadata 1 KiB 10 36.3 7.2 35.8 46.8
Metadata 100 KiB 10 37.7 9.2 36.6 44.1
Metadata 1 MiB 10 116.1 231.3 36.6 136.1
Upload 0 B 10 151.4 67.5 122.9 195.9
Upload 1 KiB 10 134.2 22.4 127.9 149.3
Upload 100 KiB 10 168.8 20.5 168.6 188.6
Upload 1 MiB 10 213.3 37.6 200.2 262.5
------------------------------------------------------------------------------
Write Throughput
------------------------------------------------------------------------------
Copied 5 1 GiB file(s) for a total transfer size of 10 GiB.
Write throughput: 3.46 Gibit/s.
Parallelism strategy: both
------------------------------------------------------------------------------
Write Throughput With File I/O
------------------------------------------------------------------------------
Copied 5 1 GiB file(s) for a total transfer size of 10 GiB.
Write throughput: 3.9 Gibit/s.
Parallelism strategy: both
------------------------------------------------------------------------------
Read Throughput
------------------------------------------------------------------------------
Copied 5 1 GiB file(s) for a total transfer size of 10 GiB.
Read throughput: 7.04 Gibit/s.
Parallelism strategy: both
------------------------------------------------------------------------------
Read Throughput With File I/O
------------------------------------------------------------------------------
Copied 5 1 GiB file(s) for a total transfer size of 10 GiB.
Read throughput: 1.64 Gibit/s.
Parallelism strategy: both
Hope that is helpful.