radosgw Swift interface: PUT object returns 500 error - ceph

I started a radosgw (with embedded Civetweb) and want to use the Swift interface to put/get objects.
My radosgw's Ceph config:
[global]
fsid = 584ba99d-0665-4465-b693-6c78ae25576f
mon_initial_members = n6-0**-0**, n6-0**-0**, n6-0**-0**
mon_host = 10.6.**.**, 10.5.**.**, 10.4.**.**
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 1 # Keep one copy of each object.
osd pool default min size = 1 # Allow writing one copy in a degraded state.
osd pool default pg num = 2000
osd pool default pgp num = 2000
keyring = /etc/ceph/ceph.client.admin.keyring
#debug ms = 1
#debug rgw = 20
[client.radosgw.gateway]
host = in**-**
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw_frontends = "civetweb port=9980"
rgw socket path = ""
log file = /var/log/radosgw/client.bootstrap-rgw.log
The radosgw keyring:
ceph# vi ceph.client.radosgw.keyring
[client.radosgw.gateway]
key = AQDY5pFXyvELDRAAvl6HCERMwpwfHIKaA23rlw==
Start radosgw:
/etc/init.d/radosgw start
Next, I create a Swift user:
radosgw-admin user create --uid="dockeruser" --display-name="Docker User"
radosgw-admin subuser create --uid=dockeruser --subuser=dockeruser:swift --access=full
radosgw-admin key create --subuser=dockeruser:swift --key-type=swift --gen-secret
{
    "user_id": "dockeruser",
    "display_name": "Docker User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "dockeruser:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "dockeruser",
            "access_key": "9LJMM0BAMI7XU1ZAB0BG",
            "secret_key": "8uF7OLlLsqCaCsfE08sOKiCb9gIrYEQICoH475Xw"
        }
    ],
    "swift_keys": [
        {
            "user": "dockeruser:swift",
            "secret_key": "oY894WlkjlyUAxHacYNMAyR8dpR3ZzlRoBJbt3xW"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
Then I use swiftclient to talk to the gateway:
swiftclient$ python shell.py -A http://10.4.**.**:9980/auth/1.0 -U dockeruser:swift -K 'oY894WlkjlyUAxHacYNMAyR8dpR3ZzlRoBJbt3xW' stat
Account: v1
Containers: 5
Objects: 0
Bytes: 0
X-Timestamp: 1469590199.89180
X-Account-Bytes-Used-Actual: 0
X-Trans-Id: tx0000000000000000012e8-0057982ab7-b215dd-default
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
Create a bucket (container):
swiftclient$ python shell.py -A http://10.4.**.**:9980/auth/1.0 -U dockeruser:swift -K 'oY894WlkjlyUAxHacYNMAyR8dpR3ZzlRoBJbt3xW' post test1
List containers (everything works fine):
swiftclient$ python shell.py -A http://10.4.**.**:9980/auth/1.0 -U dockeruser:swift -K 'oY894WlkjlyUAxHacYNMAyR8dpR3ZzlRoBJbt3xW' list
colin
my-new-bucket
registry
registry22
test1
But when I put an object (using a Python script), I get an error:
swiftclient$ ./put_object.py
Traceback (most recent call last):
File "./put_object.py", line 33, in <module>
with urllib.request.urlopen(req, binary_data) as f:
File "/usr/lib/python3.4/urllib/request.py", line 153, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.4/urllib/request.py", line 461, in open
response = meth(req, response)
File "/usr/lib/python3.4/urllib/request.py", line 571, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.4/urllib/request.py", line 499, in error
return self._call_chain(*args)
File "/usr/lib/python3.4/urllib/request.py", line 433, in _call_chain
result = func(*args)
File "/usr/lib/python3.4/urllib/request.py", line 579, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 500: Internal Server Error
radosgw's log (note that the object write below fails with -95, (95) Operation not supported, which radosgw then maps to HTTP 500):
2016-07-27 14:06:32.032983 7f5f0cff9700 2 req 1:0.056814:swift:PUT /swift/v1/registry22/colin_key:put_obj:recalculating target
2016-07-27 14:06:32.032986 7f5f0cff9700 2 req 1:0.056817:swift:PUT /swift/v1/registry22/colin_key:put_obj:reading permissions
2016-07-27 14:06:32.032989 7f5f0cff9700 2 req 1:0.056820:swift:PUT /swift/v1/registry22/colin_key:put_obj:init op
2016-07-27 14:06:32.032996 7f5f0cff9700 2 req 1:0.056827:swift:PUT /swift/v1/registry22/colin_key:put_obj:verifying op mask
2016-07-27 14:06:32.032998 7f5f0cff9700 20 required_mask= 2 user.op_mask=7
2016-07-27 14:06:32.032999 7f5f0cff9700 2 req 1:0.056830:swift:PUT /swift/v1/registry22/colin_key:put_obj:verifying op permissions
2016-07-27 14:06:32.033002 7f5f0cff9700 5 Searching permissions for uid=dockeruser mask=50
2016-07-27 14:06:32.033003 7f5f0cff9700 5 Found permission: 15
2016-07-27 14:06:32.033004 7f5f0cff9700 5 Searching permissions for group=1 mask=50
2016-07-27 14:06:32.033005 7f5f0cff9700 5 Permissions for group not found
2016-07-27 14:06:32.033006 7f5f0cff9700 5 Searching permissions for group=2 mask=50
2016-07-27 14:06:32.033007 7f5f0cff9700 5 Permissions for group not found
2016-07-27 14:06:32.033007 7f5f0cff9700 5 Getting permissions id=dockeruser owner=dockeruser perm=2
2016-07-27 14:06:32.033008 7f5f0cff9700 10 uid=dockeruser requested perm (type)=2, policy perm=2, user_perm_mask=2, acl perm=2
2016-07-27 14:06:32.033010 7f5f0cff9700 2 req 1:0.056841:swift:PUT /swift/v1/registry22/colin_key:put_obj:verifying op params
2016-07-27 14:06:32.033011 7f5f0cff9700 2 req 1:0.056842:swift:PUT /swift/v1/registry22/colin_key:put_obj:pre-executing
2016-07-27 14:06:32.033013 7f5f0cff9700 2 req 1:0.056844:swift:PUT /swift/v1/registry22/colin_key:put_obj:executing
2016-07-27 14:06:32.033131 7f5f0cff9700 20 get_obj_state: rctx=0x7f5f0cff2ff0 obj=registry22:colin_key state=0x7f5eac01df08 s->prefetch_data=0
2016-07-27 14:06:32.033199 7f5f0cff9700 1 -- 10.4.24.158:0/3995324651 --> 10.6.16.213:6808/2093 -- osd_op(client.7883846.0:666 19.6090fa40 a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1_colin_key [getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e2967) v7 -- ?+0 0x7f5eac021e20 con 0x7f5eac020a20
2016-07-27 14:06:32.041907 7f5f0c7f8700 1 -- 10.4.24.158:0/3995324651 <== osd.57 10.6.16.213:6808/2093 1 ==== osd_op_reply(666 a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1_colin_key [getxattrs,stat] v0'0 uv0 ack = -2 ((2) No such file or directory)) v6 ==== 266+0+0 (1562684643 0 0) 0x7f5cb4001ac0 con 0x7f5eac020a20
2016-07-27 14:06:32.041994 7f5f0cff9700 20 get_obj_state: rctx=0x7f5f0cff2ff0 obj=registry22:colin_key state=0x7f5eac01df08 s->prefetch_data=0
2016-07-27 14:06:32.042016 7f5f0cff9700 10 setting object write_tag=a9da190c-5df5-4978-8fc6-d411b1d5ddb3.7883846.1
2016-07-27 14:06:32.042082 7f5f0cff9700 20 reading from default.rgw.data.root:.bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1
2016-07-27 14:06:32.042091 7f5f0cff9700 20 get_system_obj_state: rctx=0x7f5f0cff1c00 obj=default.rgw.data.root:.bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 state=0x7f5eac02eb48 s->prefetch_data=0
2016-07-27 14:06:32.042098 7f5f0cff9700 10 cache get: name=default.rgw.data.root+.bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 : type miss (requested=22, cached=19)
2016-07-27 14:06:32.042123 7f5f0cff9700 1 -- 10.4.24.158:0/3995324651 --> 10.6.16.212:6810/96264 -- osd_op(client.7883846.0:667 11.3988fae5 .bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 [call version.read,getxattrs,stat] snapc 0=[] ack+read+known_if_redirected e2967) v7 -- ?+0 0x7f5eac030af0 con 0x7f5f5800b750
2016-07-27 14:06:32.047598 7f5f644f5700 1 -- 10.4.24.158:0/3995324651 <== osd.43 10.6.16.212:6810/96264 48 ==== osd_op_reply(667 .bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 [call,getxattrs,stat] v0'0 uv2 ondisk = 0) v6 ==== 322+0+318 (1672081398 0 832274712) 0x7f5cc4005be0 con 0x7f5f5800b750
2016-07-27 14:06:32.047631 7f5f0cff9700 10 cache put: name=default.rgw.data.root+.bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 info.flags=22
2016-07-27 14:06:32.047639 7f5f0cff9700 10 moving default.rgw.data.root+.bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 to cache LRU end
2016-07-27 14:06:32.047641 7f5f0cff9700 10 updating xattr: name=user.rgw.acl bl.length()=159
2016-07-27 14:06:32.047644 7f5f0cff9700 20 get_system_obj_state: s->obj_tag was set empty
2016-07-27 14:06:32.047647 7f5f0cff9700 10 cache get: name=default.rgw.data.root+.bucket.meta.registry22:a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 : hit (requested=17, cached=23)
2016-07-27 14:06:32.047659 7f5f0cff9700 20 bucket index object: .dir.a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1
2016-07-27 14:06:32.047727 7f5f0cff9700 1 -- 10.4.24.158:0/3995324651 --> 10.6.16.213:6812/2271 -- osd_op(client.7883846.0:668 18.29a991bc .dir.a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 [call rgw.bucket_prepare_op] snapc 0=[] ondisk+write+known_if_redirected e2967) v7 -- ?+0 0x7f5eac034520 con 0x7f5eac02edb0
2016-07-27 14:06:32.118719 7f5f0c5f6700 1 -- 10.4.24.158:0/3995324651 <== osd.53 10.6.16.213:6812/2271 1 ==== osd_op_reply(668 .dir.a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 [call] v2967'20 uv20 ondisk = 0) v6 ==== 219+0+0 (1279524478 0 0) 0x7f5cbc001ad0 con 0x7f5eac02edb0
2016-07-27 14:06:32.118789 7f5f0cff9700 1 -- 10.4.24.158:0/3995324651 --> 10.6.16.213:6808/2093 -- osd_op(client.7883846.0:669 19.6090fa40 a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1_colin_key [create 0~0 [excl],setxattr user.rgw.idtag (47),writefull 0~27,setxattr user.rgw.manifest (593),setxattr user.rgw.acl (159),setxattr user.rgw.content_type (34),setxattr user.rgw.etag (33),call rgw.obj_store_pg_ver,setxattr user.rgw.source_zone (4)] snapc 0=[] ondisk+write+known_if_redirected e2967) v7 -- ?+0 0x7f5eac031520 con 0x7f5eac020a20
2016-07-27 14:06:32.120500 7f5f0c7f8700 1 -- 10.4.24.158:0/3995324651 <== osd.57 10.6.16.213:6808/2093 2 ==== osd_op_reply(669 a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1_colin_key [create 0~0 [excl],setxattr (47),writefull 0~27,setxattr (593),setxattr (159),setxattr (34),setxattr (33),call,setxattr (4)] v0'0 uv0 ondisk = -95 ((95) Operation not supported)) v6 ==== 560+0+0 (2893445330 0 0) 0x7f5cb4000cb0 con 0x7f5eac020a20
2016-07-27 14:06:32.120586 7f5f0cff9700 1 -- 10.4.24.158:0/3995324651 --> 10.6.16.213:6812/2271 -- osd_op(client.7883846.0:670 18.29a991bc .dir.a9da190c-5df5-4978-8fc6-d411b1d5ddb3.11666571.1 [call rgw.bucket_complete_op] snapc 0=[] ack+ondisk+write+known_if_redirected e2967) v7 -- ?+0 0x7f5eac0314d0 con 0x7f5eac02edb0
2016-07-27 14:06:32.120617 7f5f0cff9700 2 req 1:0.144447:swift:PUT /swift/v1/registry22/colin_key:put_obj:completing
2016-07-27 14:06:32.120633 7f5f0cff9700 0 WARNING: set_req_state_err err_no=95 resorting to 500
2016-07-27 14:06:32.120683 7f5f0cff9700 2 req 1:0.144514:swift:PUT /swift/v1/registry22/colin_key:put_obj:op status=-95
2016-07-27 14:06:32.120687 7f5f0cff9700 2 req 1:0.144518:swift:PUT /swift/v1/registry22/colin_key:put_obj:http status=500
2016-07-27 14:06:32.120694 7f5f0cff9700 1 ====== req done req=0x7f5f0cff38a0 op status=-95 http_status=500 ======
2016-07-27 14:06:32.120707 7f5f0cff9700 20 process_request() returned -95
2016-07-27 14:06:32.120728 7f5f0cff9700 1 civetweb: 0x7f5eac0008c0: 10.6.26.137 - - [27/Jul/2016:14:06:31 +0800] "PUT /swift/v1/registry22/colin_key HTTP/1.1" 500 0 - Python-urllib/3.4
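Since osd.57 rejects the setxattr-heavy object write with -95 (EOPNOTSUPP), a plain RADOS write plus xattr can help isolate whether the OSD backend rejects xattrs at all, independently of radosgw. Below is a minimal sketch using python-rados; the pool name default.rgw.buckets.data is my assumption based on the log above:
# Hypothetical probe with python-rados: write an object and an xattr
# straight into the (assumed) bucket data pool. If this also fails with
# EOPNOTSUPP, the problem is in the OSD/filestore, not in radosgw.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('default.rgw.buckets.data')  # assumed pool name
    ioctx.write_full('xattr-probe', b'hello')
    ioctx.set_xattr('xattr-probe', 'user.rgw.test', b'value')
    print('write + xattr succeeded')
    ioctx.close()
finally:
    cluster.shutdown()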
put_object.py:
import datetime
import urllib.parse
import urllib.request
from urllib.request import Request

key = 'AUTH_rgwtk...'  # same X-Auth-Token as in list_containers.py below

req = Request('http://10.4.**.**:9980/swift/v1/registry22/colin_key',
              method='PUT')
timestr = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
req.add_header('Host', '10.4.**.**')
req.add_header('Date', timestr)
#req.add_header('x-amz-acl', 'public-read-write')
req.add_header('X-Auth-Token', key)
values = {'user': 'colin', 'data': 'colin_value'}
data = urllib.parse.urlencode(values)
binary_data = data.encode('utf-8')
with urllib.request.urlopen(req, binary_data) as f:
    print(f.status)
    print(f.read().decode('utf-8'))
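For comparison, the same PUT can also be issued through the python-swiftclient API instead of raw urllib; a minimal sketch using the question's endpoint and the Swift secret key from above:
# Sketch: upload the same object via python-swiftclient
# (pip install python-swiftclient), auth v1 against the radosgw endpoint.
import swiftclient

conn = swiftclient.Connection(
    authurl='http://10.4.**.**:9980/auth/1.0',
    user='dockeruser:swift',
    key='oY894WlkjlyUAxHacYNMAyR8dpR3ZzlRoBJbt3xW',
)
conn.put_object('registry22', 'colin_key', contents=b'colin_value')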
Listing containers with a Python script works fine:
swiftclient$ ./list_containers.py
200
colin
my-new-bucket
registry
registry22
test1
list_containers.py:
import datetime
import urllib.request
from urllib.request import Request

key = ('AUTH_rgwtk10000000646f636b6572757365723a73776966745eeb32c1f8ad71e4018f9857'
       '4829d919ddbe49b3c2e5d3bc30ea38d48633cd8a492ba0ca')
req = Request('http://10.4.**.**:9980/swift/v1',
              method='GET')
timestr = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
req.add_header('Host', '10.4.**.**')
req.add_header('Date', timestr)
#req.add_header('x-amz-acl', 'public-read-write')
req.add_header('X-Auth-Token', key)
with urllib.request.urlopen(req) as f:
    print(f.status)
    print(f.read().decode('utf-8'))
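For reference, the X-Auth-Token used by both scripts comes from the gateway's Swift v1 auth endpoint; a minimal sketch of fetching one with the subuser credentials created above (masked IP as in the question):
# Sketch: obtain an X-Auth-Token from the Swift v1 auth endpoint.
import urllib.request

req = urllib.request.Request('http://10.4.**.**:9980/auth/1.0')
req.add_header('X-Auth-User', 'dockeruser:swift')
req.add_header('X-Auth-Key', 'oY894WlkjlyUAxHacYNMAyR8dpR3ZzlRoBJbt3xW')
with urllib.request.urlopen(req) as f:
    print(f.getheader('X-Auth-Token'))   # the value used as key above
    print(f.getheader('X-Storage-Url'))  # e.g. http://.../swift/v1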
I don't know how to solve this; please help. Thanks a lot!

Related

NOSRV errors seen in haproxy logs

We have haproxy in front of 2 Apache servers, and every day, for less than a minute, I get NOSRV errors in the haproxy logs. There are successful requests from the same source IPs, so this is only intermittent. There is no corresponding error entry in the backend logs.
Below is a snippet from the access logs:
Dec 22 20:21:25 proxy01 haproxy[3000561]: X.X.X.X:60872 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43212 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43206 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:60974 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32772 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 103 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32774 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 59 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32776 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 57 0
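To see whether these NOSRV bursts cluster in time, the log can be tallied per minute; a minimal sketch (the log path is an assumption):
# Hypothetical helper: count <NOSRV> log lines per minute to spot bursts.
import re
from collections import Counter

counts = Counter()
with open('/var/log/haproxy.log') as f:  # assumed log location
    for line in f:
        if '<NOSRV>' in line:
            m = re.match(r'(\w+ +\d+ \d+:\d+)', line)  # "Dec 22 20:21"
            if m:
                counts[m.group(1)] += 1

for minute, n in sorted(counts.items()):
    print(minute, n)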
Below is the haproxy config file:
defaults
    log global
    timeout connect 15000
    timeout check 5000
    timeout client 30000
    timeout server 30000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend Local_Server
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    option httplog
    cookie SRVNAME insert indirect nocache maxidle 8h maxlife 8h
    #capture request header X-Forwarded-For len 15
    #capture request header Host len 32
    http-request capture req.hdrs len 512
    log-format "%ci:%cp[%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
    #log-format "%ci:%cp %ft %b/%s %Tw/%Tc/%Tr/ %ST %B %rc %bq %hr %hs %{+Q}r %Tt %Ta"
    option dontlognull
    option http-keep-alive
    # declare whitelists for urls
    acl xx_whitelist src -f /etc/haproxy/xx_whitelist.lst
    acl is-blocked-ip src -f /etc/haproxy/badactors-list.txt
    http-request silent-drop if is-blocked-ip
    acl all src 0.0.0.0
    ######### ANTI BAD GUYS STUFF ###########################################
    # anti-DDoS stick table - rate-limits requests per source IP
    # frontend side of the stick table; see backend "st_src_global" below
    # Restrict the number of requests in the last 10 secs
    # TO MONITOR RUN " watch -n 1 'echo "show table st_src_global" | socat unix:/run/haproxy/admin.sock -' " ON CLI.
    # ZZZ THIS MAY NEED DISABLING FOR LOAD TESTS ZZZZ
    # Table definition
    http-request track-sc0 src table st_src_global  # <- tracks source IPs in the stick table
    stick-table type ip size 100k expire 10s store http_req_rate(50000s)  # <- rate window and how long an IP entry is stored
    http-request silent-drop if { sc_http_req_rate(0) gt 50000 }  # drop if the tracked request rate exceeds 50000
    # Allow clean known IPs to bypass the filter
    tcp-request connection accept if { src -f /etc/haproxy/xx_whitelist.lst }
    # Slowloris protection - send 408 if the http request is not completed in 10s
    timeout http-request 10s
    option http-buffer-request
    # Block specific requests
    #http-request deny if HTTP_1.0
    http-request deny if { req.hdr(user-agent) -i -m sub phantomjs slimerjs }
    # traffic shaping
    #xxxx.xxxx.xx.xx
    acl xxxxx.xxxxx.xx.xx hdr(host) -i xxxx.xxxx.xx.xx
    use_backend xxxx.xxxx.xx.xx if xxxx.xxxx.xx.xx xx_whitelist  # update from proxies

# stick table for DoS protection
backend st_src_global
    stick-table type ip size 1m expire 10s store http_req_rate(50000s)

backend xxxxxxx.xxxxx.xx.xx
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server web01-http x.x.x.x:80 check maxconn 100
    server web03-http x.x.x.x:80 check maxconn 100

How does maxRequestsPerConnection of istio work?

Hi everyone. I have been learning Istio, and to understand how maxRequestsPerConnection works, I applied the manifest below.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
httpbin is a sample service of Istio.
I thought maxRequestsPerConnection meant how many HTTP requests are allowed per TCP connection, and that in this case Istio would close the TCP connection after the pod received one HTTP request.
After applying the manifest, I sent some HTTP requests using telnet. I expected Istio to accept one request and then close the TCP connection, but it didn't.
$ telnet httpbin 8000
Trying 10.76.12.133...
Connected to httpbin.default.svc.cluster.local.
Escape character is '^]'.
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:16 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 9
{
    "args": {},
    "headers": {
        "Host": "httpbin",
        "User-Agent": "Telnet [ja] (Linux)",
        "X-B3-Parentspanid": "b042ad708e2a47a2",
        "X-B3-Sampled": "1",
        "X-B3-Spanid": "b6a08d45e1a1e15e",
        "X-B3-Traceid": "fc23863eafb0322db042ad708e2a47a2",
        "X-Envoy-Attempt-Count": "1",
        "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
    },
    "origin": "127.0.0.6",
    "url": "http://httpbin/get"
}
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:18 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
{
    "args": {},
    "headers": {
        "Host": "httpbin",
        "User-Agent": "Telnet [ja] (Linux)",
        "X-B3-Parentspanid": "85722c0d777e8537",
        "X-B3-Sampled": "1",
        "X-B3-Spanid": "31d2acc5348a6fc5",
        "X-B3-Traceid": "d7ada94a092d681885722c0d777e8537",
        "X-Envoy-Attempt-Count": "1",
        "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
    },
    "origin": "127.0.0.6",
    "url": "http://httpbin/get"
}
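The same keep-alive check can be scripted instead of typed into telnet; a minimal sketch that sends two requests over one TCP connection and prints the status lines (host and port as in the question):
# Sketch: send two HTTP/1.1 requests on a single TCP connection; if the
# peer closed the connection after the first request, the second send or
# recv would fail or return empty.
import socket

request = b'GET /get HTTP/1.1\r\nHost: httpbin\r\n\r\n'
sock = socket.create_connection(('httpbin', 8000))
for _ in range(2):
    sock.sendall(request)
    data = sock.recv(65536)  # good enough for this small response
    print(data.split(b'\r\n', 1)[0].decode())
sock.close()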
After this, I sent the HTTP request ten times using fortio and got the same result.
$ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 1 -qps 0 -n 10 -loglevel Warning http://httpbin:8000/get
14:22:56 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 10 calls: http://httpbin:8000/get
Starting at max qps with 1 thread(s) [gomax 2] for exactly 10 calls (10 per thread + 0)
Ended after 106.50891ms : 10 calls. qps=93.889
Aggregated Function Time : count 10 avg 0.010648204 +/- 0.01639 min 0.003757335 max 0.059256801 sum 0.106482036
# range, mid point, percentile, count
>= 0.00375734 <= 0.004 , 0.00387867 , 30.00, 3
> 0.004 <= 0.005 , 0.0045 , 70.00, 4
> 0.005 <= 0.006 , 0.0055 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.05 <= 0.0592568 , 0.0546284 , 100.00, 1
# target 50% 0.0045
# target 75% 0.0055
# target 90% 0.014
# target 99% 0.0583311
# target 99.9% 0.0591642
Sockets used: 1 (for perfect keepalive, would be 1)
Jitter: false
Code 200 : 10 (100.0 %)
Response Header Sizes : count 10 avg 230.1 +/- 0.3 min 230 max 231 sum 2301
Response Body/Total Sizes : count 10 avg 824.1 +/- 0.3 min 824 max 825 sum 8241
All done 10 calls (plus 0 warmup) 10.648 ms avg, 93.9 qps
$
In my understanding, the message Sockets used: 1 (for perfect keepalive, would be 1) means fortio used only one TCP connection.
At first I guessed that clients used a different TCP connection for each HTTP request, but if that were true, the telnet connection would have been closed by the foreign host and fortio would have used ten TCP connections.
Please teach me what maxRequestsPerConnection actually does.

VPN connection between iOS NEVPNManager and strongSwan on Ubuntu 16.04

I am trying to create a VPN connection in my app. The server side runs an IKEv2 VPN server with strongSwan on Ubuntu 16.04, built following this guide (https://www.digitalocean.com/community/tutorials/how-to-set-up-an-ikev2-vpn-server-with-strongswan-on-ubuntu-16-04).
When I try to connect, the server sends these logs:
- May 5 08:58:21 ip-2 charon: 05[NET] received packet: from 3[500] to 2[500] (432 bytes)
- May 5 08:58:21 ip-2 charon: 05[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
- May 5 08:58:21 ip-2 charon: 05[IKE] 3 is initiating an IKE_SA
- May 5 08:58:21 ip-2 charon: 05[IKE] local host is behind NAT, sending keep alives
- May 5 08:58:21 ip-2 charon: 05[IKE] remote host is behind NAT
- May 5 08:58:21 ip-2 charon: 05[IKE] received proposals inacceptable
- May 5 08:58:21 ip-2 charon: 05[ENC] generating IKE_SA_INIT response 0 [ N(NO_PROP) ]
- May 5 08:58:21 ip-2 charon: 05[NET] sending packet: from 2[500] to 3[500] (36 bytes)
- May 5 08:58:22 ip-2 charon: 16[NET] received packet: from 3[500] to 2[500] (432 bytes)
- May 5 08:58:22 ip-2 charon: 16[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
- May 5 08:58:22 ip-2 charon: 16[IKE] 3 is initiating an IKE_SA
- May 5 08:58:22 ip-2 charon: 16[IKE] local host is behind NAT, sending keep alives
- May 5 08:58:22 ip-2 charon: 16[IKE] remote host is behind NAT
- May 5 08:58:22 ip-2 charon: 16[IKE] received proposals inacceptable
- May 5 08:58:22 ip-2 charon: 16[ENC] generating IKE_SA_INIT response 0 [ N(NO_PROP) ]
- May 5 08:58:22 ip-2 charon: 16[NET] sending packet: from 2[500] to 3[500] (36 bytes)
I use this configuration on server:
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=add
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    lifetime=8h
    dpdaction=clear
    dpddelay=300s
    rekey=no
    left=%any
    leftid=<IP>
    leftcert=server-cert.pem
    leftsendcert=always
    leftsubnet=0.0.0.0/0
    right=%any
    rightid=%any
    rightauth=eap-mschapv2
    rightsourceip=10.10.10.0/24
    rightdns=8.8.8.8,8.8.4.4
    rightsendcert=never
    eap_identity=%identity
    ike=aes256-sha1-modp1024,3des-sha1-modp1024!
    esp=aes256-sha1,3des-sha1!
On iOS I use this code:
class VpnManager {
    let vpnManager = NEVPNManager.shared()
    let info = VPNINFO()

    func connectToVPN() {
        vpnManager.loadFromPreferences { error in
            guard error == nil else {
                print(error)
                return
            }
            let IKEv2Protocol = NEVPNProtocolIKEv2()
            IKEv2Protocol.serverAddress = self.info.serverAddress
            IKEv2Protocol.authenticationMethod = .certificate
            let certificate = SecCertificateCreateWithData(nil, Data(base64Encoded: self.info.cert)! as CFData)!
            let certificateData = SecCertificateCopyData(certificate) as Data
            IKEv2Protocol.identityData = certificateData
            self.vpnManager.protocolConfiguration = IKEv2Protocol
            self.vpnManager.isEnabled = true
            self.vpnManager.saveToPreferences { error in
                guard error == nil else {
                    print(error)
                    return
                }
                do {
                    try self.vpnManager.connection.startVPNTunnel(
                        options: ([
                            NEVPNConnectionStartOptionUsername: "username",
                            NEVPNConnectionStartOptionPassword: KeychainWrapper.passwordRefForVPNID("MY_PASSWORD")
                        ] as! [String: NSObject]))
                } catch let error {
                    print(error)
                }
            }
        }
    }
}
Expected result:
Connected
Actual result:
Connection -> Disconnected
Last console logs:
Jun 4 15:44:51 charon: 06[NET] received packet: from <my ip>[500] to <server ip>[500] (304 bytes)
Jun 4 15:44:51 charon: 06[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
Jun 4 15:44:51 charon: 06[IKE] <my ip> is initiating an IKE_SA
Jun 4 15:44:51 charon: 06[CFG] selected proposal: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
Jun 4 15:44:51 charon: 06[IKE] local host is behind NAT, sending keep alives
Jun 4 15:44:51 charon: 06[IKE] remote host is behind NAT
Jun 4 15:44:51 charon: 06[ENC] generating IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(CHDLESS_SUP) N(MULT_AUTH) ]
Jun 4 15:44:51 charon: 06[NET] sending packet: from <server ip>[500] to <my ip>[500] (328 bytes)
Jun 4 15:44:51 charon: 05[NET] received packet: from <my ip>[500] to <server ip>[500] (304 bytes)
Jun 4 15:44:51 charon: 05[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
Jun 4 15:44:51 charon: 05[IKE] <my ip> is initiating an IKE_SA
Jun 4 15:44:51 charon: 05[CFG] selected proposal: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
Jun 4 15:44:51 charon: 05[IKE] local host is behind NAT, sending keep alives
Jun 4 15:44:51 charon: 05[IKE] remote host is behind NAT
Jun 4 15:44:51 charon: 05[ENC] generating IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(CHDLESS_SUP) N(MULT_AUTH) ]
Jun 4 15:44:51 charon: 05[NET] sending packet: from <server ip>[500] to <my ip>[500] (328 bytes)
Jun 4 15:45:11 charon: 08[IKE] sending keep alive to <my ip>[500]
Jun 4 15:45:11 charon: 09[IKE] sending keep alive to <my ip>[500]
Jun 4 15:45:21 charon: 10[JOB] deleting half open IKE_SA with <my ip> after timeout
Jun 4 15:45:21 charon: 11[JOB] deleting half open IKE_SA with <my ip> after timeout
Your strongSwan server is configured with the following encryption algorithms:
ike=aes256-sha1-modp1024,3des-sha1-modp1024!
esp=aes256-sha1,3des-sha1!
Solution
You need to specify ciphers on the NEVPNProtocolIKEv2 instance that the VPN server supports; below, .algorithmAES256, .SHA96 and .group2 correspond to aes256, sha1 and modp1024 from the server's ike/esp lines.
IKEv2Protocol.ikeSecurityAssociationParameters.encryptionAlgorithm = .algorithmAES256
IKEv2Protocol.ikeSecurityAssociationParameters.integrityAlgorithm = .SHA96
IKEv2Protocol.ikeSecurityAssociationParameters.diffieHellmanGroup = .group2
IKEv2Protocol.ikeSecurityAssociationParameters.lifetimeMinutes = 480
IKEv2Protocol.childSecurityAssociationParameters.encryptionAlgorithm = .algorithmAES256
IKEv2Protocol.childSecurityAssociationParameters.integrityAlgorithm = .SHA96
IKEv2Protocol.childSecurityAssociationParameters.diffieHellmanGroup = .group2
IKEv2Protocol.childSecurityAssociationParameters.lifetimeMinutes = 60

Connecting to gtalk in irssi errors with 301

I have irssi and the xmpp plugin configured:
{
    address = "talk.google.com";
    chatnet = "Gtalk";
    autoconnect = "yes";
    port = "5223";
    #use_ssl = "yes";
    #ssl_verify = "yes";
    ssl_capath = "/etc/ssl/certs";
}
and
Gtalk = { type = "XMPP"; nick = "neilhwatson#gmail.com"; };
This error is returned:
09:09 [Gtalk] -!- HTTP/1.1 301 Moved Permanently
09:09 [Gtalk] -!- Location: http://www.google.com/hangouts/
09:09 [Gtalk] -!- Content-Type: text/html
09:09 [Gtalk] -!- Content-Length: 178
Is there some other host or port combination that will work?
Using DNS SRV:
$ dig SRV _xmpp-client._tcp.gmail.com
;; ANSWER SECTION:
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt2.xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt3.xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 5 0 5222 xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt1.xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt4.xmpp.l.google.com.
You could try using xmpp.l.google.com. My XMPP client (Pidgin) seems to do this automatically when I tell it that the domain is "gmail.com".
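If you want to pick the host and port programmatically, the same SRV lookup can be done in Python; a sketch using the third-party dnspython package:
# Sketch: resolve the XMPP client SRV record (pip install dnspython) and
# print candidate hosts, lowest priority value first.
import dns.resolver

answers = dns.resolver.resolve('_xmpp-client._tcp.gmail.com', 'SRV')
for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(rr.priority, rr.weight, rr.port, rr.target)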

uwsgi long timeouts

I am using Ubuntu 12, nginx, and uWSGI 1.9 over a socket, with Django 1.5.
Config:
[uwsgi]
base_path = /home/someuser/web/
module = server.manage_uwsgi
uid = www-data
gid = www-data
virtualenv = /home/someuser
master = true
vacuum = true
harakiri = 20
harakiri-verbose = true
log-x-forwarded-for = true
profiler = true
no-orphans = true
max-requests = 10000
cpu-affinity = 1
workers = 4
reload-on-as = 512
listen = 3000
Client tests from Windows 7:
C:\Users\user>C:\AppServ\Apache2.2\bin\ab.exe -c 255 -n 5000 http://www.someweb.com/about/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking www.someweb.com (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Finished 5000 requests
Server Software: nginx
Server Hostname: www.someweb.com
Server Port: 80
Document Path: /about/
Document Length: 1881 bytes
Concurrency Level: 255
Time taken for tests: 66.669814 seconds
Complete requests: 5000
Failed requests: 1
(Connect: 1, Length: 0, Exceptions: 0)
Write errors: 0
Total transferred: 10285000 bytes
HTML transferred: 9405000 bytes
Requests per second: 75.00 [#/sec] (mean)
Time per request: 3400.161 [ms] (mean)
Time per request: 13.334 [ms] (mean, across all concurrent requests)
Transfer rate: 150.64 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 8 207.8 1 9007
Processing: 10 3380 11480.5 440 54421
Waiting: 6 1060 3396.5 271 48424
Total: 11 3389 11498.5 441 54423
Percentage of the requests served within a certain time (ms)
50% 441
66% 466
75% 499
80% 519
90% 3415
95% 36440
98% 54407
99% 54413
100% 54423 (longest request)
I have also set the following options:
echo 3000 > /proc/sys/net/core/netdev_max_backlog
echo 3000 > /proc/sys/net/core/somaxconn
So:
1) The first 3000 requests are super fast. I see progress in ab and in the uwsgi request logs:
[pid: 5056|app: 0|req: 518/4997] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5052|app: 0|req: 512/4998] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5054|app: 0|req: 353/4999] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
I don't have any broken pipes or worker respawns.
2) The next requests run very slowly or time out. It looks like some buffer fills up and I have to wait for it to empty.
3) Some buffer becomes empty.
4) ~500 requests are processed super fast.
5) Some timeout.
6) see Nr. 4
7) see Nr. 5
8) see Nr. 4
9) see Nr. 5
....
....
I need your help.
Check with netstat and dmesg. You have probably exhausted ephemeral ports or filled the conntrack table.
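As a quick check for ephemeral-port exhaustion, the sockets stuck in TIME_WAIT can be counted by parsing /proc/net/tcp (state code 06 is TIME_WAIT); a minimal sketch:
# Sketch: count IPv4 TIME_WAIT sockets; a value close to the size of the
# ephemeral range in /proc/sys/net/ipv4/ip_local_port_range suggests
# port exhaustion.
def count_time_wait(path='/proc/net/tcp'):
    with open(path) as f:
        next(f)  # skip the header line
        return sum(1 for line in f if line.split()[3] == '06')

print(count_time_wait())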