I have a GKE cluster (GKE v1.13.6) running Istio (v1.1.7) with several services deployed and working successfully, except for one of them, which always responds with HTTP 503 when called through the gateway: upstream connect error or disconnect/reset before headers. reset reason: connection failure.
I've tried calling the pod directly from another pod with curl installed, and it ends up with a 503 as well:
$ kubectl exec sleep-754684654f-4mccn -c sleep -- curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.3.254.3...
* TCP_NODELAY set
* Connected to d-vine-machine-dev (10.3.254.3) port 8080 (#0)
> GET /d-vine-machine/swagger-ui.html HTTP/1.1
> Host: d-vine-machine-dev:8080
> User-Agent: curl/7.60.0
> Accept: */*
>
upstream connect error or disconnect/reset before headers. reset reason: connection failure< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Thu, 04 Jul 2019 08:13:52 GMT
< server: envoy
< x-envoy-upstream-service-time: 60
<
{ [91 bytes data]
100 91 100 91 0 0 1338 0 --:--:-- --:--:-- --:--:-- 1378
* Connection #0 to host d-vine-machine-dev left intact
Setting the log level to TRACE at the istio-proxy level:
$ kubectl exec -it -c istio-proxy d-vine-machine-dev-b8df755d6-bpjwl -- curl -X POST http://localhost:15000/logging?level=trace
I looked into the logs of the injected istio-proxy sidecar and found this:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:381] [C119][S9661729384515860777] router decoding headers:
':authority', 'api-dev.d-vine.tech'
':path', '/d-vine-machine/swagger-ui.html'
':method', 'GET'
':scheme', 'http'
'cache-control', 'max-age=0'
'upgrade-insecure-requests', '1'
'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
'accept-encoding', 'gzip, deflate'
'accept-language', 'fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7'
'x-forwarded-for', '10.0.0.1'
'x-forwarded-proto', 'http'
'x-request-id', 'e38a257a-1356-4545-984a-109500cb71c4'
'content-length', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/default;Hash=8b6afba64efe1035daa23b004cc255e0772a8bd23c8d6ed49ebc8dabde05d8cf;Subject="O=";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account;DNS=istio-ingressgateway.istio-system'
'x-b3-traceid', 'f749afe8b0a76435192332bfe2f769df'
'x-b3-spanid', 'bfc4618c5cda978c'
'x-b3-parentspanid', '192332bfe2f769df'
'x-b3-sampled', '0'
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C121] connecting
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C121] connecting to 127.0.0.1:8080
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C121] connection in progress
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C119][S9661729384515860777] decode headers called: filter=0x4f118b0 status=1
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:384] [C119] parsed 1272 bytes
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C119] readDisable: enabled=true disable=true
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C121] socket event: 3
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C121] write ready
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:526] [C121] delayed connection error: 111
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C121] closing socket: 0
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C121] disconnect. resetting 0 pending requests
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C121] client disconnected, failure reason:
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C121] purge pending, failure reason:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:644] [C119][S9661729384515860777] upstream reset: reset reason connection failure
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0e5f0 status=0
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0edc0 status=0
[2019-07-04 07:30:41.353][24][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0f0e0 status=0
[2019-07-04 07:30:41.353][24][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C119][S9661729384515860777] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '91'
'content-type', 'text/plain'
'date', 'Thu, 04 Jul 2019 07:30:41 GMT'
'server', 'istio-envoy'
Has anyone encountered such an issue? If you need more info about the configuration, I can provide it.
Thanks for your answer, Manvar. There was no problem with the curl-enabled pod, but thanks for the insight. It was a misconfiguration of our Tomcat port, which did not match the Service/VirtualService config.
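In case it helps someone else, a quick way to compare the port the Service forwards to with the port the containers actually declare (the service name comes from this question; the label selector is an assumption):

# targetPort on the Service must match the port Tomcat really listens on
kubectl get svc d-vine-machine-dev -o jsonpath='{.spec.ports[*].targetPort}{"\n"}'

# containerPort(s) declared by the pods behind it (label selector is a guess)
kubectl get pods -l app=d-vine-machine-dev \
  -o jsonpath='{.items[*].spec.containers[*].ports[*].containerPort}{"\n"}'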
When a pod with an Istio sidecar is started, the following things happen:
An init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar istio-proxy on port 15001.
The containers of the pod are started in parallel (curl and istio-proxy).
If your curl container runs its command before istio-proxy is listening on port 15001, you get this error.
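If you suspect that startup race, a rough workaround (a sketch, not an official Istio mechanism) is to wait until Envoy's admin endpoint answers before issuing the request; port 15000 is the same admin port used above for the log-level change:

# loop until istio-proxy's admin endpoint responds, then run the actual call
until curl -fsS -o /dev/null http://localhost:15000/server_info; do
  echo "waiting for istio-proxy..."
  sleep 1
done
curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html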
Related
We are using Anthos Service Mesh on GKE, and for one of the API endpoints we are getting the below error. Any help would be really appreciated. I tried providing a port name to the service, as mentioned in another post, but nothing solved the problem.
< HTTP/2 502
< content-length: 87
< content-type: text/plain
< date: Fri, 23 Sep 2022 15:45:08 GMT
< server: istio-envoy
< x-envoy-upstream-service-time: 52
<
* Connection #0 to host example.com left intact
upstream connect error or disconnect/reset before headers. reset reason: protocol error
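For reference, the "port name" change mentioned above usually means giving the Service port a protocol-prefixed name (e.g. http) so the mesh knows which protocol to expect; a hedged sketch, where the service name, namespace, and port index are placeholders:

# inspect the current port names on the Service
kubectl get svc my-api -n my-namespace -o jsonpath='{.spec.ports[*].name}{"\n"}'

# rename the first port to "http"
kubectl patch svc my-api -n my-namespace --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0/name","value":"http"}]'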
I use Go-micro (v2) to deploy services inside docker-compose:
user-service:
  build:
    context: ./user-service
  restart: always
  ports:
    - "8086:8086"
  deploy:
    mode: replicated
    replicas: 1
  environment:....
See the service configuration:
srv = micro.NewService(
    micro.Name("my.user"),
    micro.Address("127.0.0.1:8086"))
When running docker-compose, the container logs show:
2022-07-31 05:43:53 file=v2#v2.9.1/service.go:200 level=info Starting [service] my.user
2022-07-31 05:43:53 file=grpc/grpc.go:864 level=info Server [grpc] Listening on [::]:8086
2022-07-31 05:43:53 file=grpc/grpc.go:697 level=info Registry [mdns] Registering node: my.user-00ee4795-06df-47f1-a07a-cc362e135864
All looks good.
But when I want to query some handlers using curl or Postman (for development purposes), it doesn't work.
See an example of a failed request with Postman:
GET http://127.0.0.1:8086/my.user/Get
Error: Parse Error: Expected HTTP/
Request Headers
Content-Type: application/json
User-Agent: PostmanRuntime/7.29.2
Accept: */*
Postman-Token: b5ab718a-341b-40ff-81fa-37c66fd4d9f2
Host: 127.0.0.1:8086
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Request Body
GET http://127.0.0.1:8086/my.user/userService/Get // same error
With curl it is no better:
curl --header "Content-Type:application/json" --http0.9 --output GET http://localhost:8086/my.user/Get
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 0 15 0 0 10638 0 --:--:-- --:--:-- --:--:-- 15000
curl --header "Content-Type:application/json" --http0.9 --output GET http://localhost:8086/my.user/userService/Get
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 15 0 15 0 0 13550 0 --:--:-- --:--:-- --:--:-- 15000
Any idea how to query go-micro services locally? Thank you.
PS: Note that the 'Get' handler is working.
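The container log shows a gRPC server (Server [grpc] Listening), so a plain HTTP client such as curl or Postman cannot talk to that port directly. A rough sketch with grpcurl, assuming server reflection is available (go-micro may not enable it) and that the service/method names below match the registered handler, which is an assumption:

# list the services exposed on the port (requires gRPC server reflection)
grpcurl -plaintext localhost:8086 list

# invoke a handler; "UserService" and "Get" are guesses at the handler name
grpcurl -plaintext -d '{"id":"1"}' localhost:8086 UserService/Get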
Using HAProxy 1.8, I want to slow down certain traffic. This all works when testing over HTTP/1.1. However, as soon as HTTP/2 (h2) is enabled in HAProxy, the 10s delay no longer takes effect. How can I delay h2 requests?
frontend web
bind [...] alpn h2,http/1.1
tcp-request inspect-delay 10s
tcp-request content accept if WAIT_END
[...]
I'm testing using curl:
time curl -I 'https://[url]/' -v
* Trying 10.233.1.97...
* TCP_NODELAY set
* Connected to [url] (10.233.1.97) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
[...]
* ALPN, server accepted to use h2
[...]
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fd3f5808200)
> GET / HTTP/2
> Host: [...]
> User-Agent: curl/7.64.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 411
HTTP/2 411
< content-type: text/html; charset=us-ascii
content-type: text/html; charset=us-ascii
< server: Microsoft-HTTPAPI/2.0
server: Microsoft-HTTPAPI/2.0
< date: Thu, 02 Apr 2020 19:18:22 GMT
date: Thu, 02 Apr 2020 19:18:22 GMT
< content-length: 344
content-length: 344
<
* Excess found in a non pipelined read: excess = 344 url = / (zero-length body)
* Connection #0 to host app.cloudbilling.nl left intact
* Closing connection 0
curl -I 'https://[url]/' -v 0.02s user 0.01s system 28% cpu 0.101 total
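For what it's worth, forcing each protocol against the same frontend (the URL stays a placeholder, as above) shows whether only the HTTP/1.1 path is held by the inspect-delay:

# force HTTP/1.1 over TLS; with the config above this should take about 10s
time curl -sI -o /dev/null --http1.1 'https://[url]/'

# let ALPN negotiate h2 (curl's default when built with HTTP/2 support)
time curl -sI -o /dev/null --http2 'https://[url]/'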
When we follow a TCP stream using the command:
tshark -q -r test.pcap -z follow,tcp,ascii,0
we get the following output, with the TCP segment length appearing in the middle of the streamed output.
How can we eliminate the tcp.len? Is there a tshark option to print only the TCP stream payload, without the tcp.len?
Follow: tcp,ascii
Filter: tcp.stream eq 0
Node 0: 10.10.30.50:57887
Node 1: 10.10.30.95:4902
**1448** ---> this is tcp length
POST /pushnotification/v1.0/message HTTP/1.1
Accept: */*
Host: 10.10.30.95:4902
Connection: Close
Content-Type: application/json
Authorization: Basic QWxhZGRpbjpraHVsamFzaW1zaW0=
Content-Length: 1277
{"push-message":{"serviceName":"Sync App","TTL":"600","recipients":[{"uri":"sip:919880018501#lab.t-mobile.com"}],"channel":"","pns-type":"RCSPage","pns-subtype":"Chat","nmsEventList":{"nmsEvent":[{"changedObject":{"parentFolder":"https:///oemclient/nms/v1/ums/tel%3a%2b1234567890/folders/97d38f52-bed0-4046-8784-bb110e3b0ea3","flags":{"flag":["\\RECENT"]},"resourceURL":"https://resourceurl","correlationId":"75114622-099d-4503-8166-e84bd1b620dc","message":{"id":"1","store":"RCSMessageStore/Chat","objectURL":"https://data1","direction":"In","message-time":"2016-05-19T08:46:49-08:00","status":"RECENT","sender":"sip:1234","recipients":[{"uri":"sip:2345"}],"imdn-message-id":"75114622-099d-4503-8166-e84bd1b620dc","content":[{"rcs-data":{"sip-call-id":"005056884776-4d72-eb161700-1e2-571fa736-a0e46","feature-tag":"urn:urn-7:3gpp-service.ims.icsi.oma.cpm.msg.group","p-asserted-service":"urn:urn-7:3gpp-service.ims.icsi.oma.cpm.msg.group","contribution-id":"e0a1029e-a48b-4ca6-b185-299dada439be","conversation-id":"2dbc584e-
**38** ---> this is tcp length
fc46-4a37-9a56-c2b93246d788"}}]}}}]}}}
**17**
HTTP/1.0 200 OK
**35**
Server: BaseHTTP/0.3 Python/2.6.6
**37**
Date: Mon, 12 Feb 2018 19:14:17 GMT
**2**
**9**
Thread-1
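One workaround (a sketch, not the only option) is to dump the stream in raw hex, which carries no per-segment byte counts, and decode it back to text; note that this interleaves both directions of the stream:

# grep drops the header/footer lines, tr strips the tab that marks the
# reverse direction, and xxd turns the hex payloads back into text
tshark -q -r test.pcap -z follow,tcp,raw,0 \
  | grep -Ev '^(=|Follow:|Filter:|Node)' \
  | tr -d '\t' \
  | xxd -r -p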
I'm trying to set up a 'Swift All In One' system on an Ubuntu 12.04 VM by following this link: http://docs.openstack.org/developer/swift/development_saio.html.
I use VMware Workstation 12 Pro on a Win7 64-bit system with 'Host-only' network mode. The VM's IP address is "192.168.137.200".
When I run the command on the VM:
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://192.168.137.200/auth/v1.0
It works well.
But when I run the command on the host machine (Win7 platform), it fails and returns:
* Could not resolve host: test:tester'; Host not found
* Closing connection #0
curl: (6) Could not resolve host: test:tester'; Host not found
* Could not resolve host: testing'; Host not found
* Closing connection #0
curl: (6) Could not resolve host: testing'; Host not found
* About to connect() to 192.168.137.200 port 80 (#0)
* Trying 192.168.137.200... connected
* Connected to 192.168.137.200 (192.168.137.200) port 80 (#0)
> GET /auth/v1.0 HTTP/1.1
> User-Agent: curl/7.20.1 (amd64-pc-win32) libcurl/7.20.1 OpenSSL/0.9.8n zlib/1.2.3
> Host: 192.168.137.200
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Fri, 25 Mar 2016 05:57:24 GMT
< Content-Length: 131
< Content-Type: text/html; charset=UTF-8
< Www-Authenticate: Swift realm="unknown"
< X-Trans-Id: tx081d67bec35b457bb4cb8-0056f4d343
< Vary: Accept-Encoding
<
<html><h1>Unauthorized</h1><p>This server could not verify that you are authorized to access the document you requested.</p></html>* Connection #0 to host 192.168.137.200 left intact
* Closing connection #0
Then I made another Ubuntu 12.04 VM and ran the command above on that second VM; it works well.
Try to use the X-Auth-User and X-Auth-Key headers instead: https://swiftstack.com/docs/cookbooks/swift_usage/auth.html
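On the Windows side, the errors above come from cmd.exe not treating single quotes as quoting characters, so the header values were parsed as hostnames. A minimal example combining double quotes with the headers suggested here:

curl -v -H "X-Auth-User: test:tester" -H "X-Auth-Key: testing" http://192.168.137.200/auth/v1.0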