Go micro dashboard does not register my service when running inside docker-compose

I'm new to go-micro. I tried to run micro server as another service in my docker-compose setup, but my first micro-service does not show up in Micro Web (the Go Micro dashboard). If I run micro server outside docker-compose, I can see several go-micro-related micro-services in the dashboard, but inside docker-compose none are displayed.
I have put together example code that reproduces what I experience. The service was generated with micro new and left unchanged, and the docker-compose.yml reflects how I have things set up in the project where I began experiencing this:
https://github.com/shackra/go-micro-docker-compose-bug
What am I doing wrong and how do I get the Micro dashboard to work as expected inside docker-compose?
EDIT
I have added a network in docker-compose without any luck, and also made my example service compilable inside Docker.
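For context, the setup looks roughly like the following docker-compose.yml sketch (hypothetical, not copied from the linked repository; the image tag, port and network name are assumptions):
version: "3"
services:
  micro:
    image: micro/micro:v2.9.3   # assumed tag; runs the micro server and dashboard
    command: server
    ports:
      - "8082:8082"             # Micro Web dashboard
    networks:
      - micronet
  example:
    build: .                    # the unchanged service generated with micro new
    depends_on:
      - micro
    networks:
      - micronet
networks:
  micronet: {}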
EDIT 2
If I restart the service, micro complains about it:
micro_1 | 2020-07-02 03:25:36 file=auth/wrapper.go:84 level=error service=web none available
micro_1 | 172.21.0.1 - - [02/Jul/2020:03:25:36 +0000] "GET /services HTTP/1.1" 500 15 "http://localhost:8082/" "Mozilla/5.0 (X11; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
micro_1 | 2020-07-02 03:25:39 file=auth/wrapper.go:84 level=error service=web none available
micro_1 | 172.21.0.1 - - [02/Jul/2020:03:25:39 +0000] "GET /client HTTP/1.1" 500 15 "http://localhost:8082/" "Mozilla/5.0 (X11; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
micro_1 | 2020-07-02 03:28:34 file=auth/wrapper.go:84 level=error service=web none available
micro_1 | 172.21.0.1 - - [02/Jul/2020:03:28:34 +0000] "GET /client HTTP/1.1" 500 15 "http://localhost:8082/" "Mozilla/5.0 (X11; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
micro_1 | 2020-07-02 03:28:38 file=auth/wrapper.go:84 level=error service=web none available
micro_1 | 172.21.0.1 - - [02/Jul/2020:03:28:37 +0000] "GET /services HTTP/1.1" 500 15 "http://localhost:8082/" "Mozilla/5.0 (X11; Linux x86_64; rv:77.0) Gecko/20100101 Firefox/77.0"
example_1 | 2020-07-02 03:30:55 file=grpc/grpc.go:791 level=info Deregistering node: go.micro.service.example-58b408cc-eec2-41a3-8d7f-5ac1751ad795
example_1 | 2020-07-02 03:30:55 file=grpc/grpc.go:814 level=info Unsubscribing from topic: go.micro.service.example
example_1 | 2020-07-02 03:30:55 file=grpc/grpc.go:959 level=info Broker [http] Disconnected from 127.0.0.1:45863
micro_1 | 2020-07-02 03:30:55 file=registry/registry.go:102 level=error service=api unable to get go.micro.service.example service: service not found
go-micro-docker-compose-bug_example_1 exited with code 0
micro_1 | 2020-07-02 03:30:58 file=handler/handler.go:227 level=error service=debug Error calling go.micro.service.example@172.21.0.2:38625 ({"id":"go.micro.client","code":500,"detail":"connection error: desc = \"transport: Error while dialing dial tcp 172.21.0.2:38625: connect: connection refused\"","status":"Internal Server Error"})
micro_1 | 2020-07-02 03:31:08 file=handler/handler.go:227 level=error service=debug Error calling go.micro.service.example@172.21.0.2:38625 ({"id":"go.micro.client","code":500,"detail":"connection error: desc = \"transport: Error while dialing dial tcp 172.21.0.2:38625: connect: connection refused\"","status":"Internal Server Error"})
micro_1 | 2020-07-02 03:31:18 file=handler/handler.go:227 level=error service=debug Error calling go.micro.service.example@172.21.0.2:38625 ({"id":"go.micro.client","code":500,"detail":"connection error: desc = \"transport: Error while dialing dial tcp 172.21.0.2:38625: connect: connection refused\"","status":"Internal Server Error"})
I don't really get it: why would it report that no services or clients were found, yet still be able to connect to my service?

It seems that, despite micro's dashboard showing nothing, my example service is listed if I issue micro list services.
I then removed the network from the docker-compose.yml, stopped docker-compose and brought it up again, issued micro list services again, and my example service was still included in the listing. So this seems to be a bug in the dashboard itself.

Istio envoy upstream reset: reset reason connection failure

I have a GKE cluster (GKE v1.13.6) running Istio (v1.1.7) with several services deployed and working successfully, except one which always responds with HTTP 503 when called through the gateway: upstream connect error or disconnect/reset before headers. reset reason: connection failure.
I've tried calling the pod directly from another pod that has curl available, and it ends up in a 503 as well:
$ kubectl exec sleep-754684654f-4mccn -c sleep -- curl -v d-vine-machine-dev:8080/d-vine-machine/swagger-ui.html
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 10.3.254.3...
* TCP_NODELAY set
* Connected to d-vine-machine-dev (10.3.254.3) port 8080 (#0)
> GET /d-vine-machine/swagger-ui.html HTTP/1.1
> Host: d-vine-machine-dev:8080
> User-Agent: curl/7.60.0
> Accept: */*
>
upstream connect error or disconnect/reset before headers. reset reason: connection failure< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Thu, 04 Jul 2019 08:13:52 GMT
< server: envoy
< x-envoy-upstream-service-time: 60
<
{ [91 bytes data]
100 91 100 91 0 0 1338 0 --:--:-- --:--:-- --:--:-- 1378
* Connection #0 to host d-vine-machine-dev left intact
Setting the log level to TRACE at the istio-proxy level:
$ kubectl exec -it -c istio-proxy d-vine-machine-dev-b8df755d6-bpjwl -- curl -X POST http://localhost:15000/logging?level=trace
I looked into the logs of the injected istio-proxy sidecar and found this:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:381] [C119][S9661729384515860777] router decoding headers:
':authority', 'api-dev.d-vine.tech'
':path', '/d-vine-machine/swagger-ui.html'
':method', 'GET'
':scheme', 'http'
'cache-control', 'max-age=0'
'upgrade-insecure-requests', '1'
'user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36'
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3'
'accept-encoding', 'gzip, deflate'
'accept-language', 'fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7'
'x-forwarded-for', '10.0.0.1'
'x-forwarded-proto', 'http'
'x-request-id', 'e38a257a-1356-4545-984a-109500cb71c4'
'content-length', '0'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/default/sa/default;Hash=8b6afba64efe1035daa23b004cc255e0772a8bd23c8d6ed49ebc8dabde05d8cf;Subject="O=";URI=spiffe://cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account;DNS=istio-ingressgateway.istio-system'
'x-b3-traceid', 'f749afe8b0a76435192332bfe2f769df'
'x-b3-spanid', 'bfc4618c5cda978c'
'x-b3-parentspanid', '192332bfe2f769df'
'x-b3-sampled', '0'
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C121] connecting
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C121] connecting to 127.0.0.1:8080
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C121] connection in progress
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C119][S9661729384515860777] decode headers called: filter=0x4f118b0 status=1
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:384] [C119] parsed 1272 bytes
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C119] readDisable: enabled=true disable=true
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C121] socket event: 3
[2019-07-04 07:30:41.353][24][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C121] write ready
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:526] [C121] delayed connection error: 111
[2019-07-04 07:30:41.353][24][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C121] closing socket: 0
[2019-07-04 07:30:41.353][24][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C121] disconnect. resetting 0 pending requests
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C121] client disconnected, failure reason:
[2019-07-04 07:30:41.353][24][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C121] purge pending, failure reason:
[2019-07-04 07:30:41.353][24][debug][router] [external/envoy/source/common/router/router.cc:644] [C119][S9661729384515860777] upstream reset: reset reason connection failure
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0e5f0 status=0
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0edc0 status=0
[2019-07-04 07:30:41.353][24][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2
[2019-07-04 07:30:41.353][24][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C119][S9661729384515860777] encode headers called: filter=0x4f0f0e0 status=0
[2019-07-04 07:30:41.353][24][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C119][S9661729384515860777] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '91'
'content-type', 'text/plain'
'date', 'Thu, 04 Jul 2019 07:30:41 GMT'
'server', 'istio-envoy'
Has anyone encountered such an issue? If you need more info about the configuration, I can provide it.
Thanks for your answer, Manvar. There was no problem with the curl-enabled pod, but thanks for the insight. It was a misconfiguration of our Tomcat port that did not match the Service/VirtualService config.
When a pod with an Istio sidecar is started, the following things happen:
- an init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar istio-proxy on port 15001;
- the containers of the pod are started in parallel (curl and istio-proxy).
If your curl container runs before istio-proxy is listening on port 15001, you get the error.
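For readers hitting the same 503: the misconfiguration mentioned above comes down to the port Tomcat listens on, the Kubernetes Service, and the Istio VirtualService not agreeing. A hypothetical sketch of an aligned configuration follows (the gateway name and the exact port numbers are illustrative, not taken from this cluster):
apiVersion: v1
kind: Service
metadata:
  name: d-vine-machine-dev
spec:
  selector:
    app: d-vine-machine-dev
  ports:
    - name: http
      port: 8080        # port the Service exposes
      targetPort: 8080  # must match the port Tomcat actually listens on
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: d-vine-machine-dev
spec:
  hosts:
    - api-dev.d-vine.tech
  gateways:
    - public-gateway    # hypothetical gateway name
  http:
    - match:
        - uri:
            prefix: /d-vine-machine
      route:
        - destination:
            host: d-vine-machine-dev
            port:
              number: 8080   # must match the Service port above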

Open Websocket connection using http4s

I'm running the http4s WebSocket example from:
https://github.com/http4s/http4s/blob/master/examples/blaze/src/main/scala/com/example/http4s/blaze/BlazeWebSocketExample.scala
I'm trying to connect to it from the Google Chrome console:
var ws = new WebSocket("ws://localhost:8080/http4s/wsecho")
ws.send("Hi")
But I actually see no messages. In the console log I see:
Uncaught DOMException: Failed to execute 'send' on 'WebSocket': Still in CONNECTING state.
at <anonymous>:1:4
(anonymous) @ VM204:1
VM186:1 WebSocket connection to 'ws://localhost:8080/http4s/wsecho' failed: Connection closed before receiving a handshake response
And the server logs contain no errors:
23:12:43 [blaze-nio1-acceptor] INFO o.h.blaze.channel.ServerChannelGroup - Connection to /127.0.0.1:59777 accepted at Tue Mar 20 23:12:43 GST 2018.
23:12:43 [blaze-nio-fixed-selector-pool-3] DEBUG org.http4s.blaze.pipeline.Stage - SocketChannelHead starting up at Tue Mar 20 23:12:43 GST 2018
23:12:43 [blaze-nio-fixed-selector-pool-3] DEBUG org.http4s.blaze.pipeline.Stage - Stage SocketChannelHead sending inbound command: Connected
23:12:43 [blaze-nio-fixed-selector-pool-3] DEBUG org.http4s.blaze.pipeline.Stage - QuietTimeoutStage starting up at Tue Mar 20 23:12:43 GST 2018
23:12:43 [blaze-nio-fixed-selector-pool-3] DEBUG org.http4s.blaze.pipeline.Stage - Stage QuietTimeoutStage sending inbound command: Connected
23:12:43 [blaze-nio-fixed-selector-pool-3] DEBUG org.http4s.blaze.pipeline.Stage - Starting HTTP pipeline
23:12:43 [blaze-nio-fixed-selector-pool-3] DEBUG o.h.blaze.channel.nio1.SelectorLoop - Started channel.
23:12:43 [scala-execution-context-global-19] DEBUG org.http4s.blaze.pipeline.Stage - Websocket key: Some(WebSocketContext(Websocket(Stream(..),fs2.async.mutable.Queue$$Lambda$378/1284896246#5b314a6c),Headers(),IO$1816945727))
Request headers: Headers(Host: localhost:8080, Connection: Upgrade, Pragma: no-cache, Cache-Control: no-cache, Upgrade: websocket, Origin: https://www.google.ae, Sec-WebSocket-Version: 13, User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36, Accept-Encoding: gzip, deflate, br, Accept-Language: en-US,en;q=0.9, Cookie: _ga=GA1.1.1008064719.1515165520, Sec-WebSocket-Key: ozUxsxBzUk0MaQwHbk45ow==, Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits)
What am I missing?
You simply aren't waiting until the websocket connection is actually open. Try this instead:
var ws = new WebSocket("ws://localhost:8080/http4s/wsecho")
ws.onopen = function(e) { ws.send("Hi") }
If you want, you can change the WebSocketBuilder definition to this so that you have more visibility into what's being sent:
WebSocketBuilder[F].build(d.observe1(wsf => F.delay{print(wsf)}), e)

Deployment fails with a 504

I'm trying to deploy an application to the Swisscom App Cloud from the console. It reports progress until, at the end, a 504 is reported without further explanation:
Updating app helloclass-fe-develop in org UCID-Bern Team / space HELLOCLASS-TEST as christian.cueni@iterativ.ch...
OK
Uploading helloclass-fe-develop...
FAILED
Error processing app files: Error uploading application.
Server error, status code: 504, error code: 0, message:
The log of the app reports that the app has been updated:
2017-01-03 09:37:39 [RTR/0] OUT helloclass-develop.scapp.io - [03/01/2017:08:37:39.584 +0000] "GET / HTTP/1.1" 200 0 594 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.75 Safari/537.36 Google Favicon" 66.249.93.201:50868 10.0.18.35:64341 x_forwarded_for:"83.76.152.96" x_forwarded_proto:"https" vcap_request_id:8a8adcc7-9e97-4bd9-4492-68e92883ee3d response_time:0.001739219 app_id:310166b4-f3a6-4168-a9ac-530e45dbfb10 app_index:0
2017-01-03 09:37:39 [APP/PROC/WEB/0] OUT 83.76.152.96, 66.249.93.201, 66.249.93.201 - - - [03/Jan/2017:08:37:39 +0000] "GET / HTTP/1.1" 200 606
2017-01-03 10:05:50 [API/2] OUT Updated app with guid 310166b4-f3a6-4168-a9ac-530e45dbfb10 ({"name"=>"helloclass-fe-develop"})
2017-01-03 10:57:15 [API/1] OUT Updated app with guid 310166b4-f3a6-4168-a9ac-530e45dbfb10 ({"state"=>"STOPPED"})
2017-01-03 10:57:15 [CELL/0] OUT Exit status 0
2017-01-03 10:57:15 [APP/PROC/WEB/0] OUT Exit status 0
2017-01-03 10:57:15 [CELL/0] OUT Destroying container
2017-01-03 10:57:15 [CELL/0] OUT Successfully destroyed container
2017-01-03 10:57:16 [API/1] OUT Updated app with guid 310166b4-f3a6-4168-a9ac-530e45dbfb10 ({"state"=>"STARTED"})
2017-01-03 10:57:16 [CELL/0] OUT Creating container
2017-01-03 10:57:16 [CELL/0] OUT Successfully created container
2017-01-03 10:57:17 [CELL/0] OUT Starting health monitoring of container
2017-01-03 10:57:19 [CELL/0] OUT Container became healthy
In spite of these messages, which indicate that the app has been updated, I still see the old version of the app being served.
EDIT
After running the command with the -v parameter, I see that the reason for the failure is a gateway timeout:
RESPONSE: [2017-01-03T13:32:39+01:00]
HTTP/1.1 504 Gateway Timeout
Connection: close
Content-Length: 176
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: text/html
Date: Tue, 03 Jan 2017 12:32:39 GMT
Expires: 0
Pragma: no-cache
Strict-Transport-Security: max-age=15768000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: 3ac831ef-e70b-4f4e-7c56-e308806f039e
X-Xss-Protection: 1; mode=block
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx</center>
</body>
</html>
FAILED
Error processing app files: Error uploading application.
Server error, status code: 504, error code: 0, message:
Is this something Cloud Foundry specific, or rather related to the Swisscom App Cloud? Are there timeout limits inherent to Cloud Foundry?
You can run cf push with -v or enable CF_TRACE to see more of the interaction of the CLI with your CF endpoint.
The error message looks similar to https://github.com/cloudfoundry/cli/issues/1042: the Cloud Controller could not complete a request in time, so the router that forwarded the API request to the Cloud Controller stopped waiting and returned the 504 (Gateway Timeout) to the CLI.
The trace should tell you which API call timed out.
The CLI aborted the operation there, while the Cloud Controller may have completed the operation successfully, eventually.
I would have thought the operations the CLI performs here are:
- send a list of the files in your app and their checksums for resource matching (so it can skip uploading unmodified app bits that the CC cached from a previous push)
- upload the app files
- (re)start the app (which includes staging)
- poll and wait until an app instance reports that it is running
From your CLI output I assume it was the first operation that timed out, so it is not clear how your app ended up being restarted.

I am installing Kubernetes on an Ubuntu 14.04 VM and am unable to install it

Hi, I am trying to install Kubernetes on a VM with Ubuntu 14.04, following the guide at http://kubernetes.io/docs/getting-started-guides/ubuntu/
I tried
KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
I am getting the following error:
etcd start/pre-start, process 1557
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 127.0.0.1:4001: getsockopt: connection refused
error #1: dial tcp 127.0.0.1:2379: getsockopt: connection refused
Following is the content of /var/log/upstart/etcd.log:
2016-10-24 13:28:54.269743 I | etcdmain: listening for peers on http://localhost:2380
2016-10-24 13:28:54.269852 I | etcdmain: listening for peers on http://localhost:7001
2016-10-24 13:28:54.269921 I | etcdmain: listening for client requests on http://127.0.0.1:4001
2016-10-24 13:28:54.269994 I | etcdmain: stopping listening for client requests on http://127.0.0.1:4001
2016-10-24 13:28:54.270017 I | etcdmain: stopping listening for peers on http://localhost:7001
2016-10-24 13:28:54.270052 I | etcdmain: stopping listening for peers on http://localhost:2380
I am using it behind a corporate proxy; http_proxy, https_proxy and no_proxy have been set.
I tried versions KUBE_VERSION=1.2.0, FLANNEL_VERSION=0.5.0, ETCD_VERSION=2.2.0
I even tried different KUBE versions (1.1.8, 1.3.0, 1.4.0, 1.4.4) but ended up with the same error.
Kindly help.
I found the solution; kindly refer to this page: https://github.com/kubernetes/kubernetes/issues/19235#issuecomment-255987755

grafana not showing in kubernetes heapster

I have tried to install Heapster with Grafana and InfluxDB on my Kubernetes cluster. I cannot manage to see the Grafana page; it only shows me alert.title.
I think I did everything right and all the logs seem good, but this is the last remaining problem. If someone would be kind enough to show me what's happening, I would be grateful.
Here is a peek at my logs:
2016/06/23 13:31:23 [I] Completed 172.17.77.1 - "GET /favicon.ico HTTP/1.1" 404 Not Found 2929 bytes in 1224us
2016/06/23 13:31:30 [I] Completed 172.17.77.1 - "GET /grafana HTTP/1.1" 404 Not Found 2929 bytes in 1154us
2016/06/23 13:31:30 [I] Completed 172.17.77.1 - "GET /api/v1/proxy/namespaces/default/services/monitoring-grafana/public/app/app.ca0ab6f9.js HTTP/1.1" 404 Not Found 23 bytes in 545us
2016/06/23 13:31:30 [I] Completed 172.17.77.1 - "GET /api/v1/proxy/namespaces/default/services/monitoring-grafana/public/css/grafana.dark.min.a95b3754.css HTTP/1.1" 404 Not Found 23 bytes in 786us
2016/06/23 13:31:40 [I] Completed 172.17.77.1 - "GET /monitoring-grafana HTTP/1.1" 404 Not Found 2929 bytes in 1409us
2016/06/23 13:31:40 [I] Completed 172.17.77.1 - "GET /api/v1/proxy/namespaces/default/services/monitoring-grafana/public/app/app.ca0ab6f9.js HTTP/1.1" 404 Not Found 23 bytes in 879us
2016/06/23 13:31:40 [I] Completed 172.17.77.1 - "GET /api/v1/proxy/namespaces/default/services/monitoring-grafana/public/css/grafana.dark.min.a95b3754.css HTTP/1.1" 404 Not Found 23 bytes in 1349us
2016/06/23 13:31:46 [I] Completed 172.17.77.1 - "GET /api/v1/proxy/namespaces/default/services/monitoring-grafana/public/app/app.ca0ab6f9.js HTTP/1.1" 404 Not Found 23 bytes in 837us
2016/06/23 13:31:46 [I] Completed 172.17.77.1 - "GET /api/v1/proxy/namespaces/default/services/monitoring-grafana/public/css/grafana.dark.min.a95b3754.css HTTP/1.1" 404 Not Found 23 bytes in 1181us
Update:
OK, I found something in influxdb-grafana-controller.yaml: I changed value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ to value: /
I don't know if it's a good solution, but it's working.
OK, I found the solution: my cluster was flawed. I had to install flannel on the master too, with the option --iface=eth1 because of Vagrant.
I followed this guide, http://severalnines.com/blog/installing-kubernetes-cluster-minions-centos7-manage-pods-services, but it didn't say to install flannel on the master.
You can remove NodePort from influxdb-grafana-controller.yaml, and you can also put the value api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ back.
Now everything is working.
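For reference, the value being changed above is the Grafana root URL set in influxdb-grafana-controller.yaml. A hypothetical excerpt of the relevant container spec (the env name is Grafana's standard root-URL variable; the image tag, and whether this exact manifest uses this name, are assumptions):
containers:
  - name: grafana
    image: gcr.io/google_containers/heapster_grafana:v2.6.0-2   # illustrative tag
    env:
      - name: GF_SERVER_ROOT_URL
        # serve Grafana behind the apiserver proxy path discussed above:
        value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/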