Can't access Consul web UI with Docker Compose - docker-compose

I'm trying to stand up the Consul dev web UI for training purposes using Docker Compose.
While Consul claims to be running, when I try to visit localhost:8500/ui, the site is unreachable.
My docker compose file:
version: "3"
services:
cs1:
image: consul:1.4.2
ports:
- "8500:8500"
command: "agent -dev -ui"
The response from the console is
cs1_1_6d8d914aa536 | ==> Starting Consul agent...
cs1_1_6d8d914aa536 | ==> Consul agent running!
cs1_1_6d8d914aa536 | Version: 'v1.4.2'
cs1_1_6d8d914aa536 | Node ID: 'dfc6a0ce-3abc-a96d-718b-8b77155a2de6'
cs1_1_6d8d914aa536 | Node name: '1fde0528ab0c'
cs1_1_6d8d914aa536 | Datacenter: 'dc1' (Segment: '<all>')
cs1_1_6d8d914aa536 | Server: true (Bootstrap: false)
cs1_1_6d8d914aa536 | Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
cs1_1_6d8d914aa536 | Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
cs1_1_6d8d914aa536 | Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
I suspect the site is running on the Docker container's localhost, but port 8500 hasn't been exposed properly.
Thanks for your help

Remove the line command: "agent -dev -ui" from your docker-compose.yml.
The default command already runs the dev agent; see the Dockerfile here:
https://github.com/hashicorp/docker-consul/blob/9bd2aa7ecf2414b8712e055f2374699148e8941c/0.X/Dockerfile

By default, the Consul agent binds its client (HTTP/UI) address to localhost inside the container:
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
Add a client address of 0.0.0.0 to your command so the HTTP API and UI listen on all interfaces and can be reached through the published port:
agent -dev -ui -client=0.0.0.0
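Putting both answers together, a minimal compose file for this would look like the sketch below (the -client=0.0.0.0 flag is the assumption here; it makes the dev agent serve the HTTP API/UI on all container interfaces):
version: "3"
services:
  cs1:
    image: consul:1.4.2
    ports:
      - "8500:8500"                            # publish the HTTP API / web UI to the host
    command: "agent -dev -ui -client=0.0.0.0"  # listen on all container interfaces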

Related

k3d multi cluster communication

Assuming I have 2 separate k3d clusters (namely vault and dev),
is there a way to have a distinct URL for each cluster (preferably with HTTPS), for example vault.cluster.internal and dev.cluster.internal,
and to allow apps deployed in dev.cluster.internal to look up or interact with apps in vault.cluster.internal?
The cluster definitions are as follows:
dev.yaml:
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: dev
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
  host: "dev.cluster.internal"
  hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
  - port: 3000:3000
    nodeFilters:
      - loadbalancer
options:
  k3d:
    wait: true
    timeout: "60s"
  k3s:
    extraArgs:
      - arg: --tls-san=dev.cluster.internal
        nodeFilters:
          - server:*
      - arg: --disable=metrics-server
        nodeFilters:
          - server:*
      - arg: --disable=traefik
        nodeFilters:
          - server:*
  kubeconfig:
    updateDefaultKubeconfig: true
    switchCurrentContext: false
and the vault.yaml:
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: vault
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
  host: "vault.cluster.internal"
  hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
  - port: 8200:8200
    nodeFilters:
      - loadbalancer
options:
  k3d:
    wait: true
    timeout: "60s"
  k3s:
    extraArgs:
      - arg: --tls-san=vault.cluster.internal
        nodeFilters:
          - server:*
      - arg: --disable=metrics-server
        nodeFilters:
          - server:*
      - arg: --disable=traefik
        nodeFilters:
          - server:*
  kubeconfig:
    updateDefaultKubeconfig: true
    switchCurrentContext: false
Can this be done without using a service mesh?
Can I update CoreDNS in the clusters to resolve the other cluster's host names, and how?
Can this be done with Docker network configuration, and how?
This is basically to simulate real-world clusters (but for local development).
I found 3 solutions to the problem.
The first solution is to add a hostAliases section to the dev cluster definition and point it at the external IP of the vault cluster's load balancer.
For example, you can run the following command against the vault cluster after initializing it:
$ kubectl --context k3d-vault --namespace vault get services
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   ...
...
vault   LoadBalancer   10.43.34.131   172.24.0.3    ...
                                      ^^^^^^^^^^
...
dev.yaml would be:
#...
ports:
  - port: 3000:3000
    nodeFilters:
      - loadbalancer
hostAliases:
  - ip: 172.24.0.3
    hostnames:
      - vault.cluster.internal
#...
# (Alternatively, this can be automated with the following commands, without editing the dev.yaml file.)
$ KMS_IP=$(kubectl --context k3d-vault --namespace vault get services | grep LoadBalancer | awk -F " " '{ print $4 }')
$ k3d cluster create --config dev.yaml --host-alias $KMS_IP:vault.cluster.internal
This solution allows hostname resolution (as you would expect in a production cluster).
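As a quick check (my own addition; it assumes the dev cluster was created with the host alias above), you can resolve the name from a throwaway pod in the dev cluster:
$ kubectl --context k3d-dev run dns-test --rm -it --restart=Never \
    --image=busybox:1.36 -- nslookup vault.cluster.internal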
The second solution works similarly, but uses docker network inspect k3d-cluster (where k3d-cluster is the Docker network name from the cluster definition).
Run docker network inspect k3d-cluster and note down the IP that Docker assigned to the vault load balancer:
...
"cad3f3XXXXXX": {
    "Name": "k3d-vault-serverlb",
    "EndpointID": "47d5XXXX",
    "MacAddress": "02:42:ac:18:00:04",
    "IPv4Address": "172.24.0.4/16",   # <<< this IP can be used in the dev cluster hostAliases
    "IPv6Address": ""
}
...
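If you want to script this second approach as well, the same address can be pulled straight out of docker network inspect with a Go template (a sketch; it assumes the load balancer container is named k3d-vault-serverlb, as in the output above):
$ docker network inspect k3d-cluster --format \
    '{{range .Containers}}{{if eq .Name "k3d-vault-serverlb"}}{{.IPv4Address}}{{end}}{{end}}' \
    | cut -d/ -f1
172.24.0.4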
The last solution is simpler but less flexible.
It uses host.k3d.internal as the name for the other cluster (which is already resolvable from inside k3d clusters), but you have to take care of the port mappings yourself, since every cluster ends up using the same URL for its services. That isn't ideal, but it is easy enough for testing multi-cluster communication, bugs, etc.
In other words, configure the dev cluster's VAULT_ADDR to be host.k3d.internal:8200 instead of vault.cluster.internal:8200.
This is not flexible with TLS/HTTPS (AFAIK).
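For illustration, pointing an app in the dev cluster at Vault through the host would then look something like this (a sketch; the env fragment and URL scheme are assumptions):
# fragment of a container spec in the dev cluster
env:
  - name: VAULT_ADDR
    value: "http://host.k3d.internal:8200"   # 8200 is the port published by the vault cluster's loadbalancer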

'No healthy upstream' error when Envoy proxy is set up manually

I have a very simple environment with a client, a server and an Envoy proxy, each running in a separate Docker container, communicating over HTTP.
When I set it up using docker-compose it works.
However, when I set up the containers and the network manually (with docker network create, setting the aliases, etc.), I get a "503 - no healthy upstream" message when the client sends requests to the server. A curl to the network alias works from the envoy container. Any idea what the difference is between using docker-compose and setting up the network and containers manually?
envoy.yaml:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config: {}
  clusters:
  - name: service
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: round_robin
    load_assignment:
      cluster_name: service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: server-stub
                port_value: 5000
admin:
  access_log_path: "/tmp/envoy.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
The docker-compose file that worked (but I don't want to use docker-compose; I am using scripts that set up each container separately):
version: "3.8"
services:
envoy:
image: envoyproxy/envoy:v1.16-latest
ports:
- "10000:10000"
- "9901:9901"
volumes:
- ./envoy.yaml:/etc/envoy/envoy.yaml
server-stub:
build:
context: .
dockerfile: Dockerfile
ports:
- "5000:5000"
I can't reproduce this. It works fine with your docker-compose file, and it works fine manually. Here are the manual steps I took:
$ docker network create test-net
$ docker container run --network test-net --name envoy -p 10000:10000 -p 9901:9901 --mount type=bind,src=/home/john/projects/tester/envoy.yaml,dst=/etc/envoy/envoy.yaml envoyproxy/envoy:v1.16-latest
$ docker run --network test-net --name server-stub johnharris85/simple-hostname-reporter:3
My sample app also listens on port 5000. I used your exact envoy config. Using Docker 20.10.8 if relevant.
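If the manual setup still returns 503, one extra thing worth checking (my suggestion, not part of the original answer) is whether Envoy can resolve and reach the server-stub alias on that network; the admin interface published on 9901 reports the state of every upstream:
# list the endpoints Envoy discovered for the "service" cluster, including health flags
$ curl -s localhost:9901/clusters | grep '^service::'
# resolve the alias from inside the envoy container to rule out a DNS/alias problem
$ docker exec envoy getent hosts server-stub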

Docker container communication with another container on a different host/server

I have two servers (CentOS 8).
On server1 I have a mysql-server container and on server2 I have the Zabbix front end, i.e. zabbix-web-apache-mysql (container name zabbixfrontend).
I am trying to connect to mysql-server from the zabbixfrontend container and get this error:
bash-4.4$ mysql -h <MYSQL_SERVER_IP> -P 3306 -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to MySQL server on '<MYSQL_SERVER_IP>' (115)
When I run nc from the zabbixfrontend container to my mysql-server IP, I get a "No route to host" error message.
bash-4.4$ nc -zv <MYSQL_SERVER_IP> 3306
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: No route to host.
NOTE: I can successfully nc to the mysql-server container from the host machine (server2).
docker-compose.yml
version: '3.5'
services:
  zabbix-web-apache-mysql:
    image: zabbix/zabbix-web-apache-mysql:centos-8.0-latest
    container_name: zabbixfrontend
    #network_mode: host
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/etc/ssl/apache2:/etc/ssl/apache2:ro
      - ./usr/share/zabbix/:/usr/share/zabbix/
    env_file:
      - .env_db_mysql
      - .env_web
    secrets:
      - MYSQL_USER
      - MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD
    # zbx_net_frontend:
    sysctls:
      - net.core.somaxconn=65535
secrets:
  MYSQL_USER:
    file: ./.MYSQL_USER
  MYSQL_PASSWORD:
    file: ./.MYSQL_PASSWORD
  MYSQL_ROOT_PASSWORD:
    file: ./.MYSQL_ROOT_PASSWORD
The docker logs zabbixfrontend output is below:
** Deploying Zabbix web-interface (Apache) with MySQL database
** Using MYSQL_USER variable from ENV
** Using MYSQL_PASSWORD variable from ENV
********************
* DB_SERVER_HOST: <MYSQL_SERVER_IP>
* DB_SERVER_PORT: 3306
* DB_SERVER_DBNAME: zabbix
********************
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
The nc message is telling the truth: No route to host.
This happens because when you deploy your front-end container on the Docker bridge network, its IP address belongs to the 172.18.0.0/16 subnet, and you are trying to reach the database via an IP address that belongs to a different subnet (10.0.0.0/16).
On the other hand, when you deploy your front-end container on the host network, you no longer face that problem, because the container is literally using the IP address of the host machine, 10.0.0.2, and no route needs to be explicitly created to reach 10.0.0.3.
The problem you then face is that you can no longer access the web UI via the browser. This happens because, I assume, you kept the ports: option in your docker-compose.yml and tried to access the service on localhost:80/443. The source and destination ports do not need to be specified if you run the container on the host network: the container just listens directly on the host, on the port that is opened inside the container.
Try to run the front-end container with this config and then access it on localhost:8080 and localhost:8443:
...
network_mode: host
# ports:
# - "80:8080"
# - "443:8443"
volumes:
...
Running containers on the host network is not something I would usually recommend, but since your setup is quite special, with one container running on one Docker host and another container running on a separate, independent Docker host, I assume you don't want to create an overlay network and register the two Docker hosts to a swarm.
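Once the front end is on the host network, you can repeat the same nc test from inside the container to confirm the route is now there; you should see something like:
$ docker exec -it zabbixfrontend nc -zv <MYSQL_SERVER_IP> 3306
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connected to <MYSQL_SERVER_IP>:3306.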

docker: get container's external IP

I am running several Docker containers, including a postgres database, on a remote host.
$docker-compose ps
Name                   Command                          State          Ports
-----------------------------------------------------------------------------------------------------------------------------
fiware-cygnus          /cygnus-entrypoint.sh            Up (healthy)   0.0.0.0:5050->5050/tcp, 0.0.0.0:5080->5080/tcp
fiware-elasticsearch   /docker-entrypoint.sh elas ...   Up             9200/tcp, 9300/tcp
fiware-grafana         /run.sh                          Up             0.0.0.0:53153->3000/tcp
fiware-iotagent        pm2-runtime bin/lwm2mAgent ...   Up (healthy)   0.0.0.0:4041->4041/tcp, 5684/tcp, 0.0.0.0:5684->5684/udp
fiware-memcached       docker-entrypoint.sh memca ...   Up             11211/tcp
fiware-mongo           docker-entrypoint.sh --bin ...   Up             0.0.0.0:27017->27017/tcp
fiware-nginx           nginx -g daemon off;             Up             0.0.0.0:53152->53152/tcp, 80/tcp
fiware-orion           /usr/bin/contextBroker -fg ...   Up (healthy)   0.0.0.0:1026->1026/tcp
fiware-postgres        docker-entrypoint.sh postgres    Up             0.0.0.0:5432->5432/tcp
fiware-wirecloud       /docker-entrypoint.sh            Up (healthy)   8000/tcp
I want to get the external IP address of the fiware-postgres container so I can connect via pgAdmin instead of managing the db via the postgres client. It appears the IP address of fiware-postgres is only accessible internally.
$docker inspect fiware-postgres | grep "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.19.0.3",
pgAdmin error:
Unable to connect to server:
could not connect to server: Operation timed out
Is the server running on host "172.19.0.3" and accepting
TCP/IP connections on port 5432?
Is there a way to get its external IP, or at least some way of connecting via pgAdmin (remember, the docker containers run on a remote host, accessed via ssh)?
EDIT
The remote host is accessible via ssh root@193.136.x.x on port 2222, so the postgres port 5432 cannot be reached directly.
pgAdmin settings(Connection Tab):
Host: 193.136.xx.xx
Port: 5432
Maintenance database: postgres
Username: postgres
Password: password
pgAdmin error:
Unable to connect to server:
could not connect to server: Operation timed out
Is the server running on host "193.136.x.x" and accepting
TCP/IP connections on port 5432?
postgres service definition (in docker-compose):
postgres:
  restart: always
  image: postgres:10
  hostname: postgres
  container_name: fiware-postgres
  expose:
    - "5432"
  ports:
    - "5432:5432"
  networks:
    - default
  environment:
    - "POSTGRES_PASSWORD=password"
    - "POSTGRES_USER=postgres"
    - "POSTGRES_DB=postgres"
  volumes:
    - ./postgres-data:/var/lib/postgresql/data
  build:
    context: .
  shm_size: '2gb'
Assuming that you have exposed the ports of your container to the underlying host, then the application is accessible via the IP address of the underlying host.
The 172.19.0.3 IP address is an IP address that exists inside the Docker network on the host: it's not available externally. If your remote host has IP 1.2.3.4 and your application port has been exposed, e.g. in your compose file you have something like:
ports:
  - "5432:5432"
Then your application should be accessible via 1.2.3.4:5432.
EDIT:
If you are connecting from your local machine to the remote server using pgAdmin, then there should be no need to ssh: you should find that the service is still available at 1.2.3.4:5432. However, if you have some form of firewall in the way (e.g. you're sshing in to an AWS server), then access to 5432 on the remote host may well be blocked. In that case, consider using port forwarding to route connections on your local server to the remote server, via the ssh connection.
ssh -p 2222 -L 5432:localhost:5432 root@193.136.x.x
Now connect using pgAdmin to localhost:5432 or 127.0.0.1:5432.
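With the tunnel up, you can sanity-check it from the local machine before reaching for pgAdmin (assuming psql is installed locally):
$ psql -h 127.0.0.1 -p 5432 -U postgres -c 'SELECT version();'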
Here's the solution that works for me:
ssh -p 2222 root@193.136.xx.xx -L 5432:localhost:5432
The postgres server is finally accessible via pgAdmin.

Cannot connect to docker container's mapped port

I use the mongo image in Docker, but I cannot connect to port 20217.
docker#default:~$ docker ps
The port info shows: 0.0.0.0:20217->20217/tcp, 27017/tcp
but,
gilbertdeMacBook-Pro:~ gilbert$ lsof -i tcp:20217
there is no PID listed.
gilbertdeMacBook-Pro:~ gilbert$ docker info
Containers: 3
Images: 43
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 50
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.13-boot2docker
Operating System: Boot2Docker 1.9.1 (TCL 6.4.1); master : cef800b - Fri Nov 20 19:33:59 UTC 2015
CPUs: 1
Total Memory: 1.956 GiB
Name: default
ID: MRAZ:ZG5E:HDMY:EJNQ:HFL4:PW6Y:AXIS:6JFL:PFI5:GBAY:5SMF:NYQR
Debug mode (server): true
File Descriptors: 25
Goroutines: 44
System Time: 2016-01-27T14:53:52.005531869Z
EventsListeners: 0
Init SHA1:
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: gilbertgan
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
I found this is because on macOS docker-machine runs inside a VM, so we need to use the VM's IP when connecting to the container.
The IP can be shown with: docker-machine ls
Your docker container maps port 20217, which isn't the MongoDB default port. The correct port is 27017. And gilbert_gan is right as well: when running docker on docker-machine, the docker host is not localhost but rather the virtual machine under docker-machine's control.
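Putting both answers together, a sketch of the fix (the machine name default comes from the docker info output above; the IP shown is only an example):
# publish MongoDB's default port instead of 20217
$ docker run -d -p 27017:27017 mongo
# connect to the docker-machine VM's IP rather than localhost
$ docker-machine ip default
192.168.99.100
$ mongo 192.168.99.100:27017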