Using a docker container with a VPN - docker-compose

I'm kinda new to Docker, so maybe my question is stupid; however, I've been unable to find a solution for a while now and it's starting to bother me, so I'm asking here:
I have a default bridge network with a few containers inside it. One of them runs gluetun, which is a VPN client, and the rest make up what's known as Apache Guacamole, which is used as a remote desktop gateway.
It looks something like this:
networks:
  guacnetwork_compose:
    driver: bridge

services:
  # gluetun
  gluetun:
    image: qmcgaw/gluetun
    # it needs NET_ADMIN, otherwise OpenVPN does not work
    cap_add:
      - NET_ADMIN
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      # - 4823:4822
    # networks:
    #   enable_ipv6=false
    volumes:
    environment:
      - VPNSP=custom
      - VPN_TYPE=openvpn
      # OpenVPN:
      - OPENVPN_USER=
      - OPENVPN_PASSWORD=
      - OPENVPN_CUSTOM_CONFIG=
      # Timezone for accurate log times
      # - TZ=

  # guacd
  guacd:
    container_name: guacd_compose
    image: guacamole/guacd
    network_mode: "service:gluetun"
    # networks:
    #   guacnetwork_compose:
    restart: always
    volumes:
      - ./drive:/drive:rw
      - ./record:/record:rw
    # ports:
    #   - 4823:4822

  guacd-no-vpn:
    container_name: guacd_compose_no_vpn
    image: guacamole/guacd
    networks:
      - guacnetwork_compose
    restart: always
    volumes:
      - ./drive:/drive:rw
      - ./record:/record:rw

  # guacamole
  guacamole:
    container_name: guacamole_compose
    depends_on:
      - guacd
      - postgres
    environment:
      GUACD_HOSTNAME: guacd
      POSTGRES_DATABASE:
      POSTGRES_HOSTNAME:
      POSTGRES_PASSWORD:
      POSTGRES_USER:
    image: guacamole/guacamole
    links:
      - gluetun
    networks:
      - guacnetwork_compose
    ports:
      ## if not nginx
      ## - 8080:8080/tcp # Guacamole is on :8080/guacamole, not /.
      - 8080/tcp
    restart: always
Basically, what I want is for the guacd container to use the VPN container's network and still communicate with the GUI, which is the guacamole container. Currently guacd is using the gluetun network, but despite my efforts I cannot get it to communicate with the guacamole container. Could somebody tell me what I am doing wrong?
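For reference, with network_mode: "service:gluetun" the guacd process shares gluetun's network namespace, so guacamole would have to reach it through the gluetun container rather than through a guacd hostname. A rough sketch of that wiring, assuming guacd listens on its default port 4822 and that gluetun also joins the bridge network (this is illustrative, not verified against the setup above):

services:
  gluetun:
    # joining the bridge network lets guacamole resolve "gluetun" by name
    networks:
      - guacnetwork_compose

  guacamole:
    environment:
      # guacd shares gluetun's network namespace, so it is reached via "gluetun"
      GUACD_HOSTNAME: gluetun
      GUACD_PORT: 4822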

Related

Custom installation of docker nextcloud

I'm trying to configure Nextcloud on my DigitalOcean server (Debian 11), using Nginx Proxy Manager and Nextcloud under Docker.
I changed the root directory of the docker-compose setup (since I was out of disk space, I added a volume and mounted it at /var/lib/docker/volumes/volume_nyc1_01).
I created a new folder called nextcloud, and inside it created the docker-compose file:
Version: "3"
volumes:
nextcloud-data:
nextcloud-db:
npm-data:
npm-ssl:
npm-db:
networks:
frontend:
# add this if the network is already existing!
# external: true
backend:
services:
nextcloud-app:
image: nextcloud
restart: always
volumes:
- nextcloud-data:/var/lib/docker/volumes/volume_nyc1_01/var/www/html
environment:
- MYSQL_PASSWORD=raspberrypi
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=nextcloud-db
networks:
- frontend
- backend
nextcloud-db:
image: mariadb
restart: always
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
volumes:
- nextcloud-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=raspberrypi
- MYSQL_PASSWORD=raspberrypi
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
networks:
- backend
npm-app:
image: jc21/nginx-proxy-manager:latest
restart: always
ports:
- "80:80"
- "81:81"
- "443:443"
environment:
- DB_MYSQL_HOST=npm-db
- DB_MYSQL_PORT=3306
- DB_MYSQL_USER=npm
- DB_MYSQL_PASSWORD=raspberrypi
- DB_MYSQL_NAME=npm
volumes:
- npm-data:/var/lib/docker/volumes/volume_nyc1_01/data
- npm-ssl:/var/lib/docker/volumes/volume_nyc1_01/etc/letsencrypt
networks:
- frontend
- backend
npm-db:
image: jc21/mariadb-aria:latest
restart: always
environment:
- MYSQL_ROOT_PASSWORD=raspberrypi
- MYSQL_DATABASE=npm
- MYSQL_USER=npm
- MYSQL_PASSWORD=raspberrypi
volumes:
- npm-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
networks:
- backend
As you may have noticed, I created each folder after volume_nyc1_01 with mkdir.
Finally, I started the server from /var/lib/docker/volumes/volume_nyc1_01/nextcloud using docker-compose up -d.
Once logged in at ip-address-of-server:81, I created the proxy host with the domain name mydomain.com and the forward hostname/IP nextcloud-app on port 80, then saved.
When I open the domain name, it just doesn't show anything. The same happens when I try to set up SSL.
I know I'm missing something, but I have searched a lot and couldn't find anything. I really appreciate any help or suggestions.
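For reference, named volumes are normally mapped to the path the application expects inside the container, while Docker itself decides where that data lives on the host (under its data-root). A minimal sketch of the usual mapping, assuming the stock nextcloud and mariadb images (which serve from /var/www/html and store data in /var/lib/mysql respectively):

services:
  nextcloud-app:
    image: nextcloud
    volumes:
      # left side: named volume, right side: path inside the container
      - nextcloud-data:/var/www/html

  nextcloud-db:
    image: mariadb
    volumes:
      - nextcloud-db:/var/lib/mysql

volumes:
  nextcloud-data:
  nextcloud-db:

Moving the data onto the extra disk would then be a matter of Docker's data-root setting (or a bind mount to the attached volume) rather than of the container-side path.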

Can't connect containers mariadb and phpmyadmin

I get the error "mysqli::real_connect(): (HY000/2002): No such file or directory" when trying to log in to phpMyAdmin. I verified that I can connect to the DB container from localhost using mysql -h 127.0.0.1 -P 3306 -u root -p. Below is my docker-compose file:
version: "3.7"
########################### SECRETS
secrets:
mysql_root_password:
file: $DOCKERDIR/secrets/mysql_root_password
########################### SERVICES
services:
# Portainer - WebUI for Containers
portainer:
container_name: portainer
image: portainer/portainer-ce:latest
restart: unless-stopped
command: -H unix:///var/run/docker.sock
security_opt:
- no-new-privileges:true
ports:
- "$PORTAINER_PORT:9000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- $DOCKERDIR/portainer/data:/data
environment:
- TZ=$TZ
# MariaDB - MySQL Database
db:
container_name: db
image: linuxserver/mariadb:latest
restart: always
security_opt:
- no-new-privileges:true
ports:
- "$MARIADB_PORT:3306"
volumes:
- $DOCKERDIR/mariadb/data:/config
environment:
- PUID=$PUID
- PGID=$PGID
- TZ=$TZ
- FILE__MYSQL_ROOT_PASSWORD=/run/secrets/mysql_root_password
secrets:
- mysql_root_password
# phpMyAdmin - Database management
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
container_name: phpmyadmin
restart: unless-stopped
depends_on:
- db
security_opt:
- no-new-privileges:true
ports:
- "$PHPMYADMIN_PORT:80"
volumes:
- $DOCKERDIR/phpmyadmin:/etc/phpmyadmin
environment:
- PMA_HOST=db
#- PMA_ARBITRARY=1
- MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
secrets:
- mysql_root_password
# Dozzle - Real-time Docker Log Viewer
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
restart: unless-stopped
security_opt:
- no-new-privileges:true
ports:
- "$DOZZLE_PORT:8080"
environment:
DOZZLE_LEVEL: info
DOZZLE_TAILSIZE: 300
DOZZLE_FILTER: "status=running"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
For the life of me, I can't figure out what I am doing wrong when logging into phpMyAdmin. Can someone explain my mistake or mistakes and point me in the right direction? Thanks.
I figured the issue out: I had the network set on the phpmyadmin section and not on db. Once I added the network statement to the db section as well, I was able to connect.
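A minimal sketch of that fix, assuming a user-defined network named my_network (the name is a placeholder, not from the original compose file): once both services join the same network, the PMA_HOST=db hostname resolves via Docker's built-in DNS.

services:
  db:
    image: linuxserver/mariadb:latest
    networks:
      - my_network

  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    environment:
      - PMA_HOST=db   # resolved through Docker DNS on the shared network
    networks:
      - my_network

networks:
  my_network: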

I am trying to stand up 2 Ghost containers with MySQL on the back end and eeacms/haproxy as the load balancer in Docker containers; error 503

I have tried many configurations and scenarios based around this, which is mostly a tutorial that stops at one Ghost instance. I am trying to scale it to 2 with docker-compose up -d --scale ghost=2. When I hit the individual IPs of the Ghost containers they work, but port 80 returns a 503.
version: "3.1"
volumes:
mysql-volume:
ghost-volume:
networks:
ghost-network:
services:
mysql:
image: mysql:5.7
container_name: mysql
volumes:
- mysql-volume:/var/lib/mysql
networks:
- ghost-network
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: db
MYSQL_USER: blog-user
MYSQL_PASSWORD: supersecret
ghost:
build: ./ghost
image: laminar/ghost:3.0
volumes:
- ghost-volume:/var/lib/ghost/content
networks:
- ghost-network
restart: always
ports:
- "2368"
environment:
database__client: mysql
database__connection__host: mysql
database__connection__user: blog-user
database__connection__password: supersecret
database__connection__database: db
depends_on:
- mysql
entrypoint: ["wait-for-it.sh", "mysql", "--", "docker-entrypoint.sh"]
command: ["node", "current/index.js"]
haproxy:
image: eeacms/haproxy
depends_on:
- ghost
ports:
- "80:5000"
- "1936:1936"
environment:
BACKENDS: "ghost"
DNS_ENABLED: "true"
LOG_LEVEL: "info"
What I get on localhost:80 is a 503 error; this particular eeacms/haproxy image is supposed to be self-configuring. Any help appreciated.
I needed to add a backend URL to the environment and also tell Ghost it was installed in an alternate location by adding URL: localhost:5050.
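A rough sketch of what that change might look like on the ghost service, assuming the "alternate location" refers to Ghost's url configuration key (the localhost:5050 value is taken from the sentence above and not otherwise verified):

services:
  ghost:
    environment:
      # Ghost reads its public address from the "url" configuration key
      url: "http://localhost:5050"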

Dockerized Flask server is not responsive from within the container, but is responsive when run on the host

I'm trying to set up a Flask <-> MongoDB <-> mongo_express application. I have 3 containers defined in my docker-compose.yml, and I start them successfully. However, while the Mongo part is OK (I can access the DB via the express API at localhost:8081), Flask can't access the DB.
What I'm looking for:
I want to be able to send requests from the host machine (or any other machine in the network) to Flask (running on 0.0.0.0:5000), and from Flask to the DB (running on localhost:27017, accessed using a pymongo wrapper).
Also, I want to have access to mongo_express (on localhost:8081) and from it to the DB [this part is already working!].
In order to debug it, I removed the Flask container from the docker-compose.yml, restarted, and ran it locally, and voila! Everything works (meaning my pytest runs are OK: I have a test that sends a request to the Flask server, which in turn uses the pymongo wrapper to access the DB and returns data from it to the client).
I guess my network configuration is flawed, but I don't understand where.
Here is my docker-compose.yml (Flask is commented out, since it is currently running locally):
version: "3.8"
services:
# mgmt_server:
# build: ./server # Dockerfile just copies py files, installs pip requirements, and runs "python server.py"
# container_name: mgmt_server
# restart: always
# networks:
# - backend # Connect to flask
# ports:
# - "5000:5000"
mongo:
image: mongo:latest
container_name: mongodb
restart: always
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=pass
volumes:
- /data/db:/data/db
networks:
- backend # Connect to flask
- frontend # Connect to mongo_express
ports:
- "27017:27017"
mongo-express:
image: mongo-express:latest
container_name: mongo_express
restart: always
environment:
- ME_CONFIG_MONGODB_SERVER=mongo
- ME_CONFIG_MONGODB_PORT=27017
- ME_CONFIG_MONGODB_ENABLE_ADMIN=true
- ME_CONFIG_MONGODB_AUTH_DATABASE=admin
- ME_CONFIG_MONGODB_ADMINUSERNAME=root
- ME_CONFIG_MONGODB_ADMINPASSWORD=pass
# Uncomment if a secure login via browser is required
# - ME_CONFIG_BASICAUTH_USERNAME=root
# - ME_CONFIG_BASICAUTH_PASSWORD=pass
links:
- mongo
networks:
- frontend # Connect to mongo_express
ports:
- 8081:8081
networks:
backend:
driver: bridge
frontend:
driver: bridge
$ docker network ls   # When mgmt_server is run locally
NETWORK ID     NAME                 DRIVER    SCOPE
10577560a149   bridge               bridge    local
a037a11e12bb   host                 host      local
b153eea9db12   node_mgmt_backend    bridge    local
c8e1b58ffb44   node_mgmt_frontend   bridge    local
4f7b75b5695a   none                 null      local
The problem was that Flask tried to access Mongo on localhost. When running Flask on the host this is OK, but when Flask is containerized it gets its own IP, and so does Mongo, so localhost just doesn't point to the right place.
Editing the docker-compose.yml networks and pointing the pymongo client at the updated Mongo IP fixed the issue:
version: "3.8"
services:
mgmt_server:
build: ./server
container_name: mgmt_server
restart: always
ports:
- "5000:5000"
networks:
app_net:
ipv4_address: 172.16.238.2
mongo:
image: mongo:latest
container_name: mongodb
restart: always
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=pass
volumes:
- /data/db:/data/db
ports:
- "27017:27017"
networks:
app_net:
ipv4_address: 172.16.238.3
mongo-express:
image: mongo-express:latest
container_name: mongo_express
restart: always
environment:
- ME_CONFIG_MONGODB_SERVER=mongo
- ME_CONFIG_MONGODB_PORT=27017
- ME_CONFIG_MONGODB_ENABLE_ADMIN=true
- ME_CONFIG_MONGODB_AUTH_DATABASE=admin
- ME_CONFIG_MONGODB_ADMINUSERNAME=root
- ME_CONFIG_MONGODB_ADMINPASSWORD=pass
# Uncomment if a secure login via browser is required
# - ME_CONFIG_BASICAUTH_USERNAME=root
# - ME_CONFIG_BASICAUTH_PASSWORD=pass
links:
- mongo
ports:
- 8081:8081
networks:
app_net:
ipv4_address: 172.16.238.4
networks:
app_net:
ipam:
driver: default
config:
- subnet: "172.16.238.0/24"
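For reference, the pymongo side then has to point at the Mongo container instead of localhost. A minimal sketch, staying in the compose file and using a hypothetical MONGO_URI variable for the Flask code to read instead of hard-coding the address (the credentials and IP come from the file above; the service name mongo would also resolve on app_net):

services:
  mgmt_server:
    environment:
      # hypothetical variable for the Flask/pymongo client to read
      - MONGO_URI=mongodb://root:pass@172.16.238.3:27017/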
This was helpful, but eventually I found this to be more informative.

Traefik 2 Gateway Timeout

So I have the following docker-compose.yml:
version: "3.7"
services:
roundclinic-mysql:
image: mysql:5.7
networks:
- spring-boot-mysql-network
environment:
- MYSQL_DATABASE=
- MYSQL_USER=
- MYSQL_PASSWORD=
- MYSQL_ROOT_PASSWORD=
volumes:
- ./mysqldata:/var/lib/mysql:rw,delegated
ports:
- "3306:3306"
web-service:
image: roundclinic/roundclinic:latest
networks:
- spring-boot-mysql-network
- traefik-network
depends_on:
- roundclinic-mysql
ports:
- 8080:8080
environment:
- "SPRING_PROFILES_ACTIVE=dev"
links:
- roundclinic-mysql
labels:
- "--providers.docker.network=traefik_default"
- "traefik.enable=true"
- "traefik.http.routers.roundclinic.rule=Host(`api-dev.roundclinic.app`)"
- "traefik.http.routers.roundclinic.entrypoints=web"
- "traefik.http.services.cal.loadbalancer.server.port=8080"
traefik:
image: "traefik:v2.2"
container_name: "traefik"
command:
- "--log.level=DEBUG"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "traefik.docker.network=traefik-network"
ports:
- "80:80"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
traefik-network:
driver: bridge
external: true
spring-boot-mysql-network:
driver: bridge
volumes:
my-db:
Spring Boot starts up fine and can connect to MySQL.
When I connect to http://api-dev.roundclinic.app:8080/../ I can hit my application just fine.
When I connect to http://api-dev.roundclinic.app/../ I get a gateway timeout. I can see in the Traefik logs that it forwards the request to what seems to be the correct IP and port, but nothing hits the actual application. I'm not sure what's going on here. Any help?
When accessing port 8080 you are bypassing Traefik and accessing your application directly, correct?
Generally speaking, the Traefik labels look good. Entrypoint, port and host are defined, and the router and service port are present. These are usually all the requirements for Docker-based setups.
One thing I noticed is that the traefik container uses "traefik.docker.network=traefik-network", but your web app uses "--providers.docker.network=traefik_default".
I am not sure whether traefik_default is something that Traefik provides, but that mismatch in network names might be the issue.
I can't test whether that is the problem, but it would be the first thing to check.
One way to rule it out would be to simplify your config by always using the networks key from docker compose instead of mixing it with labels and arguments, as in the sketch below.
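A minimal sketch of that simplification, reusing the names from the compose file above (illustrative only, not a verified fix): the stray "traefik.docker.network=..." entry moves out of the traefik command, the "--providers.docker.network=..." label disappears, and Traefik 2 instead reads traefik.docker.network as a label on the routed container.

services:
  web-service:
    networks:
      - spring-boot-mysql-network
      - traefik-network
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.roundclinic.rule=Host(`api-dev.roundclinic.app`)"
      - "traefik.http.routers.roundclinic.entrypoints=web"
      - "traefik.http.services.cal.loadbalancer.server.port=8080"
      # tell Traefik which Docker network to use when reaching this container
      - "traefik.docker.network=traefik-network"

  traefik:
    image: "traefik:v2.2"
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    networks:
      - traefik-network
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

networks:
  traefik-network:
    external: true
  spring-boot-mysql-network:
    driver: bridge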