HAProxy does not bind to frontend on Ubuntu Server 18.04 - docker-compose

How do I solve HAProxy not working on Ubuntu Server? Am I missing something? I need a guide here.
Below, on my local MacBook, docker-compose runs without a problem:
lstm-haproxy | listen stats
lstm-haproxy | bind :1936
lstm-haproxy | mode http
lstm-haproxy | stats enable
lstm-haproxy | timeout connect 10s
lstm-haproxy | timeout client 1m
lstm-haproxy | timeout server 1m
lstm-haproxy | stats hide-version
lstm-haproxy | stats realm Haproxy\ Statistics
lstm-haproxy | stats uri /
lstm-haproxy | stats auth stats:stats
lstm-haproxy | frontend default_port_80
lstm-haproxy | bind :80
lstm-haproxy | reqadd X-Forwarded-Proto:\ http
lstm-haproxy | maxconn 4096
lstm-haproxy | default_backend default_service
lstm-haproxy | backend default_service
lstm-haproxy | server lstm_lstm_1 lstm_lstm_1:8008 check inter 2000 rise 2 fall 3
lstm-haproxy | server lstm_lstm_2 lstm_lstm_2:8008 check inter 2000 rise 2 fall 3
lstm-haproxy | INFO:haproxy:Config check passed
lstm-haproxy | INFO:haproxy:Reloading HAProxy
lstm-haproxy | INFO:haproxy:Restarting HAProxy gracefully
lstm-haproxy | INFO:haproxy:HAProxy is reloading (new PID: 11)
lstm-haproxy | INFO:haproxy:===========END===========
But when I deploy to my staging server (Ubuntu Server 18.04), the frontend and backend sections are missing from the generated config:
lstm-haproxy | listen stats
lstm-haproxy | bind :1936
lstm-haproxy | mode http
lstm-haproxy | stats enable
lstm-haproxy | timeout connect 10s
lstm-haproxy | timeout client 1m
lstm-haproxy | timeout server 1m
lstm-haproxy | stats hide-version
lstm-haproxy | stats realm Haproxy\ Statistics
lstm-haproxy | stats uri /
lstm-haproxy | stats auth stats:stats
lstm-haproxy | INFO:haproxy:Launching HAProxy
lstm-haproxy | INFO:haproxy:HAProxy has been launched(PID: 9)
My Docker and docker-compose versions on the MacBook:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.2, build 1110ad01
My Docker and docker-compose versions on the Ubuntu server:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.1, build b02f1306
This is my docker-compose.yml:
version: '3'
services:
  lstm:
    restart: always
    build:
      context: .
    environment:
      MAX_REQUEST: 100
      NUM_WORKER: 5
      BIND_ADDR: 0.0.0.0:8008
    command: bash monkey-sync.sh
  lstm-haproxy:
    image: dockercloud/haproxy
    links:
      - lstm
    ports:
      - '8008:80'
    container_name: lstm-haproxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
This is my Dockerfile:
FROM python:3.6.1 AS base
RUN pip3 install blablabla
WORKDIR /app
COPY . /app
RUN echo
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
EXPOSE 8008
Any guidance would help me a lot, thanks!

Solved! I needed to set SERVICE_PORTS: 8008 in the lstm service's environment.
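For reference, a minimal sketch of the fixed service (only lstm shown; dockercloud/haproxy reads the SERVICE_PORTS environment variable of linked services to decide which container port to load-balance):
  lstm:
    restart: always
    build:
      context: .
    environment:
      MAX_REQUEST: 100
      NUM_WORKER: 5
      BIND_ADDR: 0.0.0.0:8008
      SERVICE_PORTS: 8008  # tells dockercloud/haproxy which port to route to
    command: bash monkey-sync.sh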

Related

Port forward to postgres kubernetes pod fails with connection reset when executing certain commands via psql

I have a postgres deployment whose configuration looks like this:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.5-0
  postgresVersion: 13
  users:
    - name: hippo
      databases: ["hippo"]
      options: "CREATEDB"
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.36-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteOnce"
              resources:
                requests:
                  storage: 1Gi
And I forward the local port 5432 to it, like so:
DB_PORT=5432
PG_CLUSTER_PRIMARY_POD=$(microk8s kubectl get pod -o name \
-l postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master)
microk8s kubectl port-forward "${PG_CLUSTER_PRIMARY_POD}" ${DB_PORT}:${DB_PORT}
And I can then connect via psql. I can list the databases and connect to the hippo database.
rob@rob-Vostro-5402:~$ psql postgresql://hippo:Zw%5EAQuPf%3D%3Bi%3B%3F2%3E1RRbLTLrT@localhost:5432/hippo
psql (13.7 (Ubuntu 13.7-1.pgdg20.04+1), server 13.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
hippo=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
hippo | postgres | UTF8 | en_US.utf-8 | en_US.utf-8 | =Tc/postgres +
| | | | | postgres=CTc/postgres+
| | | | | hippo=CTc/postgres
postgres | postgres | UTF8 | en_US.utf-8 | en_US.utf-8 |
template0 | postgres | UTF8 | en_US.utf-8 | en_US.utf-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf-8 | en_US.utf-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
hippo=> \c hippo
psql (13.7 (Ubuntu 13.7-1.pgdg20.04+1), server 13.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
You are now connected to database "hippo" as user "hippo".
However, when I run \dt, I get disconnected.
hippo=> \dt
SSL SYSCALL error: EOF detected
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>
And the terminal in which I was running the port-forwarding now shows an error.
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
Handling connection for 5432
Handling connection for 5432
E0625 15:59:16.859963 72918 portforward.go:406] an error occurred forwarding 5432 -> 5432: error forwarding port 5432 to pod 8f58bd2f87d0ef63b969725920c98793f0dd1f41a25dc04bfe1b06a0ad7b58fc, uid : failed to execute portforward in network namespace "/var/run/netns/cni-0f76b252-b44c-017f-e337-b0285117cc4e": read tcp4 127.0.0.1:46248->127.0.0.1:5432: read: connection reset by peer
E0625 15:59:16.860854 72918 portforward.go:234] lost connection to pod
Any help would be much appreciated. Thanks
I'm used to the same brittle behavior when port-forwarding to Postgres, and resorted to a simple reconnect loop as a workable solution:
while true; do kubectl port-forward "$path" -n "$namespace" "$ports"; done
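For example, with the variables from the question above (a sketch assuming the cluster lives in the default namespace, so no -n flag is needed):
while true; do
  microk8s kubectl port-forward "${PG_CLUSTER_PRIMARY_POD}" 5432:5432
done
The loop simply restarts the port-forward whenever kubectl drops the connection, which papers over the "lost connection to pod" error shown above.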

Error timeout reached before the port went into state "inuse"

I'm facing a problem connecting PostgreSQL to Keycloak. In Keycloak I tried both ports 5433 and 5432 in KEYCLOAK_DATABASE_PORT, but neither of them works.
docker-compose.yml:
version: "3.8"
volumes:
postgres_data:
driver: local
services:
postgres:
container_name: postgres.beltweet.com
image: library/postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: beltweet
ports:
- "5433:5432"
restart: unless-stopped
keycloak:
image: docker.io/bitnami/keycloak:latest
container_name: keycloak.beltweet.com
environment:
KEYCLOAK_ADMIN_USER: admin
KEYCLOAK_ADMIN_PASSWORD: 260
KEYCLOAK_MANAGEMENT_PASSWORD: 260
KEYCLOAK_DATABASE_PORT: 5432
KEYCLOAK_DATABASE_HOST: postgres
KEYCLOAK_DATABASE_NAME: beltweet
KEYCLOAK_CREATE_ADMIN_USER: 'true'
KEYCLOAK_DATABASE_USER: postgres
KEYCLOAK_DATABASE_PASSWORD: postgres
KEYCLOAK_HTTP_PORT: 3033
KEYCLOAK_HTTPS_PORT: 3034
KEYCLOAK_JGROUPS_DISCOVERY_PROTOCOL: JDBC_PING
KEYCLOAK_CACHE_OWNERS_COUNT: 3
KEYCLOAK_AUTH_CACHE_OWNERS_COUNT: 3
depends_on:
postgres:
condition: service_started
links:
- "postgres:postgres"
The error:
Starting postgres.beltweet.com ... done
Recreating keycloak.beltweet.com ... done
Attaching to postgres.beltweet.com, keycloak.beltweet.com
postgres.beltweet.com |
postgres.beltweet.com | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres.beltweet.com |
postgres.beltweet.com | 2022-03-17 14:33:35.516 UTC [1] LOG: starting PostgreSQL 14.2 (Debian 14.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
keycloak.beltweet.com | keycloak 14:33:35.99
keycloak.beltweet.com | keycloak 14:33:35.99 Welcome to the Bitnami keycloak container
postgres.beltweet.com | 2022-03-17 14:33:35.516 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres.beltweet.com | 2022-03-17 14:33:35.516 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres.beltweet.com | 2022-03-17 14:33:35.517 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres.beltweet.com | 2022-03-17 14:33:35.520 UTC [26] LOG: database system was shut down at 2022-03-17 14:33:30 UTC
postgres.beltweet.com | 2022-03-17 14:33:35.523 UTC [1] LOG: database system is ready to accept connections
keycloak.beltweet.com | keycloak 14:33:35.99 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-keycloak
keycloak.beltweet.com | keycloak 14:33:36.00 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-keycloak/issues
keycloak.beltweet.com | keycloak 14:33:36.00
keycloak.beltweet.com | keycloak 14:33:36.00 INFO ==> ** Starting keycloak setup **
keycloak.beltweet.com | keycloak 14:33:36.01 INFO ==> Validating settings in KEYCLOAK_* env vars...
keycloak.beltweet.com | keycloak 14:33:36.02 INFO ==> Trying to connect to PostgreSQL server postgres...
keycloak.beltweet.com | timeout reached before the port went into state "inuse"
keycloak.beltweet.com | timeout reached before the port went into state "inuse"
keycloak.beltweet.com | timeout reached before the port went into state "inuse"
keycloak.beltweet.com | timeout reached before the port went into state "inuse"
keycloak.beltweet.com | timeout reached before the port went into state "inuse"
keycloak.beltweet.com | timeout reached before the port went into state "inuse"
I don't understand why you let Postgres expose port 5433 while configuring Keycloak to connect on 5432.
I took your docker-compose example and fixed the port assignment. Additionally, I exposed port 8080 for the Keycloak server:
version: "3.8"
volumes:
postgres_data:
driver: local
services:
postgres:
container_name: postgres.beltweet.com
image: library/postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: beltweet
ports:
- "5432:5432"
restart: unless-stopped
keycloak:
image: docker.io/bitnami/keycloak:latest
container_name: keycloak.beltweet.com
ports:
- "8080:8080"
environment:
KEYCLOAK_ADMIN_USER: admin
KEYCLOAK_ADMIN_PASSWORD: 260
KEYCLOAK_MANAGEMENT_PASSWORD: 260
KEYCLOAK_DATABASE_PORT: 5432
KEYCLOAK_DATABASE_HOST: postgres
KEYCLOAK_DATABASE_NAME: beltweet
KEYCLOAK_CREATE_ADMIN_USER: 'true'
KEYCLOAK_DATABASE_USER: postgres
KEYCLOAK_DATABASE_PASSWORD: postgres
KEYCLOAK_HTTP_PORT: 3033
KEYCLOAK_HTTPS_PORT: 3034
KEYCLOAK_JGROUPS_DISCOVERY_PROTOCOL: JDBC_PING
KEYCLOAK_CACHE_OWNERS_COUNT: 3
KEYCLOAK_AUTH_CACHE_OWNERS_COUNT: 3
depends_on:
postgres:
condition: service_started
links:
- "postgres:postgres"
It works:
keycloak.beltweet.com | keycloak 12:47:54.29 INFO ==> Trying to connect to PostgreSQL server postgres...
keycloak.beltweet.com | keycloak 12:47:54.29 INFO ==> Found PostgreSQL server listening at postgres:5432
keycloak.beltweet.com | keycloak 12:47:54.30 INFO ==> Configuring database settings
keycloak.beltweet.com | keycloak 12:48:05.96 INFO ==> Configuring jgroups settings
keycloak.beltweet.com | keycloak 12:48:10.89 INFO ==> Configuring cache count
keycloak.beltweet.com | keycloak 12:48:15.94 INFO ==> Configuring authentication cache count
keycloak.beltweet.com | keycloak 12:48:22.87 INFO ==> Configuring log level
keycloak.beltweet.com | keycloak 12:48:27.71 INFO ==> Configuring proxy address forwarding
keycloak.beltweet.com | keycloak 12:48:32.68 INFO ==> Configuring node identifier
keycloak.beltweet.com |
keycloak.beltweet.com | keycloak 12:48:37.33 INFO ==> ** keycloak setup finished! **
keycloak.beltweet.com | keycloak 12:48:37.35 INFO ==> ** Starting keycloak **
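One further hardening step worth considering (a sketch; the healthcheck command and timings are illustrative assumptions, not from the thread): condition: service_started only waits for the Postgres container to start, not for the database inside it to accept connections, so gating Keycloak on a healthcheck avoids the startup race behind this kind of timeout:
services:
  postgres:
    # ... as above, plus:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d beltweet"]
      interval: 5s
      timeout: 5s
      retries: 10
  keycloak:
    # ... as above, but:
    depends_on:
      postgres:
        condition: service_healthy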

Docker-compose service with traefik and internal network

I want to use both an internal network and the external traefik network in my containers.
Problem: when I define an internal network, traefik loses communication with my service.
traefik_webgateway internal network
+----------+ +--------------------------+
| traefik | | +------+ +-----+ |
| +--------------+ | app | | api | |
| | | | | +------+ +-----+ |
| | | proxy | | |
| | | | | +-----+ +------+ |
| | | | | |auth | |worker| |
| +--------------+ +-----+ +------+ |
| | | |
+----------+ +--------------------------+
docker-compose.traefik.yml:
services:
  traefik:
    image: traefik:v2.4
    restart: unless-stopped
    ports:
      - 80:80
      - 8080:8080
      - 443:443
    networks:
      - webgateway
networks:
  webgateway:
    driver: bridge
docker-compose.yml:
services:
  proxy:
    networks:
      - internal # <=== this causes traefik to point the healthcheck at the 172 IP instead of the 192 IP (see Edit below)
      - traefik
    labels:
      - traefik.http....
  app:
    networks:
      - internal
  api:
    networks:
      - internal
  auth:
    networks:
      - internal
  worker:
    networks:
      - internal
networks:
  internal:
  traefik:
    external:
      name: traefik_webgateway
I don't want my services to use the external traefik network because I want my services to be namespaced for my green/blue deployment.
I was wondering why this is happening and if there's a solution.
Thanks in advance!
EDIT:
I obtained the proxy container's network:
docker inspect -f '{{range.NetworkSettings.Networks}} {{.IPAddress}}{{end}}' <container id>
And received two IPs:
172.18.A.A 192.168.B.B
I have a healthcheck that pings /health. On the Traefik dashboard the service URL is:
http://172.18.A.A:8000
Traefik is unable to communicate with this IP. Sometimes it works, when Traefik happens to pick the other IP, 192.168.B.B.
From within the proxy container, I'm able to ping proxy (it uses the "B" IP)
PING proxy (192.168.B.B): 56 data bytes
64 bytes from 192.168.16.4: seq=0 ttl=64 time=0.090 ms
64 bytes from 192.168.16.4: seq=1 ttl=64 time=0.080 ms
64 bytes from 192.168.16.4: seq=2 ttl=64 time=0.065 ms
64 bytes from 192.168.16.4: seq=3 ttl=64 time=0.069 ms
I am able to ping: ping 192.168.B.B
I am unable to ping: ping 172.18.A.A
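For later readers, one plausible fix (a sketch, assuming the intent is for Traefik to always route over the shared network): when a container is attached to several networks, the Traefik Docker provider may pick either one for routing and healthchecks; the traefik.docker.network label pins it to the network Traefik itself is on:
  proxy:
    networks:
      - internal
      - traefik
    labels:
      - traefik.docker.network=traefik_webgateway # always use the shared network
      - traefik.http....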

Can't access Consul web UI - docker-compose

I'm trying to stand up the Consul dev web UI for training purposes using docker-compose.
While Consul claims to be running, when I try to visit localhost:8500/ui the site is unreachable.
My docker compose file:
version: "3"
services:
cs1:
image: consul:1.4.2
ports:
- "8500:8500"
command: "agent -dev -ui"
The console output is:
cs1_1_6d8d914aa536 | ==> Starting Consul agent...
cs1_1_6d8d914aa536 | ==> Consul agent running!
cs1_1_6d8d914aa536 | Version: 'v1.4.2'
cs1_1_6d8d914aa536 | Node ID: 'dfc6a0ce-3abc-a96d-718b-8b77155a2de6'
cs1_1_6d8d914aa536 | Node name: '1fde0528ab0c'
cs1_1_6d8d914aa536 | Datacenter: 'dc1' (Segment: '<all>')
cs1_1_6d8d914aa536 | Server: true (Bootstrap: false)
cs1_1_6d8d914aa536 | Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
cs1_1_6d8d914aa536 | Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
cs1_1_6d8d914aa536 | Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
I suspect the site is running on the Docker container's localhost, but port 8500 hasn't been exposed properly.
Thanks for your help
Remove the line command: "agent -dev -ui" from your docker-compose.yml.
The default command already runs the dev agent; see the Dockerfile here:
https://github.com/hashicorp/docker-consul/blob/9bd2aa7ecf2414b8712e055f2374699148e8941c/0.X/Dockerfile
By default, Consul binds its client interfaces (HTTP API and web UI included) to localhost:
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
Add a client address to your command, usually opened to any IP (0.0.0.0), so the HTTP interface is reachable from outside the container:
agent -dev -ui -client=0.0.0.0
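Putting the two answers together, a minimal sketch of a working compose file (image tag kept from the question; -client=0.0.0.0 is what moves the HTTP interface off the container's loopback):
version: "3"
services:
  cs1:
    image: consul:1.4.2
    ports:
      - "8500:8500"
    command: "agent -dev -ui -client=0.0.0.0"
With that, localhost:8500/ui on the host should reach the UI, because the published port now maps to an interface Consul is actually listening on.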

Seed a MongoDB replica set

I want to create a replica set of 3 nodes using docker-compose and seed initial data into them. If I remove --replSet and seed the data without specifying hosts, I have no problems.
docker-compose.yml
master:
  image: 'mongo:3.4'
  ports:
    - '50000:27017'
  volumes:
    - './restaurants.json:/restaurants.json'
    - './setup.js:/docker-entrypoint-initdb.d/00_setup.js'
    - './seed.sh:/docker-entrypoint-initdb.d/01_seed.sh'
  command: '--replSet rs'
slave1:
  image: 'mongo:3.4'
  ports:
    - '50001:27017'
  command: '--replSet rs'
slave2:
  image: 'mongo:3.4'
  ports:
    - '50002:27017'
  command: '--replSet rs'
seed.sh
# ...
_wait "slave1"
_wait "slave2"
echo "Starting to import data..."
mongoimport --host="rs/master:27017,slave1:27017,slave2:27017" --db db --collection restaurants --drop --file /restaurants.json > /dev/null
echo "Done."
Log
master_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/01_seed.sh
master_1 | Waiting for slave1...
master_1 | .
master_1 | Done.
master_1 | Waiting for slave2...
master_1 | Done.
master_1 | Starting to import data...
master_1 | 2017-11-26T16:06:39.148+0000 [........................] db.restaurants 0B/11.3MB (0.0%)
master_1 | 2017-11-26T16:06:39.653+0000 [........................] db.restaurants 0B/11.3MB (0.0%)
master_1 | 2017-11-26T16:06:39.653+0000 Failed: error connecting to db server: no reachable servers
master_1 | 2017-11-26T16:06:39.653+0000 imported 0 documents
mongoreplication_master_1 exited with code 1
This question is old, but I ran into the same issue recently. It's worth noting that the mongo image's docker-entrypoint.sh strips the --replSet argument during the initdb phase; see:
https://github.com/docker-library/mongo/blob/master/3.6/docker-entrypoint.sh#L237
So the mongod that runs your init scripts is not a replica-set member, and an rs/... connection string finds no reachable servers. You can, however, create another container whose sole purpose is to initialize the replica set, and override its docker-entrypoint.sh.
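Building on that suggestion, a minimal sketch of such a one-shot seeding service (the service name, wait loops, and replica-set config below are assumptions, not from the thread); it leaves the three mongod containers with their stock entrypoints and does the initiation and import itself:
seed:
  image: 'mongo:3.4'
  links:
    - master
    - slave1
    - slave2
  volumes:
    - './restaurants.json:/restaurants.json'
  entrypoint: ['bash', '-c']
  command:
    - |
      # wait until the first node answers, initiate the set, wait for a primary, then import
      until mongo --host master --eval 'db.adminCommand("ping")' > /dev/null 2>&1; do sleep 1; done
      mongo --host master --eval 'rs.initiate({_id: "rs", members: [
        {_id: 0, host: "master:27017"},
        {_id: 1, host: "slave1:27017"},
        {_id: 2, host: "slave2:27017"}]})'
      until mongo --host master --eval 'db.isMaster().ismaster' | grep -q true; do sleep 1; done
      mongoimport --host="rs/master:27017,slave1:27017,slave2:27017" --db db --collection restaurants --drop --file /restaurants.json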