My postgresql docker container is using all the ram and acting weird - postgresql

I am using Docker Compose on a 2GB DigitalOcean droplet to deploy my app, but I noticed that the postgresql container was using all the RAM available to it!
This is not normal, and I wanted to know how to fix this problem.
So I went into the container's logs (docker logs postgres) and found this:
(screenshot: postgresql container logs)
I didn't expect any logs after the 'database is ready to accept connections' line. The logs look as if packages were missing in the container, but I am using the official image, so I think it should work...
To help you help me:
My docker-compose file:
version: "3"
services:
  monapp:
    image: registry.gitlab.com/touretchar/workhouse-api-bdd/master:latest
    container_name: monapp
    depends_on:
      - postgres
    ports:
      - "3000:3000"
    command: "npm run builded-test"
    restart: always
    deploy:
      resources:
        limits:
          cpus: 0.25
          memory: 500M
        reservations:
          memory: 150M
  postgres:
    image: postgres:13.1
    container_name: postgres
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_datas:/var/lib/postgresql/data/
      - postgres_dumps:/home/dumps/test
    ports:
      - "5432:5432"
    restart: always
    deploy:
      resources:
        limits:
          cpus: 0.25
          memory: 500M
        reservations:
          memory: 150M
volumes:
  postgres_datas:
    driver: local
    driver_opts:
      type: none
      device: $PWD/util/databases/pgDatas
      o: bind
  postgres_dumps:
    driver: local
    driver_opts:
      type: none
      device: $PWD/util/databases/test
      o: bind
and the output of docker stats:
(screenshot: docker stats output)
If you have any ideas, thanks in advance :)

I finally found the solution: my container was compromised!
My postgres container had port 5432 open to the internet, so anyone could connect to it using the DigitalOcean droplet's IP and that port, and I think someone hacked the container and was using all the RAM/CPU allotted to it!
I am sure about this because, to correct the problem, I blocked access to the container from outside my droplet by adding a firewall rule with iptables (you should add the rule to the DOCKER-USER chain), and since I added the rule, the container's RAM consumption is back to normal and I no longer see the weird logs I posted in my question!
Conclusion: be careful about the security of your Docker containers when they are exposed to the web!
Thanks, I hope this helps someone :)
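For reference, a sketch of the kind of rule I mean (the external interface name eth0 is an assumption, adjust it to your droplet, and these commands need root):

```shell
# Traffic to published container ports goes through the FORWARD chain, not
# INPUT, so the rule must go in the DOCKER-USER chain, which Docker
# evaluates before its own rules.
# Drop TCP traffic arriving on the public interface for the Postgres port:
iptables -I DOCKER-USER -i eth0 -p tcp --dport 5432 -j DROP
```

Alternatively, publishing the port only on loopback in docker-compose.yml ("127.0.0.1:5432:5432") avoids exposing it to the internet in the first place.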

Related

Docker compose read connection reset by peer error on pipeline

When running Docker Compose in a pipeline, I'm getting this error when the tests in the pipeline make use of mycontainer's API.
panic: Get "http://localhost:8080/api/admin/folder": read tcp 127.0.0.1:60066->127.0.0.1:8080: read: connection reset by peer [recovered]
panic: Get "http://localhost:8080/api/admin/folder": read tcp 127.0.0.1:60066->127.0.0.1:8080: read: connection reset by peer
This is my docker compose file:
version: "3"
volumes:
  postgres_vol:
    driver: local
    driver_opts:
      o: size=1000m
      device: tmpfs
      type: tmpfs
networks:
  mynetwork:
    driver: bridge
services:
  postgres:
    image: postgres:14
    container_name: postgres
    restart: always
    environment:
      - POSTGRES_USER=xxx
      - POSTGRES_PASSWORD=xxx
      - POSTGRES_DB=newdatabase
    volumes:
      #- ./postgres-init-db.sql:/docker-entrypoint-initdb.d/postgres-init-db.sql
      - "postgres_vol:/var/lib/postgresql/data"
    ports:
      - 5432:5432
    networks:
      - mynetwork
  mycontainer:
    image: myprivaterepo/mycontainer-image:1.0.0
    container_name: mycontainer
    restart: always
    environment:
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
      - DATABASE_NAME=newdatabase
      - DATABASE_USERNAME=xxx
      - DATABASE_PASSWORD=xxx
      - DATABASE_SSL=false
    depends_on:
      - postgres
    ports:
      - 8080:8080
    networks:
      - mynetwork
mycontainer is listening on port 8080, and locally everything works fine.
However, I get the error when I run the pipeline that brings up this Docker Compose file.
Basically, I'm running some tests in the pipeline that make use of mycontainer's API (http://localhost:8080/api/admin/folder).
If I run the Docker Compose setup locally and reproduce the steps the pipeline follows to use the API, everything works fine; I can communicate with both containers through localhost.
I also tried using healthchecks on the containers, and publishing 127.0.0.1:8080:8080 on mycontainer and 127.0.0.1:5432:5432 on postgres (including 0.0.0.0:8080:8080 and 0.0.0.0:5432:5432, just in case).
Any idea about that?
I was able to reproduce your error in a pipeline.
Make sure that you are not caching anything (e.g. the code that interacts with your container's API).
You did not mention anything about your pipeline, but just in case: disable the caching in your pipeline.

postgres container exited and cannot restart because of 'chmod: changing permissions of '/var/lib/postgresql/data': Permission denied'

If anyone could help me, it would be much appreciated. I'm running a postgres container on a CentOS machine, where it had been running with no problems for over a year, when it suddenly exited for no apparent reason. I tried to restart the container with docker start, but it exited immediately with the error chmod: changing permissions of '/var/lib/postgresql/data': Permission denied. I can't lose the data, so what would be the best way to solve this issue?
Here's my docker-compose:
version: '2'
services:
  app:
    container_name: garvan_rems_app
    environment:
      AUTHENTICATION: :oidc
      DATABASE_URL: postgresql://db:5432/rems?user=rems&password=*****
      PORT: 3000
      PUBLIC_URL: <-URL->
      oidc-client-id: ***********************
      oidc-client-secret: *****************
      oidc-domain: *****.au.auth0.com
    image: garvan_rems_app
    mem_limit: 500m
    mem_reservation: 200m
    ports:
      - 0.0.0.0:3000:3000/tcp
  db:
    container_name: garvan_rems_db
    image: postgres:9.6
    environment:
      POSTGRES_USER: ***
      POSTGRES_PASSWORD: ******
    mem_reservation: 30m
    mem_limit: 150m
    ports:
      - "127.0.0.1:5432:5432"
  auth0:
    container_name: garvan_rems_auth0
    image: <auth0 image name>
    ports:
      - "0.0.0.0:3333:3333"
Could you please post your docker-compose.yml or Dockerfile, along with the command you used to start your container?

Nestjs with postgres and redis on docker connection refused

I've dockerized a NestJS app with postgres and redis.
How can I fix this issue?
Postgres and redis are refusing TCP connections.
Below are my docker-compose.yml and the console output.
I am using TypeORM and @nestjs/bull for redis.
Hoping for some help, thanks.
When using docker-compose, the containers do not need to specify the network they are in, since they can reach each other by service name (e.g. postgres and redis in your case) because they are on the same network. See Networking in Compose for more info.
Also, expose doesn't perform any operation and is redundant here. Since you specify ports, that is more than enough to tell Docker which ports of the container are exposed and bound to which ports of the host.
For redis:alpine, the startup command is already redis-server, so it is not necessary to specify it again. The environment you specified is also redundant in this case.
Try the following docker-compose.yaml with all of the above suggestions applied:
version: "3"
services:
  postgres:
    image: postgres:alpine
    restart: always
    ports:
      - 5432:5432
    volumes:
      - ./db_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=lighthouse
  redis:
    image: redis:alpine
    restart: always
    ports:
      - 6379:6379
Hope this helps. Cheers! 🍻
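If you want to sanity-check the service-name resolution once the stack is up, a quick sketch (the app service name here is hypothetical, since your app service isn't shown, and getent must exist inside that image):

```shell
# Each command should print the internal IP that the service name
# resolves to on the shared compose network.
docker compose exec app getent hosts postgres
docker compose exec app getent hosts redis
```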

How to access the postgres Docker container from another Docker container without an IP address

How can I access the postgres Docker container from another Docker container without an IP address?
I want to store data in postgres by using myweb. In the jar, the host is given like localhost:5432/db..
Here is my compose file:
version: "3"
services:
  myweb:
    build: ./myweb
    container_name: app
    ports:
      - "8080:8080"
      - "9090:9090"
    networks:
      - front-tier
      - back-tier
    depends_on:
      - "postgresdb"
  postgresdb:
    build: ./mydb
    image: ppk:postgres9.5
    volumes:
      - dbdata:/var/lib/postgresql
    ports:
      - "5432:5432"
    networks:
      - back-tier
volumes:
  dbdata: {}
networks:
  front-tier:
  back-tier:
Instead of localhost:5432/db.., use the postgresdb:5432/db.. connection string.
By default, the container has the same hostname as the service name.
Here is my minimal working example, which connects a Java client (boxfuse/flyway) to a postgres server. The most important part is the health check, which delays the start of the myweb container until postgres is ready to accept connections.
Note that this can be executed directly with docker-compose up; it doesn't have any other dependencies. Both images are from Docker Hub.
version: '2.1'
services:
  myweb:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://postgresdb/postgres -user=postgres -password=123 info
    depends_on:
      postgresdb:
        condition: service_healthy
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=123
    healthcheck:
      test: "pg_isready -q -U postgres"
This is a Docker networking problem. The solution is to use postgresdb:5432/db in place of localhost:5432/db, because the two services are on the same network, named back-tier, and the Docker daemon uses the service name like a DNS name for communication between the two containers. I hope this solution helps you.
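A quick way to verify the name resolution described above, assuming the stack from the question is running (and that getent is available in the myweb image):

```shell
# From inside the myweb container, the postgresdb service name resolves
# on the shared back-tier network and prints its internal IP.
docker compose exec myweb getent hosts postgresdb
```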

Cannot connect from inside docker swarm cluster to external mongodb service

If I run a single Docker container of my backend, it runs well and connects to mongodb, which is running on the host. But when I run my backend using docker-compose, it doesn't connect to mongodb and prints to the console:
MongoError: failed to connect to server [12.345.678.912:27017] on first connect [MongoError: connection 0 to 12.345.678.912:27017 timed out]
docker-compose.yml contents:
version: "3"
services:
  web:
    image: __BE-IMAGE__
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 2048M
    ports:
      - "1337:8080"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "1340:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
How I run the single Docker container:
docker run -p 1337:8080 BE-IMAGE
You need to publish the mongo port, since localhost is not the same from inside the container as from outside:
ports:
  - "1337:8080"
  - "27017:27017"
In a port definition, the left-hand side is the host (outside) port and the right-hand side is the port internal to your container. Your error says that, from inside the container, it cannot see port 27017; the snippet above simply publishes that mongo port so the container can reach it outside of Docker.