How to save files outside a Docker container with multiple running instances? - docker-compose

I have a Node.js backend and a React frontend running via docker-compose.yml, hosted on Google Cloud Platform, with the backend scaled to 5 instances:
docker-compose up --build --scale nodeserver=5
In my Node.js app, I save some files to the local filesystem. When I run Docker with one container I can load the files, but when I run multiple containers, each container seems to save the files into its own filesystem.
To be more specific, I am saving user profile images in a folder "userProfilepictures" located inside my backend folder. All images uploaded from React are saved there and also loaded from there.
How can I save the files on the host filesystem and access them from each Docker container?
My current docker-compose.yml:
version: "3.8"
services:
nodeserver:
build:
context: ./backend
restart: on-failure
frontend:
build:
context: ./frontend
container_name: nginx
hostname: nginx
ports:
- "80:80"
- "443:443"

Related

Docker with postgresql in flask web application (part 2)

I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the flask application, I'm using this to connect SQLAlchemy to PostgreSQL
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the same network as your database container.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
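For example (a sketch; the network name shared_net is an assumption), you could create the network once with docker network create shared_net and then point the Compose file's default network at it:

# created beforehand with: docker network create shared_net
version: "3.8"
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
  db:
    image: postgres:14.5
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db

networks:
  default:
    external: true
    name: shared_net

Any other container can then reach db by name with docker run --network shared_net ...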
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully-defined in the docker-compose.yml file, so you can use Compose commands to build and restart it
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
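A trimmed version of the backend service with those suggestions applied might look like this (the image name your-registry/employee-manager is just an example):

services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    # built image is tagged with this name, e.g. for pushing to a registry
    image: your-registry/employee-manager:latest
    ports:
      - 8000:8000
    depends_on:
      - db
    # no links: and no volumes: mount that hides the image's own code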

docker-compose - NestJS container cannot access RethinkDB container

Problem
I am trying to containerize a full stack app. For now, I am putting the front-end aside, so I am trying to set up only three containers :
PostgreSQL
RethinkDB
NestJS
But when I try to run my containers with
docker-compose up
the NestJS container can't access the RethinkDB container.
Code
docker-compose.yaml
version: "3.9"
services:
opm_postgres:
container_name: opm_postgres_1
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: *******
POSTGRES_USER: postgres
volumes:
- 'opm_postgres:/var/lib/postgresql/data'
opm_adminer:
container_name: opm_adminer_1
image: adminer
restart: always
ports:
- 8085:8080
opm_rethink:
container_name: opm_rethink_1
image: rethinkdb
restart: always
ports:
- 28016:28015
- 8084:8080
volumes:
- 'opm_rethink:/data'
opm_back:
container_name: opm_back_1
build: ../OPM-back
restart: always
ports:
- "3000:3000"
volumes:
opm_postgres:
opm_rethink:
NestJS Dockerfile (coming from: Ultimate Guide: NestJS Dockerfile For Production [2022])
# Base image
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
# Creates a "dist" folder with the production build
RUN npm run build
# Start the server using the production build
CMD [ "node", "dist/main.js" ]
Logs
(screenshots of the docker-compose up and docker ps output omitted)
Additional info
I used the container names as DB hosts, both for RethinkDB and PostgreSQL.
Also, when I comment out the rethink part of my docker-compose.yaml, everything works fine: I can call a route on my NestJS API and it queries my PostgreSQL db correctly. The problem seems to be specific to RethinkDB.
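One detail worth checking (a sketch, not part of the original post; the environment variable names RETHINKDB_HOST and RETHINKDB_PORT are hypothetical): inside the Compose network the NestJS container should connect to RethinkDB on the container-side port 28015, not the published host port 28016, together with the container name:

  opm_back:
    container_name: opm_back_1
    build: ../OPM-back
    restart: always
    ports:
      - "3000:3000"
    environment:
      # hypothetical variables the NestJS app would read;
      # note the container-side port 28015, not the published 28016
      RETHINKDB_HOST: opm_rethink_1
      RETHINKDB_PORT: "28015"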

Volumes in Docker are no longer created

I am working with a simple docker-compose file (node alpine). I have three anonymous volumes; this already worked in the past, but now they are no longer created.
I deleted the folders on the host side (Windows) to see whether Docker would recreate them and put the files inside, but nothing happens.
version: "3.3"
services:
api:
#restart: always
build:
context: .
image: foo-foo-platform:1.1.0.0
#container_name: foo-foo-platform
env_file: docker-compose-debug.env
labels:
- "traefik.enable=false"
- "traefik.http.routers.api-gw.rule=PathPrefix(`/`)"
- "traefik.http.services.api-gw.loadbalancer.server.port=8090"
networks:
- internal
volumes:
- /mnt/logs:/mnt/logs
- /mnt/cc:/mnt/cc
ports:
- "8084:8084"
networks:
internal:
I have tried to prune volumes with docker volume prune; anyway, none of the volumes listed belongs to this project.
I also tried "docker-compose -f docker-compose-debug.yml up --build --force-recreate --renew-anon-volumes".
Note: the "/mnt/logs:/mnt/logs" notation works on Windows.

How to communicate multiple containers with each other in Docker?

I'm trying to containerize my application. I use MongoDB and 2 more microservices.
As you can see in the docker compose file below, I have some problems.
My requirements:
main_image needs to connect to MongoDB.
gui_image needs to connect to MongoDB.
gui_image needs to show its GUI on port 8080 (Can use another port as well)
gui_image has to read and write to a file inside my computer.
MongoDB has to access a volume inside my computer.
main_image needs access to the internet.
Here are my questions:
1- Is exposing ports in the Dockerfile and in docker-compose the same thing?
2- What is the best practice for mounting a volume for MongoDB?
3- How do I accomplish the requirements above, following the diagram below, in docker-compose?
Here is my docker-compose file:
version: "3"
services:
mongo:
image: mongo:latest
ports:
- 27017:27017
main_image:
build:
context: .
dockerfile: .\my_project\dockerfile
depends_on:
- mongo
gui_image:
build:
context: .
dockerfile: .\my_gui\dockerfile
ports:
- 8080:8080
- 27017:27017
depends_on:
- mongo
Here is my dockerfile under my_gui directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
RUN pip install dash
EXPOSE 8080
EXPOSE 27017
ENTRYPOINT [ "python","gui_script.py"]
And lastly, here is my dockerfile under my_project directory:
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
EXPOSE 27017
ENTRYPOINT [ "python","main_script.py"]
1. The EXPOSE instruction in a Dockerfile informs Docker that the container listens on the specified network ports at runtime (for example when using docker run -p).
Using ports in Compose, however, is a dynamic way of publishing these ports. So images like nginx or apache, which are always supposed to run on port 80 inside the container, use EXPOSE in the Dockerfile itself,
while an image whose port is dynamic, perhaps controlled by an environment variable, will have it exposed via docker run or the Compose file:
some_webapi:
  environment:
    - ASPNETCORE_URLS=http://*:80
  build:
    context: .
    dockerfile: ./Dockerfile
2. As documented on the Docker Hub page for the mongo image (https://hub.docker.com/_/mongo/), you can use
volumes:
  - '/path/to/your/pc/folder:/path/inside/docker'
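Concretely, the official mongo image keeps its data under /data/db, so a persistent mount might look like this (the host path ./mongo-data is just an example):

services:
  mongo:
    image: mongo:latest
    volumes:
      # host folder on the left, MongoDB's data directory on the right
      - ./mongo-data:/data/db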
3. And for the last question, you might want to use Networking in Compose.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Services can join networks like this
gui_image:
  build:
    context: .
    dockerfile: .\my_gui\dockerfile
  ports:
    - 8080:8080
    - 27017:27017
  depends_on:
    - mongo
  networks:
    - gui
And you also have to define all the networks used by services in the global scope of the compose file:
version: '3'
services:
networks:
  gui:
After that, containers will be able to see each other even by their container_name, which you can define in services:
mongo:
  image: mongo:latest
  ports:
    - 27017:27017
  container_name: gui_mogno
Then you will be able to connect to mongo with a connection string like this: mongodb://gui_mogno:27017/
You can get more information about networking in the Compose networking documentation.

How to enable MongoDB access control using a Docker container?

I'm using a Dockerfile in combination with a docker-compose.yml to start two services:
My app service
A MongoDB service
My docker-compose.yml:
web:
  build: .
  ports:
    - "80:3000"
  environment:
    NODE_ENV: production
  links:
    - mongo
mongo:
  image: mongo
  command: --smallfiles
  ports:
    - "27017:27017"
I can't seem to figure out how to control access to the MongoDB container (like with the --auth flag), and how to have external access (say a GUI) using a username/password.
The two services get redeployed via Tutum by a webhook after a Docker Automated Build. In other words, I don't want to manually configure the database every time.
How do I control access a.k.a. set a root/admin user to secure my MongoDB database using the Dockerfile or the docker-compose.yml file?
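One possible approach (a sketch using the official mongo image's init variables; the username admin and the [mypassword] value are placeholders) is to let the image create a root user on first startup, which also makes it run mongod with --auth:

web:
  build: .
  ports:
    - "80:3000"
  environment:
    NODE_ENV: production
    # the app would then connect with something like
    # mongodb://admin:[mypassword]@mongo:27017/?authSource=admin
  links:
    - mongo
mongo:
  image: mongo
  command: --smallfiles
  environment:
    # supported by the official mongo image: on first run against an empty
    # data directory it creates this root user in the admin database and
    # starts mongod with authentication enabled
    MONGO_INITDB_ROOT_USERNAME: admin
    MONGO_INITDB_ROOT_PASSWORD: [mypassword]
  ports:
    - "27017:27017"

An external GUI could then authenticate against host port 27017 with the same root credentials; note that the user is only created when the data directory is empty, so existing deployments would need the user added manually.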