docker-compose - NestJS container cannot access RethinkDB container

Problem
I am trying to containerize a full-stack app. For now, I am putting the front-end aside, so I am trying to set up only three containers:
PostgreSQL
RethinkDB
NestJS
But when I try to run my containers with
docker-compose up
the NestJS container can't access the RethinkDB container.
Code
docker-compose.yaml
version: "3.9"
services:
opm_postgres:
container_name: opm_postgres_1
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: *******
POSTGRES_USER: postgres
volumes:
- 'opm_postgres:/var/lib/postgresql/data'
opm_adminer:
container_name: opm_adminer_1
image: adminer
restart: always
ports:
- 8085:8080
opm_rethink:
container_name: opm_rethink_1
image: rethinkdb
restart: always
ports:
- 28016:28015
- 8084:8080
volumes:
- 'opm_rethink:/data'
opm_back:
container_name: opm_back_1
build: ../OPM-back
restart: always
ports:
- "3000:3000"
volumes:
opm_postgres:
opm_rethink:
NestJS Dockerfile (source: Ultimate Guide: NestJS Dockerfile For Production [2022])
# Base image
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
# Creates a "dist" folder with the production build
RUN npm run build
# Start the server using the production build
CMD [ "node", "dist/main.js" ]
Logs
[screenshots of the docker-compose up and docker ps output were attached]
Additional info
I used the container names as DB hosts, both for RethinkDB and PostgreSQL.
Also, when I comment out the rethink part in my docker-compose.yaml, everything works fine: I can call a route on my NestJS API and it queries my PostgreSQL db correctly. The problem seems specific to RethinkDB.
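One thing worth checking, offered as a sketch rather than a confirmed fix: within the Compose network, containers reach each other on the container port, not the published host port, so the backend should target RethinkDB's driver port 28015 rather than the 28016 host mapping. A minimal probe with the official rethinkdb Node driver, assuming the service name opm_rethink as the host:

// check-rethink.ts -- hypothetical probe, not part of the original app.
// The service name resolves via Compose DNS; 28015 is the in-container
// driver port (28016 only applies from the host side).
import * as r from 'rethinkdb';

async function main() {
  const conn = await r.connect({ host: 'opm_rethink', port: 28015 });
  const dbs = await r.dbList().run(conn); // simple round-trip query
  console.log('connected, databases:', dbs);
  await conn.close();
}

main().catch((err) => {
  console.error('RethinkDB unreachable:', err);
  process.exit(1);
});

It may also help to add depends_on with opm_rethink and opm_postgres to the opm_back service, so the databases start before the backend (though depends_on does not wait for readiness).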

Related

docker-entrypoint-initdb.d not executing scripts

I'm using docker-compose version 1.25.5, build 8a1c60f6 &
Docker version 19.03.8, build afacb8b
I'm trying to initialise the database with users in a MongoDB container using docker-entrypoint-initdb.d, but the js scripts aren't executed when the container starts.
I know /data/db must be empty for them to execute, so I've been deleting it every time before starting. But they still don't execute. Going into the container and manually executing mongo mongo-init.js works.
Not sure why it's not working when it should.
docker-compose.yml:
version: '3'
services:
  mongodb:
    container_name: mongodb
    image: tonyh308/cv-mongodb:1
    build: ./mongo
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_DATABASE: test
    volumes:
      - ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - ./mongo/mongodb:/data/db
    labels:
      com.qa.description: "MongoDb Container for application."
  # other services ...
Dockerfile:
FROM mongo:3.6.18-xenial
RUN apt-get update
COPY ./Customers /Customers
COPY ./test /test
WORKDIR /data
ENTRYPOINT ["mongod", "--bind_ip_all"]
EXPOSE 27017
Feb 26 2021: still no solution
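A hedged observation rather than a confirmed answer: the Dockerfile's ENTRYPOINT ["mongod", "--bind_ip_all"] replaces the mongo image's stock docker-entrypoint.sh, and that script is what executes the files in /docker-entrypoint-initdb.d on an empty /data/db. A sketch without the override:

FROM mongo:3.6.18-xenial
RUN apt-get update
COPY ./Customers /Customers
COPY ./test /test
EXPOSE 27017
# No ENTRYPOINT/CMD override: the base image's entrypoint script runs any
# /docker-entrypoint-initdb.d scripts first, then starts mongod itself.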

Docker DB Migration/Deployment to DigitalOcean

Warning: I am fairly new to Docker and cloud hosting, so this is likely a dumb question.
I have a local web app which has 3 images associated with it: the app itself, the db, and a phpmyadmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy, with every file from every library residing in my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine, except that my db image does not reference my database and is simply an empty db. I tried to migrate the db in this fashion, which I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But I got the following error. I assume this is because my dev machine is running Windows 10, not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really stupid and easy, but I am having difficulty figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested here is my docker-compose:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./data:/docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
volumes:
data:
UPDATE #2:
Transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production docker-compose-prod.yml. I must still be missing something, though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- /docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
Your strategy is sound.
Actually, you can take it a further step by automating the Droplet provisioning to e.g. use a container-oriented OS and access your Compose file. But that's not this question ;-)
I don't think it's relevant that you're using Windows; it probably makes little difference. It may require some answer tweaks, but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways the DB state could be persisted: in-container (not ideal); using volume mounts (good); other.
Each is "moveable", but it would help if you could add your Compose file to your question so that we can see which approach is being used.
In full disclosure, I'm not familiar with the approach that you referenced, but that does not mean it's inaccurate; I'm just not familiar with it.
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory onto this container directory. Presumably that directory contains at least one file that performs your intended initialization.
NB The section volumes: data: at the end of the Compose file appears redundant. You're actually using a host-mounted directory ./data not this volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp, and there are two alternatives.
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too:
volumes:
  - /data:/docker-entrypoint-initdb.d
Or move the files directly to the Droplet's /docker-entrypoint-initdb.d:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB Now there's no need for the volume mapping; you may remove it:
volumes:
  - ./data:/docker-entrypoint-initdb.d
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
restart: always
adminer:
image: adminer
restart: always
ports:
- 8080:8080
Then I ran mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;
create table treats (
  TreatID INT NOT NULL AUTO_INCREMENT,
  TreatName VARCHAR(255) NOT NULL,
  PRIMARY KEY (TreatID));
insert into treats (TreatName)
values
  ("Dried Salmon"),
  ("Meatballs");
Then docker-compose rm --force && docker-compose up
I was able to browse the Adminer UI (:8080), log in (root|mypass) and browse the database frederik.

Knex Migration with Docker Compose Psql

I have a problem migrating using Knex.js inside my docker-compose container.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs before the database has even been created. Is there any way to say that I would only like to run npm run db after the database has been created?
NOTE: if I run the npm commands in the docker terminal after it has been built, everything works fine. Just FYI.
here is my docker-compose.yml
version: '3.6'
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
and here is my Dockerfile
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
First, on the docker-compose.yml file, use sh to give your command a contained shell environment to run in, i.e. sh -c 'npm run db'.
Secondly, use depends_on to wait for the database container to start.
Your docker-compose file would now be:
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start the postgres container before the server container. It will not, however, wait for postgres to be ready. In this case that shouldn't be a problem, because postgres starts really quickly.
If you want something more solid, or depends_on doesn't do the trick, you can add an entrypoint wrapper script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. There are also links to tools there, so you don't have to write your own script from scratch.
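For illustration, a minimal retry sketch, assuming a knexfile at the project root (the file name and wiring are hypothetical, not from the question):

// wait-for-db.ts -- hypothetical helper: poll Postgres until it accepts
// connections, then run the same steps as the "npm run db" script.
import knex from 'knex';
import config from './knexfile';

async function waitForDb(retries = 20, delayMs = 1000) {
  const db = knex(config);
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await db.raw('select 1'); // cheap liveness probe
      return db;
    } catch {
      console.log(`db not ready (attempt ${attempt}/${retries}), retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('database never became reachable');
}

waitForDb()
  .then(async (db) => {
    await db.migrate.rollback();
    await db.migrate.latest();
    await db.seed.run();
    await db.destroy();
  })
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });

The compose command could then invoke this helper before the server starts; the shell-script approach from the Docker docs works just as well.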

Why don't I lose PostgreSQL data when rebuilding the Docker image?

version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Why don't I lose data when running docker-compose build --force-rm --no-cache? If this is normal, why do we need to create a volume for the data folder?
When you run docker-compose build --force-rm --no-cache, it only builds the web Docker image from the Dockerfile, which in your case is in the same directory.
This command does not stop the containers you previously started from this compose file, so you won't lose any data when running it.
However, as soon as you remove the containers with docker-compose down, or remove stopped containers with docker-compose rm, you won't find the postgres data when you restart the container.
If you want to persist the data and have the container pick it up when it is recreated, you need to give the postgres data a named volume, as follows.
version: '3'
services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  pgdata:
Now the postgres data won't be lost when the containers are recreated.

Trying to connect a NodeJS app to MongoDB via Docker Compose

I'm following a MongoDB + NodeJS tutorial with my app. Everything works without Docker, and I can in fact get the app to work up until it needs to connect to MongoDB.
If my app doesn't see MongoDB, it prints an error and halts.
Here's my files
.env
NODE_VIEWS_PATH=../
NODE_PUBLIC_PATH=../
MONGODB_URI='mongodb://127.0.0.1:27017/myappsdb'
...
Dockerfile
FROM node:carbon
# Create app directory
WORKDIR /usr/src/mahrio
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
COPY . .
EXPOSE 6085
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
app:
container_name: someappname
restart: always
build: .
ports:
- "6085:6085"
links:
- mongo
depends_on:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./tmp:/data/db
ports:
- "27017:27017"
When using docker-compose, for one container to connect to another it can use the service name as the hostname.
In your case, the node app needs to connect to mongo:27017 rather than localhost:27017, since localhost from the perspective of the app container refers to the container itself, not to your machine.
Therefore, change the mongo URL to MONGODB_URI='mongodb://mongo:27017/myappsdb'. Also make sure that you consume the env file by adding:
app:
  ...
  env_file:
    - .env
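To round this out, a minimal connection sketch, assuming the official mongodb Node driver (the tutorial may use mongoose instead; the file name is hypothetical):

// db.ts -- hypothetical sketch: read the URI from the environment and connect.
// Inside the compose network the service name "mongo" resolves to the
// MongoDB container, so the URI becomes mongodb://mongo:27017/myappsdb.
import { MongoClient } from 'mongodb';

const uri = process.env.MONGODB_URI ?? 'mongodb://mongo:27017/myappsdb';

export async function connect() {
  const client = new MongoClient(uri);
  await client.connect(); // throws if the mongo service is unreachable
  console.log('connected to', uri);
  return client.db();
}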