How to mount an entrypoint script and then use it - docker-compose

I have a service in my docker-compose file as follows.
nginx:
  image: nginx:1.18.0-alpine
  ports:
    - 8000:80
  volumes:
    - ./nginx/localhost/conf.d:/etc/nginx/conf.d
    - ./entrypoint.sh:/entrypoint.sh
  entrypoint: ./entrypoint.sh
  depends_on:
    - webapp
  networks:
    - nginx_network
I get the error
Cannot start service nginx: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/entrypoint.sh": permission denied: unknown
ERROR: for nginx Cannot start service nginx: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/entrypoint.sh": permission denied: unknown
ERROR: Encountered errors while bringing up the project.
How can I do this?

I had to use ["/bin/sh","/entrypoint.sh"] so the script is run through the shell instead of being executed directly:
nginx:
  image: nginx:1.18.0-alpine
  ports:
    - 8000:80
  volumes:
    - ./nginx/localhost/conf.d:/etc/nginx/conf.d
    - ./entrypoint.sh:/entrypoint.sh
  entrypoint: ["/bin/sh","/entrypoint.sh"]
  depends_on:
    - webapp
  networks:
    - nginx_network

I don't have much information about your entrypoint script, your config file, or the web app.
Sometimes when you create a file on your host machine it lacks the permissions it needs to be executed by some users in your Docker container, so I would check the permissions for that file by running:
ls -l
That will show you the file's permissions, and a missing execute bit is likely why it's telling you permission is denied. Try adding execute permission by running chmod +x ./entrypoint.sh.
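For illustration, the check and fix might look like this on the host (the ls output here is hypothetical):
$ ls -l entrypoint.sh
-rw-r--r-- 1 user user 120 Jan 1 12:00 entrypoint.sh   # no execute ("x") bits set
$ chmod +x entrypoint.sh
$ docker-compose up --force-recreate nginx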
I usually do that using a Dockerfile when working with a script from my build context.
Dockerfile
FROM nginx:1.18.0-alpine
COPY /entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT [ "./entrypoint.sh" ]
docker-compose.yaml
version: "3.8"
services:
  nginx:
    build:
      context: ./
    ports:
      - 8000:80
    volumes:
      - ./nginx/localhost/conf.d:/etc/nginx/conf.d
    depends_on:
      - webapp
    networks:
      - nginx_network
Then try to run:
docker-compose up --build
Make sure both files are in the same directory.
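For reference, this assumes a project layout roughly like the following (default.conf is a hypothetical config file name):
.
├── docker-compose.yaml
├── Dockerfile
├── entrypoint.sh
└── nginx/
    └── localhost/
        └── conf.d/
            └── default.conf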

Related

ERROR: for rca-org2 Cannot start service rca-org2: OCI runtime create failed: container_linux.go:345:

ERROR: for rca-org2 Cannot start service rca-org2: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
ERROR: Encountered errors while bringing up the project.
I am getting this error when I try to run my docker-compose file, which looks like this:
rca-org2:
  container_name: rca-org2
  image: hyperledger/fabric-ca:latest
  command: /bin/bash -c 'fabric-ca-server start -d -b rca-org2-admin:rca-org2-adminpw --port 7055'
  environment:
    - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
    - FABRIC_CA_SERVER_TLS_ENABLED=true
    - FABRIC_CA_SERVER_CSR_CN=rca-org2
    - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
    - FABRIC_CA_SERVER_DEBUG=true
  volumes:
    - /tmp/hyperledger/org2/ca:/tmp/hyperledger/fabric-ca
  networks:
    - fabric-ca
  ports:
    - 7055:7055
After some research I found that the rca-org2 container doesn't have bash installed, but it does have sh, so replacing /bin/bash with sh worked for me. The new compose file looks like this (see the sketch after it for a quick way to verify which shells an image ships):
rca-org2:
  container_name: rca-org2
  image: hyperledger/fabric-ca:latest
  command: sh -c 'fabric-ca-server start -d -b rca-org2-admin:rca-org2-adminpw --port 7055'
  environment:
    - FABRIC_CA_SERVER_HOME=/tmp/hyperledger/fabric-ca/crypto
    - FABRIC_CA_SERVER_TLS_ENABLED=true
    - FABRIC_CA_SERVER_CSR_CN=rca-org2
    - FABRIC_CA_SERVER_CSR_HOSTS=0.0.0.0
    - FABRIC_CA_SERVER_DEBUG=true
  volumes:
    - /tmp/hyperledger/org2/ca:/tmp/hyperledger/fabric-ca
  networks:
    - fabric-ca
  ports:
    - 7055:7055
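If you want to verify up front which shells an image actually ships, a quick check like this should work (a sketch; command -v is standard POSIX sh):
docker run --rm --entrypoint sh hyperledger/fabric-ca:latest -c 'command -v bash || echo "bash: not found"; command -v sh'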

Why is my dockerized app's data deleted after a while?

I have a NestJS app. When I deploy it on my Ubuntu server using docker compose up --build, the data is deleted automatically after a few hours.
Dockerfile:
FROM node:14.16.0-alpine3.13
RUN addgroup app && adduser -S -G app app
RUN mkdir /app && chown app:app /app
USER app
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start:prod"]
And my docker-compose file:
version: '3'
services:
  api:
    depends_on:
      - db
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      DB_URL: mongodb://db_myapp
      JWP_PRIVATE_KEY: test
      PORT: 3000
  db:
    image: mongo:xenial
    container_name: db_myapp
    volumes:
      - myapp:/data/db
    ports:
      - '27017:27017'
volumes:
  myapp:
I tried binding the volume to a local directory on my server:
db:
  image: mongo:xenial
  container_name: db_myapp
  volumes:
    - ../myapp_db_data:/data/db
  ports:
    - '27017:27017'
But I faced this error when running the app:
MongooseServerSelectionError: getaddrinfo EAI_AGAIN db_myapp
I'm looking for a way to keep my app's data persistent across docker compose up --build, because I need to update the backend app in production.
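One way to check whether the named volume is actually surviving rebuilds (a diagnostic sketch; the real volume name is usually prefixed with the compose project name, so confirm it with docker volume ls first):
docker volume ls
docker volume inspect <project>_myapp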

Docker compose: Error: role "hleb" does not exist

Kindly ask you to help with docker and Postgres.
I have a local Postgres database and a project on NestJS.
I killed the process listening on port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo - unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used for Docker Compose does not contain the Docker-specific environment variables.
So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. If the data directory already exists, you will have to re-create the volume (delete it) for these variables to take effect, as sketched below.
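A minimal sketch of that re-initialization (note that down -v deletes the named volumes, and with them the existing database):
docker-compose down -v
docker-compose up --build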

How to move a docker-compose environment to other computer

I am developing a service using docker-compose and I deploy the containers to a remote host using these commands:
eval $(docker-machine env digitaloceanserver)
docker-compose build && docker-compose stop && docker-compose rm -f && docker-compose up -d
My problem is that I'm changing laptops: I exported the docker-machines to the new laptop and I can activate them.
But when I try to deploy new changes, it raises these errors:
Creating postgres ... error
Creating redis ... error

ERROR: for postgres  Cannot create container for service postgres: b'Conflict. The container name "/postgres" is already in use by container "612f3887544224ae79f67e29552b4d97e246104b8a057b3a03d39f6546dbbd38". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: for redis  Cannot create container for service redis: b'Conflict. The container name "/redis" is already in use by container "01875947f0ce7ba3978238525923e54e0c800fa0a4b419dd2a28cc07c285eb78". You have to remove (or rename) that container to be able to reuse that name.'

ERROR: Encountered errors while bringing up the project.
My docker-compose.yml is this:
services:
  nginx:
    build: './docks/nginx/.'
    ports:
      - '80:80'
      - "443:443"
    volumes:
      - letsencrypt_certs:/etc/nginx/certs
      - letsencrypt_www:/var/www/letsencrypt
    volumes_from:
      - web:ro
    depends_on:
      - web
  letsencrypt:
    build: './docks/certbot/.'
    command: /bin/true
    volumes:
      - letsencrypt_certs:/etc/letsencrypt
      - letsencrypt_www:/var/www/letsencrypt
  web:
    build: './sources/.'
    image: 'websource'
    ports:
      - '127.0.0.1:8000:8000'
    env_file: '.env'
    command: 'gunicorn cuidum.wsgi:application -w 2 -b :8000 --reload --capture-output --enable-stdio-inheritance --log-level=debug --access-logfile=- --log-file=-'
    volumes:
      - 'cachedata:/cache'
      - 'mediadata:/media'
    depends_on:
      - postgres
      - redis
  celery_worker:
    image: 'websource'
    env_file: '.env'
    command: 'python -m celery -A cuidum worker -l debug'
    volumes_from:
      - web
    depends_on:
      - web
  celery_beat:
    container_name: 'celery_beat'
    image: 'websource'
    env_file: '.env'
    command: 'python -m celery -A cuidum beat --pidfile= -l debug'
    volumes_from:
      - web
    depends_on:
      - web
  postgres:
    container_name: 'postgres'
    image: 'mdillon/postgis'
    ports:
      - '127.0.0.1:5432:5432'
    volumes:
      - 'pgdata:/var/lib/postgresql/data/'
  redis:
    container_name: 'redis'
    image: 'redis:3.2.0'
    ports:
      - '127.0.0.1:6379:6379'
    volumes:
      - 'redisdata:/data'
volumes:
  pgdata:
  redisdata:
  cachedata:
  mediadata:
  staticdata:
  letsencrypt_certs:
  letsencrypt_www:
You’re seeing those errors because you’re explicitly setting container_name:, and containers with those names already exist on the remote host from an earlier deployment. Remove those explicit settings. (You don’t need them even for inter-container DNS; Docker Compose automatically creates a network alias for each service using the name of its service block.)
There are still potential issues from port conflicts. If your other PostgreSQL container is listening on the same (default) host port 5432 then the one you declare in this docker-compose.yml file will conflict with it. You might be able to just not expose your database container ports, or you might need to change the port numbers in this file.
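A sketch of the cleanup on the remote host, using the container names from the error messages (adjust the list to whatever docker ps -a actually shows):
eval $(docker-machine env digitaloceanserver)
docker rm -f postgres redis
docker-compose up -d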

Docker-compose mongoose

I'm new to Docker, and I'm trying the simplest of setups with docker-compose, but I can't connect to MongoDB.
My docker-compose.local.yaml file:
version: "2"
services:
posts-api:
build:
dockerfile: Dockerfile.local
context: ./
volumes:
- ".:/app"
ports:
- "6820:6820"
depends_on:
- mongodb
mongodb:
image: mongo:3.5
ports:
- "27018:27018"
command: mongod --port 27018
My Dockerfile:
FROM node:7.8.0
MAINTAINER Livefeed 'project.livefeed#gmail.com'
RUN mkdir /app
VOLUME /app
WORKDIR /app
ADD package.json yarn.lock ./
RUN eval rm -rf node_modules && \
yarn
ADD server.js .
RUN mkdir config src
ADD config config/
ADD src src/
EXPOSE 6820
EXPOSE 27018
CMD yarn run local
In server.js I try to connect with:
mongoose.connect('mongodb://localhost:27018');
I also tried:
mongoose.connect('mongodb://mongodb:27018');
To run docker-compose:
docker-compose -f docker-compose.local.yaml up --build
And I receive the error:
connection error: { MongoError: failed to connect to server [localhost:27018] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27018]
What am I missing?
In server.js use mongodb instead of localhost:
mongoose.connect('mongodb://mongodb:27018');
Containers on the same Docker network can communicate using their service names.
Bear in mind that each container, as well as your host, has its own localhost: container A, container B, and your host each have their own network interface, so localhost inside your app container does not reach the mongodb container.
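If you want to confirm that the service name resolves from inside the app container, something like this should work (a sketch; getent is available in the Debian-based node images):
docker-compose -f docker-compose.local.yaml exec posts-api getent hosts mongodb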
Edit:
Be sure to get your mongo up:
docker-compose logs mongodb
docker-compose ps
Sometimes it fails to start because of a lack of disk space.
Edit 2:
With newer versions of mongo, you also need to tell it to listen on all interfaces:
command: mongod --port 27018 --bind_ip_all
I think you should add the links option to your config, like this:
ports:
  - "6820:6820"
depends_on:
  - mongodb
links:
  - mongodb
Update
As I promised, here is a fuller example:
version: '2.1'
services:
  pm2:
    image: keymetrics/pm2-docker-alpine:6
    restart: always
    container_name: pm2
    volumes:
      - ./pm2:/app
    links:
      - redis_db
      - db
    environment:
      REDIS_CONNECTION_STRING: redis://redis_db:6379
  nginx:
    image: firesh/nginx-lua
    restart: always
    volumes:
      - ./nginx:/etc/nginx
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - 80:80
    links:
      - pm2
  s3: # mock for development
    image: lphoward/fake-s3:latest
  redis_db:
    container_name: redis_db
    image: redis
    ports:
      - 6379:6379
  db: # for scorebig-syncer
    image: mysql:5.7
    ports:
      - 3306:3306