How to run multiple bash commands in the docker-compose entrypoint configuration option
Commands to run:
yarn install
yarn build
sleep infinity
In docker-compose.yml, for the service gvhservice:
gvhservice:
  entrypoint:
    - "/bin/sh"
    - -ecx
    - |
      yarn install
      yarn build
      sleep infinity
OR
Optionally, add all these commands to a file, say entrypoint.sh,
and in docker-compose.yml:
gvhservice:
  entrypoint: ./entrypoint.sh
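A minimal sketch of what that entrypoint.sh could contain, assuming the same three yarn commands as above; make the file executable (chmod +x entrypoint.sh) and make sure it exists inside the container (copied into the image or mounted in), which the original does not show:
#!/bin/sh
# Fail on the first error and echo each command as it runs
set -ex
yarn install
yarn build
# Keep the container alive after the build finishes
sleep infinity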
OR,
Use entrypoint.sh together with the command configuration option in docker-compose.yml (suitable when a variable set of commands needs to be passed at runtime).
entrypoint.sh
#!/bin/sh
set -ex
exec "$#"
docker-compose.yml
command:
  - /bin/sh
  - -ecx
  - |
    yarn install
    yarn build
    sleep infinity
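For reference, a sketch of how this third option might look combined into a single service definition; the service name gvhservice and the ./entrypoint.sh path follow the examples above, while the build context is an assumption:
gvhservice:
  build: .                      # assumption: the image is built from a local Dockerfile
  entrypoint: ./entrypoint.sh   # the wrapper above; its exec "$@" runs whatever command provides
  command:
    - /bin/sh
    - -ecx
    - |
      yarn install
      yarn build
      sleep infinity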
Related
I have the following Dockerfile for an Elixir + Phoenix app:
FROM elixir:latest as build_base
RUN apt-get -y update
RUN apt-get -y install inotify-tools curl
ARG TARGETARCH
RUN if [ ${TARGETARCH} = arm64 ]; then \
curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-${TARGETARCH}.tar.gz \
;else \
curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-x64.tar.gz \
;fi
RUN tar -xvf /tmp/dart-sass.tar.gz -C /tmp
RUN mv /tmp/dart-sass/sass /usr/local/bin/sass
RUN mkdir -p /app
WORKDIR /app
COPY mix.* ./
RUN mix local.hex --force
RUN mix archive.install hex phx_new --force
RUN mix local.rebar --force
RUN mix deps.clean --all
RUN mix deps.get
RUN mix --version
RUN mix deps.compile
COPY assets assets
COPY vendor vendor
COPY lib lib
COPY config config
COPY priv priv
COPY test test
RUN mix compile
The docker-compose file looks like the following:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
    ports:
      - "80:80"
    command: mix phx.server
I'm trying to run docker-compose as part of the build step in Buildkite. This is an extract of the step:
- label: "run web"
  key: "web"
  commands:
    - mix phx.server
  plugins:
    - docker-compose#v4.9.0:
        run: web
        config: docker-compose.yml
However, when running web I see everything happen properly, including the package installation, but when the application runs I see the following error:
web_1 | Unchecked dependencies for environment dev:
web_1 | * telemetry_metrics (Hex package)
web_1 | the dependency is not available, run "mix deps.get"
The list goes on and on. This works fine on my local machine; it only fails when running on Buildkite. Does anyone have any idea how to fix this?
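Not an answer from the original thread, but a sketch of one common fix worth noting here: the bind mount ./:/app hides the deps and _build directories that were compiled into the image, so anonymous volumes can be layered on top of them to keep the image's copies (paths assume the /app workdir from the Dockerfile above):
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
      - /app/deps     # keep the compiled deps from the image instead of the host copy
      - /app/_build   # same for the build artifacts
    ports:
      - "80:80"
    command: mix phx.server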
I am using a postgres image and I need to start the ssh service on start.
The problem is that if I run a command in the docker-compose file, the process exits with code 0.
How can I start the ssh service but keep the postgres service active too?
DOCKERFILE:
FROM postgres:13
RUN apt update && apt install openssh-server sudo -y
RUN echo 'root:password' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
DOCKER-COMPOSE FILE:
postgres:
  container_name: db_postgres
  command: sh -c "service ssh start"
  image: postgresc
  build:
    context: ../backend_apollo_server_express
    dockerfile: Dockerfile.database
  environment:
    - "POSTGRES_USER=lims"
    - "POSTGRES_PASSWORD=lims"
  volumes:
    - /home/javier/lims/dockerVolumes/db:/var/lib/postgresql/data
    - "/etc/timezone:/etc/timezone:ro"
    - "/etc/localtime:/etc/localtime:ro"
  ports:
    - 5434:5432
You can try running postgres after your command:
command: sh -c "service ssh start & postgres"
Try
command: sh -c "nohup service ssh start && service postgres start &"
in order to leave the process running in the background. This way the process won't exit.
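Another option, sketched here as an assumption rather than taken from the answers above: start sshd and then hand off to the official postgres image's entrypoint with exec, so postgres stays the foreground process and the container does not exit:
postgres:
  # ...same build, image, environment, volumes and ports as above...
  command: sh -c "service ssh start && exec docker-entrypoint.sh postgres"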
I have a small Python app developed with Docker containers.
My setup is:
Dockerfile
FROM python:3
ARG http_proxy
ARG https_proxy
ENV http_proxy ${http_proxy}
ENV https_proxy ${https_proxy}
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN apt-get update
RUN apt install -y vim screen
RUN \
echo 'alias py="/opt/venv/bin/python"' >> /root/.bashrc && \
echo 'alias ls="ls --color=auto"' >> /root/.bashrc && \
echo 'PS1="\u@\h:\[\e[33m\]\w\[\e[0m\]\$ "' >> /root/.bashrc
RUN python3 -m venv $VIRTUAL_ENV
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
docker-compose.yml
version: '3.8'
x-dev:
  &proxy_conf
  http_proxy: "${HTTP_PROXY}"
  https_proxy: "${HTTPS_PROXY}"
services:
  py_service:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
      args: *proxy_conf
    image: ${APP_NAME}_img
    volumes:
      - '.:/app'
    restart: always
    command: tail -f /dev/null
.env
HTTP_PROXY=<http_proxy_server_here>
HTTPS_PROXY=<https_proxy_server_here>
APP_NAME=python_app
The problem is that if the proxy server changes I need to rebuild the image, and I don't want that (as a last resort maybe I will do it).
What I'm trying to do is change the proxy environment variables inside the container, but I can't find the file where the environment is stored.
The container OS version is:
[root@5b1b77079e10 ~ >>>] $ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
You should only need to recreate containers, not rebuild the image. I assume you are doing something like this to get everything up initially:
docker-compose build
docker-compose up -d
Then I assume you are updating your .env file. Once you do that, you should be able to just do the following for your container to pick up the change:
docker-compose down
docker-compose up -d
You should not need to do a docker-compose build again.
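If the proxy value also needs to reach the running container without a rebuild, one sketch (an extension, not part of the original answer) is to pass the proxies as runtime environment variables in addition to build args, so editing .env and recreating the container is enough:
services:
  py_service:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
      args: *proxy_conf            # build-time values, baked into the image
    image: ${APP_NAME}_img
    environment:                   # runtime values, re-read when the container is recreated
      http_proxy: "${HTTP_PROXY}"
      https_proxy: "${HTTPS_PROXY}"
    volumes:
      - '.:/app'
    restart: always
    command: tail -f /dev/null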
I'm trying to create a cache for the following GitHub Action:
name: dockercompose
on: push
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Cache Docker Compose
        id: cache-docker
        uses: actions/cache@v1
        with:
          path: fhe_app/
          key: cache-docker
      - name: Build the stack
        run: docker-compose up -d
        working-directory: fhe_app/
with the following Dockerfile:
FROM tensorflow/tensorflow:nightly-py3
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN python3 -m pip install --upgrade pip
COPY local_requirements.txt /usr/src/app/local_requirements.txt
RUN \
apt-get update && \
apt-get -y install python3 postgresql-server-dev-10 gcc python3-dev musl-dev netcat
RUN python3 -m pip install -r local_requirements.txt
# copy entrypoint.sh
COPY entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x entrypoint.sh
# copy project
COPY . /usr/src/app/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
When pushing to GitHub, instead of a success message, I get:
Cache Docker Compose
Cache not found for input keys: cache-docker.
And:
Post Cache Docker Compose
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/e13e2694-e020-476d-888e-cb29cb9184b6/cache.tgz -C /home/runner/work/fhe_server/fhe_server/fhe_app .
/bin/tar: ./app: file changed as we read it
##[warning]The process '/bin/tar' failed with exit code 1
I have other yml files not using Docker that are caching properly, so the overall structure of the yml should be fine. Is this the right way to cache docker-compose?
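For comparison, a sketch (not from the original workflow) of a more typical approach: cache the built image itself with docker save/load instead of caching the build-context directory. The cache path, key, and image name fhe_app_web are assumptions. The steps section could look like:
    steps:
      - uses: actions/checkout@v2
      - name: Cache Docker image
        uses: actions/cache@v2
        with:
          path: /tmp/docker-cache
          key: docker-cache-${{ hashFiles('fhe_app/Dockerfile', 'fhe_app/local_requirements.txt') }}
      - name: Load cached image
        run: |
          if [ -f /tmp/docker-cache/image.tar ]; then docker load -i /tmp/docker-cache/image.tar; fi
      - name: Build the stack
        run: docker-compose up -d --build
        working-directory: fhe_app/
      - name: Save image to cache
        run: |
          mkdir -p /tmp/docker-cache
          docker save -o /tmp/docker-cache/image.tar fhe_app_web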
This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and I build a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figure the following lines in my Dockerfile should make my code available when it's built, but it appears it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM image
COPY . /opt/app
WORKDIR /opt/app
# ...more commands...
ENTRYPOINT ["..."]
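A more concrete sketch under the assumptions of the question (a Python project with manage.py, so presumably Django); the base image, paths, and port are illustrative, not taken from the original Dockerfile:
FROM python:3
ENV APPLICATION_ROOT=/app/
WORKDIR $APPLICATION_ROOT
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code into the image so the tests can find it in CI
COPY . $APPLICATION_ROOT
CMD ["./manage.py", "runserver", "0.0.0.0:8000"]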