Cannot cache GitHub Action with docker-compose

I'm trying to create a cache for the following GitHub Action:
name: dockercompose
on: push
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Cache Docker Compose
        id: cache-docker
        uses: actions/cache@v1
        with:
          path: fhe_app/
          key: cache-docker
      - name: Build the stack
        run: docker-compose up -d
        working-directory: fhe_app/
with the following Dockerfile:
FROM tensorflow/tensorflow:nightly-py3
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN python3 -m pip install --upgrade pip
COPY local_requirements.txt /usr/src/app/local_requirements.txt
RUN \
    apt-get update && \
    apt-get -y install python3 postgresql-server-dev-10 gcc python3-dev musl-dev netcat
RUN python3 -m pip install -r local_requirements.txt
# copy entrypoint.sh
COPY entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x entrypoint.sh
# copy project
COPY . /usr/src/app/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
When pushing to Github, instead of a success message, I get:
Cache Docker Compose
Cache not found for input keys: cache-docker.
And:
Post Cache Docker Compose
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/e13e2694-e020-476d-888e-cb29cb9184b6/cache.tgz -C /home/runner/work/fhe_server/fhe_server/fhe_app .
/bin/tar: ./app: file changed as we read it
##[warning]The process '/bin/tar' failed with exit code 1
I have other YAML files that don't use Docker and cache properly, so the overall structure of the YAML should be fine. Is this the right way to cache docker-compose?
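For what it's worth, actions/cache only snapshots a directory on disk; it never captures Docker image layers, and the tar warning above is the post step archiving fhe_app/ while the containers that docker-compose just started were still writing into it. If the goal is to avoid rebuilding images from scratch, a common pattern is to cache Buildx's layer store instead. A minimal sketch, assuming the docker/setup-buildx-action and docker/build-push-action actions (the cache path and keys are illustrative):

- name: Cache Docker layers
  uses: actions/cache@v1
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-buildx-${{ github.sha }}
    restore-keys: |
      ${{ runner.os }}-buildx-
- uses: docker/setup-buildx-action@v1
- name: Build image with a persisted layer cache
  uses: docker/build-push-action@v2
  with:
    context: fhe_app/
    cache-from: type=local,src=/tmp/.buildx-cache
    cache-to: type=local,dest=/tmp/.buildx-cache

You would still need to tag and load the image (build-push-action's tags: and load: true inputs) so that docker-compose up reuses it instead of rebuilding.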

Related

Running Elixir on Buildkite with docker-compose fails with dependencies

I have the following Dockerfile for an Elixir + Phoenix app:
FROM elixir:latest as build_base
RUN apt-get -y update
RUN apt-get -y install inotify-tools curl
ARG TARGETARCH
RUN if [ "${TARGETARCH}" = "arm64" ]; then \
      curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-${TARGETARCH}.tar.gz; \
    else \
      curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-x64.tar.gz; \
    fi
RUN tar -xvf /tmp/dart-sass.tar.gz -C /tmp
RUN mv /tmp/dart-sass/sass /usr/local/bin/sass
RUN mkdir -p /app
WORKDIR /app
COPY mix.* ./
RUN mix local.hex --force
RUN mix archive.install hex phx_new --force
RUN mix local.rebar --force
RUN mix deps.clean --all
RUN mix deps.get
RUN mix --version
RUN mix deps.compile
COPY assets assets
COPY vendor vendor
COPY lib lib
COPY config config
COPY priv priv
COPY test test
RUN mix compile
The docker-compose file looks like the following:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
    ports:
      - "80:80"
    command: mix phx.server
I'm trying to run docker-compose as part of the build step in Buildkite. This is an extract of the Buildkite step:
- label: "run web"
  key: "web"
  commands:
    - mix phx.server
  plugins:
    - docker-compose#v4.9.0:
        run: web
        config: docker-compose.yml
However, when running web I see everything happen properly, including the package installation; but when the application starts I see the following error:
web_1 | Unchecked dependencies for environment dev:
web_1 | * telemetry_metrics (Hex package)
web_1 | the dependency is not available, run "mix deps.get"
And the list goes on and on. This works fine on my local machine; it only fails when running on Buildkite. Does anyone have any idea how to fix this?
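One guess, for what it's worth: the bind mount ./:/app replaces everything the image built into /app, including deps/ and _build/, with the bare checkout on the Buildkite agent, which would explain why mix reports unchecked dependencies even though the build stage fetched and compiled them. A sketch of a common workaround that shadows those two directories with named volumes (the volume names are illustrative):

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
      - elixir_deps:/app/deps     # named volume shadows the bind mount, keeping the image's deps
      - elixir_build:/app/_build  # same for compiled artifacts
    ports:
      - "80:80"
    command: mix phx.server
volumes:
  elixir_deps:
  elixir_build:

If live code reloading isn't needed in CI, simply dropping the ./:/app bind mount is the simpler fix.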

Run Pytest in Gitlab-CI as User

I had the following gitlab-ci.yml in my python-package repository:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
using this tox.ini file:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
This did work as I wanted it to.
However, I then added tests that run my code against a local PostgreSQL database using https://pypi.org/project/pytest-postgresql/. For this, I had to install PostgreSQL (apt -y install postgresql postgresql-contrib libpq5).
When I added this to my gitlab-ci.yml:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - apt -y install postgresql postgresql-contrib libpq5
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
I got an error from tox that a PostgreSQL component (pg_ctl) refuses to be run as root. Log here: https://pastebin.com/fMu1JY5L
So I need to execute tox as a non-root user.
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
From a quick Google search, I found that the easiest way to get such a user is to build a custom Docker image and run it with Docker-in-Docker.
So, as of now I have this configuration:
gitlab-ci.yml:
image: docker:19.03.12

services:
  - docker:19.03.12-dind

stages:
  - build
  - test

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker info

docker-build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

formatting-check:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE black --check .

unit-test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE tox
Dockerfile:
FROM python:latest
RUN apt update
RUN apt -y install postgresql postgresql-contrib libpq5
RUN useradd -m exec_user
USER exec_user
ENV PATH "$PATH:/home/exec_user/.local/bin"
RUN pip install black tox
(I had to add ENV PATH "$PATH:/home/exec_user/.local/bin" because pip would complain about it not being on the PATH.)
tox.ini:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
The docker-build job completes; the other two fail.
As for formatting-check:
$ docker run $CONTAINER_TEST_IMAGE black --check .
ERROR: Job failed: execution took longer than 1h0m0s seconds
The black command usually executes extremely fast (<1s).
As for unit-test:
/bin/sh: eval: line 120: tox: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
I have also found that replacing docker run $CONTAINER_TEST_IMAGE tox with docker run $CONTAINER_TEST_IMAGE python3 -m tox doesn't work either: here python3 isn't found, which seems odd given that the base image is python:latest.
If you have any idea how to solve this issue, let me know :D
"My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password."
This should work for your use case: running su as root will not require a password. You could also use sudo -u postgres tox (you must apt install sudo first).
Here is a basic working example using su with the postgres user, which is created automatically when postgres is installed:
myjob:
  image: python:3.9-slim
  script:
    - apt update && apt install -y --no-install-recommends libpq-dev postgresql-client postgresql build-essential
    - pip install psycopg2 psycopg pytest pytest-postgresql
    - su postgres -c pytest
    # or in your case: su postgres -c tox
Alternatively, you might consider just using GitLab's services feature to run your postgres server, if that's the only obstacle in your way. You can pass --postgresql-host and --postgresql-password to pytest to tell the extension to use the service.
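A minimal sketch of that services approach, using the pytest-postgresql options mentioned above (the service tag and password are placeholders to adapt):

unit-test:
  stage: test
  image: python:latest
  services:
    - postgres:14
  variables:
    POSTGRES_PASSWORD: ci-password   # read by the postgres service image on startup
  script:
    - pip install tox
    - tox -- --postgresql-host=postgres --postgresql-password=ci-password

Note that for the extra flags to reach pytest through tox, the commands line in tox.ini needs a {posargs} placeholder, e.g. python -m pytest tests -s {posargs}.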

Docker-Compose up Failed Because `Service 'nginx' failed to build`

I'm new to Docker and have been trying to troubleshoot this error for a while. I've read similar posts and nothing seems to work.
Full error:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to copy: httpReadSeeker: failed open: could not fetch content descriptor sha256:eff196a3849ad6541fd3afe676113896be214753740e567575bb562986bd2cd4 (application/vnd.docker.distribution.manifest.v1+json) from remote: not found
ERROR: Service 'nginx' failed to build : Build failed
I have three Dockerfiles, one for my react frontend, one for my django backend, and one for nginx.
Frontend Dockerfile:
COPY ./react_app/package.json .
RUN apk add --no-cache --virtual .gyp \
        python \
        make \
        g++ \
    && npm install \
    && apk del .gyp
COPY ./react_app .
ARG API_SERVER
ENV REACT_APP_API_SERVER=${API_SERVER}
RUN REACT_APP_API_SERVER=${API_SERVER} \
    npm run build
WORKDIR /usr/src/app
RUN npm install -g serve
COPY --from=builder /usr/src/app/build ./build
Backend Dockerfile
###########
# BUILDER #
###########
# pull official base image
FROM python:3.7.9-slim-stretch as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.7.9-slim-stretch
# install netcat (nc), used in entrypoint.sh to wait for the postgres server
RUN apt-get update && apt-get install -y --no-install-recommends netcat && \
apt-get autoremove -y && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install dependencies
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*
# set work directory
WORKDIR /usr/src/app
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
# copy our django project
COPY ./django_app .
# run entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Nginx Dockerfile
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
WORKDIR /usr/src/app
I don't know where to go from here. I've tried following five or six similar Stack Overflow answers and many more GitHub issues, to no avail. Thanks; please let me know.
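Not a definitive fix, but this class of "manifest ... not found" error usually points at a stale or partially cached base image rather than at the Dockerfiles themselves. Some diagnostics worth trying, assuming the classic docker-compose CLI:

docker pull nginx:1.19.0-alpine                 # pull the base image directly; a registry error here is the real culprit
docker builder prune                            # clear BuildKit's cached layers and manifests
docker-compose build --no-cache --pull nginx    # rebuild only the failing service from scratch
DOCKER_BUILDKIT=0 docker-compose build nginx    # retry with the legacy builder to rule out BuildKit

If the direct pull fails too, moving the nginx base image to a newer tag is the usual way out.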

docker-compose environment variables inside container

I have a small Python app developed with Docker containers.
My setup is:
Dockerfile
FROM python:3
ARG http_proxy
ARG https_proxy
ENV http_proxy ${http_proxy}
ENV https_proxy ${https_proxy}
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN apt-get update
RUN apt install -y vim screen
RUN \
    echo 'alias py="/opt/venv/bin/python"' >> /root/.bashrc && \
    echo 'alias ls="ls --color=auto"' >> /root/.bashrc && \
    echo 'PS1="\u@\h:\[\e[33m\]\w\[\e[0m\]\$ "' >> /root/.bashrc
RUN python3 -m venv $VIRTUAL_ENV
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
docker-compose.yml
version: '3.8'

x-dev: &proxy_conf
  http_proxy: "${HTTP_PROXY}"
  https_proxy: "${HTTPS_PROXY}"

services:
  py_service:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
      args: *proxy_conf
    image: ${APP_NAME}_img
    volumes:
      - '.:/app'
    restart: always
    command: tail -f /dev/null
.env
HTTP_PROXY=<http_proxy_server_here>
HTTPS_PROXY=<https_proxy_server_here>
APP_NAME=python_app
The problem is that if the proxy server changes, I need to rebuild the image, and I want to avoid that (as a last resort I will do it).
What I'm trying to do is change the proxy environment variables inside the running container, but I can't find the file where the environment is stored.
The container OS version is:
[root@5b1b77079e10 ~ >>>] $ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
You should only need to recreate containers, not rebuild the image. I assume you are doing something like this to get everything up initially:
docker-compose build
docker-compose up -d
Then I assume you are updating your .env file; once you do that, you should be able to just do the following for your container to pick up the change:
docker-compose down
docker-compose up -d
You should not need to do a docker-compose build again.
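One caveat worth adding: in the compose file above, the proxy values are only passed as build args, so the resulting ENV lines are baked into the image, and recreating alone won't change what a container built from that image sees. A sketch of additionally injecting them at container creation, reusing the same YAML anchor, so that down/up really is enough (this mirrors the file above, not a verified drop-in):

x-dev: &proxy_conf
  http_proxy: "${HTTP_PROXY}"
  https_proxy: "${HTTPS_PROXY}"

services:
  py_service:
    build:
      context: .
      dockerfile: Dockerfile
      args: *proxy_conf
    environment: *proxy_conf   # set at container creation, so a recreate picks up .env changes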

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and builds a new image for the web container. However, my working code is not inside the freshly built image (so cd myDir always fails).
I figured the following lines in my Dockerfile would make my code available in the image when it's built, but it appears it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM image
COPY . /opt/app
WORKDIR "/opt/app"
# (more commands)
ENTRYPOINT
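As additional context: with CircleCI's setup_remote_docker, builds and containers run on a separate remote Docker host, so volume mounts from the job's primary container do not work; the checkout genuinely has to be copied into the image at build time. Under that assumption, the test step would rebuild the web image before running it, e.g.:

docker-compose -f docker-compose.test.yml build web   # bakes the fresh checkout in via COPY/ADD
docker-compose -f docker-compose.test.yml up -d db es redis
docker-compose -f docker-compose.test.yml run web bash -c "cd myDir && ./manage.py test"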