How to make docker-compose not strip quotes? - docker-compose

I'm using Selenium, pytest, and pytest-bdd to test a browser-based system. I'd like to run pytest-bdd tag expressions such as pytest -m "apple and banana" (to run the #apple #banana tagged test scenarios), with "apple and banana" injected at run time from outside docker-compose.
I think I can handle inserting "apple and banana" into docker-compose, but I'm struggling to keep docker-compose from stripping the quotes when it runs pytest.
How can I make docker-compose run pytest -m "apple and banana" without stripping the quotes?
pytest-container:
  image: AWS ECR Image address here
  container_name: pytest-container
  entrypoint: [ "/bin/sh", "./docker_configs/wait-for-grid.sh" ]
  command:
    - >
      pytest -m "apple and banana"
  depends_on:
    - selenium-hub
    - chrome
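For what it's worth, the quotes are consumed by whatever shell re-parses the command string; in compose's exec (list) form each element is delivered to the container as a single argument, so no quoting is needed at all. A sketch, assuming wait-for-grid.sh ends by exec-ing its arguments with exec "$@":

```yaml
command: ["pytest", "-m", "apple and banana"]
```

Here "apple and banana" stays one argument to pytest no matter how many spaces it contains.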


Invalid compose name even docker-compose.yaml exists

I'm trying to upload my Rasa chatbot to Okteto via Docker, so I've implemented a Dockerfile, a docker-compose.yaml, and an okteto.yaml. For the past few weeks the code worked fine. Today it won't work anymore, because Okteto gives the error: Invalid compose name: must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character exit status 1.
I really don't understand what I should change. Thanks.
docker-compose.yaml:
version: '3.4'
services:
  rasa-server:
    image: rasa-bot:latest
    working_dir: /app
    build: "./"
    restart: always
    volumes:
      - ./actions:/app/actions
      - ./data:/app/data
    command: bash -c "rm -rf .rasa/* && rasa train && rasa run --enable-api --cors \"*\" -p 5006"
    ports:
      - '5006:5006'
    networks:
      - all
  rasa-actions-server:
    image: rasa-bot:latest
    working_dir: /app
    build: "./"
    restart: always
    volumes:
      - ./actions:/app/actions
    command: bash -c "rasa run actions"
    ports:
      - '5055:5055'
    networks:
      - all
networks:
  all:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "true"
Dockerfile:
FROM python:3.7.13 AS BASE
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["./bot.py"]
RUN pip install --no-cache-dir --upgrade pip
RUN pip install rasa==3.3.0
ADD config.yml config.yaml
ADD domain.yml domain.yaml
ADD credentials.yml credentials.yaml
ADD endpoints.yml endpoints.yaml
okteto.yml:
name: stubu4ewi
autocreate: true
image: okteto.dev/rasa-bot:latest
command: bash
volumes:
  - /root/.cache/pip
sync:
  - .:/app
forward:
  - 5006:5006
reverse:
  - 9000:9000
Error
Found okteto manifest on /okteto/src/okteto.yml
Unmarshalling manifest...
Okteto manifest unmarshalled successfully
Found okteto compose manifest on docker-compose.yaml
Unmarshalling compose...
x Invalid compose name: must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character
exit status 1
I don't have any clue what went wrong. It worked fine until yesterday, and even though nothing changed, Okteto gives this error.
I tried renaming docker-compose.yaml to docker-compose.yml and okteto-compose.yml.
That error is not about the file's name itself but the name of the services defined inside your docker-compose.yaml file.
What command did you run, and what version of the okteto cli are you using? okteto version will give it to you.
If you ever face this problem: rename your repo so that it consists only of lower case alphanumeric characters or '-', and starts and ends with an alphanumeric character.
It seems Okteto uses the repository name to build the images.
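To make the constraint from the error message concrete, here is a hypothetical checker (the helper name and regex are mine, derived only from the wording of the error) you can run against a repo or service name before pushing:

```python
import re

# Mirrors the error message's rule: lower-case alphanumerics or '-',
# and the name must start and end with an alphanumeric character.
NAME_RE = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def is_valid_compose_name(name):
    """Return True if `name` satisfies the constraint in the error message."""
    return bool(NAME_RE.match(name))
```

For example, is_valid_compose_name("rasa-bot") passes, while a repository named like "My_Repo" fails (upper case and underscores are rejected).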

How to add scripts to image before running them in gitlab CI

I am trying to run a CI job in gitlab, where the integration tests depend on postgresql.
In GitLab I've used the postgres service. The issue is that the integration tests require the extension uuid-ossp. I could run the SQL commands before each test to ensure the extension is applied, but I'd rather apply it once before running all the tests.
So I've used the image tag in the CI script to add an .sh file to the postgres image in /docker-entrypoint-initdb.d/, and then try to run the integration tests with the same image. The problem is that it doesn't seem to apply the extension, as the integration tests fail where the uuid functions are used -- function uuid_generate_v4() does not exist
prep-postgres:
  stage: setup-db
  image: postgres:12.2-alpine
  script:
    - echo "#!/bin/bash
      set -e
      psql \"$POSTGRES_DB\" -v --username \"$POSTGRES_USER\" <<-EOSQL
        create extension if not exists \"uuid-ossp\";
      EOSQL" > /docker-entrypoint-initdb.d/create-uuid-ossp-ext.sh
  artifacts:
    untracked: true

test-integration:
  stage: test
  services:
    - postgres:12.2-alpine
  variables:
    POSTGRES_DB: db_name
    POSTGRES_USER: postgres
  script:
    - go test ./... -v -race -tags integration
An alternative I was hoping would work was:
prep-postgres:
  stage: setup-db
  image: postgres:12.2-alpine
  script:
    - psql -d postgresql://postgres@localhost:5432/db_name -c "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";"
  artifacts:
    untracked: true
But in this case the client is unable to connect to postgres (I imagine it's because I'm editing the image, not running it?).
I must be missing something obvious -- or is this even possible?
In both cases, in the prep-postgres job you make changes in a running container (from the postgres:12.2-alpine image) but you don't save these changes, so the test-integration job can't use them.
I advise you to build your own image using a Dockerfile and the entrypoint script for the Postgres Docker image. This answer from @Elton Stoneman could help.
After that, you can refer to your previously built image in services: in the test-integration job, and you will benefit from the created extension.
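A minimal sketch of that approach (the script filename is the one from the question; the base image is the same one the pipeline already uses):

```dockerfile
# Custom postgres image that applies the uuid-ossp extension on first start.
# The official postgres entrypoint runs every script placed in
# /docker-entrypoint-initdb.d/ when the data directory is initialized.
FROM postgres:12.2-alpine
COPY create-uuid-ossp-ext.sh /docker-entrypoint-initdb.d/
```

Build and push this image once, then reference it under services: in test-integration in place of postgres:12.2-alpine.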
At the moment I've had to do something a little smelly: download the postgres client before running the extension installation.

.prepare_db: &prepare_db |
  apt update \
  && apt install -y postgresql-client \
  && psql -d postgresql://postgres@localhost/db_name -c "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";"
test-integration:
  stage: test
  services:
    - postgres:12.2-alpine
  variables:
    POSTGRES_DB: db_name
    POSTGRES_USER: postgres
  script:
    - *prepare_db
    - go test ./... -v -race -tags integration
This isn't perfect. I was hoping there was a way to save the state of the docker image between stages, but there doesn't seem to be that option. So the options seem to be either:
install it during the test-integration stage, or
create a base image specifically for this purpose, where the installation of the extension has already been done.
I've gone with option 1 for now, but will reply if I find something more concise, easier to maintain, and faster.

Send arguments to a Job

I have a docker image that basically runs a one-time script. That script takes 3 arguments. My Dockerfile is:
FROM <some image>
ARG URL
ARG USER
ARG PASSWORD
RUN apt update && apt install curl -y
COPY register.sh .
RUN chmod u+x register.sh
CMD ["sh", "-c", "./register.sh $URL $USER $PASSWORD"]
When I spin up the container using docker run -e URL=someUrl -e USER=someUser -e PASSWORD=somePassword -itd <IMAGE_ID>, it works perfectly fine.
Now I want to deploy this as a Kubernetes Job.
My basic Job looks like:
apiVersion: batch/v1
kind: Job
metadata:
  name: register
spec:
  template:
    spec:
      containers:
        - name: register
          image: registeration:1.0
          args: ["someUrl", "someUser", "somePassword"]
      restartPolicy: Never
  backoffLimit: 4
But the pod errors out with:
Error: failed to start container "register": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"someUrl\": executable file not found in $PATH"
Looks like it is taking my args as commands and trying to execute them. Is that correct? What can I do to fix this?
In the Dockerfile as you've written it, two things happen:
The URL, username, and password are fixed in the image. Anyone who can get the image can run docker history and see them in plain text.
The container startup doesn't take any arguments; it just runs the single command with its fixed set of arguments.
Especially since you're planning to pass these arguments in at execution time, I wouldn't bother trying to include them in the image. I'd reduce the Dockerfile to:
FROM ubuntu:18.04
RUN apt update \
&& DEBIAN_FRONTEND=noninteractive \
apt install --assume-yes --no-install-recommends \
curl
COPY register.sh /usr/bin
RUN chmod u+x /usr/bin/register.sh
ENTRYPOINT ["register.sh"]
When you launch it, the Kubernetes args: get passed as command-line parameters to the entrypoint. (It is the same thing as the Docker Compose command: and the free-form command at the end of a plain docker run command.) Making the script be the container entrypoint will make your Kubernetes YAML work the way you expect.
In general I prefer using CMD to ENTRYPOINT. (Among other things, it makes it easier to docker run --rm -it ... /bin/sh to debug your image build.) If you do that, then the Kubernetes args: need to include the name of the script it's running:
args: ["./register.sh", "someUrl", "someUser", "somePassword"]
Use:
args: ["sh", "-c", "./register.sh someUrl someUser somePassword"]
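Alternatively, since register.sh as written reads $URL, $USER, and $PASSWORD from the environment, the Job can mirror the docker run -e flags directly instead of passing positional arguments. A sketch of just the container spec (values are the question's placeholders):

```yaml
containers:
  - name: register
    image: registeration:1.0
    env:
      - name: URL
        value: "someUrl"
      - name: USER
        value: "someUser"
      - name: PASSWORD
        value: "somePassword"
```

This keeps the original CMD ["sh", "-c", "./register.sh $URL $USER $PASSWORD"] working unchanged.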

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and builds a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figured the following lines in my Dockerfile would make my code available when it's built, but it appears it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM <your base image>
COPY . /opt/app
WORKDIR /opt/app
# ... more commands ...
ENTRYPOINT ["..."]

Executing wait-for-it.sh in python Docker container

I have a Python docker container that needs to wait until another container (a postgres server) finishes setup. I tried the standard wait-for-it.sh, but several commands it relies on weren't included in the image. I tried a basic sleep (again in an sh file), but now it reports exec: 300: not found when it finally tries to execute the command I'm waiting on.
How do I get around this (preferably without changing the image, or having to extend an image)?
I know I could also just run a Python script, but ideally I'd like to use wait-for-it.sh to wait for the server to come up rather than just sleep.
Dockerfile (for stuffer):
FROM python:2.7.13
ADD ./stuff/bin /usr/local/bin/
ADD ./stuff /usr/local/stuff
WORKDIR /usr/local/bin
COPY requirements.txt /opt/updater/requirements.txt
COPY internal_requirements.txt /opt/stuff/internal_requirements.txt
RUN pip install -r /opt/stuff/requirements.txt
RUN pip install -r /opt/stuff/other_requirements.txt
docker-compose.yml:
version: '3'
services:
  local_db:
    build: ./local_db
    ports:
      - "localhost:5432:5432"
  stuffer:
    build: ./
    depends_on:
      - local_db
    command: ["./wait-for-postgres.sh", "-t", "300", "localhost:5432", "--", "python", "./stuffing.py", "--file", "./afile"]
Script I want to use (but can't because no psql or exec):
#!/bin/bash
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
Sergey's comment was the answer: I had the argument order wrong. This issue had nothing to do with docker and everything to do with my inability to read.
I made an example so you can see it working:
https://github.com/nitzap/wait-for-postgres
On the other hand, you can also get errors from the script used to validate that the service is up. You should not refer to localhost, because that resolves within the container's own context; if you want to reach another container, it has to be through the name of its service.
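Since the stuffer container already has Python, one way to wait without psql at all is a small standard-library TCP probe pointed at the service name (local_db in the compose file above). This is a sketch; the function name, timeout, and interval are my own choices, and a successful TCP connect only shows the port is open, not that initdb has fully finished:

```python
import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=1.0):
    """Return True once a TCP connection to (host, port) succeeds,
    or False if `timeout` seconds elapse first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError (refused/unreachable/timeout)
            # until the server is accepting connections.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

In the compose file this could run as a small step before stuffing.py, e.g. wait_for_port("local_db", 5432) instead of the psql loop.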