I'm trying to build a docker-compose file that includes my app as one service and TestCafe as another. Both containers are built and initialized, but I can't get TestCafe to wait until my app is available before it starts running the tests.
I've tried passing --app-init-delay 30000 as a parameter to testcafe-docker.sh, but it gets ignored:
entrypoint: ["/opt/testcafe/docker/testcafe-docker.sh", "'chromium --no-sandbox'", "--app-init-delay 30000", "e2e"]
I also tried using the wait-for script (https://github.com/Eficode/wait-for), both in command and in the entrypoint, before calling testcafe-docker.sh. In command it seems to conflict with the entrypoint; in the entrypoint TestCafe does wait, but instead of running the tests it ends with 'Operation timed out':
entrypoint: ['/script/wait-for', 'app:8080 -- "/opt/testcafe/docker/testcafe-docker.sh chromium --no-sandbox e2e"']
(It seems that all of wait-for's parameters need to be in the same array entry for it to work as an entrypoint script.)
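For reference, in exec form every argument must be its own array element, so a correctly tokenized attempt would look something like this sketch (quoting 'chromium --no-sandbox' as a single argument follows the commented-out entrypoint below):
entrypoint: ["/script/wait-for", "app:8080", "--", "/opt/testcafe/docker/testcafe-docker.sh", "chromium --no-sandbox", "e2e"]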
This is my docker-compose file:
version: "2"
services:
app:
container_name: app
build: ./dist/docker/
ports:
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./dist/docker/dependency:/dependency
testcafe:
container_name: testcafe
image: testcafe/testcafe
depends_on:
- app
volumes:
- ./test/e2e:/e2e
- ./package.json:/package.json
- ./package-lock.json:/package-lock.json
- ./script:/script
entrypoint: ['/script/wait-for', 'app:8080 -- "/opt/testcafe/docker/testcafe-docker.sh chromium --no-sandbox e2e"']
# entrypoint: ["/opt/testcafe/docker/testcafe-docker.sh", "'chromium --no-sandbox'", "--app-init-delay 30000", "e2e"]
It seems that I'm very close to solving this with wait-for, but somehow my entrypoint syntax is incorrect.
You can do it more simply:
testcafe:
  container_name: testcafe
  image: testcafe/testcafe
  depends_on:
    - app
  volumes:
    - ./test/e2e:/e2e
    - ./package.json:/package.json
    - ./package-lock.json:/package-lock.json
    - ./script:/script
  entrypoint: ['/script/run.sh']
Create run.sh in the script folder and make it executable:
#!/bin/bash
# Wait (up to 60s) for the app to accept connections on port 8080, then run the tests
/script/wait-for app:8080 -t 60 -- /opt/testcafe/docker/testcafe-docker.sh chromium --no-sandbox e2e
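For completeness, a typical way to wire this up from the host (the chmod step and the --abort-on-container-exit flag are suggestions, not part of the original setup; the flag tears the stack down once the TestCafe container exits):
chmod +x ./script/run.sh
docker-compose up --abort-on-container-exit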
I'd like to run some integration tests against a real database, but I'm failing to start an additional container (for the db), because I need to mount a config file from my repo before the container starts up.
This is how I use the database on my local computer (docker-compose):
gremlin-server:
  image: tinkerpop/gremlin-server:3.5
  container_name: 'gremlin-server'
  entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
  networks:
    - graphdb_net
  ports:
    - 8182:8182
  volumes:
    - ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
    - ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
I guess I cannot use a service container, as the repo's code is not yet available at the time the service container is started, so it won't pick up my configuration.
That's why I tried to run a container from within my job using --network host (see below). The container seems to be running fine, but I'm still not able to curl it.
- name: Start DB for tests
  run: |
    docker run -d \
      --network host \
      -v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
      -v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
      tinkerpop/gremlin-server:3.5
- name: Test connection
  run: |
    curl "localhost:8182/gremlin?gremlin=g.V().valueMap()"
According to the documentation on the job context, the ID of the container network should be available as ${{ job.container.network }}, but it is empty if you don't use any job-level service or container.
Any ideas what I could try next?
This is what I ended up with: I'm now using docker-compose to run the integration tests (on my local computer as well as on GitHub Actions) and simply mounting the entire directory/repo into the test container. Pulling node:14-slim delays the build by a few seconds, but I guess it's still the best option:
version: "3.2"
services:
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./data/:/opt/gremlin-server/data/
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
- ./conf/initData.groovy:/opt/gremlin-server/scripts/initData.groovy
test:
image: node:14-slim
working_dir: /app
depends_on:
- gremlin-server
networks:
- graphdb_net
volumes:
- ../:/app
environment:
- NEPTUNE_CONNECTION_STRING=ws://gremlin-server:8182
command:
yarn test
networks:
graphdb_net:
driver: bridge
and I'm running them like this in my workflow:
- name: Spin up test environment
  run: |
    docker compose -f dev/docker-compose.yaml pull
    docker compose -f dev/docker-compose.yaml build
- name: Run tests
  run: |
    docker compose -f dev/docker-compose.yaml run test
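One optional addition is a cleanup step, so containers don't linger between runs (a suggestion, not part of the original workflow; if: always() makes it run even when the tests fail):
- name: Tear down
  if: always()
  run: |
    docker compose -f dev/docker-compose.yaml down -v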
It's based on @DannyB's suggestion and his answer here, so all props go to him.
I'm trying to set up a docker-compose file for a Selenium grid that is able to change the number of nodes based on an environment variable.
selenoida:
  image: "aerokube/selenoid:latest"
  container_name: selenoid
  network_mode: bridge
  ports:
    - "0.0.0.0:4444:4444"
  volumes:
    - ".:/etc/selenoid"
    - "./target:/output"
    - "/var/run/docker.sock:/var/run/docker.sock"
    - "./target:/opt/selenoid/video"
  environment:
    - "OVERRIDE_VIDEO_OUTPUT_DIR=$PWD/target"
  env_file:
    - variables.env
  command: ["-limit", "$NODES", "-enable-file-upload", "-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video"]
But it looks like the command is not able to use variables. I tested with ${NODES} and NODES.
Any ideas on how to use env variables set from a file in commands?
A command in exec form will not invoke a shell, so the variables are not expanded.
You can use the command in shell form instead, or just overwrite the entrypoint so that a shell performs the expansion. Note the $$ below: it keeps Compose from interpolating the variable on the host, so the shell inside the container expands ${NODES} from variables.env:
entrypoint: ["sh", "-c", "/usr/bin/selenoid -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -enable-file-upload -limit $${NODES}"]
I'm building a Dockerfile, but I've run into a problem. It says:
/bin/sh: 1: mongod: not found
My Dockerfile:
FROM mongo:latest
FROM node
RUN mongod
COPY . .
RUN node ./scripts/import-data.js
Here is what happens when I run docker build:
Sending build context to Docker daemon 829.5MB
Step 1/8 : FROM rabbitmq
---> e8261c2af9fe
Step 2/8 : FROM portainer/portainer
---> 00ead811e8ae
Step 3/8 : FROM docker.elastic.co/elasticsearch/elasticsearch:6.5.1
---> 32f93c89076d
Step 4/8 : FROM mongo:latest
---> 5976dac61f4f
Step 5/8 : FROM node
---> b074182f4154
Step 6/8 : RUN mongod
---> Running in 0a4b66a77178
/bin/sh: 1: mongod: not found
The command '/bin/sh -c mongod' returned a non-zero code: 127
Any idea?
The problem is that you are using multiple FROM instructions, which is referred to as a multi-stage build. The final image will be based on the node image, which doesn't contain the mongo database.
* Edit *
Here are more details about what is happening:
FROM mongo:latest
the base image is mongo:latest
FROM node
Now the base image is node:latest; the previous image is just standing there, unused...
RUN mongod
COPY . .
RUN node ./scripts/import-data.js
Now you run mongod and the other commands in your final image, which is based on node (and therefore doesn't contain mongo).
This happens because multiple FROM instructions are meant for multi-stage builds (check the documentation), NOT for creating a single image that contains all of the listed applications.
Multi-stage builds give you the possibility of delegating the build process to a container's environment without installing the applications locally.
FROM rabbitmq
...some instructions require rabbitmq...
FROM mongo:latest
...some instructions require mongo...
In other words, if you want to create an image with rabbitmq, mongo, and other applications, you have to pick one base image and install the other applications into it manually.
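For contrast, a legitimate use of multiple FROM instructions looks like this: each FROM starts a new stage, and COPY --from pulls artifacts out of an earlier one (a generic sketch, not tied to the question's images):
# Stage 1: build with the full Node toolchain
FROM node:12 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: the final image ships only the build output
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html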
Use docker-compose (https://docs.docker.com/compose/install/) to run the images rather than attempting to build a new image from a collection of existing images. Your docker-compose.yml might look something like:
version: '3.7'
services:
  portainer:
    image: 'portainer/portainer'
    container_name: 'portainer'
    hostname: 'portainer'
    domainname: 'example.com'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - 'portainer_data:/data'
    ports:
      - '9000:9000'
  rabbitmq:
    image: 'rabbitmq'
    container_name: 'rabbitmq'
    hostname: 'rabbitmq'
    domainname: 'example.com'
    volumes:
      - 'rabbitmq_data:/var/lib/rabbitmq'
  elasticsearch:
    image: 'elasticsearch:7.1.1'
    container_name: 'elasticsearch'
    hostname: 'elasticsearch'
    domainname: 'example.com'
    environment:
      - 'discovery.type=single-node'
    volumes:
      - 'elasticsearch_data:/usr/share/elasticsearch/data'
    ports:
      - '9200:9200'
      - '9300:9300'
  node:
    image: 'node:12'
    container_name: 'node'
    hostname: 'node'
    domainname: 'example.com'
    user: 'node'
    working_dir: '/home/node/app'
    environment:
      - 'NODE_ENV=production'
    volumes:
      - './my-app:/home/node/app'
    ports:
      - '3000:3000'
    command: 'npm start'
  mongo:
    image: 'mongo'
    container_name: 'mongo'
    hostname: 'mongo'
    domainname: 'example.com'
    restart: 'always'
    environment:
      - 'MONGO_INITDB_ROOT_USERNAME=root'
      - 'MONGO_INITDB_ROOT_PASSWORD=example'
    volumes:
      - 'mongo_data:/data/db'
volumes:
  portainer_data:
  rabbitmq_data:
  elasticsearch_data:
  mongo_data:
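With a file like that in place, the whole stack comes up with a single command, e.g.:
docker-compose up -d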
I see, quite simple.
Step 1: create this Dockerfile:
FROM mongo:latest
Step 2: build an image from this Dockerfile:
docker build . -t my_mongo_build
This is equal to running docker run ..... mongo:latest directly; a one-line Dockerfile like this is only useful in some unusual scenarios.
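For instance (the container names are just examples), these two commands behave identically:
docker run -d --name db1 my_mongo_build
docker run -d --name db2 mongo:latest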
So I currently can use docker-compose up test, which only runs my database and my testing scripts. I want to be able to use something like docker-compose up app to run everything besides testing, so I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible, I'd appreciate some links to projects that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command:
      npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which runs only the test service and postgres. But I'd like to be able to run just my app without also starting my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional YAML file just for testing.
For those that don't want to look it up: simply create a new YAML file like so:
docker-compose.dev.yml
Replace dev with whatever you like, except override, which docker-compose up picks up automatically unless otherwise specified.
To run the new file, simply call:
docker-compose -f docker-compose.dev.yml up
The -f flag selects a specific file to run. You can also combine multiple files to set up different environments.
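For example, a base file plus an override can be combined like this (the file names are illustrative):
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up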
Appreciate the help
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also specify multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, so you can start only the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained via the depends_on option, with the testing container below the frontend. So when you hit docker-compose up frontend, Docker will run fpm first, then the backend, then the frontend, and it will ignore the testing container, which is not required for running the frontend.
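Concretely, with that compose file:
docker-compose up frontend   # starts fpm, backend and frontend; skips testing
docker-compose up testing    # starts the whole chain, including testing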
Starting with docker-compose 1.28.0, the new service profiles are made for exactly that! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
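The profile can also be selected through an environment variable, which is handy in CI (COMPOSE_PROFILES was introduced together with profiles):
COMPOSE_PROFILES=test docker-compose up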
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains how to combine multiple configuration files for different use cases (test, production, etc.).
When I use docker-compose, the application runs perfectly; however, when I use docker run, nothing happens.
I have a REST API (Express & MongoDB) behind an nginx proxy_pass.
Dockerfile:
FROM node:8-alpine
EXPOSE 3000
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
RUN mkdir /app
WORKDIR /app
ADD package.json yarn.lock /app/
RUN yarn --pure-lockfile
ADD . /app
CMD ["yarn", "start"]
Docker compose:
version: "2"
services:
api:
build: .
environment:
- NODE_ENV=production
command: yarn start
volumes:
- .:/app
ports:
- "3000:3000"
tty: true
depends_on:
- mongodb
restart: always
nginx:
image: nginx
depends_on:
- api
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- mongodb
restart: always
mongodb:
image: mongo
ports:
- "27017:27017"
restart: always
When I use docker-compose, the application runs perfectly; however, when I use docker run, nothing happens.
That seems expected, since docker run runs a single image, as opposed to docker compose, which runs a multi-container Docker application.
You would need to run all the images yourself, starting them in the right order, for anything to happen.
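For illustration, reproducing that compose file by hand would look roughly like the sketch below. The network name and the my-api image tag are assumptions (the image would come from docker build -t my-api .); a user-defined network is what lets the containers resolve each other by name:
docker network create api-net
docker run -d --name mongodb --network api-net -p 27017:27017 mongo
docker run -d --name api --network api-net -p 3000:3000 -e NODE_ENV=production my-api
docker run -d --name nginx --network api-net -p 80:80 -v $(pwd)/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx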