Run specific spec files on cypress docker-compose setup - docker-compose

I have a docker-compose file to run Cypress tests, but it picks up all the spec files in the integration folder and runs every test. I want to run only a subset of the spec files, for example a single specific spec file.
I tried using the command: option to make Cypress run the specific file, but that did not help. Is there any way to run a specific spec file with this docker-compose setup?
version: '3.2'
services:
  cypress:
    image: 'cypress/included:6.6.0'
    environment:
      - CYPRESS_environment=test
    working_dir: /test
    volumes:
      - ./:/test

If you look at the Dockerfile, it uses an ENTRYPOINT, so you can use the command parameter in your compose file to run a specific file:
version: '3.2'
services:
  cypress:
    image: 'cypress/included:6.6.0'
    environment:
      - CYPRESS_environment=test
    working_dir: /test
    volumes:
      - ./:/test
    command: "--spec /test/integration/mytest.js"
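If you need more than one file, the same --spec argument also accepts a comma-separated list or a glob pattern. A sketch, where the spec paths besides mytest.js are hypothetical:

```yaml
# run two named specs (the second path is illustrative)
command: "--spec /test/integration/mytest.js,/test/integration/other.js"
# or everything under one folder, via a glob:
# command: "--spec /test/integration/smoke/**/*"
```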

Related

docker-compose - NestJS container cannot access RethinkDB container

Problem
I am trying to containerize a full-stack app. For now I am putting the front-end aside, so I am trying to set up only three containers:
PostgreSQL
RethinkDB
NestJS
But when I try to run my containers with
docker-compose up
the NestJS container can't access the RethinkDB container.
Code
docker-compose.yaml
version: "3.9"
services:
  opm_postgres:
    container_name: opm_postgres_1
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: *******
      POSTGRES_USER: postgres
    volumes:
      - 'opm_postgres:/var/lib/postgresql/data'
  opm_adminer:
    container_name: opm_adminer_1
    image: adminer
    restart: always
    ports:
      - 8085:8080
  opm_rethink:
    container_name: opm_rethink_1
    image: rethinkdb
    restart: always
    ports:
      - 28016:28015
      - 8084:8080
    volumes:
      - 'opm_rethink:/data'
  opm_back:
    container_name: opm_back_1
    build: ../OPM-back
    restart: always
    ports:
      - "3000:3000"
volumes:
  opm_postgres:
  opm_rethink:
NestJS Dockerfile (coming from : Ultimate Guide: NestJS Dockerfile For Production [2022])
# Base image
FROM node:14
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
# Creates a "dist" folder with the production build
RUN npm run build
# Start the server using the production build
CMD [ "node", "dist/main.js" ]
Logs
docker-compose up
docker ps
Additional info
I used the containers names as DB hosts, both for RethinkDB and PostgreSQL.
Also, when I comment out the RethinkDB part in my docker-compose.yaml, everything works fine: I can call a route on my NestJS API and it queries my PostgreSQL db correctly. The problem seems to be specific to RethinkDB.

Docker-Compose cannot find config env file

I've created an image from my Go project that has a config env file. Here's my Dockerfile:
FROM alpine AS base
RUN apk add --no-cache curl wget
FROM golang:1.15 AS go-builder
WORKDIR /go/app
COPY . /go/app
RUN GO111MODULE=on CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o /go/app/main ./main.go
FROM base
COPY --from=go-builder /go/app/main /main
CMD ["/main"]
I also created a docker-compose file to connect with PostgreSQL, like this:
version: "3.7"
services:
  postgres:
    container_name: postgres
    image: postgres
    ports:
      - 5432:5432
    networks:
      - go_network
  app-golang:
    container_name: app1
    image: app-go:1.0
    ports:
      - 8000:8000
    depends_on:
      - postgres
    networks:
      - go_network
    env_file:
      - /env/config
networks:
  go_network:
    name: go_network
My env file is in config file format and, as seen above, I store it at /env/config. The problem is that when I run docker-compose up -d, the log says it cannot find /env/config: ERROR: Couldn't find env file: /env/config. Does it work differently when reading from a config-format file?
EDIT 1:
My env file is in config format as shown below, and I'm using the "github.com/kenshaw/envcfg" package to read the config file with envcfg:
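One thing worth checking: a relative env_file path is resolved relative to the directory containing the docker-compose file, while a leading / makes it an absolute path on the host. If the file actually lives in an env/ folder next to the compose file, a relative path should work. A sketch, assuming that layout:

```yaml
# path is resolved relative to the docker-compose.yml location
env_file:
  - ./env/config
```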

How to specify multiple "env_files" for a docker compose service?

Currently I have setup my service like the following.
version: '3'
services:
  gateway:
    container_name: gateway
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./services/gateway:/services/gateway
      - ./packages:/packages
      - ./node_modules:/node_modules
    env_file: .env
    command: yarn run ts-node-dev services/gateway --colors
    ports:
      - 3000:3000
So I have specified one env_file. But now I want to pass multiple .env files.
Unfortunately, the following is not possible:
env_files:
  - .env.secrets
  - .env.development
Is there any way to pass multiple .env files to one service in docker-compose?
You can specify multiple env files on the env_file option (without s).
For instance:
version: '3'
services:
  hello:
    image: alpine
    entrypoint: ["sh"]
    command: ["-c", "env"]
    env_file:
      - a.env
      - b.env
Note that, complementary to @conradkleineespel's answer, if an environment variable is defined in multiple .env files listed under env_file, the value found in the last file in the list overwrites all prior ones (tested in a docker-compose file with version: '3.7').
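That last-one-wins merge can be sketched outside of compose with plain shell, sourcing the files in the same order compose reads env_file entries (the file names here are made up):

```shell
# two env files where TAG collides
printf 'TAG=one\nCOLOR=red\n' > a.env
printf 'TAG=two\n' > b.env

# source them in order; the later file overwrites the earlier TAG
set -a
. ./a.env
. ./b.env
set +a

echo "$TAG $COLOR"   # prints: two red
```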

Get docker-compose up to only run certain containers

So I can currently use "docker-compose up test", which only runs my database and my testing scripts. I want to be able to say "docker-compose up app", or something like that, that runs everything besides testing, so I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible, I'd appreciate some links to projects that already do this so I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
Yaml
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command:
      npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs postgres and the tests. But I'd like to be able to run just my app without having to run my testing container.
Edit
Thanks to @ideam for the link. I was able to create an additional YAML file for just testing.
For those that don't want to look it up: simply create a new YAML file like so:
docker-compose.dev.yml
Replace dev with whatever you like, except override, which causes docker-compose up to run that file automatically unless otherwise specified.
To run the new file, simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a certain file to run. You can run multiple files to have different environments set up.
Appreciate the help
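Combining several files also works: docker-compose merges the -f files left to right, with values in later files overriding earlier ones. A sketch of the command, where the overlay file name is illustrative:

```shell
# base config plus a dev overlay; later -f files override earlier ones
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
```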
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also pass multiple services to the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does it mean to "start the service and its dependencies"?
Usually your production services (containers) are attached to each other via the depends_on chain, so you can start only the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the production services are chained via the depends_on option, while the testing container sits below the frontend. So when you run docker-compose up frontend, Docker will start fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made just for this! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations have a look at https://docs.docker.com/compose/extends/#example-use-case which explains the combination of multiple configuration files for reuse of configs for different use cases (test, production, etc.)

Changing Environment in docker-compose up

I'm new to docker.
Here is my simple docker-compose file.
version: '3.4'
services:
  web:
    image: 'myimage:latest'
    build: .
    ports:
      - "5265:5265"
    environment:
      - NODE_ENV=production
To run this, I usually use docker-compose up command.
Can I change the NODE_ENV variable to anything while running docker-compose up?
For example:
docker-compose up -x NODE_ENV=staging
With docker-compose run you can manage individual services, but not the complete stack. It is useful for one-off commands:
$ docker-compose run -d -e NODE_ENV=staging web
Ref - https://docs.docker.com/compose/reference/run/
OR
The best way I can see as of now is to use the shell and export the environment variable before running docker-compose up, as below:
$ export NODE_ENV=staging && docker-compose up -d
where your docker-compose file will look something like this:
version: '3.4'
services:
  web:
    image: 'myimage:latest'
    build: .
    ports:
      - "5265:5265"
    environment:
      - NODE_ENV=${NODE_ENV}
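A related trick: if you want a fallback when NODE_ENV is not exported, compose's ${VAR:-default} substitution syntax lets the file keep working without the export step. A sketch:

```yaml
environment:
  # uses the exported NODE_ENV if set, otherwise falls back to production
  - NODE_ENV=${NODE_ENV:-production}
```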