Get docker-compose up to only run certain containers - docker-compose

So currently I can use "docker-compose up test", which only runs my database and my testing scripts. I want to be able to say "docker-compose up app" or something like that, which runs everything besides testing, so I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible, I'd appreciate links to some projects that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs postgres and the tests. But I'd like to be able to just run my app without having to run my testing container.
Edit
Thanks to #ideam for the link. I was able to create an additional yaml file just for testing.
For those that don't want to look it up: simply create a new yaml file like so:
docker-compose.dev.yml
(Replace dev with whatever you like, besides override, which docker-compose up picks up automatically unless told otherwise.)
To run the new file, simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a certain file to run. You can combine multiple files to have different environments set up (a minimal sketch follows below).
Appreciate the help
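For illustration, a minimal sketch of such a dev file, reusing the postgres and test definitions from the compose file above:

# docker-compose.dev.yml
version: '3'
services:
  postgres:
    build: './database-creation'
    ports:
      - "5432:5432"
  test:
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    depends_on:
      - postgres

Running docker-compose -f docker-compose.dev.yml up then starts only these two services.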

docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also specify multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
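Applied to the compose file in the question, that would look like this (service names taken from the file above; postgres comes up automatically because webapp depends on it, and test stays down):

docker-compose up webapp server redis_db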
Note: what does it mean to "start the service and its dependencies"?
Usually your production services (containers) are attached to each other via the depends_on chain, so you can start only the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained via the depends_on option, while the testing container hangs below the frontend. So when you run docker-compose up frontend, Docker will start fpm first, then the backend, then the frontend, and it will ignore the testing container, which is not required for running the frontend.
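A quick way to verify that behaviour (assuming the compose file above is saved as docker-compose.yml):

docker-compose up -d frontend   # starts fpm, backend and frontend
docker-compose ps               # testing should not appear in the list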

Starting with docker-compose 1.28.0, the new service profiles are made exactly for this! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...

docker-compose up                  # start only your app services
docker-compose --profile test up   # start app and test services
docker-compose run test            # run the test service
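If I remember the feature correctly, the --profile flag can also be replaced by the COMPOSE_PROFILES environment variable introduced alongside profiles:

COMPOSE_PROFILES=test docker-compose up   # same as --profile test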

Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains combining multiple configuration files to reuse configs for different use cases (test, production, etc.).
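In practice, the combination described there comes down to passing several -f flags, with later files overriding or extending earlier ones (file names here are placeholders):

docker-compose -f docker-compose.yml -f docker-compose.test.yml up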

Related

Docker: Cannot run multiple services with docker-compose

I set up docker compose for my project with 2 services: spring-boot and postgresql. I created the Dockerfile and docker-compose.yml as below:
Dockerfile :
FROM openjdk:8-jdk-alpine
MAINTAINER linhan.com
COPY target/LinhAn-0.0.1-SNAPSHOT.jar linhan-server-1.0.0.jar
ENTRYPOINT ["java","-jar","/linhan-server-1.0.0.jar"]
docker-compose.yml:
version: '2'
services:
  spring_boot:
    image: 'linhan'
    build: .
    container_name: api
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://localhost:5432/test_db
      - SPRING_DATASOURCE_USERNAME=user
      - SPRING_DATASOURCE_PASSWORD=123456
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  postgres:
    image: 'postgres:13.1-alpine'
    container_name: db
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=123456
Then, when I type docker-compose up in the terminal, only postgres runs; spring boot does not.
I searched google for a solution, but no hope so far. Please help me, thanks a lot!
I think you need to change the SPRING_DATASOURCE_URL to reference your service name instead of localhost. The service name resolves to your service automatically, since all services are part of the default network in docker-compose.
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/test_db
Also, for clarity, I would suggest you add the port to your postgres service in docker-compose, so it is clear which port is being used, even if it is the default:
postgres:
  image: 'postgres:13.1-alpine'
  container_name: db
  ports:
    - "5432"
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=123456
Another suggestion would be to use a healthcheck to see whether your database service has become available, instead of a plain depends_on. The short form marks the dependency as fulfilled as soon as the container is running, regardless of the availability of the database.
Either that, or add application logic that retries the database connection in case of failure.
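For reference, a rough sketch of what such a healthcheck could look like, using pg_isready; note the long depends_on form with condition requires compose file format 2.1 or later, so the version key would need bumping:

services:
  postgres:
    image: 'postgres:13.1-alpine'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 3s
      retries: 5
  spring_boot:
    depends_on:
      postgres:
        condition: service_healthy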

How to attach a PostgreSQL volume to a Docker image generated with SBT native packager?

I would like to be able to deploy my app in a pre-prod environment for integration testing using a Docker volume that will expose an instance of PostgreSQL. I'm using Scala v2.12.8 and Play v2.7.
Looking at the environment settings of the SBT native packager it seems possible to define dockerExposedVolumes in order to attach a DB.
Using a normal Docker compose file, I would do something like this:
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgress
- POSTGRES_DB=postgres
ports:
- "5433:5432"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- suruse
volumes:
pgdata:
This configuration was taken from this SO answer.
I tried searching for config examples but didn't find anything useful so far. Now I'm wondering: how exactly should I define a new Docker volume and then expose it to the Docker image created by SBT?
THE WORKING SOLUTION
The final version, fully tested and working, exposing the DB on TCP port 5433:
# https://docs.docker.com/samples/library/postgres/
version: "3"
services:
  app-pgsql:
    image: postgres:9.6
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=yourPasswordHere
      - POSTGRES_DB=yourDatabaseNameHere
      - POSTGRES_INITDB_ARGS="--encoding=UTF8"
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local
Launch the compose setup with sbt dockerComposeUp -useStaticPorts, then check that the containers have actually been exposed with docker ps -a. Also check the log files, using the command printed by dockerComposeUp, or via dockerComposeInstances.
There is an sbt plugin that helps you achieve this:
sbt-docker-compose
With it you can add your database to a docker compose file and run everything from within sbt.
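If I remember the plugin's wiring correctly, enabling it looks roughly like this; the coordinates and version number are assumptions, so check the plugin's README:

// project/plugins.sbt (version is an assumption, verify against the README)
addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.34")

// build.sbt
enablePlugins(DockerPlugin, DockerComposePlugin)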
This is a standard Docker pattern. Here is an explanation of how to do it for Postgres:
run_postgresql_docker_compose
The docker-compose.yml from that example:
version: '3'
services:
  mydb:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432/tcp
volumes:
  db-data:
    driver: local
As this is a standard way of doing things in Docker, you will find plenty more examples.

Docker-compose postgresql integration

I'm new to docker and am trying to build a composed image consisting of services, nginx and a postgresql database. I'm following the tutorial here: http://www.patricksoftwareblog.com/how-to-use-docker-and-docker-compose-to-create-a-flask-application/
I have been successful up to adding postgresql, where I'm having difficulties and questions.
My docker-compose.yml:
version: '2'
services:
  web:
    restart: always
    build: ./home/admin/
    expose:
      - "8000"
  nginx:
    restart: always
    build: ./etc/nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:9.6
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./var/lib/postgresql
    volumes_from:
      - data
    ports:
      - "5432:5432"
I have included his docker generator script under /var/lib/postgresql but keep facing ERROR: Dockerfile parse error line 1: unknown instruction: IMPORT when I run docker-compose build.
If I leave in the data section and remove the postgres section from my docker-compose.yml file, my containers seemingly run fine, but I'm unsure whether postgresql is actually running. I'm able to GET using curl, but I'm still unsure how to confirm postgres specifics to verify a proper environment, and I would appreciate examples on this topic in particular.
I was also wondering whether running my docker-compose containers and then simply running a separate postgresql container could work as well, provided the correct ports.
Thank you!
Check the content of your docker-compose.yml for:
- YAML format errors (see for instance codebeautify.org/yaml-validator)
- EOL or encoding issues
- multi-line instructions
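As for confirming that postgres itself is up (the part the question asks about), two commands are usually enough. This assumes the service is named postgres as in the compose file above and that the image provides the default postgres superuser:

docker-compose logs postgres                           # inspect the startup log
docker-compose exec postgres psql -U postgres -c '\l'  # list databases via psql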

How do I properly set up my Keystone.js app to run in docker with mongo?

I have built my app, which runs fine locally. When I try to run it in docker (docker-compose up), it appears to start but then throws an error message:
Creating mongodb ... done
Creating webcms ... done
Attaching to mongodb, webcms
...
Mongoose connection "error" event fired with:
MongoError: failed to connect to server [localhost:27017] on first connect
...
webcms exited with code 1
I have read that with Keystone.js you need to configure the Mongo location in the .env file, which I have:
MONGO_URI=mongodb://localhost:27017
Here is my Dockerfile:
# Use node 9.4.0
FROM node:9.4.0
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Expose API port to the outside
EXPOSE 3000
# Launch application
CMD ["node","keystone"]
...and my docker-compose
version: "2"
services:
# NodeJS app
web:
container_name: webcms
build: .
ports:
- 3000:3000
depends_on:
- mongo
# MongoDB
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db/mongo
ports:
- 27017:27017
When I run docker ps it confirms that mongo is up and running in a container...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3e06e4a5cfe mongo "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:27017->27017/tcp mongodb
I am either missing some config or have it configured incorrectly. Could someone tell me which?
Any help would be appreciated.
Thanks!
It is not working properly because you are passing the wrong host.
Your container does not understand what localhost:27017 is, since that is your computer's address, not the container's address.
It is important to understand that each service has its own container with a different IP.
The beauty of docker-compose is that you do not need to know your container's address; knowing your service name is enough:
version: "2"
volumes:
db-data:
driver: local
services:
web:
build: .
ports:
- 3000:3000
depends_on:
- mongo
environment:
- MONGO_URI=mongodb://mongo:27017
mongo:
image: mongo
volumes:
- "db-data:/data/db/mongo"
ports:
- 27017:27017
Just run docker-compose up and you are all set.
A couple of things that may help:
First, I am not sure what your error logs look like, but buried in mine was:
...Error: The cookieSecret config option is required when running Keystone in a production environment. Update your app or environment config so this value is supplied to the Keystone constructor....
To solve this, in your Keystone entry file (e.g. index.js) make sure your Keystone constructor has the cookieSecret parameter set correctly: process.env.NODE_ENV === 'production'
Next, change the mongo URI from the one Keystone generated (mongoUri: mongodb://localhost/my-keystone) to mongoUri: 'mongodb://mongo:27017'. Docker needs this because it is the mongo container's address. This change should also be reflected in your docker-compose file under the MONGO_URI environment variable:
...
environment:
  - MONGO_URI=mongodb://mongo:27017
...
After these changes your Keystone constructor should look like this:
const keystone = new Keystone({
  adapter: new Adapter(adapterConfig),
  cookieSecret: process.env.NODE_ENV === 'production',
  sessionStore: new MongoStore({ url: 'mongodb://mongo:27017' }),
});
And your docker-compose file, something like this (I used a network instead of links, as Docker has stated that links are a legacy option; I've included mine in case it's useful for anyone else):
version: "3.3"
services:
mongo:
image: mongo
networks:
- appNetwork
ports:
- "27017:27017"
environment:
- MONGO_URI=mongodb://mongo:27017
appservice:
build:
context: ./my-app
dockerfile: Dockerfile
networks:
- appNetwork
ports:
- "3000:3000"
networks:
appNetwork:
external: false
It is better to use MongoDB Atlas if you do not want complications. You can use it both locally and in deployment.
Simple steps to get the mongo URL are available at https://www.mongodb.com/cloud/atlas
Then add an env variable:
CONNECT_TO=mongodb://your_url
For passing the .env to docker, use
docker run --publish 8000:3000 --env-file .env --detach --name kb keystoneblog:1.0

Two docker-compose .yml files in the same network with COMPOSE_PROJECT_NAME

I am trying to have my own network name for my docker-compose files (server.yml and test.yml), as test.yml only gets started from time to time but needs access to some services in server.yml. I can make it work with docker-compose -p nameofproject up, but not with COMPOSE_PROJECT_NAME.
server.yml
version: '2'
networks:
  mynetwork:
    driver: bridge
services:
  app1:
    networks:
      - mynetwork
    environment:
      POSTGRES_PASSWORD: somepassword
      COMPOSE_PROJECT_NAME: serverstack
  app2:
    networks:
      - mynetwork
    environment:
      COMPOSE_PROJECT_NAME: serverstack
    depends_on:
      - app1
My expectation is that when the containers are starting I should see
Creating serverstackmynetwork_app_1
Creating serverstackmynetwork_app_2
the network should be named (docker network ls)
serverstack_mynetwork
just like when I do the following, which actually works:
docker-compose -p serverstack up
And then I can connect just by using docker-compose up with the second file (which works fine when using the -p option on server.yml):
testing.yml
version: '2'
networks:
  testapp_network:
    external:
      name: serverstack_mynetwork
services:
  testapp:
    networks:
      - testapp_network
But using it without -p serverstack on server.yml, I see directory names used as prefixes:
Creating directoryofapp1_app1_1
Creating directoryofapp2_app2_1
So COMPOSE_PROJECT_NAME is being ignored, and I also cannot connect to the server service through serverstack_mynetwork.
I did add COMPOSE_PROJECT_NAME: serverstack after building the image, but I would expect it to work anyhow. What am I missing?
I solved this by creating a ".env" file containing
COMPOSE_PROJECT_NAME=myprojectname
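This works because docker-compose reads COMPOSE_PROJECT_NAME from the shell environment or from a .env file next to the compose files, not from a service's environment: section (that one only sets variables inside the container). A sketch of the layout:

# .env (in the same directory as server.yml)
COMPOSE_PROJECT_NAME=serverstack

After that, docker-compose -f server.yml up creates the network as serverstack_mynetwork, and testing.yml can attach to it as an external network.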