How to combine multiple docker build statements to create a single container - docker-compose

I have three "docker build --build-arg value -t value -f value" statements used for building separate Docker containers.
Is there a way to combine them to create a single container?

First of all, with docker build you are creating images, not containers. If I understood correctly, you want to create a single image from multiple Dockerfiles, right? If yes, you can combine your code in a single Dockerfile using a multi-stage build, as shown here.
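For example, a minimal multi-stage sketch (stage names, paths, and base images are assumptions, not taken from the question); each former Dockerfile becomes a named stage, and the final stage copies only what it needs:

FROM golang:1.21 AS builder-a
WORKDIR /src/a
COPY a/ .
RUN go build -o /out/app-a

FROM golang:1.21 AS builder-b
WORKDIR /src/b
COPY b/ .
RUN go build -o /out/app-b

# final stage: copy the artifacts from the earlier stages
FROM debian:bookworm-slim
COPY --from=builder-a /out/app-a /usr/local/bin/
COPY --from=builder-b /out/app-b /usr/local/bin/
CMD ["app-a"]

You can still pass your --build-arg values to this single build, since docker build accepts them as before.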

Related

How to run docker-compose across different lifecycle environments

How to run docker-compose across different lifecycle environments (say dev, qa, staging, production).
Sometimes a larger VM is shared by multiple developers, so we would like to start the containers with developer-specific suffixes (say dev1, dev2, dev3, ...). Should port customization be handled manually via the environment file (i.e. the .env file)?
This is an unusual use case for docker-compose, but I'll leave some tips anyway! :)
There are two different ways to name stuff you start with docker-compose. One is to name the service that you specify under the main services: key of your docker-compose.yml file. By default, individual running containers will be assigned names indicating what project they are from (by default, the name of the directory your docker-compose file is in), what service they run (this is what's specified under your services: key), and which instance of that service they are (this number changes if e.g. you're using replicas). E.g. default container names for a service named myservice specified in a compose file ~/my_project/docker/docker-compose.yml will look like docker_myservice_1 (or _2, _3, etc. if more than one container is supposed to run).
You can use environment variables to specify a lot of key-value pairs in docker-compose files, but you can't conditionally specify the service name - service keys are only allowed to contain alphanumeric characters, so compose files can't look like, e.g.:
version: "3"
services:
${ENVVAR}:
image: ubuntu:20.04
However, you can override the container naming scheme by using the container_name field in your docker-compose file (documentation for usage here). Maybe a solution you could use looks like this:
version: "3"
services:
myservice:
image: ubuntu:20.04
container_name: ${DEVELOPER_ENVVAR?err}
This will require a developer to specify DEVELOPER_ENVVAR at runtime, either by exporting it in their shell or by running docker-compose like DEVELOPER_ENVVAR=myservice_dev1 docker-compose up. Note that using container_name is incompatible with using replicas to run multiple containers for the same service - the names have to be unique for the running containers, so you'll either have to define separate services for each name or give up on using container_name.
However, you're in a pickle if you expect multiple developers to be able to run containers with different names using the same compose file in the same directory. That's because when starting a service, docker-compose has a Recreating step where, if there are already containers implementing that service running, it will wait for those containers to finish. Ultimately, I think this is for the best - if multiple developers were trying to run the exact same compose project at once, should one developer have control over other developers' running containers? Probably not, right?
If you want multiple developers to be able to run services at once in the same VM, I think you probably want to do two things:
first, (and you may well have already done this! but it's still a good reminder) make sure that this is a good idea. Are there going to be resource contention issues (e.g. for port-forwarding) that make different running instances of your project conflict? For many Docker services there are going to be, but there probably won't be for e.g. images that are meant to be run in a swarm.
second, have different compose files checked out in different directories, so that there are separate compose projects for each developer (see the sketch after this list). To use .env files, one obvious option is to just maintain separate copies, one per developer directory. If, for your use case, it's unsatisfactory to maintain one copy of .env per developer this way, you could use symlinks named .env (or whatever your env file is named) pointing to the same file somewhere else on the VM.
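A hypothetical layout (the repository URL and paths are made up for illustration):

# one checkout per developer, so each directory is its own compose project
mkdir -p /work/dev1 /work/dev2
git clone https://example.com/my_project.git /work/dev1/my_project
git clone https://example.com/my_project.git /work/dev2/my_project
# optionally point every developer's .env at one shared file via symlinks
ln -s /etc/compose/shared.env /work/dev1/my_project/.env
ln -s /etc/compose/shared.env /work/dev2/my_project/.env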
After you've done this, you'll be able to tell from the container names who is running what.
If none of these are satisfactory, you might want to consider e.g. using one VM per developer, or maybe even a different container management system than docker-compose.
I have done very similar automation, and I used Ansible to create the docker-compose config on the fly.
Based on the input environment, the Ansible playbook creates the relevant docker-compose file. Basically, I keep a docker-compose template in my Git repository with dynamic values, and the Ansible playbook populates them.
You can also use Ansible to trigger such creation or automation steps one after another.
A similar sample has been posted in the ansible_docker_splunk repository.
Basically, the whole project automates an end-to-end Docker cluster from a CSV file.
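A minimal sketch of this approach (the host group, variables, and paths are assumptions, not the actual ansible_docker_splunk contents):

# playbook.yml - render a compose template for the requested environment, then start it
- hosts: docker_hosts
  vars:
    env_name: dev1
    image_tag: "1.2.3"
  tasks:
    - name: Render docker-compose.yml from a Jinja2 template
      template:
        src: templates/docker-compose.yml.j2
        dest: "/opt/{{ env_name }}/docker-compose.yml"

    - name: Bring the stack up
      command: docker-compose up -d
      args:
        chdir: "/opt/{{ env_name }}"

The template itself would reference {{ env_name }} and {{ image_tag }} wherever the compose file needs dynamic values.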

Resolved: How to create a MongoDB docker container without a volume

I need to create a snapshot of a MongoDB database in a specific state, without a volume, for auto-testing purposes.
To be clear, I need all of the data saved inside the container itself and persisted, not in a volume; then with a commit I can take the snapshot.
Afterwards, Jenkins will deploy this image from a repository, with the initial data inside, to run integration tests on it, and different executions of the tests will run against the exact database state that we want. Without scripts or anything like that.
I have tried changing the Mongo configuration inside the container and in docker-compose.
Any idea or example would be appreciated.
----------------------------solution------------------------------
If you need an image with initial data for sandboxing/testing purposes, you can write a Dockerfile with all the configuration and without a volume declaration; that forces the data to be saved in the container's inner layers. You can then work normally inside the container until it reaches the state you like, and the next step is to take the snapshot.
Every time you start the container, you will find that exact state in MongoDB for your tests.
Use docker commit to save the state of a running container as an image:
https://docs.docker.com/engine/reference/commandline/commit/
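A minimal sketch of that workflow (image names and the alternate data path are assumptions; note that the official mongo image declares VOLUME /data/db, so pointing mongod at a non-volume path is what keeps the data in the image layers):

# Dockerfile
FROM mongo:6.0
# store data outside the declared /data/db volume so docker commit captures it
RUN mkdir -p /data/snapshot
CMD ["mongod", "--dbpath", "/data/snapshot"]

# build, seed, and snapshot
docker build -t mongo-novolume .
docker run -d --name mongo-seed mongo-novolume
# ...load your fixture data here, e.g. with mongosh or mongoimport...
docker commit mongo-seed mongo-snapshot:test-state-1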

How to replace image tag in a compose.yml file for one container

I have a compose.yml file with multiple containers that share a common image-tag variable, and I am passing the value of the variable from Jenkins.
The image tag/version is the same for all applications.
In case I want one particular container to use an old image version while the rest of the containers keep the same new version, how do I do it?
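One hypothetical sketch (service, registry, and variable names are made up for illustration): give the container you want to pin its own tag variable, and default that variable to the shared tag in the Jenkins shell step.

# compose.yml
services:
  app1:
    image: myregistry/app1:${IMAGE_TAG}
  app2:
    image: myregistry/app2:${APP2_TAG}

# Jenkins shell step: APP2_TAG falls back to the shared tag unless explicitly pinned
export IMAGE_TAG=2.0
export APP2_TAG="${APP2_TAG:-$IMAGE_TAG}"
docker-compose up -d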

Build multiple images with Docker Compose?

I have a repository which builds three different images:
powerpy-base
powerpy-web
powerpy-worker
Both powerpy-web and powerpy-worker inherit from powerpy-base using the FROM keyword in their Dockerfile.
I'm using Docker Compose in the project to run a Redis and RabbitMQ container. Is there a way for me to tell Docker Compose that I'd like to build the base image first and then the web and worker images?
You can use depends_on to enforce an order; however, that order will also be applied during "runtime" (docker-compose up), which may not be correct.
If you're only using compose to build images it should be fine.
You could also split it into two compose files: a docker-compose.build.yml which has depends_on for the build, and a separate one for running the images as services.
There is a related issue: https://github.com/docker/compose/issues/295
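A sketch of what such a docker-compose.build.yml might look like (the build context paths are assumptions):

version: "3"
services:
  powerpy-base:
    build: ./base
    image: powerpy-base
  powerpy-web:
    build: ./web
    image: powerpy-web
    depends_on:
      - powerpy-base
  powerpy-worker:
    build: ./worker
    image: powerpy-worker
    depends_on:
      - powerpy-base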
About running containers:
This was a bug before, but it has been fixed since docker-compose v1.10.
https://blog.docker.com/2016/02/docker-1-10/
Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
About build:
You need to build the base image first.
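For example (the path is an assumption), assuming the web and worker Dockerfiles start with FROM powerpy-base:

# build the shared base image first, then let compose build the rest
docker build -t powerpy-base ./base
docker-compose build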

Docker: Running multiple applications VS running multiple containers

I am trying to run Wildfly, Jenkins and Postgresql in Docker container(s).
As far as I could understand from articles I've read, the Docker way is to have each application run in a different container.
Is my assumption correct or is it better to have only one container containing these three applications?
AFAIK the basic philosophy behind Docker is to run one service per container. You can run a whole application inside a single container, but I don't think that goes well with the way Docker works. Running different services in different containers gives you more flexibility and better modularity for your app.
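For this stack, a minimal one-service-per-container compose sketch (image tags, ports, and the password are assumptions):

version: "3"
services:
  wildfly:
    image: quay.io/wildfly/wildfly
    ports:
      - "8080:8080"
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8081:8080"   # remapped so it doesn't clash with wildfly
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a proper secret in practice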