Exclude services from starting with docker-compose

Use Case
The docker-compose.yml defines multiple services which represent the full application stack. In development mode, we'd like to dynamically exclude certain services, so that we can run them from an IDE.
As of Docker Compose 1.28 it is possible to assign profiles to services, as documented here, but as far as I understand it, profiles only let you specify which services shall be started, not which ones shall be excluded.
Another way I could imagine is to split the "excludable" services into their own docker-compose.yml file, but all of this seems kind of tedious to me.
Do you have a better way to exclude services?

It seems we both overlooked a very important detail about profiles:
Services without a profiles attribute will always be enabled
So if you run docker-compose up with a file like this:
version: "3.9"
services:
database:
image: debian:buster
command: echo database
frontend:
image: debian:buster
command: echo frontend
profiles: ['frontend']
backend:
image: debian:buster
command: echo backend
profiles: ['backend']
It will start only the database container. If you run docker-compose --profile backend up, it will bring up the database and backend containers. To start everything you need docker-compose --profile backend --profile frontend up, i.e. the --profile flag passed once per profile.
That seems to me to be the best way to keep docker-compose from running certain containers: you just mark them with a profile and you're done. I suggest you give the profiles reference a second look as well; apart from some good examples, it explains how the feature interacts with service dependencies.
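If you find yourself passing the same flags over and over, note that the same Compose versions (1.28+) also read the COMPOSE_PROFILES environment variable; as a small sketch, this is equivalent to passing --profile twice:
export COMPOSE_PROFILES=frontend,backend
docker-compose up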

Related

How to run docker-compose across different lifecycle environments

How to run docker-compose across different lifecycle environments (say dev, qa, staging, production).
Sometimes a larger VM is shared by multiple developers, so we would like to start the containers with developer-specific suffixes (say dev1, dev2, dev3, ...). Should port customization be handled manually via the environment file (i.e. the .env file)?
This is an unusual use case for docker-compose, but I'll leave some tips anyway! :)
There are two different ways to name the things you start with docker-compose. One is to name the service that you specify under the main services: key of your docker-compose.yml file. By default, individual running containers will be assigned names indicating which project they are from (by default, the name of the directory your docker-compose file is in), which service they run (what's specified under your services: key), and which instance of that service they are (this number changes if, e.g., you're using replicas). For example, default container names for a service named myservice, specified in a compose file at ~/my_project/docker/docker-compose.yml, will look like docker_myservice_1 (or _2, _3, etc. if more than one container is supposed to run).
You can use environment variables to fill in a lot of values in docker-compose files, but you can't specify the service name conditionally: service keys may only contain a restricted set of characters, and compose files can't look like, e.g.:
version: "3"
services:
${ENVVAR}:
image: ubuntu:20.04
However, you can override the container naming scheme by using the container_name field in your docker-compose file (documentation for usage here). Maybe a solution you could use looks like this:
version: "3"
services:
myservice:
image: ubuntu:20.04
container_name: ${DEVELOPER_ENVVAR?err}
This will require a developer to specify DEVELOPER_ENVVAR at runtime, either by exporting it in their shell or by running docker-compose like DEVELOPER_ENVVAR=myservice_dev1 docker-compose up. Note that using container_name is incompatible with using replicas to run multiple containers for the same service: the names have to be unique for the running containers, so you'll either have to define separate services for each name or give up on using container_name.
However, you're in a pickle if you expect multiple developers to be able to run containers with different names using the same compose file in the same directory. That's because when starting a service, docker-compose has a Recreating step: if containers for that service are already running, docker-compose will take them over and recreate them. Ultimately, I think this is for the best: if multiple developers were trying to run the exact same compose project at once, should one developer have control over the other developers' running containers? Probably not, right?
If you want multiple developers to be able to run services at once in the same VM, I think you probably want to do two things:
first, (and you may well have already done this! but it's still a good reminder) make sure that this is a good idea. Are there going to be resource contention issues (eg. for port-forwarding) that make different running instances of your project conflict? For many Docker services, there are going to be, but there probably won't be for eg. images that are meant to be run in a swarm.
second, have different compose files checked out in different directories, so that there are separate compose projects for each developer. To handle .env files, one obvious option is to just maintain separate copies, one per developer directory. If, for your use case, it's unsatisfactory to maintain one copy of .env per developer this way, you could use symlinks named .env (or whatever your env file is named) pointing to the same file somewhere else on the VM, for example:
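A rough sketch of what that per-developer layout could look like (all paths and names here are made up for illustration):
mkdir -p ~/myproject-dev1 ~/myproject-dev2
cp docker-compose.yml ~/myproject-dev1/
cp docker-compose.yml ~/myproject-dev2/
ln -s /opt/shared/myproject.env ~/myproject-dev1/.env
ln -s /opt/shared/myproject.env ~/myproject-dev2/.env
Using differently named directories also gives each developer a distinct default project name, which keeps the container names apart.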
After you've done this, you'll be able to tell from the container names who is running what.
If none of these are satisfactory, you might want to consider using one VM per developer, or maybe even a different container management system than docker-compose.
I have done very similar automation, using Ansible to create the docker-compose config on the fly.
Based on the input environment, the Ansible playbook creates the relevant docker-compose file: I keep a docker-compose template in my git repository with the dynamic values, and the playbook populates them. You can also use Ansible to chain such creation and automation steps one after another.
A similar sample has been posted in the ansible_docker_splunk repository; basically, that whole project automates an end-to-end Docker cluster from a CSV file.
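To give an idea of what that can look like, here is a minimal Ansible sketch of the templating idea (the host group, template and path names are made up, not taken from the ansible_docker_splunk repo):
- hosts: docker_hosts
  vars:
    env_name: dev
  tasks:
    - name: Render the docker-compose file for this environment
      template:
        src: templates/docker-compose.yml.j2
        dest: /opt/myapp/docker-compose.yml
    - name: Bring the stack up
      command: docker-compose up -d
      args:
        chdir: /opt/myapp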

How to use Docker in the development/deployment workflow?

I'm not sure I completely understand the role of Docker in the process of development and deployment.
Say, I create a Dockerfile with nginx, some database and something else which creates a container and runs fine.
I drop it somewhere in the cloud and execute it to install and configure all the dependencies and environment settings.
Next, I have a repository with a web application which I want to run inside the container I created and deployed in the first 2 steps. I regularly work on it and push the changes.
Now, how do I integrate the web application into the container?
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
Or, do I deploy the container once but have procedures inside Dockerfile that install utils that pull the code from repo by command or via hooks?
What if a container is running but I want to change some settings of, say, nginx? Do I add these changes into Dockerfile and recreate the image?
In general, what's the role of Docker in the daily app development routine? Is it used often if the infrastructure is running fine and only code is changing?
I think there is no single "use only this" answer; as you already outlined, there are different viable concepts available.
Deployment to staging/production/pre-production
a)
Do I put it as a dependency inside the Dockerfile I create in the 1st step and recreate the container each time from scratch?
This is for sure the most Docker-ish way and aligns fully with the Docker philosophy. It is highly portable and reproducible, and it suits anything from a single container to a swarm of thousands. For example, this concept has no issue suddenly scaling horizontally when you need more containers, let's say due to heavy traffic / load.
It also aligns with the idea that only configuration/data should be dynamic in a Docker container, not code / binaries / artifacts.
This strategy should be chosen for production use, where deployments are less frequent. If you care about downtime during container rebuilds (on upgrade), there are good concepts to deal with that too.
We use this for production and pre-production instances.
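To make a) a bit more concrete, a hypothetical Dockerfile for this approach could look like the sketch below (base image, paths and file names are assumptions, not taken from your setup):
# the application and its configuration are baked into the image at build time
FROM nginx:stable
COPY ./webapp/ /usr/share/nginx/html/
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Every code change then means building a new image and recreating the container from it.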
b)
Or, do I deploy the container once but have procedures inside
Dockerfile that install utils that pull the code from repo by command
or via hooks?
This is a more common practice for very frequent deployments. You can go with the pull concept (what you described) or the push concept (docker cp / scp over ssh); I guess the latter is preferred in this kind of environment.
We use this kind of strategy for staging instances, which should basically reflect the current codebase and its status. We also use it for smoke tests and CI, depending on the application: if the app changes its dependencies a lot and a clean build requires rebuilding with those dependencies to really ensure things are tested as intended, we actually rebuild the image during CI.
Configuration management
1.
What if a container is running but I want to change some settings of,
say, nginx? Do I add these changes into Dockerfile and recreate the
image?
I am not calling this c), since it is configuration management rather than application deployment, and the answer can be very complicated depending on your case. In general, if a redeployment needs configuration changes, it depends on your configuration management whether you can go with b) or always have to go with a).
E.g. if you use https://github.com/markround/tiller with consul as the backend, you can push the configuration changes into consul, regenerating the configuration with tiller, while using consul watch -prefix /configuration tiller as a watch-task to react on those value changes.
This enables you to go with b) and fix the configuration.
You can also use https://github.com/markround/tiller and, on deployment, e.g. change ENV vars or some kind of yml file (tiller supports different backends), and call tiller during deployment yourself. This most probably requires you to ssh onto the host and use docker cp and docker exec.
Development
In development, you generally reuse the docker-compose.yml file you use for production, but overload it with a docker-compose-dev.yml to e.g. mount your code folder, set RAILS_ENV=development, or reconfigure / mount other things like xdebug or more verbose nginx logging, whatever you need. You can also add some fake MTA services like fermata and so on.
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up
docker-compose-dev.yml only overloads some values; it does not redefine or duplicate the whole file.
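As an illustration only (the service name app, the compose version and the paths are assumptions), such an overlay could look like:
# docker-compose-dev.yml
version: '2'
services:
  app:
    environment:
      RAILS_ENV: development
    volumes:
      - ./:/usr/src/app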
Depending on how powerful your configuration management is, you can also do a pre-installation during development stack up.
We actually use scaffolding for that: we use https://github.com/xeger/docker-compose, and after running it we use docker exec and docker cp to preinstall an instance or stage something. Some examples are here: https://github.com/EugenMayer/docker-sync/wiki/7.-Scripting-with-docker-sync
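A rough sketch of that kind of post-up scripting (the container names, files and commands below are entirely made up):
docker cp ./db/seed.sql myproject_db_1:/tmp/seed.sql
docker exec myproject_db_1 sh -c 'psql -U postgres myapp < /tmp/seed.sql'
docker exec myproject_app_1 bundle exec rake db:migrate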
If you are developing under OSX and you face performance issues due to OSXFS / code shares, you probably want to have a look at http://docker-sync.io (I am biased, though).

Opsworks Chef 12 recipes

Has anyone attempted to convert Opsworks Chef v11 recipes to Chef v12?
I'm running multiple stacks on Chef 11 and decided to start converting some of them to Chef 12. Since AWS dropped their OpsWorks app layers, such as the Rails layer recipes, we (OpsWorks users) are now responsible for creating the deploy user, checking out git repos into deploy_to, etc.
It's all good with the flexibility and no more namespace conflicts, but we are missing all that good stuff OpsWorks was giving us for free.
I wonder if someone has converted recipes for Chef 12 and open-sourced them? Otherwise, is the community interested in these recipes at all? I'm pretty sure I'm not alone here.
Thank you in advance!
The opsworks_ruby cookbook on the Supermarket is basically everything you need. It even puts the apps into the same directories (i.e. /srv/www/app_name/), sets up your database.yml, etc, etc.
The main difference between this recipe and other non-OpsWorks recipes is that this will pull everything out of the OpsWorks configuration for you. You don't have to customize the recipes, just make sure your app and layers are named correctly - it'll build everything from there - including your RDS configuration for the database.yml!
Another difference is that the layers in OpsWorks won't be "Ruby aware", so you won't have fields for Rails-ish or Ruby-ish things and will instead need to manage those elsewhere. The way ENV vars are loaded is a little different too.
Also be sure to read up on AWS' implementation of Chef 12 for OpsWorks. They technically have two Chef cookbooks running: their internal one and yours. Theirs handles things like keeping the agent up to date, loading up users (for ssh), wiring up monitoring, etc. You'll have to manage the rest.
We've either replaced stuff from their huge cookbook with individual cookbooks from the Supermarket or just rewritten it. For instance, the old Chef 11 opsworks_initial_setup had a couple of things around tweaking network and Linux settings; we recreated that.
It also does use deploy users as applicable, for instance:
$ ps -eo user,command
USER COMMAND
// snip
root nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
aws opsworks-agent: master 10820
aws opsworks-agent: keep_alive of master 10820
aws opsworks-agent: statistics of master 10820
aws opsworks-agent: process_command of master 10820
deploy unicorn_rails master --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[0] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[1] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[2] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
deploy unicorn_rails worker[3] --env production --daemonize -c /srv/www/app/shared/config/unicorn.conf
nginx nginx: worker process
nginx nginx: worker process
Just a small example of the process output, but it shows that root boots things as needed and each process runs under its own user to limit rights and access.
I think the most common way is to use the "application" cookbook from the supermarket: https://supermarket.chef.io/cookbooks/application/versions/4.1.6 (which is also based on Poise). Attention: use version ~4, they removed almost all of the good features in v5.
It will create the directory structure, supports different deploy-strategies and offers some events to hook.
Be aware: in my opinion, the OpsWorks documentation is only semi-good when it comes to the "deploy with OpsWorks and Chef 12" topic. The information from the GUI (like the repo URL etc.) is not on the node object but in a data bag for the application. For debugging, it can be very helpful to look into the /var/chef/runs/<run-id>/ directory to see what is available from where.
Small snippet that shows the idea:
app = search("aws_opsworks_app").first
application "#{app['shortname']}" do
  owner 'root'
  group 'root'
  repository app['app_source']['url']
  revision 'master'
  path "/srv/#{app['shortname']}"
end
This will create the releases/current directory structure under /srv and check out the code. Note: you might think the ssh key you specify in the GUI is somehow automatically put in the proper place. It's not; you'll have to take care of that on your own. Check the Chef 11 opsworks cookbook: https://github.com/aws/opsworks-cookbooks/blob/release-chef-11.10/scm_helper/libraries/git.rb
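A sketch of how you could take care of that yourself; note that the app_source/ssh_key attribute name here is an assumption mirroring the Chef 11 layout, so verify it against the data bag on your instances:
# write the deploy key from the aws_opsworks_app data bag to disk (attribute name assumed)
app = search("aws_opsworks_app").first
directory '/root/.ssh' do
  mode '0700'
end
file '/root/.ssh/opsworks_deploy_key' do
  content app['app_source']['ssh_key']
  owner 'root'
  mode '0600'
  sensitive true
end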
I don't know about the old OpsWorks cookbooks, but check out https://github.com/poise/application_examples/ for some examples of doing Rails (and more) deploys using plain Chef (it will work on OpsWorks too).

Build multiple images with Docker Compose?

I have a repository which builds three different images:
powerpy-base
powerpy-web
powerpy-worker
Both powerpy-web and powerpy-worker inherit from powerpy-base using the FROM keyword in their Dockerfile.
I'm using Docker Compose in the project to run a Redis and RabbitMQ container. Is there a way for me to tell Docker Compose that I'd like to build the base image first and then the web and worker images?
You can use depends_on to enforce an order; however, that order will also be applied during "runtime" (docker-compose up), which may not be correct.
If you're only using Compose to build images, it should be fine.
You could also split it into two compose files: a docker-compose.build.yml which has depends_on for the build, and a separate one for running the images as services.
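Such a docker-compose.build.yml might look roughly like this (the build context paths ./base, ./web and ./worker are assumptions about your repository layout):
version: '2'
services:
  powerpy-base:
    build: ./base
    image: powerpy-base
  powerpy-web:
    build: ./web
    image: powerpy-web
    depends_on:
      - powerpy-base
  powerpy-worker:
    build: ./worker
    image: powerpy-worker
    depends_on:
      - powerpy-base
You would then build with docker-compose -f docker-compose.build.yml build and keep a separate compose file for actually running the services.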
There is a related issue: https://github.com/docker/compose/issues/295
About running containers:
This was a bug before, but it has been fixed since Docker 1.10.
https://blog.docker.com/2016/02/docker-1-10/
Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
About building:
You need to build the base image first.
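For example (assuming the base image's Dockerfile lives in ./base; adjust to your layout):
docker build -t powerpy-base ./base
docker-compose build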

How do you manage per-environment data in Docker-based microservices?

In a microservice architecture, I'm having a hard time grasping how one can manage environment-specific config (e.g. IP address and credentials for database or message broker).
Let's say you have three microservices ("A", "B", and "C"), each owned and maintained by a different team. Each team is going to need a team integration environment... where they work with the latest snapshot of their microservice, along with stable versions of all dependency microservices. Of course, you'll also need QA/staging/production environments as well. A simplified view of the big picture would look like this:
"Microservice A" Team Environment
Microservice A (SNAPSHOT)
Microservice B (STABLE)
Microservice C (STABLE)
"Microservice B" Team Environment
Microservice A (STABLE)
Microservice B (SNAPSHOT)
Microservice C (STABLE)
"Microservice C" Team Environment
Microservice A (STABLE)
Microservice B (STABLE)
Microservice C (SNAPSHOT)
QA / Staging / Production
Microservice A (STABLE, RELEASE, etc)
Microservice B (STABLE, RELEASE, etc)
Microservice C (STABLE, RELEASE, etc)
That's a lot of deployments, but that problem can be solved by a continuous integration server and perhaps something like Chef/Puppet/etc. The really hard part is that each microservice would need some environment data particular to each place in which it's deployed.
For example, in the "A" Team Environment, "A" needs one address and set of credentials to interact with "B". However, over in the "B" Team Environment, that deployment of "A" needs a different address and credentials to interact with that deployment of "B".
Also, as you get closer to production, environmental config info like this probably needs security restrictions (i.e. only certain people are able to modify or even view it).
So, with a microservice architecture, how do you maintain environment-specific config info and make it available to the apps? A few approaches come to mind, although they all seem problematic:
Have the build server bake them into the application at build-time - I suppose you could create a repo of per-environment properties files or scripts, and have the build process for each microservice reach out and pull in the appropriate script (you could also have a separate, limited-access repo for the production stuff). You would need a ton of scripts, though. Basically a separate one for every microservice in every place that microservice can be deployed.
Bake them into base Docker images for each environment - If the build server is putting your microservice applications into Docker containers as the last step of the build process, then you could create custom base images for each environment. The base image would contain a shell script that sets all of the environment variables you need. Your Dockerfile would be set to invoke this script prior to starting your application. This has similar challenges to the previous bullet-point, in that now you're managing a ton of Docker images.
Pull in the environment info at runtime from some sort of registry - Lastly, you could store your per-environment config inside something like Apache ZooKeeper (or even just a plain ol' database), and have your application code pull it in at runtime when it starts up. Each microservice application would need a way of telling which environment it's in (e.g. a startup parameter), so that it knows which set of variables to grab from the registry. The advantage of this approach is that now you can use the exact same build artifact (i.e. application or Docker container) all the way from the team environment up to production. On the other hand, you would now have another runtime dependency, and you'd still have to manage all of that data in your registry anyway.
How do people commonly address this issue in a microservice architecture? It seems like this would be a common thing to hear about.
Docker compose supports extending compose files, which is very useful for overriding specific parts of your configuration.
This is very useful at least for development environments and may be useful in small deployments too.
The idea is having a base shared compose file you can override for different teams or environments.
You can combine that with environment variables with different settings.
Environment variables are good if you want to replace simple values, if you need to make more complex changes then you use an extension file.
For instance, you can have a base compose file like this:
# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name-a"
    ports:
      - "${PORT_A}"
  service-b:
    image: "image-name-b"
    ports:
      - "${PORT_B}"
  service-c:
    image: "image-name-c"
    ports:
      - "${PORT_C}"
If you want to change the ports you could just pass different values for variables PORT_X.
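For example, a per-environment .env file could look like this (the values are only placeholders):
# .env
PORT_A=8080:80
PORT_B=8081:80
PORT_C=8082:80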
For complex changes, you can have separate files that override specific parts of the compose file. You can override specific parameters for specific services; any parameter can be overridden.
For instance you can have an override file for service A with a different image and add a volume for development:
# docker-compose.override.yml
services:
  service-a:
    image: "image-alternative-a"
    volumes:
      - /my-dev-data:/var/lib/service-a/data
Docker Compose picks up docker-compose.yml and docker-compose.override.yml by default; if you have more files, or files with different names, you need to specify them in order:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml -f docker-compose.dev-service-a.yml up -d
For more complex environments, the solution is going to depend on what you use. I know this is a Docker question, but nowadays it's hard to find pure Docker systems, as most people use Kubernetes. In any case, you are always going to have some sort of secret management provided by the environment and managed externally; from the Docker side of things, you just have variables that are provided by that environment.