Running specific container on predefined host with docker stack - docker-compose

I have spent a lot of time trying to figure out the following scenario, but no luck up until now. So here is the case.
I have 2 machines (machine A with IP 123 and machine B with IP 456); 123 is the manager node and 456 is the worker node.
Now there are 2 services that I would like to run, Service-A on IP 123 and Service-B on IP 456 respectively. For that I am using the following compose file:
version: '3.8'
networks:
  same-network:
services:
  ser1-service:
    container_name: ser1-service
    image: ser1
    networks:
      - same-network
    ports:
      - 9057:7057
    entrypoint: ["java","-jar","ser1.jar"]
  ser2-service:
    container_name: ser2-service
    image: ser2
    networks:
      - same-network
    ports:
      - 9052:7052
    entrypoint: ["java","-jar","ser2.jar"]
Now, when I start it using docker stack deploy, it places the services on nodes randomly. What I want is to make sure that Service-A is always deployed on IP 123 and Service-B is always deployed on IP 456.
Just to add one thing: I have downloaded both images on both servers. The reason is that in the actual scenario we have lots of services and lots of hosts, and each service is hard-bound to a particular host. In addition, I want the images to be downloaded at runtime on each worker node.
Any help from anyone would be highly appreciated.

The first issue (making sure Service-A goes to IP 123) - the closest thing I can suggest is to add a deploy section with a placement constraint to your stack file. One of the available placement constraints is by hostname, which may be close enough (though not exactly by IP address). Another viable placement constraint is by node labels (which you add yourself to the nodes; see the sketch after the snippet below).
https://docs.docker.com/compose/compose-file/compose-file-v3/#placement
https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint
ser1-service:
  container_name: ser1-service
  image: ser1
  ...
  deploy:
    placement:
      constraints: [node.hostname == hostname_of_ip_123]
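If you would rather pin the service by a node label instead of the hostname, here is a minimal sketch (the label name "role" and value "service-a-host" are my own made-up example, not anything from your setup):
  # run once on the manager to label the node that should host Service-A
  docker node update --label-add role=service-a-host hostname_of_ip_123
Then constrain the service on that label in the stack file:
  ser1-service:
    image: ser1
    ...
    deploy:
      placement:
        constraints: [node.labels.role == service-a-host]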
For the second issue, I think you're asking about making sure the images are pulled each time (a bit unsure due to the wording, though) - I believe you want the "--resolve-image=always" flag on your docker stack deploy command:
https://docs.docker.com/engine/reference/commandline/stack_deploy/
--resolve-image (default "always", API 1.30+, Swarm only): Query the registry to resolve image digest and supported platforms ("always"|"changed"|"never")
docker stack deploy -c your_stack_file.yml --resolve-image=always your_stack_name


Exclude services from starting with docker-compose

Use Case
The docker-compose.yml defines multiple services which represent the full application stack. In development mode, we'd like to dynamically exclude certain services, so that we can run them from an IDE.
As of Docker Compose 1.28 it is possible to assign profiles to services, as documented here, but as far as I understand it only allows specifying which services shall be started, not which ones shall be excluded.
Another way I could imagine is to split "excludable" services into their own docker-compose.yml file, but all of this seems kind of tedious to me.
Do you have a better way to exclude services?
It seems we both overlooked a very important thing about profiles, and that is:
Services without a profiles attribute will always be enabled
So if you run docker-compose up with a file like this:
version: "3.9"
services:
database:
image: debian:buster
command: echo database
frontend:
image: debian:buster
command: echo frontend
profiles: ['frontend']
backend:
image: debian:buster
command: echo backend
profiles: ['backend']
It will start the database container only. If you run it with docker-compose --profile backend up it will bring up the database and backend containers. To start everything you need docker-compose --profile backend --profile frontend up, or you can assign one shared profile to several services and enable just that one (see the command sketch after this answer).
That seems to me the best way to make docker-compose not run certain containers: you just need to mark them with a profile and it's done. I suggest you give the profiles reference a second chance as well; apart from some good examples, it explains how the feature interacts with service dependencies.
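For completeness, a small command sketch against the file above (COMPOSE_PROFILES is the documented environment-variable alternative to repeating --profile; everything else comes straight from the example):
  # no profile enabled: only the database service starts
  docker-compose up
  # enable one profile: database + backend
  docker-compose --profile backend up
  # enable both profiles: everything starts
  docker-compose --profile backend --profile frontend up
  # equivalent, using the COMPOSE_PROFILES environment variable
  COMPOSE_PROFILES=backend,frontend docker-compose up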

How to deploy multiple docker images to an Azure App Service through Azure Pipelines

I have a docker-compose .yml file that I use to manually install and run multiple Docker images on a Linux Azure App Service. My next step is to automate this through Azure Pipelines, which I have been successful in doing for a single image, but I can't figure out how to do it for multiple images on the same App Service instance.
Running multiple images on the same App Service instance is a client requirement. I have some flexibility if there is a much better way, but cost is a factor.
I'm specifically looking for the type of task to add to my release pipeline and whether there are any examples or documentation I can read. So far I haven't found anything that really seems to fit the bill, but I'm not a DevOps engineer, so it's likely I am just not asking the question correctly.
Here is an example of the docker-compose.yml file I have:
version: '3.7'
services:
  exampleapi:
    image: examplecontainerregistry.azurecr.io/example.api:latest
    container_name: example.api
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  certprojectservice:
    image: examplecontainerregistry.azurecr.io/example.certproject.service:latest
    container_name: example.certproject.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  unityemailservice:
    image: examplecontainerregistry.azurecr.io/example.email.service:latest
    container_name: example.email.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  eventconsumerservice:
    image: examplecontainerregistry.azurecr.io/example.eventconsumers.service:latest
    container_name: example.eventconsumers.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  webhookresponseservice:
    image: examplecontainerregistry.azurecr.io/example.webhookresponse.service:latest
    container_name: example.webhookresponse.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  unitywebhooksservice:
    image: examplecontainerregistry.azurecr.io/example.webhooks.service:latest
    container_name: example.webhooks.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
Not sure whether the method I used is suitable for you, but you can take a look.
1) To run multiple Docker images in one Azure App Service instance, I first need to make sure the App Service is configured for Docker Compose:
Note: Since my images are stored in ACR, I connect this Azure App Service to the ACR I use.
2) Upload the docker-compose.yml into that configuration.
3) The third step, which is the important one, is to enable Continuous Deployment. The significance of this step is that once new images are pushed to the ACR connected to the current App Service, it will automatically pull the latest images from the ACR and then redeploy as the docker-compose file is configured.
After enabling Continuous Deployment, click on "Show URL" to get the webhook URL:
4) Go to the ACR, then select Webhooks from the left panel. Add => paste the webhook URL we copied from the App Service into the Service URL field. Save it.
Now, every time you push a new version of the image to ACR, the Azure App Service will trigger a redeploy of the containers using the most recent image, and I do not need to configure a pipeline with a deploy task in Azure DevOps.
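If you prefer scripting these portal steps, here is a hedged Azure CLI sketch (the resource group and app names are placeholders I made up; the registry name is taken from the compose file above):
  # enable continuous deployment on the web app and print its webhook URL
  az webapp deployment container config --enable-cd true \
    --name my-app-service --resource-group my-rg
  # register that URL as an ACR webhook so image pushes trigger a redeploy
  az acr webhook create --registry examplecontainerregistry \
    --name appserviceredeploy --actions push \
    --uri "<webhook URL returned by the previous command>"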
So, for anyone interested, the way I was able to solve my situation was to make sure that during the build pipeline I published the docker-compose file as a Pipeline Artifact. Then in the release pipeline I was able to download that artifact and use it in an Azure CLI task.
The CLI task used the "az webapp config container set" command. Reasonably simple once I figured it out, but a more fit-for-purpose task would have been nice.
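For reference, a rough sketch of the command that CLI task ran (the resource group and app names are assumptions; adjust the compose file path to wherever your downloaded artifact lands):
  az webapp config container set \
    --resource-group my-rg \
    --name my-app-service \
    --multicontainer-config-type compose \
    --multicontainer-config-file ./docker-compose.yml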

Azure Devops - Docker Compose Build Image Not Found

I am having an issue getting an image to build in Azure DevOps from a docker-compose file.
It appears that the first issue is that the image does not build.
This is, I believe, causing the push step to fail, as there is no newly created image; it is just running an existing image.
What can I do to "force" the process to build an image from this to pass into our repo? Here is our current docker-compose file:
version: '3.4'
services:
  rabbit:
    image: rabbitmq:3.6.16-management
    labels:
      NAME: "rabbit"
    environment:
      - "RabbitMq/Host=localhost"
    ports:
      - "15672:15672"
      - "5672:5672"
    container_name: rabbit
    restart: on-failure:5
Here are the build and push steps (truncating the top, which doesn't really matter):
Build:
Push:
I spent a fair amount of time fighting with this today (similar issue, anyway). I do not believe the image being non-local is necessarily your problem.
Looks like you are using the "Docker Compose" Task in Azure DevOps for your Build. I had the same problem - I could get it to build fine, but couldn't ever seem to "pipe" the result to the Push Task. I suspect one could add another Task between them to resolve this, but there is an easier way.
Instead, try using the "Docker" task to do your build. Without really changing anything else, I was able to make that work, and the "Push" step next in line was happy as could be.
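In case it helps, a minimal sketch of what the "Docker" task can look like in pipeline YAML (the service connection name, repository, and Dockerfile path are assumptions):
  steps:
    - task: Docker@2
      displayName: Build and push image
      inputs:
        command: buildAndPush
        containerRegistry: my-acr-connection   # assumed service connection name
        repository: myteam/myimage             # assumed repository name
        Dockerfile: '**/Dockerfile'
        tags: |
          $(Build.BuildId)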

Separate Dev and Production instances and database

I have a web application hosted on a server; it uses virtualenv to separate dev and prod instances. Both instances share the same postgres database (all on the same server).
I am kind of new to Docker, and I would like to replace the dev and prod instances with Docker containers, each linked to its own dev or prod postgres container (or a similar effect, so that a code change in development will not affect the production database).
What is the best design for this scenario? Should I have the dev and prod containers mapped to different ports? Can I have 1 Dockerfile for both dev and prod containers? How do I deal with 2 postgres containers?
Your requirement seems not very complicated, so I think you can run 2 pairs of containers (each pair has one application container and one postgres container) to achieve this. The basic structure is described below:
devContainer---> pgsDBContainer:<port_number1> ---> dataVolume1
prodContainer---> pgsDBContainer:<port_number2> ---> dataVolume2
Each container pair has one dedicated port number and one dedicated volume. The port numbers are used by the dev or prod application to connect to the corresponding postgres database, which should be easy to understand. But the volume is another story.
Please read the Manage data in containers doc for container volumes. As you mentioned that "a code change in development will not affect production database", you should have two separate volumes for the postgres containers, so the data of the two databases will not get mixed up.
Can I have 1 dockerfile for both dev and prod containers?
Yes you can. Just as I mentioned, you should give each postgres container a different port and volume config when you start them with the docker run command. docker run has the --publish (-p) and --volume (-v) options for you to configure the port mapping and volume location.
Just a reminder: when you run a database in a container, you may need to consider data persistence in the container environment to avoid data loss caused by the container being removed. Some discussions of container data persistence can be found here and there.
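To make the two-pair idea concrete, here is a hedged docker run sketch (the application image name mywebapp, the ports, the passwords, and the volume names are all assumptions):
  # dev pair: its own named volume and its own host port
  docker volume create pgdata_dev
  docker run -d --name pgs-dev -p 5433:5432 -v pgdata_dev:/var/lib/postgresql/data -e POSTGRES_PASSWORD=devpass postgres
  docker run -d --name app-dev -p 8081:8080 --link pgs-dev:db mywebapp
  # prod pair: separate volume and port, so dev changes never touch prod data
  docker volume create pgdata_prod
  docker run -d --name pgs-prod -p 5432:5432 -v pgdata_prod:/var/lib/postgresql/data -e POSTGRES_PASSWORD=prodpass postgres
  docker run -d --name app-prod -p 8080:8080 --link pgs-prod:db mywebapp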

Build multiple images with Docker Compose?

I have a repository which builds three different images:
powerpy-base
powerpy-web
powerpy-worker
Both powerpy-web and powerpy-worker inherit from powerpy-base using the FROM keyword in their Dockerfile.
I'm using Docker Compose in the project to run a Redis and RabbitMQ container. Is there a way for me to tell Docker Compose that I'd like to build the base image first and then the web and worker images?
You can use depends_on to enforce an order, however that order will also be applied during "runtime" (docker-compose up), which may not be correct.
If you're only using compose to build images it should be fine.
You could also split it into two compose files: a docker-compose.build.yml which has depends_on for the build, and a separate one for running the images as services.
There is a related issue: https://github.com/docker/compose/issues/295
About running containers:
It was a bug before, but they fixed it in Docker 1.10.
https://blog.docker.com/2016/02/docker-1-10/
Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
About build:
You need to build the base image first.
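A minimal sketch of an explicit build order, assuming the three Dockerfiles live in ./base, ./web and ./worker (the paths and the compose service names web/worker are assumptions):
  # build and tag the base image first so the FROM powerpy-base lines resolve locally
  docker build -t powerpy-base ./base
  # then let Compose build the images that inherit from it
  docker-compose build web worker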