Azure DevOps - Docker Compose Build Image Not Found

I am having an issue getting an image to build in Azure Devops from a docker-compose file.
It appears that the first issue is that the image does not build.
This is, I believe, causing the push step to fail, since no new image is created; it is just running an existing image.
What can I do to "force" the process to build an image from this to push into our repo? Here is our current docker-compose file:
version: '3.4'
services:
  rabbit:
    image: rabbitmq:3.6.16-management
    labels:
      NAME: "rabbit"
    environment:
      - "RabbitMq/Host=localhost"
    ports:
      - "15672:15672"
      - "5672:5672"
    container_name: rabbit
    restart: on-failure:5
Here are the build and push steps (truncating the top, which doesn't really matter):
Build:
Push:

I spent a fair amount of time fighting with this today (a similar issue, anyway). I do not believe the image being non-local is necessarily your problem.
It looks like you are using the "Docker Compose" Task in Azure DevOps for your build. I had the same problem: I could get it to build fine, but couldn't ever seem to "pipe" the result to the Push Task. I suspect one could add another Task between them to resolve this, but there is an easier way.
Instead, try using the "Docker" Task to do your build. Without really changing anything else, I was able to make that work, and the "Push" step next in line was happy as could be.
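For reference, here is a rough YAML sketch of the Docker task doing the build and push in one step; the repository name, service connection name, and tag are placeholders, not values from the question:
- task: Docker@2
  displayName: Build and push image
  inputs:
    command: buildAndPush
    containerRegistry: my-registry-connection   # placeholder: your Docker registry service connection
    repository: myteam/myapp                    # placeholder: repository in your registry
    dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)
With buildAndPush the task pushes the image it just built, so there is no separate result that needs to be "piped" into a later Push step.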

Related

What does x-airflow-common do in the airflow docker-compose.yaml file

Decided to try and really understand the docker-compose.yaml file for airflow. At the beginning of the file there is this piece of code.
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.5}
What I'm gathering is that x-airflow-common is defining a reusable block, and that &airflow-common says "any reference in this file that points to *airflow-common should look here". That is why further down we see
<<: *airflow-common
Which says "look in this docker-compose file for the settings declared in airflow-common". Then, when each service (scheduler, celery worker, etc.) runs its command against that image, the airflow image sees those commands and knows what type of container to spin up.
Hoping someone can confirm/correct my assumptions or point me to good documentation for this. I've been searching the past two days, but have been unable to locate anything that "dissects" this file.
This uses a YAML feature called anchors, which Docker Compose supports. It allows you to create a sort of template block and then define other services that are based on that template, but replace certain settings in it.
This section on the Compose specification docs can probably explain it better than I can.
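As a minimal sketch of how the anchor and the merge key interact (the block and service names here are made up, not taken from the Airflow file):
x-app-common: &app-common
  image: debian:buster
  environment:
    LOG_LEVEL: info

services:
  scheduler:
    <<: *app-common            # pull in everything from the template
    command: run-scheduler
  worker:
    <<: *app-common
    command: run-worker
    environment:
      LOG_LEVEL: debug         # a re-declared key replaces the whole anchored mapping (YAML merge is shallow)
Both services start from the anchored template; anything declared again on the service wins over what the template provided.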

Running specific container on predefined host with docker stack

I have spent a lot of time trying to figure out the following scenario, but no luck up until now. So here is the case.
I have 2 machines (machine A with IP 123 and machine B with IP 456); 123 is the manager node and 456 is the worker node.
Now there are 2 services that I would like to run: Service-A on IP 123 and Service-B on IP 456. For that I am using the following compose file:
version: '3.8'

networks:
  same-network:

services:
  ser1-service:
    container_name: ser1-service
    image: ser1
    networks:
      - same-network
    ports:
      - 9057:7057
    entrypoint: ["java","-jar","ser1.jar"]
  ser2-service:
    container_name: ser2-service
    image: ser2
    networks:
      - same-network
    ports:
      - 9052:7052
    entrypoint: ["java","-jar","ser2.jar"]
Now, when I start it using docker stack deploy, it places the services randomly. What I want is to make sure that every time, Service-A is deployed on IP 123 and Service-B on IP 456.
Just to add one thing: I have downloaded both images on both servers. The reason is that, in the actual scenario, we have lots of services and lots of hosts with hard bindings to each other. In addition, I want the images to be downloaded at run time on each worker node.
Any help from anyone would be highly appreciated.
For the first issue (making sure Service-A goes to IP 123), the closest thing I can suggest is to add a deploy section with a placement constraint to your stack file. One of the available placement constraints is by hostname, which may be close enough (though not exactly by IP address). Another viable placement constraint is by node labels (which you add yourself to the nodes); a sketch of that follows the hostname example below.
https://docs.docker.com/compose/compose-file/compose-file-v3/#placement
https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint
  ser1-service:
    container_name: ser1-service
    image: ser1
    ...
    deploy:
      placement:
        constraints: [node.hostname == hostname_of_ip_123]
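For the node-label alternative mentioned above, a minimal sketch (the label name role and its value queue-host are made up for illustration). First label the worker node from the manager:
docker node update --label-add role=queue-host hostname_of_ip_456
Then constrain the service to nodes carrying that label:
  ser2-service:
    image: ser2
    ...
    deploy:
      placement:
        constraints: [node.labels.role == queue-host]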
For the second issue, I think you're asking about making sure the images are downloaded each time (a bit unsure due to the wording, though). I believe you want the "--resolve-image=always" flag on your docker stack deploy command:
https://docs.docker.com/engine/reference/commandline/stack_deploy/
--resolve-image always (API 1.30+, Swarm): Query the registry to resolve image digest and supported platforms ("always" | "changed" | "never")
docker stack deploy -c your_stack_file.yml --resolve-image=always your_stack_name

Exclude services from starting with docker-compose

Use Case
The docker-compose.yml defines multiple services which represent the full application stack. In development mode, we'd like to dynamically exclude certain services, so that we can run them from an IDE.
As of Docker Compose 1.28 it is possible to assign profiles to services, as documented here, but as far as I understand it only allows specifying which services shall be started, not which ones shall be excluded.
Another option I could imagine is to split the "excludable" services into their own docker-compose.yml file, but all of this seems kind of tedious to me.
Do you have a better way to exclude services?
It seems we both overlooked a certain very important thing about profiles and that is:
Services without a profiles attribute will always be enabled
So if you run docker-compose up with a file like this:
version: "3.9"
services:
database:
image: debian:buster
command: echo database
frontend:
image: debian:buster
command: echo frontend
profiles: ['frontend']
backend:
image: debian:buster
command: echo backend
profiles: ['backend']
It will start the database container only. If you run it with docker-compose --profile backend up, it will bring up the database and backend containers. To start everything you need docker-compose --profile backend --profile frontend up, i.e. the --profile flag can be passed several times.
That seems to me the best way to keep docker-compose from running certain containers: you just need to mark them with a profile and it's done. I suggest you give the profiles reference a second chance as well; apart from some good examples, it explains how the feature interacts with service dependencies.
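As a usage note, the same selection can be made without repeating the flag by setting the COMPOSE_PROFILES environment variable, which is part of the documented profiles feature:
COMPOSE_PROFILES=frontend,backend docker-compose up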

How to deploy multiple docker images to an Azure App Service through Azure Pipelines

I have a docker-compose .yml file that I use to manually install and run multiple docker images on a Linux Azure App Service. My next step is to automate this through Azure Pipelines, which I have been successful in doing for a single image, but I can't figure out how to do it for multiple images on the same App Service instance.
This is a client requirement to run multiple images on the same App Service instance. I have some flexibility if there is a much better way, but cost is a factor.
I'm specifically looking for the type of Task to add to my release pipeline and if there are any examples or documentation I can read. So far I haven't found anything that really seems to fit the bill, but I'm not a DevOps engineer, so it's likely I am just not asking the question correctly.
Here is an example of the docker compose yml file I have.
version: '3.7'
services:
  exampleapi:
    image: examplecontainerregistry.azurecr.io/example.api:latest
    container_name: example.api
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  certprojectservice:
    image: examplecontainerregistry.azurecr.io/example.certproject.service:latest
    container_name: example.certproject.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  unityemailservice:
    image: examplecontainerregistry.azurecr.io/example.email.service:latest
    container_name: example.email.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  eventconsumerservice:
    image: examplecontainerregistry.azurecr.io/example.eventconsumers.service:latest
    container_name: example.eventconsumers.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  webhookresponseservice:
    image: examplecontainerregistry.azurecr.io/example.webhookresponse.service:latest
    container_name: example.webhookresponse.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
  unitywebhooksservice:
    image: examplecontainerregistry.azurecr.io/example.webhooks.service:latest
    container_name: example.webhooks.service
    volumes:
      - example:/mnt/example
      - common:/mnt/common
Not sure whether the method I used suits your case, but you can have a look.
1) To run multiple Docker images in one Azure App Service instance, I first need to make sure the App Service type supports Docker Compose:
Note: Since my images are stored in ACR, I connect this App Service to the ACR I used.
2) Upload the docker-compose.yml into that configuration.
3) The third step, and the most important one, is to enable Continuous Deployment. The significance of this step is that once new images are pushed to the ACR connected to the current App Service, it automatically obtains the latest image from the ACR and then deploys as the docker-compose file is configured.
After enabling Continuous Deployment, click on Show URL to get the webhook URL:
4) Go to ACR, then select Webhooks from the left panel. Click Add, put the webhook URL copied from the App Service into Service URL, and save it.
Now, every time you push a new version of an image to ACR, the Azure App Service will trigger a redeploy of the containers using the most recent image, and I do not need to configure a pipeline with a deploy task in Azure DevOps.
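If you would rather script step 4 than click through the portal, here is a rough Azure CLI sketch; the registry name, webhook name, and URL are placeholders:
az acr webhook create --registry examplecontainerregistry --name appservicecd --actions push --uri "<webhook URL copied from the App Service>"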
So, for anyone interested, the way I was able to solve my exact situation was to make sure that during the build pipeline I pushed the docker-compose file as a Pipeline Artifact. Then in the release pipeline I was able to download that artifact and use it in an Azure CLI task.
The CLI task used the "az webapp config container set" command. Reasonably simple once I figured it out, but a more fit-for-purpose task would have been nice.
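For reference, a minimal sketch of that CLI call; the app name, resource group, and compose file path (pointing at the downloaded artifact) are placeholders:
az webapp config container set --name <app-service-name> --resource-group <resource-group> --multicontainer-config-type COMPOSE --multicontainer-config-file docker-compose.yml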

Is it possible to build a docker image without pushing it?

I want to build a docker image in my pipeline and then run a job inside it, without pushing or pulling the image.
Is this possible?
It's by design that you can't pass artifacts between jobs in a pipeline without using some kind of external resource to store it. However, you can pass between tasks in a single job. Also, you specify images on a per-task level rather than a per-job level. Ergo, the simplest way to do what you want may be to have a single job that has a first task to generate the docker-image, and a second task which consumes it as the container image.
In your case, you would build the docker image in the build task and use docker export to export the image's filesystem to a rootfs, which you can put into the output (my-task-image). Keep in mind the particular schema the rootfs output needs to match: you will need rootfs/... (the extracted 'docker export') and a metadata.json, which can just contain an empty JSON object. You can look at the in script within the docker-image-resource for more information on how to match the schema: https://github.com/concourse/docker-image-resource/blob/master/assets/in. Then, in the subsequent task, you can add the image parameter in your pipeline yml as such:
- task: use-task-image
  image: my-task-image
  file: my-project/ci/tasks/my-task.yml
in order to use the built image in the task.
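As a rough sketch of what the first task's script could do to produce an output matching that schema (the image name my-app and the output directory my-task-image are assumptions, and the task needs to be able to run Docker):
#!/bin/sh
set -e

# Build the image inside the build task.
docker build -t my-app .

# Export the image filesystem into the rootfs/ layout the next task expects.
cid=$(docker create my-app)
mkdir -p my-task-image/rootfs
docker export "$cid" | tar -xf - -C my-task-image/rootfs
docker rm "$cid"

# metadata.json can be an empty JSON object.
echo '{}' > my-task-image/metadata.json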
UPDATE: the PR was rejected
This answer doesn't currently work, as the "dry_run" PR was rejected. See https://github.com/concourse/docker-image-resource/pull/185
I will update here if I find an approach which does work.
The "dry_run" parameter which was added to the docker resource in Oct 2017 now allows this (github pr)
You need to add a dummy docker resource like:
resources:
- name: dummy-docker-image
  type: docker-image
  icon: docker
  source:
    repository: example.com
    tag: latest
- name: my-source
  type: git
  source:
    uri: git@github.com:me/my-source.git
Then add a build step which pushes to that docker resource but with "dry_run" set so that nothing actually gets pushed:
jobs:
- name: My Job
  plan:
  - get: my-source
    trigger: true
  - put: dummy-docker-image
    params:
      dry_run: true
      build: path/to/build/scope
      dockerfile: path/to/build/scope/path/to/Dockerfile