Error initializing container job in Hosted Azure Pipelines - azure-devops

I want to run a skopeo container as a container job.
I keep getting this error message during the Initialize containers step:
Error response from daemon: Container 7e741e4aafb30bb89e1dfb830c1cb69fa8d47d219f28cc7b8e57727253632256 is not running
My pipeline looks like this:
- job: publish_branch_image
  pool:
    vmImage: ubuntu-latest
  container: docker.io/ananace/skopeo:latest
  steps:
  - script: |
      # clean branch name for image name
      export COMMIT_IMAGE="$(Image.TagName)"
      export TARGET_IMAGE="$(Image.Name)":$(echo $(Build.SourceBranch) | sed 's./.-.g')
      echo "Pushing to ${TARGET_IMAGE}"
      skopeo copy docker://${COMMIT_IMAGE} docker://${TARGET_IMAGE} --src-creds="$(Registry.USER):$(Registry.PASSWORD)" --dest-creds="$(Registry.USER):$(Registry.PASSWORD)"
    displayName: publish-branch-release-image

According to the error message, the container is not running. As a first check, we could run docker pull docker.io/ananace/skopeo:latest to pull the image and then start it with docker run docker.io/ananace/skopeo:latest to confirm the image can actually run, before using it as a container job.
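A minimal sketch of that check as a plain script step (not a container job); the job name is illustrative, and the entrypoint override assumes the skopeo binary is on the image's PATH:

- job: debug_skopeo_image
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: |
      # pull the image and confirm skopeo runs inside it
      docker pull docker.io/ananace/skopeo:latest
      docker run --rm --entrypoint skopeo docker.io/ananace/skopeo:latest --version
    displayName: verify-skopeo-image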
Update 1
Thanks to michiel for sharing. According to the doc Endpoints and Linux-based containers:
Containers can be hosted on registries other than Docker Hub. To host an image on Azure Container Registry or another private container registry, add a service connection to the private registry. Then you can reference it in a container spec:
container:
  image: xxx/xxx:tag
  endpoint: xxx
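Applied to the job above, this could look roughly like the sketch below (an endpoint is only needed for a private registry, and my-registry-connection is a placeholder service connection name):

resources:
  containers:
  - container: skopeo
    image: ananace/skopeo:latest
    endpoint: my-registry-connection

jobs:
- job: publish_branch_image
  pool:
    vmImage: ubuntu-latest
  # reference the container resource by its alias
  container: skopeo
  steps:
  - script: skopeo --version
    displayName: check-skopeo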

Related

How to use a Linux container service with a windows server pipeline image

I have a .NET Framework solution that builds a migration assembly file. Currently, it builds without any issues on a hosted agent using the Hosted Windows 2019 with VS2019 pool.
I also have a Linux container database using the mcr.microsoft.com/mssql/server base image. In the pipeline, I was hoping to create a container service from my database image that I could execute migrations against. Eventually, I wanted this to be a build policy to prevent migrations from being added that would fail when promoted to an actual environment.
I'm starting to question whether this is a scenario that service containers can handle. Unless I change the agent image to ubuntu-latest, the Initialize containers step will fail because it can't run a Linux container on the Windows agent image.
Is there a way I can structure the YAML such that I can run a Linux container service and interact with it from a Windows pool stage?
Here is a YAML example where the container service is created, but none of the existing build steps (removed from the example) will execute because they depend on the Windows Server image.
resources:
  containers:
  - container: local
    endpoint: acr-endpointexample
    ports:
    - 1433:1433
    image: example.azurecr.io/ci/database/mssql:latest
    options: -e "ACCEPT_EULA=Y" -e MSSQL_COLLATION="Latin1_General_BIN" -h localhost --name "local"
trigger:
  batch: true
  branches:
    include:
    - feature/ado-host-agent-container-support
pool:
  vmImage: ubuntu-latest
strategy:
  matrix:
    Release:
      BuildConfiguration: 'Release'
  maxParallel: 2
services:
  database: local
steps:
- checkout: self
  submodules: true
  clean: true
  persistCredentials: true
- task: GitVersion@5
  displayName: Set version
  inputs:
    runtime: 'full'
    updateAssemblyInfo: true
    updateAssemblyInfoFilename: '$(Build.SourcesDirectory)\SolutionAssemblyInfo.cs'
    additionalArguments: '/output buildserver'
    configFilePath: 'GitVersion.yml'

Is anyone else having trouble using host.docker.internal in azure pipelines

It seems to have stopped working recently.
I use docker compose to run some microservices so that unit tests can use them. Some of the microservices talk to each other, so they use a configuration value for the base URL. This is an example of my docker-compose.yml
version: '3.8'
services:
  microsa:
    container_name: api.a
    image: *****
    restart: always
    ports:
    - "20001:80"
  microsb:
    container_name: api.b
    image: *****
    restart: always
    ports:
    - "20002:80"
    depends_on:
      microsa:
        condition: service_healthy
    environment:
    - ApiUrl=http://host.docker.internal:20001/api/v1/test
This works perfectly with Docker Desktop on my Windows machine, but it will not work in Azure Pipelines on either ubuntu-latest or windows-latest:
- task: DockerCompose@0
  displayName: 'Run docker compose for unit tests'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: ${{ parameters.azureResourceManagerConnection }}
    azureContainerRegistry: ${{ parameters.acrUrl }}
    dockerComposeFile: 'docker-compose.yml'
    action: 'Run services'
When api.b attempts to call api.a, I get the following exception:
No such host is known. (host.docker.internal:20001)
Using http://microsa:20001/... gives the following error:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (microsa:20001)
I've also tried http://localhost:20001/...
I've also confirmed that microsa is accessible directly, so there are no errors within that container.
I've also tried running docker-compose up via AzureCLI@2 instead of DockerCompose@0, with the same results.
I ran into the same issue but couldn't use the service DNS name, because I'm sharing a configuration file between the dependencies and the test project, and it contains the connection strings for various services defined in the docker-compose file. The test project (which is not running inside docker-compose) needs access to some of those services as well.
To solve it, all I had to do was add a bash script at the start of the pipeline that adds a new record to the hosts file:
steps:
- bash: |
    echo '127.0.0.1 host.docker.internal' | sudo tee -a /etc/hosts
  displayName: 'Update Hosts File'
I have no idea why http://host.docker.internal:20001 is not working now, even though I'm certain it used to...
However, using http://microsa/... (without the port number) does work. Inside the compose network, services reach each other by service name on the container port (80 here); the 20001:80 mapping only applies on the host.
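For the service-to-service call, the corresponding change in the compose file from the question would be something like this (a sketch; only the environment value changes, and the path is copied from the original):

  microsb:
    environment:
      # service name + container port (80), resolved on the compose network
      - ApiUrl=http://microsa/api/v1/test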

How to cache docker-compose build inside github-action

Is there any way to cache docker-compose so that it will not build again and again?
here is my action workflow file:
name: Github Action
on:
  push:
    branches:
    - staging
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
    - uses: actions/checkout@v1
    - name: Bootstrap app on Ubuntu
      uses: actions/setup-node@v1
      with:
        node-version: '12'
    - name: Install global packages
      run: npm install -g yarn prisma
    - name: Install project deps
      if: steps.cache-yarn.outputs.cache-hit != 'true'
      run: yarn
    - name: Build docker-compose
      run: docker-compose -f docker-compose.test.prisma.yml up --build -d
I want to cache the docker build step. I have tried adding if: steps.cache-docker.outputs.cache-hit != 'true' so that the build step runs only on a cache miss, but it didn't work.
What you are referring to is called "docker layer caching", and it is not yet natively supported in GitHub Actions.
This is discussed extensively in several places, like:
Cache docker image forum thread
Cache a Docker image built in workflow forum thread
Docker caching issue in actions/cache repository
As mentioned in the comments, there are some 3rd party actions that provide this functionality (like this one), but for such a core and fundamental feature, I would be cautious with anything that is not officially supported by GitHub itself.
For those arriving here via Google, this is now "supported", or at least it is working: https://github.community/t/use-docker-layer-caching-with-docker-compose-build-not-just-docker/156049.
The idea is to build the images using docker (and its cache) and then use docker compose to run (up) them.
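A rough sketch of that approach with docker/build-push-action and the GitHub Actions cache backend; the action versions, the username/imagename tag, and the compose service layout are assumptions, not taken from the answer:

steps:
- uses: actions/checkout@v4
- uses: docker/setup-buildx-action@v3
- name: Build image with layer cache
  uses: docker/build-push-action@v5
  with:
    context: .
    # keep the built image in the local docker store
    load: true
    # must match the image: name used in docker-compose.yml
    tags: username/imagename:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
- name: Run services with the prebuilt image
  run: docker compose up -d

Because the compose service declares the same image: name, docker compose up reuses the already-built image instead of rebuilding it.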
If using docker/bake-action or docker/build-push-action and you want to access a cached image in subsequent steps:
Use load: true to save the image.
Use the same image name as the cached image across steps in order to skip rebuilds.
Example:
...
-
  name: Build and push
  uses: docker/bake-action@master
  with:
    push: false
    load: true
    set: |
      web.cache-from=type=gha
      web.cache-to=type=gha
-
  name: Test via compose
  run: docker compose run web tests
...
services:
  web:
    build:
      context: .
    image: username/imagename
    command: echo "Test run successful!"
See the Docker team's responses:
How to access the bake-action cached image in subsequent steps?
How to use this plugin for a docker-compose?
How to share layers with Docker Compose?
Experiment on caching docker compose images in GitHub Actions

Docker Compose bind mount doesn't work in GitHub Actions

If I run a Docker Compose command in GitHub Actions which uses a bind mount, it says the source directory doesn't exist. Here's the error.
Cannot create container for service chat: invalid mount config for type "bind": bind source path does not exist: /__w/omni-chat/omni-chat
I think the issue is that the project root directory is being passed to GitHub Actions incorrectly. I specified the bind source as the conventional ., but I don't know what caveats GitHub Actions has regarding that.
Here's a simplified version of my workflow.
on: push
jobs:
  test-server:
    runs-on: ubuntu-latest
    container: docker/compose
    steps:
    - uses: actions/checkout@v2
    - run: docker-compose run --rm chat gradle test
Here's a simplified version of my Docker Compose file.
version: '3.7'
services:
  chat:
    image: gradle:6.3-jdk8
    command: bash
    volumes:
    - type: bind
      source: .
      target: /home/gradle
    - type: volume
      source: gradle-cache
      target: /home/gradle/.gradle
volumes:
  gradle-cache:
If you need the full details, here's the exact run.
It turns out that you should use the preinstalled Docker Compose installation. Simply removing the container: docker/compose line from the job lets the bind mount work, since it's no longer a Docker-in-Docker scenario.
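Based on that, the simplified workflow from the question would become something like this (only the container: line is dropped):

on: push
jobs:
  test-server:
    # no container: docker/compose, so the runner's preinstalled Docker and Compose are used
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - run: docker-compose run --rm chat gradle test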

Azure DevOps execute container job in container agent

I start the Azure DevOps container agent with:
docker run -e VSTS_ACCOUNT='kagarlickij' -e VSTS_POOL='Self-Hosted' -e VSTS_TOKEN='a***q' -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce
My pipeline has the following lines:
pool:
  name: Self-Hosted
container: kagarlickij/packer-ansible-azure-docker-runtime:2.0.0
...and I get: [error]Container feature is not supported when agent is already running inside container. Please reference documentation (https://go.microsoft.com/fwlink/?linkid=875268)
Is it possible for Azure DevOps to execute a container job inside a container agent?