Is anyone else having trouble using host.docker.internal in azure pipelines - docker-compose

It seems to have stopped working recently.
I use docker compose to run some microservices so that unit tests can use them. Some of the microservices talk to each other, so they use a configuration value for the base URL. Here is an example of my docker-compose.yml:
version: '3.8'
services:
  microsa:
    container_name: api.a
    image: *****
    restart: always
    ports:
      - "20001:80"
  microsb:
    container_name: api.b
    image: *****
    restart: always
    ports:
      - "20002:80"
    depends_on:
      microsa:
        condition: service_healthy
    environment:
      - ApiUrl=http://host.docker.internal:20001/api/v1/test
This works perfectly with Docker Desktop on my Windows machine, but it does not work in Azure Pipelines on either ubuntu-latest or windows-latest:
- task: DockerCompose@0
  displayName: 'Run docker compose for unit tests'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: ${{ parameters.azureResourceManagerConnection }}
    azureContainerRegistry: ${{ parameters.acrUrl }}
    dockerComposeFile: 'docker-compose.yml'
    action: 'Run services'
When api.b attempts to call api.a, I get the following exception:
No such host is known. (host.docker.internal:20001)
Using http://microsa:20001/... gives the following error:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (microsa:20001)
I've also tried http://localhost:20001/...
I've also confirmed that microsa is accessible directly, so there are no errors within that container.
I've also tried running docker-compose up via AzureCLI@2 instead of DockerCompose@0, with the same results.

I ran into the same issue but couldn't use the service DNS name, because I'm sharing a configuration file (containing the connection strings for the services defined in the docker-compose file) between the dependencies and the test project. The test project, which does not run inside docker-compose, needs access to some of those services as well.
To solve it, all I had to do was add a bash script at the start of the pipeline that adds a new record to the hosts file:
steps:
  - bash: |
      echo '127.0.0.1 host.docker.internal' | sudo tee -a /etc/hosts
    displayName: 'Update Hosts File'

I have no idea why http://host.docker.internal:20001 is not working now, even though I'm certain it used to...
However, using http://microsa/... (without the port number) does work.
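Presumably that works because containers on the same Compose network reach each other by service name on the container's internal port (80 here), not the published host port. A minimal sketch of the change, assuming the compose file from the question:

environment:
  # hypothetical replacement for the host.docker.internal URL:
  # talk to microsa by service name on its container port (80), so no port is needed in the URL
  - ApiUrl=http://microsa/api/v1/test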

Related

Is there any way to use compose v2 in bitbucket pipeline?

First of all: no, I cannot switch from Bitbucket Pipelines to something more appropriate; unfortunately, it is a direct requirement.
I have searched other SO questions and Google; the following two questions are related:
Bitbucket Pipeline - docker compose error (no answer)
How to use docker compose V2 in Bitbucket Pipelines (answer not working even when copied literally into the pipeline definition, for one of the reasons below)
Working v1 main pipeline (only the significant step and job; the full pipeline is larger, of course):
image: python:3.10

definitions:
  steps:
    - step: &run-tests
        name: Test
        image: docker/compose:debian-1.29.2
        caches:
          - docker
        services:
          - docker
        script:
          - COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
          # - ... (wait until ready and run tests, ignored, because error happens earlier)

pipelines:
  default:
    - parallel:
        - step: *run-tests
Encountered errors
I'll refer to them multiple times, so let's define short aliases:
403
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
listing workers for Build: failed to list workers: Unavailable: connection error: desc = "transport: Error while dialing unable to upgrade to h2c, received 403"
privileged
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 2.8s done
#1 creating container buildx_buildkit_default 0.0s done
#1 ERROR: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
------
> [internal] booting buildkit:
------
Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
Unfortunately, there is no docker/compose v2 image, and our deployment uses v2, so some inconsistencies happen. I'm trying to use v2 in the pipeline now. I replaced docker-compose references with docker compose and am trying to prevent this command from crashing. Important note: I need Docker BuildKit and cannot go without it, because I use Dockerfile.name.dockerignore files that are separate for prod and dev, and Docker without BuildKit does not support them (builds simply fail).
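For context, a rough sketch of the kind of layout this relies on (the file names here are illustrative, not taken from the project): with BuildKit enabled, docker build picks up the .dockerignore file named after the Dockerfile passed via -f, which is why dropping BuildKit breaks these builds.

# Illustrative layout (example names):
#   Dockerfile                     <- dev image
#   Dockerfile.dockerignore        <- ignore rules applied only to the dev build
#   Dockerfile.prod                <- prod image
#   Dockerfile.prod.dockerignore   <- ignore rules applied only to the prod build
# BuildKit matches the ignore file to the Dockerfile selected with -f:
DOCKER_BUILDKIT=1 docker build -f Dockerfile.prod -t reporting:prod .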
Things I tried (debug commands like docker version and docker compose version always worked fine in these cases):
- Using image: linuxserver/docker-compose:2.10.2-v2. Result: 403.
- Using image: library/docker:20.10.18:
  - with no further changes. Result: privileged.
  - adding docker buildx create --driver-opt image=moby/buildkit:v0.10.4-rootless --use as a step. Result: privileged (the logs show this image is actually used: pulling image moby/buildkit:v0.10.4-rootless 6.3s done).
- Using no explicit image (relying on the Bitbucket docker installation):
  - with the official compose installation method (result: 403):
    - mkdir -p /usr/local/lib/docker/cli-plugins/
    - wget -O /usr/local/lib/docker/cli-plugins/docker-compose https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64
    - chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
  - with the solution from the 2nd link above (result: 403, but with partial success: it pulled the two services that do not require building - postgres and redis - and only failed after that).
In case it is important, here is the compose file for CI (only healthchecks trimmed; everything else untouched):
# We need this file without volumes due to bitbucket limitations.
version: '3.9'

services:
  db:
    image: mariadb:10.8.3-jammy
    env_file: .env.ci
    volumes:
      - ./tests/db_init/:/docker-entrypoint-initdb.d
    networks:
      - app_network

  redis:
    image: redis:alpine
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - app_network

  app:
    build:
      context: .
      args:
        - APP_USER=reporting
        - APP_PORT
    env_file: .env.ci
    depends_on:
      - db
      - redis
    networks:
      - app_network

  nginx:
    build:
      context: .
      dockerfile: configs/Dockerfile.nginx
    env_file: .env.ci
    environment:
      - APP_HOST=app
    ports:
      - 80:80
    depends_on:
      - app
    networks:
      - app_network

networks:
  app_network:
    driver: bridge
For now I have reverted everything and keep using v1. The limitations of Bitbucket Pipelines drive me mad: I can easily run the same stuff in GitHub Actions, but here I had to remove one service (it uses Docker directory mounting, so it cannot run on Bitbucket) and spend a whole day trying to upgrade compose. Sorry for the tone; this really makes me want to quit Bitbucket forever and never touch it again.

How to use a Linux container service with a windows server pipeline image

I have a .NET Framework solution that builds a migration assembly file. Currently, it builds without any issues on a hosted agent using the Hosted Windows 2019 with VS2019 pool.
I also have a Linux container database based on the mcr.microsoft.com/mssql/server image. In the pipeline, I was hoping to create a container service from my database that I could execute migrations against. Eventually, I want this to become a build policy that prevents migrations from being added that would fail when run against an actual environment.
I'm starting to question whether this is a scenario that service containers can handle. Unless I change the image to ubuntu-latest, the generated "Initialize containers" step fails because it cannot run a Linux container on a Windows image agent.
Is there a way I can structure the YAML such that I can run a Linux container service and interact with it from a Windows pool stage?
Here is a YAML example where the container service is created, but none of the existing build steps (removed from the example) will execute, because they depend on the Windows Server image.
resources:
  containers:
    - container: local
      endpoint: acr-endpointexample
      ports:
        - 1433:1433
      image: example.azurecr.io/ci/database/mssql:latest
      options: -e "ACCEPT_EULA=Y" -e MSSQL_COLLATION="Latin1_General_BIN" -h localhost --name "local"

trigger:
  batch: true
  branches:
    include:
      - feature/ado-host-agent-container-support

pool:
  vmImage: ubuntu-latest

strategy:
  matrix:
    Release:
      BuildConfiguration: 'Release'
  maxParallel: 2

services:
  database: local

steps:
  - checkout: self
    submodules: true
    clean: true
    persistCredentials: true
  - task: GitVersion@5
    displayName: Set version
    inputs:
      runtime: 'full'
      updateAssemblyInfo: true
      updateAssemblyInfoFilename: '$(Build.SourcesDirectory)\SolutionAssemblyInfo.cs'
      additionalArguments: '/output buildserver'
      configFilePath: 'GitVersion.yml'

Gcloud build trigger environment variable substitution in app.yaml for appEngine

I am trying to substitute variables in app.yaml with a Cloud Build trigger.
I added a substitution variable in the build trigger.
I added environment variables to app.yaml in a way that they can easily be substituted with build trigger variables, like this:
env_variables:
  SECRET_KEY: %SECRET_KEY%
Then I added a step in cloudbuild.yaml to substitute all %XXX% variables inside app.yaml with their values from the build trigger.
steps:
  - name: node:10.15.1
    entrypoint: npm
    args: ["install"]
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: bash
    args:
      - '-c'
      - |
        sed -i 's/%SESSION_SECRET%/'${_SESSION_SECRET}'/g' app.yaml
timeout: "1600s"
The problem is that Cloud Build throws this error:
Already have image (with digest): gcr.io/cloud-builders/gcloud
bash: _L/g: No such file or directory
Why? How can I make the substitution work in my app.yaml?
The app.yaml is at the root of the project, at the same level as cloudbuild.yaml.
UPDATED
I am trying to build and debug the Cloud Build config locally with this command:
sudo cloud-build-local --config=cloudbuild.yaml --write-workspace=../workspace --dryrun=false --substitutions=_SESSION_SECRET=test --push .
When I look at the app.yaml file afterwards, the substitution worked as expected and there is no error at all.
What is different about the Cloud Build environment?
OK, I finally decided to use a GitHub Action instead of Google Cloud build triggers, since the triggers aren't able to find their own app.yaml and manage the environment variables by themselves.
Here is how to do it:
My environment:
- App Engine, standard (not flex)
- Node.js Express application
- a PostgreSQL Cloud SQL instance
First, the setup:
1. Create a new Google Cloud project (or select an existing project).
2. Initialize your App Engine app with your project.
3. Create a Google Cloud service account or select an existing one.
4. Add the following Cloud IAM roles to your service account:
   - App Engine Admin - allows creation of new App Engine apps
   - Service Account User - required to deploy to App Engine as the service account
   - Storage Admin - allows upload of source code
   - Cloud Build Editor - allows building of source code
5. Download a JSON service account key for the service account.
6. Add the following secrets to your repository's secrets:
   - GCP_PROJECT: the Google Cloud project ID
   - GCP_SA_KEY: the downloaded service account key
The app.yaml
runtime: nodejs14
env: standard

env_variables:
  SESSION_SECRET: $SESSION_SECRET

beta_settings:
  cloud_sql_instances: SQL_INSTANCE
Then the GitHub Action:
name: Build and Deploy to GKE
on: push

env:
  PROJECT_ID: ${{ secrets.GKE_PROJECT }}
  DATABASE_URL: ${{ secrets.DATABASE_URL }}

jobs:
  setup-build-publish-deploy:
    name: Setup, Build, Publish, and Deploy
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '12'
      - run: npm install
      - uses: actions/checkout@v1
      - uses: ikuanyshbekov/app-yaml-env-compiler@v1.0
        env:
          SESSION_SECRET: ${{ secrets.SESSION_SECRET }}
      - shell: bash
        run: |
          sed -i 's/SQL_INSTANCE/'${{ secrets.DATABASE_URL }}'/g' app.yaml
      - uses: actions-hub/gcloud@master
        env:
          PROJECT_ID: ${{ secrets.GKE_PROJECT }}
          APPLICATION_CREDENTIALS: ${{ secrets.GCLOUD_AUTH }}
          CLOUDSDK_CORE_DISABLE_PROMPTS: 1
        with:
          args: app deploy app.yaml
To add secrets for the GitHub Action, go to Settings/Secrets in your repository.
Note that I could handle all of the substitution with the bash script, so I would not have to depend on the GitHub project "ikuanyshbekov/app-yaml-env-compiler@v1.0".
It's a shame that GAE doesn't offer an easier way to handle environment variables for app.yaml. I don't want to use KMS, since I also need to update the beta_settings/cloud_sql instance; I really needed to substitute everything in app.yaml.
This way I can make a specific action for the right environment and manage the secrets.
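For reference, a rough sketch of what that bash-only substitution could look like, assuming the placeholders used in the app.yaml above (SESSION_SECRET and SQL_INSTANCE); this is an illustration, not the exact step from the workflow:

- shell: bash
  env:
    SESSION_SECRET: ${{ secrets.SESSION_SECRET }}
    SQL_INSTANCE: ${{ secrets.DATABASE_URL }}
  run: |
    # replace each placeholder in app.yaml with the matching secret
    sed -i "s|\$SESSION_SECRET|${SESSION_SECRET}|g" app.yaml
    sed -i "s|SQL_INSTANCE|${SQL_INSTANCE}|g" app.yaml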
The entrypoint should be an executable; use /bin/bash or /bin/sh.
How to inspect inside the image (in general):
$ docker pull gcr.io/cloud-builders/gcloud
Using default tag: latest
latest: Pulling from cloud-builders/gcloud
...
$ docker images
REPOSITORY                     TAG       IMAGE ID       CREATED             SIZE
gcr.io/cloud-builders/gcloud   latest    8499764c4ef6   About an hour ago   4.01GB
$ docker run -ti --entrypoint '/bin/bash' 8499764c4ef6
root@60354dfb588a:/#
You can test your commands from there without having to send them to Cloud Build each time.
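Applied to the cloudbuild.yaml from the question, the substitution step might then look like this; a sketch following that advice, not a verified fix:

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: '/bin/bash'
  args:
    - '-c'
    - 'sed -i "s/%SESSION_SECRET%/${_SESSION_SECRET}/g" app.yaml'  # ${_SESSION_SECRET} is expanded by Cloud Build before the step runs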

How to cache docker-compose build inside github-action

Is there any way to cache docker-compose so that it will not build again and again?
Here is my action workflow file:
name: Github Action
on:
  push:
    branches:
      - staging

jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Bootstrap app on Ubuntu
        uses: actions/setup-node@v1
        with:
          node-version: '12'
      - name: Install global packages
        run: npm install -g yarn prisma
      - name: Install project deps
        if: steps.cache-yarn.outputs.cache-hit != 'true'
        run: yarn
      - name: Build docker-compose
        run: docker-compose -f docker-compose.test.prisma.yml up --build -d
I want to cache the docker build step. I have tried using if: steps.cache-docker.outputs.cache-hit != 'true' to build only on a cache miss, but it didn't work.
What you are referring to is called "docker layer caching", and it is not yet natively supported in GitHub Actions.
This is discussed extensively in several places, like:
Cache docker image forum thread
Cache a Docker image built in workflow forum thread
Docker caching issue in actions/cache repository
As mentioned in the comments, there are some 3rd party actions that provide this functionality (like this one), but for such a core and fundamental feature, I would be cautious with anything that is not officially supported by GitHub itself.
For those arriving here via Google, this is now "supported", or at least it is working: https://github.community/t/use-docker-layer-caching-with-docker-compose-build-not-just-docker/156049.
The idea is to build the images using docker (and its cache) and then use docker compose to run (up) them.
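As a sketch of that idea (the image name is made up here, and it assumes the compose service uses the same image: name), using the GitHub Actions cache backend:

- uses: docker/setup-buildx-action@v2
- uses: docker/build-push-action@v3
  with:
    context: .
    tags: username/imagename          # must match the image: used in docker-compose.yml
    load: true                        # make the built image available to the local Docker engine
    cache-from: type=gha
    cache-to: type=gha,mode=max
- run: docker compose up -d --no-build   # compose reuses the image built above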
If using docker/bake-action or docker/build-push-action and you want to access a cached image in subsequent steps:
- Use load: true to save the image.
- Use the same image name as the cached image across steps in order to skip rebuilds.
Example:

...
- name: Build and push
  uses: docker/bake-action@master
  with:
    push: false
    load: true
    set: |
      web.cache-from=type=gha
      web.cache-to=type=gha
- name: Test via compose
  run: docker compose run web tests
...
services:
  web:
    build:
      context: .
    image: username/imagename
    command: echo "Test run successful!"
See the Docker team's responses:
How to access the bake-action cached image in subsequent steps?
How to use this plugin for a docker-compose?
How to share layers with Docker Compose?
Experiment on caching docker compose images in GitHub Actions

Docker Compose bind mount doesn't work in GitHub Actions

If I run a Docker Compose command in GitHub Actions that uses a bind mount, it says the source directory doesn't exist. Here's the error:
Cannot create container for service chat: invalid mount config for type "bind": bind source path does not exist: /__w/omni-chat/omni-chat
I think the issue is that the root directory is incorrectly being passed to GitHub Actions. I specified the absolute path as the conventional ., but I don't know what caveats GitHub Actions has regarding that.
Here's a simplified version of my workflow.
on: push
jobs:
  test-server:
    runs-on: ubuntu-latest
    container: docker/compose
    steps:
      - uses: actions/checkout@v2
      - run: docker-compose run --rm chat gradle test
Here's a simplified version of my Docker Compose file.
version: '3.7'
services:
  chat:
    image: gradle:6.3-jdk8
    command: bash
    volumes:
      - type: bind
        source: .
        target: /home/gradle
      - type: volume
        source: gradle-cache
        target: /home/gradle/.gradle
volumes:
  gradle-cache:
If you need the full details, here's the exact run.
It turns out that you should use the preinstalled Docker Compose installation. Simply removing the specified container allows bind mounts to work, since it's no longer a Docker-in-Docker scenario.
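A sketch of what that looks like against the workflow above: the same job with the container: line removed, so the step runs directly on the runner (which has Docker Compose preinstalled) and the checkout path exists for the bind mount.

on: push
jobs:
  test-server:
    runs-on: ubuntu-latest
    # no "container:" here - use the runner's preinstalled Docker Compose
    steps:
      - uses: actions/checkout@v2
      - run: docker-compose run --rm chat gradle test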