Azure pipeline docker fails copy with multiple projects - azure-devops

Copying Data.csproj to Data/ fails when building my app in Azure DevOps, though the first copy command, Api.csproj to Api/, works fine. Note that I did not specify buildContext in my azure-pipeline.yml file; when I did add buildContext: '$(Build.Repository.LocalPath)', it failed even on the first copy.
Any input or suggestions on how to fix this? I searched around, but neither adding the buildContext nor adding the folder to the csproj path seems to work. For example, COPY ["/Data/Data.csproj", "Data/"]
This is my folder structure (my azure-pipeline.yml file is outside the App folder):
App
- Api/
  - Api.csproj
  - Dockerfile
- Data/
  - Data.csproj
- Domain/
  - Domain.csproj
- App.sln
My Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Api.csproj", "Api/"]
COPY ["Data.csproj", "Data/"]
COPY ["Domain.csproj", "Domain/"]
RUN dotnet restore "Api/Api.csproj"
COPY . .
WORKDIR "/src/Api"
RUN dotnet build "Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Api.dll"]
parts of my azure-pipeline.yml
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: 'App'
        dockerfile: '**/Dockerfile'
        tags: |
          $(tag)
Here's the error:
Step 6/28 : WORKDIR /src
---> Running in 266a78d293ee
Removing intermediate container 266a78d293ee
---> 2d899fafdf05
Step 7/28 : COPY ["Api.csproj", "Api/"]
---> 92c8c1450c3c
Step 8/28 : COPY ["Data.csproj", "Data/"]
COPY failed: stat /var/lib/docker/tmp/docker-builder764823890/Data.csproj: no such file or directory
##[error]COPY failed: stat /var/lib/docker/tmp/docker-builder764823890/Data.csproj: no such file or directory
##[error]The process '/usr/bin/docker' failed with exit code 1

Okay, after trying many times, I was able to fix this by changing the Dockerfile and azure-pipelines.yml.
I think what fixed the issue was specifically setting the buildContext to 'App/' instead of the variable '$(Build.Repository.LocalPath)', whose exact value I'm not sure of.
I'll just post the parts I changed.
Dockerfile
COPY ["Api/Api.csproj", "Api/"]
COPY ["Data/Data.csproj", "Data/"]
COPY ["Domain/Domain.csproj", "Domain/"]
azure-pipelines.yml
inputs:
  command: buildAndPush
  repository: $(imageRepository)
  dockerfile: $(dockerfilePath)
  buildContext: 'App/'
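For anyone wondering why this works: the source paths in a COPY instruction are resolved relative to the Docker build context, not relative to the Dockerfile's location. With buildContext: 'App/', the task is roughly equivalent to this local command (illustrative; the image tag is made up):

docker build -f App/Api/Dockerfile -t app:latest App/

So COPY ["Data/Data.csproj", "Data/"] now resolves to App/Data/Data.csproj inside the context, whereas before, Data.csproj was looked up at the root of a context that didn't contain it there.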

Related

Github action cache image used multi-stage build

I have a multi-stage build that uses a Docker image containing all the node_modules and other dependencies needed for a build. I use it in a multi-stage Docker build to produce a production dist for a project. In the second stage I use a much smaller image to hold the dist files so they're easier to distribute.
Dockerfile for the build:
FROM ghcr.io/dancing-star/packages-image-full:latest AS builder
# Make the dir
WORKDIR /var/dancing-star
COPY . .
# Set the cache location
RUN npm config set cache /tmp/node-cache --global \
&& npm run prod \
&& rm -rf src \
&& ls -l
# Dist stage
FROM alpine
WORKDIR /var/dancing-star
COPY --from=builder ["/var/dancing-star/dist", "/var/dancing-star/assets", "/var/dancing-star/config", "/var/dancing-star/scripts", "/var/dancing-star/*.json", "/var/dancing-star/*.md", "/var/dancing-star/*.js", "/var/dancing-star/"]
RUN ls -l
#CMD [ "pm2-runtime", "start", "app.pm2.json", "--env", "production" ]
The ghcr.io/dancing-star/packages-image-full image contains the node_modules and works fine. I used a GitHub Action to cache ghcr.io/dancing-star/packages-image-full, but it doesn't seem to be caching well, since the build constantly pulls it again.
.github/workflows/docky.yaml
name: Build Docker
on:
  push:
    branches: [ main ]
    tags:
      - v*
env:
  IMAGE_NAME: project-docker
  BUILD_IMAGE_NAME: ghcr.io/naologic/dancing-star/packages-image-full
  BUILD_IMAGE_PATH: ghcr.io/naologic/dancing-star/packages-image-full:latest
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.DOCKER_DEPLOY_GITHUB }}
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          file: Dockerfile
          push: true
          tags: ${{ env.BUILD_IMAGE_PATH }}
          cache-from: type=registry,ref=${{ env.BUILD_IMAGE_NAME }}:buildcache
          cache-to: type=registry,ref=${{ env.BUILD_IMAGE_NAME }}:buildcache,mode=max
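For reference, this cache configuration corresponds roughly to the following local Buildx invocation (illustrative only, not part of the workflow):

docker buildx build \
  --cache-from type=registry,ref=ghcr.io/naologic/dancing-star/packages-image-full:buildcache \
  --cache-to type=registry,ref=ghcr.io/naologic/dancing-star/packages-image-full:buildcache,mode=max \
  --push -t ghcr.io/naologic/dancing-star/packages-image-full:latest .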
IMPORTANT: ghcr.io/dancing-star/packages-image-full is shared between many builds so ideally it would be a shared cache.
How should I use actions to cache ghcr.io/dancing-star/packages-image-full for the build?
My solution doesn't seem to be working very well:
- With the current action, the cache still pulls for 2 minutes before the build starts.
- Without the cache, build time is about the same.

Confused by the use of $(Build.ArtifactStagingDirectory) in separate stage from published Artifact

Problem:
$(Build.ArtifactStagingDirectory) is empty, despite being able to view the published Artifacts in the Azure DevOps Pipeline UI, which causes the pipeline to fail.
What I'm trying to do:
1. Build microservice (e.g., /api).
2. Run unit tests.
3. If the unit tests pass, publish the build as an Artifact.
4. Dockerize the build Artifact using buildContext.
This is based on advice here, here, and here.
Publish Stage Config
My config for publishing after unit tests have passed is the following:
- publish: $(System.DefaultWorkingDirectory)/${{ parameters.pathName }}
  artifact: ${{ parameters.pathName }}
  condition: succeeded()
$(System.DefaultWorkingDirectory) should be /home/vsts/work/1/s/ from what I gather.
${{ parameters.pathName }} is just api.
I can see correct artifacts are generated in the Azure DevOps Pipelines UI.
Docker buildAndPush Stage Config
My config for grabbing the artifact and using it in a Docker buildAndPush config is the following:
- task: Docker@2
  condition: contains(variables['servicesChanged'], '${{ parameters.serviceName }}')
  displayName: Build and Push ${{ parameters.pathName }} Docker image
  inputs:
    command: buildAndPush
    repository: $(imageRepository)-${{ parameters.pathName }}
    dockerfile: $(dockerfilePath)/${{ parameters.pathName }}/Dockerfile
    buildContext: $(Build.ArtifactStagingDirectory)
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      ${{ parameters.tag }}-${{ parameters.tagVersion }}
From what I gather, $(Build.ArtifactStagingDirectory) should be /home/vsts/work/1/a/.
However, it is empty and this stage fails.
$(dockerfilePath) is equal to $(Build.SourcesDirectory).
Dockerfile Config
Informational, but this is what the Dockerfile contains:
FROM python:3.8-slim
WORKDIR /app
EXPOSE 5000
COPY . .
RUN pip install -r requirements.txt
CMD ["gunicorn", "-b", ":5000", "--log-level", "info", "config.wsgi:application", "-t", "150"]
Project Structure
/project-root
  /admin
    package.json
    Dockerfile
  /api
    requirements.txt
    Dockerfile
  /client
    package.json
    Dockerfile
What I've Tried
dockerfile: $(dockerfilePath)/${{ parameters.pathName }}/Dockerfile
buildContext: $(Build.ArtifactStagingDirectory)

Step 5/17 : RUN pip install -r requirements.txt
---> Running in 277ce44b61cf
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
##[error]The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
##[error]The process '/usr/bin/docker' failed with exit code 1

dockerfile: $(dockerfilePath)/${{ parameters.pathName }}/Dockerfile
buildContext: $(Build.ArtifactStagingDirectory)/${{ parameters.pathName }}

unable to prepare context: path "/home/vsts/work/1/a/api" not found
##[error]unable to prepare context: path "/home/vsts/work/1/a/api" not found
##[error]The process '/usr/bin/docker' failed with exit code 1

dockerfile: $(Build.ArtifactStagingDirectory)/${{ parameters.pathName }}/Dockerfile
buildContext: $(Build.ArtifactStagingDirectory)

##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/a/api/Dockerfile was found.

dockerfile: $(Build.ArtifactStagingDirectory)/Dockerfile
buildContext: $(Build.ArtifactStagingDirectory)

##[error]Unhandled: No Dockerfile matching /home/vsts/work/1/a/Dockerfile was found.
What Has Worked
dockerfile: $(System.DefaultWorkingDirectory)/${{ parameters.pathName }}/Dockerfile
buildContext: $(System.DefaultWorkingDirectory)/${{ parameters.pathName }}
But doing this seems to negate the need to publish an Artifact at all. Maybe this is the "correct" way, I don't know. It seems like it accomplishes what I want, COPYing what was built for unit testing into the Docker image instead of using a different version.
Then again, I'm pretty sure this isn't what I'm after, since it looks like it is just cloning the repo again to /home/vsts/work/1/a/ at the beginning of this stage.
Question(s)
- Why is $(Build.ArtifactStagingDirectory) empty? Is it a deprecated env var?
- Is what I have under "What Has Worked" the correct way to handle this? (I don't think it is.)
- How should I persist the tested build between the unit-testing stage and the Docker stage, so I can use the exact build that was unit tested?
When you use:
- publish: $(System.DefaultWorkingDirectory)/${{ parameters.pathName }}
  artifact: ${{ parameters.pathName }}
  condition: succeeded()
behind the scenes it uses the Publish Pipeline Artifact task, and the matching download step by default "downloads files to $(Pipeline.Workspace). This is the default and recommended path for all types of artifacts."
So please try
buildContext: $(Pipeline.Workspace)
or
buildContext: $(Pipeline.Workspace)/${{ parameters.pathName }}
However, this only applies in a multi-stage pipeline (which I can't confirm, as you didn't post the whole pipeline). Please check here:
For build artifacts, it's common to copy files to $(Build.ArtifactStagingDirectory) and then use the Publish Build Artifacts task to publish this folder. With the Publish Pipeline Artifact task, you can just publish directly from the path containing the files.
So there is no need to move files to $(Build.ArtifactStagingDirectory) before publishing them. With the newer, recommended task, this folder can legitimately stay empty the whole time.
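To make this concrete, here is a minimal sketch of how the two stages could fit together (stage, job, repository, and artifact names are illustrative assumptions, not taken from the question):

stages:
- stage: Test
  jobs:
  - job: BuildAndTest
    steps:
    # ... build and run unit tests here ...
    - publish: $(System.DefaultWorkingDirectory)/api
      artifact: api
      condition: succeeded()
- stage: Docker
  jobs:
  - job: BuildImage
    steps:
    # Explicitly download the artifact from the current run;
    # it lands in $(Pipeline.Workspace)/api.
    - download: current
      artifact: api
    - task: Docker@2
      inputs:
        command: buildAndPush
        repository: $(imageRepository)-api
        dockerfile: $(Build.SourcesDirectory)/api/Dockerfile
        buildContext: $(Pipeline.Workspace)/api
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          latest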

Interpolating strings in a file path in a DockerFile

I have a Dockerfile which starts like this:
ARG FILE_PATH
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY ["${FILE_PATH}/src/NuGet.config", "src/"]
I call it using the azure-cli like this:
$pathToSrc = "$(Build.SourcesDirectory)/My folder"
az acr build --build-arg "FILE_PATH=$pathToSrc" ...
This always fails with the message:
COPY failed: file not found in build context or excluded by
.dockerignore: stat src/NuGet.config: file does not exist
I have tried variations such as:
COPY [$FILE_PATH/src/NuGet.config, "src/"]
COPY ["FILE_PATH/src/NuGet.config", "src/"]
and
az acr build --build-arg "FILE_PATH='$pathToSrc'" ...
but always end up with the same message.
Is there a way to do this? I am running on a hosted agent in an Azure DevOps pipeline. The task is AzureCLI@2 using a PowerShell Core script.
This may be related: https://stackoverflow.com/a/56748289/4424236
"...after every FROM statement, all the ARGs get collected and are no longer available. Be careful with multi-stage builds."
Try this:
FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
ARG FILE_PATH
COPY ["${FILE_PATH}/src/NuGet.config", "src/"]

CircleCI "Could not ensure that workspace directory exists"

I am using CircleCI with a GameCI docker image in order to build a Unity project. The build works, but I am trying to make use of the h-matsuo/github-release orb in order to create a release on GitHub for the build. I have created a new separate job for this, so I needed to share data between the jobs. I am using persist_to_workspace in order to do that, as specified in the documentation, but the solution doesn't seem to work. I get the following error:
Could not ensure that workspace directory /root/project/Zipped exists
For the workspace persist logic, I've added the following lines of code in my config.yml file:
- working_directory: /root/project - inside the executor of the main job
- persist_to_workspace - as the last command inside my main job's steps
- attach_workspace - as the first command inside my second job's steps
Here's my full config.yml file:
version: 2.1
orbs:
  github-release: h-matsuo/github-release@0.1.3
executors:
  unity_exec:
    docker:
      - image: unityci/editor:ubuntu-2019.4.19f1-windows-mono-0.9.0
    environment:
      BUILD_NAME: speedrun-circleci-build
    working_directory: /root/project
.build: &build
  executor: unity_exec
  steps:
    - checkout
    - run: mkdir -p /root/project/Zipped
    - run:
        name: Git submodule recursive
        command: git submodule update --init --recursive
    - run:
        name: Remove editor folder in shared project
        command: rm -rf ./Assets/Shared/Movement/Generic/Attributes/Editor/
    - run:
        name: Converting Unity license
        command: chmod +x ./ci/unity_license.sh && ./ci/unity_license.sh
    - run:
        name: Building game binaries
        command: chmod +x ./ci/build.sh && ./ci/build.sh
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r "/root/project/Zipped/build.zip" ./Builds/
    - store_artifacts:
        path: /root/project/Zipped/build.zip
    - run:
        name: Show all files
        command: find "$(pwd)"
    - persist_to_workspace:
        root: Zipped
        paths:
          - build.zip
jobs:
  build_windows:
    <<: *build
    environment:
      BUILD_TARGET: StandaloneWindows64
  release:
    description: Build project and publish a new release tagged `v1.1.1`.
    executor: github-release/default
    steps:
      - attach_workspace:
          at: /root/project/Zipped
      - run:
          name: Show all files
          command: sudo find "/root/project"
      - github-release/create:
          tag: v1.1.1
          title: Version v1.1.1
          description: This release is version v1.1.1.
          file-path: ./build.zip
workflows:
  version: 2
  build:
    jobs:
      - build_windows
      - release:
          requires:
            - build_windows
Can somebody help me with this please?
If somebody ever encounters the same issue: try to avoid using paths under /root. I stored the artifacts somewhere inside /tmp/ instead, and before storing them I created the folder with 777 permissions, using mkdir's -m flag.
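A minimal sketch of that workaround in config.yml (the exact paths are illustrative assumptions):

    - run: mkdir -m 777 -p /tmp/workspace/Zipped
    # ... build, then zip into /tmp/workspace/Zipped/build.zip ...
    - persist_to_workspace:
        root: /tmp/workspace/Zipped
        paths:
          - build.zip

and in the release job:

    - attach_workspace:
        at: /tmp/workspace/Zipped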

How to keep secure files after a job finishes in Azure Devops Pipeline?

Currently I'm working on a pipeline script for Azure DevOps. I want to provide a Maven settings file as a secure file for the pipeline. The problem is that when I define a job only for providing the file, the file isn't there anymore when the next job starts.
I tried defining a job with a DownloadSecureFile task and a copy command to fetch the settings file, but when the next job starts the file is gone and therefore can't be used.
I already verified this by using pwd and ls in the pipeline.
This is part of my current YAML file (that actually works):
some variables
...
trigger:
  branches:
    include:
    - stable
    - master
jobs:
- job: Latest_Release
  condition: eq(variables['Build.SourceBranchName'], 'master')
  steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    displayName: Download maven settings xml
    inputs:
      secureFile: settings.xml
  - script: |
      cp $(settingsxml.secureFilePath) ./settings.xml
      docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
      docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
      docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
I wanted to put the DownloadSecureFile task and the cp $(settingsxml.secureFilePath) ./settings.xml command into their own job, because more jobs need this file for other branches/releases and I don't want to copy the exact same code into every job.
This is the YAML file as I wanted it:
some variables
...
trigger:
  branches:
    include:
    - stable
    - master
jobs:
- job: provide_maven_settings
  # no condition because all branches need the file
  steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    displayName: Download maven settings xml
    inputs:
      secureFile: settings.xml
  - script: |
      cp $(settingsxml.secureFilePath) ./settings.xml
- job: Latest_Release
  condition: eq(variables['Build.SourceBranchName'], 'master')
  steps:
  - script: |
      docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
      docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
      docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
In my dockerfile the settings file is used like this:
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_TOOL_CHAIN
COPY pom.xml /tmp/
COPY src /tmp/src/
COPY settings.xml /root/.m2/ # can't find file when executing this
WORKDIR /tmp/
RUN mvn install
...
The error happens when docker build starts, because it can't find the settings file. It does work when I use my first YAML example. I have a feeling it has something to do with each job having its own "Checkout" phase, but I'm not sure about that.
Each job in Azure DevOps can run on a different agent. When you use Microsoft-hosted agents and split the pipeline into several jobs, copying the secure file in one job doesn't help: the second job runs on a fresh agent that, of course, doesn't have the file.
You can solve this by using a self-hosted agent (copy the file onto your machine once, and the second job running on the same machine will find it).
Or you can upload the file somewhere else (secured) and download it in the second job (but then, why not just download it from the start...).
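If the goal is simply to avoid repeating the same steps in every job, one common approach is a step template that each job includes, so the secure file is downloaded on whichever agent runs that job. A minimal sketch (the template file name is a made-up assumption):

# File: maven-settings-steps.yml
steps:
- task: DownloadSecureFile@1
  name: settingsxml
  displayName: Download maven settings xml
  inputs:
    secureFile: settings.xml
- script: cp $(settingsxml.secureFilePath) ./settings.xml

Each job that needs the file would then start its steps with - template: maven-settings-steps.yml.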