AWS CodeBuild: Skipping invalid file path, UPLOAD_ARTIFACTS State: FAILED

I have the following buildspec.yml:
version: 0.2
phases:
  install:
    commands:
      - npm install -g aws-cdk@1.72.0
  build:
    commands:
      - cd Lambda
      - cd NetworkRailGateway-Functions
      - for d in ./*/; do (cd "$d" && npm install --only=prod); done
      - cd ..
      - cd RealtimeStations-Functions
      - for d in ./*/; do (cd "$d" && npm install --only=prod); done
      - cd ..
      - cdk synth > cfStack.yml
      - ls
      - pwd
artifacts:
  files:
    - cfStack.yml
The ls command shows that the cfStack.yml file is present in the current directory; however, I get the following artifact error, which seems to imply that the file does not exist:
[Container] 2021/05/20 14:10:51 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2021/05/20 14:10:51 Phase context status code: Message:
[Container] 2021/05/20 14:10:51 Expanding base directory path: .
[Container] 2021/05/20 14:10:51 Assembling file list
[Container] 2021/05/20 14:10:51 Expanding .
[Container] 2021/05/20 14:10:51 Expanding file paths for base directory .
[Container] 2021/05/20 14:10:51 Assembling file list
[Container] 2021/05/20 14:10:51 Expanding cfStack.yml
[Container] 2021/05/20 14:10:51 Skipping invalid file path cfStack.yml
[Container] 2021/05/20 14:10:51 Phase complete: UPLOAD_ARTIFACTS State: FAILED
[Container] 2021/05/20 14:10:51 Phase context status code: CLIENT_ERROR Message: no matching artifact paths found
I have also tried ./cfStack.yml to no avail.

It looks like the file is present inside the 'Lambda' directory, so you should prefix the directory name as below.
artifacts:
  files:
    - 'Lambda/cfStack.yml'
Or you can set the base directory instead:
artifacts:
  files:
    - 'cfStack.yml'
  base-directory: 'Lambda'
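If you would rather have the artifact path resolve from the source root, a third option is to copy the synthesized template back up in a post_build phase. This is a minimal sketch, not from the original answer, using $CODEBUILD_SRC_DIR (the environment variable CodeBuild sets to the source root) so it works regardless of the current directory:
phases:
  post_build:
    commands:
      # Copy the synthesized template from Lambda/ back to the source root.
      - cp "$CODEBUILD_SRC_DIR/Lambda/cfStack.yml" "$CODEBUILD_SRC_DIR/"
artifacts:
  files:
    - cfStack.yml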

Related

Cloudbuild wait for artifacts upload before specific step

I wrote a cloudbuild.yaml file that deploys an application to Compute Engine. The process takes the code and builds it with go build ..., then archives the binary and uploads it to Cloud Storage, then creates a Compute Engine template with a startup-script that reads the file from Cloud Storage and performs the deployment and initialization on each machine. These are the relevant steps:
- name: 'mirror.gcr.io/library/golang:1.18-buster'
  id: 'build-app'
  env: [
    'GO111MODULE=on',
    'GOPROXY=https://proxy.golang.org,direct',
    'GOOS=linux',
    'GOARCH=amd64'
  ]
  args: ['go', 'build', '-o', 'deploy/usr/bin/app', './services/service-name/']
- name: 'debian'
  id: 'tar-app-file'
  args: [ 'tar', '-czf', '${_DEPLOY_FILENAME}', '-C', './deploy', '.' ]
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'move-startup-script'
  args: [ 'gsutil', 'cp', './services/service-name/startup-script.sh', '${_STARTUP_SCRIPT_URL}' ]
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'create-template'
  args: [ 'compute', 'instance-templates', 'create', 'MY_NICE_TEMPLATE',
          ....
          '--metadata', 'app-location=${_DEPLOY_DIR}${_DEPLOY_FILENAME},startup-script-url=${_STARTUP_SCRIPT_URL}' ]
# ... more steps that replace the instance group's template with the newly created one using the "gcloud compute instance-groups managed rolling-action" command
substitutions:
  _DEPLOY_DIR: 'gs://bucket-name/deploy/service-name/${COMMIT_SHA}/'
  _DEPLOY_FILENAME: 'app.tar.gz'
  _STARTUP_SCRIPT_URL: 'gs://bucket-name/deploy/service-name/startup-script.sh'
artifacts:
  objects:
    location: '${_DEPLOY_DIR}'
    paths: ['${_DEPLOY_FILENAME}']
The startup script file:
#! /bin/sh
set -ex
APP_LOCATION=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/app-location" -H "Metadata-Flavor: Google")
gsutil cp "$APP_LOCATION" app.tar.gz
tar -xzf app.tar.gz
# Start the service included in app.tar.gz.
service service-name start
The problem is that sometimes the startup script runs before the build artifact has finished uploading, so the file does not yet exist in Cloud Storage and I get this error:
startup-script-url: CommandException: No URLs matched: gs://bucket-name/deploy/service-name/some-commit-sha-123/app.tar.gz
The build still finishes successfully, so eventually there is an instance up and running that didn't start up properly.
How can I tell Cloud Build to wait for the artifacts upload to finish before starting a new step?
How can I mark the build as failed if the startup script fails, so that the instance group won't be updated in that case (not necessarily for the specific error above, but for any error)?
This is expected because you're depending on the artifacts statement. That statement uploads the artifacts only once all the steps are done, so you're running into a race condition.
There is no way to tell Cloud Build to upload the artifacts before the steps finish when using:
artifacts:
  objects:
    location: '${_DEPLOY_DIR}'
    paths: ['${_DEPLOY_FILENAME}']
Then you may need to explicitly upload them in a step before updating your MIG:
...
- name: 'debian'
  id: 'tar-app-file'
  args: [ 'tar', '-czf', '${_DEPLOY_FILENAME}', '-C', './deploy', '.' ]
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'upload-artifacts'
  args: [ 'gsutil', 'cp', '${_DEPLOY_FILENAME}', '${_DEPLOY_DIR}' ]
...
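Because each step runs to completion before the next one starts, the explicit gsutil cp above guarantees the object exists in Cloud Storage before the create-template step runs. If you want a belt-and-braces check, here is a minimal sketch of a guard step (the verify-upload id is my own naming, not from the original answer); gsutil stat exits non-zero when no object matches the URL, which fails the build at that point:
# Hypothetical guard step: fail the build if the upload is missing.
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'verify-upload'
  args: [ 'gsutil', 'stat', '${_DEPLOY_DIR}${_DEPLOY_FILENAME}' ]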

CircleCI "Could not ensure that workspace directory exists"

I am using CircleCI with a GameCI docker image in order to build a Unity project. The build works, but I am trying to make use of the h-matsuo/github-release orb in order to create a release on GitHub for the build. I have created a new separate job for this, so I needed to share data between the jobs. I am using persist_to_workspace in order to do that, as specified in the documentation, but the solution doesn't seem to work. I get the following error:
Could not ensure that workspace directory /root/project/Zipped exists
For the workspace persistence logic, I've added the following to my config.yml file:
working_directory: /root/project - inside the executor of the main job
persist_to_workspace - as the last command inside my main job's steps
attach_workspace - as the first command inside my second job's steps
Here's my full config.yml file:
version: 2.1
orbs:
  github-release: h-matsuo/github-release@0.1.3
executors:
  unity_exec:
    docker:
      - image: unityci/editor:ubuntu-2019.4.19f1-windows-mono-0.9.0
    environment:
      BUILD_NAME: speedrun-circleci-build
    working_directory: /root/project
.build: &build
  executor: unity_exec
  steps:
    - checkout
    - run: mkdir -p /root/project/Zipped
    - run:
        name: Git submodule recursive
        command: git submodule update --init --recursive
    - run:
        name: Remove editor folder in shared project
        command: rm -rf ./Assets/Shared/Movement/Generic/Attributes/Editor/
    - run:
        name: Converting Unity license
        command: chmod +x ./ci/unity_license.sh && ./ci/unity_license.sh
    - run:
        name: Building game binaries
        command: chmod +x ./ci/build.sh && ./ci/build.sh
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r "/root/project/Zipped/build.zip" ./Builds/
    - store_artifacts:
        path: /root/project/Zipped/build.zip
    - run:
        name: Show all files
        command: find "$(pwd)"
    - persist_to_workspace:
        root: Zipped
        paths:
          - build.zip
jobs:
  build_windows:
    <<: *build
    environment:
      BUILD_TARGET: StandaloneWindows64
  release:
    description: Build project and publish a new release tagged `v1.1.1`.
    executor: github-release/default
    steps:
      - attach_workspace:
          at: /root/project/Zipped
      - run:
          name: Show all files
          command: sudo find "/root/project"
      - github-release/create:
          tag: v1.1.1
          title: Version v1.1.1
          description: This release is version v1.1.1.
          file-path: ./build.zip
workflows:
  version: 2
  build:
    jobs:
      - build_windows
      - release:
          requires:
            - build_windows
Can somebody help me with this please?
If somebody ever encounters the same issue: try to avoid making use of the /root path. I've stored the artifacts somewhere inside /tmp/ instead, and before storing artifacts I've manually created the folder with mkdir -m 777, using the -m flag to set the chmod permissions.
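A minimal sketch of what the relevant steps could look like, assuming /tmp/workspace as the new location (the exact path is a free choice, not something the answer prescribes):
# In the build job:
- run: mkdir -m 777 -p /tmp/workspace/Zipped
- run:
    name: Zipping build
    command: apt update && apt -y install zip && zip -r "/tmp/workspace/Zipped/build.zip" ./Builds/
- persist_to_workspace:
    root: /tmp/workspace/Zipped
    paths:
      - build.zip
# In the release job:
- attach_workspace:
    at: /tmp/workspace/Zipped
The file-path in github-release/create would then point at /tmp/workspace/Zipped/build.zip.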

Azure pipeline docker fails copy with multiple projects

Copying Data.csproj to Data/ fails when building my app in Azure DevOps, although the first copy command, Api.csproj to Api/, works fine. Note that I did not specify the buildContext in my azure-pipeline.yml file; when I did add buildContext: '$(Build.Repository.LocalPath)', it failed even on the first copy.
Any inputs or suggestions on how to fix this one? I tried searching, and neither adding the buildContext nor adding the folder to the csproj path, for example COPY ["/Data/Data.csproj", "Data/"], seems to work.
This is my folder structure (my azure-pipeline.yml file is outside the App folder):
App
- Api/
  - Api.csproj
  - Dockerfile
- Data/
  - Data.csproj
- Domain/
  - Domain.csproj
- App.sln
My dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Api.csproj", "Api/"]
COPY ["Data.csproj", "Data/"]
COPY ["Domain.csproj", "Domain/"]
RUN dotnet restore "Api/Api.csproj"
COPY . .
WORKDIR "/src/Api"
RUN dotnet build "Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Api.dll"]
Parts of my azure-pipeline.yml:
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: 'App'
        dockerfile: '**/Dockerfile'
        tags: |
          $(tag)
Here's the error:
Step 6/28 : WORKDIR /src
---> Running in 266a78d293ee
Removing intermediate container 266a78d293ee
---> 2d899fafdf05
Step 7/28 : COPY ["Api.csproj", "Api/"]
---> 92c8c1450c3c
Step 8/28 : COPY ["Data.csproj", "Data/"]
COPY failed: stat /var/lib/docker/tmp/docker-builder764823890/Data.csproj: no such file or directory
##[error]COPY failed: stat /var/lib/docker/tmp/docker-builder764823890/Data.csproj: no such file or directory
##[error]The process '/usr/bin/docker' failed with exit code 1
Okay, after trying so many times, I was able to fix this by changing the Dockerfile and the azure-pipelines.yml.
I think what fixed the issue is explicitly setting the buildContext to 'App/' instead of the variable '$(Build.Repository.LocalPath)', whose exact value I'm not sure of.
I'll just post the parts I made changes to.
Dockerfile
COPY ["Api/Api.csproj", "Api/"]
COPY ["Data/Data.csproj", "Data/"]
COPY ["Domain/Domain.csproj", "Domain/"]
azure-pipelines.yml
inputs:
  command: buildAndPush
  repository: $(imageRepository)
  dockerfile: $(dockerfilePath)
  buildContext: 'App/'
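This works because Docker resolves COPY source paths relative to the build context: with the context rooted at App/, a source path like Api/Api.csproj lines up with the folder structure shown above. Pulling the changed pieces together, a minimal sketch of the task with literal values (the dockerfile path here is my assumption based on that layout, not taken from the answer):
- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    command: buildAndPush
    repository: 'App'
    dockerfile: 'App/Api/Dockerfile'   # assumed location, per the folder structure
    buildContext: 'App/'
    tags: |
      $(tag)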

Concourse task input folder is empty

I'm experimenting with building a Gradle-based Java app. My pipeline looks like this:
---
resources:
- name: hello-concourse-repo
  type: git
  source:
    uri: https://github.com/ractive/hello-concourse.git
jobs:
- name: gradle-build
  public: true
  plan:
  - get: hello-concourse-repo
    trigger: true
  - task: build
    file: hello-concourse-repo/ci/build.yml
  - task: find
    file: hello-concourse-repo/ci/find.yml
The build.yml looks like:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: java
    tag: openjdk-8
inputs:
- name: hello-concourse-repo
outputs:
- name: output
run:
  path: hello-concourse-repo/ci/build.sh
caches:
- path: .gradle/
And the build.sh:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo
./gradlew --no-daemon build
mkdir -p output
cp build/libs/*.jar output
cp src/main/docker/* output
ls -l output
And finally find.yml:
---
platform: linux
image_resource:
  type: docker-image
  source: {repository: busybox}
inputs:
- name: output
run:
  path: ls
  args: ['-alR']
The output of ls at the end of the build.sh script shows me that the output folder contains the expected files, but the find task only shows empty folders.
What am I doing wrong that the output folder I'm using as an input in the find task is empty?
The complete example can be found here, with the Concourse files in the ci subfolder.
You need to remember some things:
There is an initial working directory for your tasks; let's call it '.' (unless you specify 'dir'). In this initial directory you will find a directory for each of your inputs and outputs, i.e.:
./hello-concourse-repo
./output
When you declare an output, there is no need to create the 'output' folder from your script; it will be created automatically.
If you navigate to a different folder in your script, you need to return to the initial working directory or use relative paths to reach the other folders.
Below you will find the updated script with some comments to fix the problem:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo # You changed directory here, so the 'output' folder is now at ../output
./gradlew --no-daemon build
# Either cd back to the initial working directory here, or use ../output (or $ROOT_FOLDER/output) in the paths below.
#mkdir -p output <- This line is not required: you already declared an output with this name
cp build/libs/*.jar ../output
cp src/main/docker/* ../output
ls -l ../output
Since you are defining the ROOT_FOLDER variable, you can use it to navigate; you are still inside hello-concourse-repo, so the output folder is one level up.
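An equivalent sketch that avoids the ../ bookkeeping entirely: keep build.yml as it is, but write the copy steps against the absolute path via ROOT_FOLDER. This inline sh form replaces the separate build.sh and is my own variant, not from the answer:
run:
  path: sh
  args:
  - -exc
  - |
    # Remember the initial working directory, which holds the input and output dirs.
    export ROOT_FOLDER="$(pwd)"
    cd hello-concourse-repo
    ./gradlew --no-daemon build
    cp build/libs/*.jar "$ROOT_FOLDER/output"
    cp src/main/docker/* "$ROOT_FOLDER/output"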

Pipeline fails due to `hijack: Backend error`

I'm following the Stark & Wayne tutorial and ran into a problem: the pipeline fails with
hijack: Backend error: Exit status: 500, message {"Type":"","Message": "
runc exec: exit status 1: exec failed: container_linux.go:247:
starting container process caused \"exec format error\"\n","Handle":""}
I have one git resource and one job with one task:
- task: test
  file: resource-ci/ci/test.yml
test.yml file:
platform: linux
image_resource:
  type: docker-image
  source:
    repository: busybox
    tag: latest
inputs:
- name: resource-tool
run:
  path: resource-tool/scripts/deploy.sh
deploy.sh is a simple dummy file with one echo command:
echo [+] Testing in the process ...
So what could it be?
This error means that the shell your script is trying to invoke is unavailable in the container running your task.
Busybox doesn't come with bash; it only comes with /bin/sh. Check the shebang in deploy.sh, making sure it looks like:
#!/bin/sh
# rest of script
I also ran into this error when I forgot to add a ! at the top of my pipeline's shell script. Without the !, the first line is just a comment rather than a shebang, so the kernel cannot identify an interpreter for the file and the exec fails:
#/bin/bash
# rest of script
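If you can't change the script itself, a workaround sketch (my own addition, not from either answer): have the task invoke the interpreter explicitly, so the shebang is never consulted:
run:
  path: sh
  args: ['resource-tool/scripts/deploy.sh']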