I wrote a cloudbuild.yaml file that deploys an application to Compute Engine. The process takes the code and builds it with go build ..., then archives the binary and uploads it to Cloud Storage, then creates a Compute Engine instance template with a startup script that reads the file from Cloud Storage and performs the deployment and initialization on each machine. These are the relevant steps:
- name: 'mirror.gcr.io/library/golang:1.18-buster'
  id: 'build-app'
  env: [
    'GO111MODULE=on',
    'GOPROXY=https://proxy.golang.org,direct',
    'GOOS=linux',
    'GOARCH=amd64'
  ]
  args: ['go', 'build', '-o', 'deploy/usr/bin/app', './services/service-name/']
- name: 'debian'
  id: 'tar-app-file'
  args: [ 'tar', '-czf', '${_DEPLOY_FILENAME}', '-C', './deploy', '.' ]
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'move-startup-script'
  args: [ 'gsutil', 'cp', './services/service-name/startup-script.sh', '${_STARTUP_SCRIPT_URL}' ]
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'create-template'
  args: [ 'compute', 'instance-templates', 'create', 'MY_NICE_TEMPLATE',
          ....
          '--metadata', 'app-location=${_DEPLOY_DIR}${_DEPLOY_FILENAME},startup-script-url=${_STARTUP_SCRIPT_URL}' ]
# ... more steps that replace the instance group's template with the newly created one using the "gcloud compute instance-groups managed rolling-action" command
substitutions:
  _DEPLOY_DIR: 'gs://bucket-name/deploy/service-name/${COMMIT_SHA}/'
  _DEPLOY_FILENAME: 'app.tar.gz'
  _STARTUP_SCRIPT_URL: 'gs://bucket-name/deploy/service-name/startup-script.sh'
artifacts:
  objects:
    location: '${_DEPLOY_DIR}'
    paths: ['${_DEPLOY_FILENAME}']
The startup script file:
#! /bin/sh
set -ex
APP_LOCATION=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/app-location" -H "Metadata-Flavor: Google")
gsutil cp "$APP_LOCATION" app.tar.gz
tar -xzf app.tar.gz
# Start the service included in app.tar.gz.
service service-name start
The problem is that sometimes the startup script runs before the build artifact has finished uploading, so the file does not yet exist in Cloud Storage and I get this error:
startup-script-url: CommandException: No URLs matched: gs://bucket-name/deploy/service-name/some-commit-sha-123/app.tar.gz
And the build finishes successfully, so eventually there is an instance up and running that didn't start up properly.
How can I tell Cloud Build to wait for the artifact upload to finish before starting the next step?
How can I mark the build as failed when the startup script fails, so that the instance group isn't updated in that case (not necessarily the specific error above, but any error)?
This is expected because you're depending on the artifacts statement. That statement uploads the artifacts only after all the steps are done, so you're running into a race condition.
There is no way to tell Cloud Build to upload the artifacts before the steps finish when using:
artifacts:
  objects:
    location: '${_DEPLOY_DIR}'
    paths: ['${_DEPLOY_FILENAME}']
Then you may need to explicitly upload them in a step before updating your MIG:
...
- name: 'debian'
  id: 'tar-app-file'
  args: [ 'tar', '-czf', '${_DEPLOY_FILENAME}', '-C', './deploy', '.' ]
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  id: 'upload-artifacts'
  args: [ 'gsutil', 'cp', '${_DEPLOY_FILENAME}', '${_DEPLOY_DIR}' ]
...
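Cloud Build runs steps sequentially by default, so simply placing the upload step before create-template removes the race. If you later parallelize steps, the dependency can be kept explicit with the waitFor field; a minimal sketch (args elided as above):
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'create-template'
  # waitFor pins this step to start only after the tarball is in Cloud Storage
  waitFor: ['upload-artifacts']
  args: [ 'compute', 'instance-templates', 'create', 'MY_NICE_TEMPLATE', .... ]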
I'm using GitHub Actions to copy the artifact to the runner/VM. Here I added my VM as a self-hosted runner and ran the workflow directly on it. I download the artifact from Artifactory and copy it to the deployment location. Now I need to do the same thing on another runner/VM, as I have an identical deployment VM for the application.
To achieve this, I copied the same job and changed the 'runs-on' value to a different runner name, which is my second VM. Below is my workflow code snippet.
My question is: instead of 2 jobs, how can we run this as a single job for all VMs related to dev? Let's say I have 4 VMs for the Dev environment, 3 VMs for QA, and 5 VMs for Production.
Can someone help with this, or do we need to continue with the same approach I'm using right now?
I have tried a matrix, but I'm looking to see if there is any other solution.
name: Deployment_workflow
on:
  workflow_dispatch:
    inputs:
      dev:
        type: boolean
        required: false
        default: false
      qa:
        type: boolean
        required: false
        default: false
jobs:
  dev_deploy_1:
    if: github.event.inputs.dev == 'true'
    runs-on: dev-vm-1
    steps:
      - name: Download the artifact from artifactory
        run: |
          cd /tmp
          curl -u ${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPT} -o dep-deploy-1.war "${ARTIFACTORY_URL}/artifactory/general-artifacts/dep-deploy-1.war"
      - name: copy the file from /tmp to deployment location
        run: |
          cp /tmp/dep-deploy-1.war /var/lib/app_deploy_dir
  dev_deploy_2:
    if: github.event.inputs.dev == 'true'
    runs-on: dev-vm-2
    steps:
      - name: Download the artifact from artifactory
        run: |
          cd /tmp
          curl -u ${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPT} -o dep-deploy-2.war "${ARTIFACTORY_URL}/artifactory/general-artifacts/dep-deploy-2.war"
      - name: copy the file from /tmp to deployment location
        run: |
          cp /tmp/dep-deploy-2.war /var/lib/app_deploy_dir
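Since a matrix came up: for reference, a minimal sketch of a single matrix job covering the dev runners (the runner labels and .war names are taken from the snippet above; everything else is an assumption):
jobs:
  dev_deploy:
    if: github.event.inputs.dev == 'true'
    strategy:
      matrix:
        include:
          - runner: dev-vm-1
            war: dep-deploy-1.war
          - runner: dev-vm-2
            war: dep-deploy-2.war
    runs-on: ${{ matrix.runner }}
    steps:
      - name: Download the artifact from artifactory
        run: |
          cd /tmp
          curl -u ${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPT} -o ${{ matrix.war }} "${ARTIFACTORY_URL}/artifactory/general-artifacts/${{ matrix.war }}"
      - name: copy the file from /tmp to deployment location
        run: cp /tmp/${{ matrix.war }} /var/lib/app_deploy_dir
The QA and Production environments could be handled the same way, each with its own include list gated by its own input condition.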
I am using CircleCI with a GameCI docker image in order to build a Unity project. The build works, but I am trying to make use of the h-matsuo/github-release orb in order to create a release on GitHub for the build. I have created a new separate job for this, so I needed to share data between the jobs. I am using persist_to_workspace in order to do that, as specified in the documentation, but the solution doesn't seem to work. I get the following error:
Could not ensure that workspace directory /root/project/Zipped exists
For the workspace persist logic, I've added the following lines of code in my config.yml file:
working_directory: /root/project - Inside the executor of the main job
persist_to_workspace - As a last command inside my main job's steps
attach_workspace - As a beginning command inside my second job's steps
Here's my full config.yml file:
version: 2.1
orbs:
  github-release: h-matsuo/github-release@0.1.3
executors:
  unity_exec:
    docker:
      - image: unityci/editor:ubuntu-2019.4.19f1-windows-mono-0.9.0
    environment:
      BUILD_NAME: speedrun-circleci-build
    working_directory: /root/project
.build: &build
  executor: unity_exec
  steps:
    - checkout
    - run: mkdir -p /root/project/Zipped
    - run:
        name: Git submodule recursive
        command: git submodule update --init --recursive
    - run:
        name: Remove editor folder in shared project
        command: rm -rf ./Assets/Shared/Movement/Generic/Attributes/Editor/
    - run:
        name: Converting Unity license
        command: chmod +x ./ci/unity_license.sh && ./ci/unity_license.sh
    - run:
        name: Building game binaries
        command: chmod +x ./ci/build.sh && ./ci/build.sh
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r "/root/project/Zipped/build.zip" ./Builds/
    - store_artifacts:
        path: /root/project/Zipped/build.zip
    - run:
        name: Show all files
        command: find "$(pwd)"
    - persist_to_workspace:
        root: Zipped
        paths:
          - build.zip
jobs:
  build_windows:
    <<: *build
    environment:
      BUILD_TARGET: StandaloneWindows64
  release:
    description: Build project and publish a new release tagged `v1.1.1`.
    executor: github-release/default
    steps:
      - attach_workspace:
          at: /root/project/Zipped
      - run:
          name: Show all files
          command: sudo find "/root/project"
      - github-release/create:
          tag: v1.1.1
          title: Version v1.1.1
          description: This release is version v1.1.1.
          file-path: ./build.zip
workflows:
  version: 2
  build:
    jobs:
      - build_windows
      - release:
          requires:
            - build_windows
Can somebody help me with this please?
If somebody ever encounters the same issue: try to avoid using the /root path. I've stored the artifacts somewhere inside /tmp/ instead, and before storing them I manually created the folder with mkdir -m 777 (the -m flag sets the chmod permissions).
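A minimal sketch of that workaround (the exact /tmp path is an assumption):
# In the build job: persist from a path outside /root
- run: mkdir -m 777 -p /tmp/workspace/Zipped
# ... zip the build into /tmp/workspace/Zipped/build.zip ...
- persist_to_workspace:
    root: /tmp/workspace/Zipped
    paths:
      - build.zip
# In the release job:
- attach_workspace:
    at: /tmp/workspace/Zipped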
I have a Concourse job that pulls a repo into a docker image and then executes a command on it. Now I need to execute a script that comes from the docker image and, after it is done, execute a command inside the repo, something like this:
run:
  dir: my-repo-resource
  path: /get-git-context.sh && ./gradlew
  args:
    - build
get-git-context.sh is the script coming from my docker image and ./gradlew is the standard Gradle wrapper inside my repo, with the build param. I am getting the following error with this approach:
./gradlew: no such file or directory
Meaning the job cd'd into / when executing the first command. Executing only one command works just fine.
I've also tried adding two run sections:
run:
  path: /get-git-context.sh
run:
  dir: my-repo-resource
  path: ./gradlew
  args:
    - build
But only the second one is executed. What is the correct way to chain these two commands?
We usually solve this by wrapping the logic in a shell script, setting path to a shell (/bin/sh here) and passing the path to the script in args:
run:
  path: /bin/sh
  args:
    - my-repo-resource/some-ci-folder/build_script.sh
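The wrapper script itself is not shown above; a sketch of what some-ci-folder/build_script.sh could contain (the set -e and exact paths are assumptions):
#!/bin/sh
set -e
# Run the script that ships with the docker image first...
/get-git-context.sh
# ...then run the Gradle build from inside the repo resource.
cd my-repo-resource
./gradlew build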
The other option would be to define two tasks and pass the resources through the job's workspace, but we usually do more steps than just two and this would result in complex pipelines:
plan:
  - task: task1
    config:
      ...
      outputs:
        - name: taskOutput
      run:
        path: /get-git-context.sh
  - task: task2
    config:
      inputs:
        ## directory defined in task1
        - name: taskOutput
      run:
        path: ./gradlew
        args:
          - build
I know it's not quite simple to do this; I've tried to explore many approaches, but either I couldn't understand them properly or they didn't work for me.
I have a Concourse job which runs an Angular build (ng build) and creates a /dist folder. This works well.
jobs:
  - name: cache
    plan:
      - get: source
        trigger: true
      - get: npm-cache
  - name: build
    plan:
      - get: source
        trigger: true
        passed: [cache]
      - get: npm-cache
        passed: [cache]
      - task: run build
        file: source/ci/build.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-saas # temporary fix
npm run build_prod
cp -R dist ../artifact/
I have declared an output named artifact where I am storing the dist content.
But when I try to use this in the next job, it doesn't work; it fails with a missing input error.
Here is the next job that is supposed to consume this dist folder:
jobs:
  ...
  ...
  - name: list
    plan:
      - get: npm-cache
        passed: [cache, test, build]
        trigger: true
      - task: list-files
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: busybox }
          inputs:
            - name: artifact
          run:
            path: ls
            args: ['-la', 'artifact/']
Can anyone please help me with this? How can I use the dist folder in the above job?
I'm not quite sure why you would want to have different plan definitions for each task, but here is the simplest way of doing what you want to do:
jobs:
  - name: deploying-my-app
    plan:
      - get: source
        trigger: true
        passed: []
      - get: npm-cache
        passed: []
      - task: run build
        file: source/ci/build.yml
      - task: list-files
        file: source/ci/list-files.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
list-files.yml
---
platform: linux
image_resource:
  type: registry-image
  source: { repository: busybox }
inputs:
  - name: artifact
run:
  path: ls
  args: ['-la', 'artifact/']
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-saas # temporary fix
npm run build_prod
cp -R dist ../artifact/
Typically you would pass folders as inputs and outputs between TASKS instead of JOBS (although there are some alternatives).
Concourse is stateless, and that is the idea behind it. But if you want to pass something between jobs, the only way to do that is to use a Concourse resource; depending on the nature of the project that could be anything from a git repo to an s3 bucket, a docker image, etc. You can create your own custom resources too.
Using something like the s3 Concourse resource, for example, you can push your artifact to external storage and then fetch it again in later jobs with a get step. But that may just add some unnecessary complexity, given that what you want to do is pretty straightforward.
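For concreteness, a rough sketch of what that could look like (the bucket name, regexp, and credentials are assumptions, and it assumes the build step packages dist as a versioned tarball):
resources:
  - name: dist-artifact
    type: s3
    source:
      bucket: my-artifacts-bucket
      regexp: dist/dist-(.*).tar.gz
      access_key_id: ((aws_access_key))
      secret_access_key: ((aws_secret_key))
jobs:
  - name: build
    plan:
      - get: source
        trigger: true
      - task: run build
        file: source/ci/build.yml
      - put: dist-artifact   # upload the task output to s3
        params:
          file: artifact/dist-*.tar.gz
  - name: list
    plan:
      - get: dist-artifact   # fetch it back in the downstream job
        passed: [build]
        trigger: true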
In my experience, the visual layout of a job plan in the Concourse dashboard sometimes gives the impression that a job plan should be task-atomic, which is not always needed.
Hope that helps.
I'm experimenting with building a Gradle-based Java app. My pipeline looks like this:
---
resources:
  - name: hello-concourse-repo
    type: git
    source:
      uri: https://github.com/ractive/hello-concourse.git
jobs:
  - name: gradle-build
    public: true
    plan:
      - get: hello-concourse-repo
        trigger: true
      - task: build
        file: hello-concourse-repo/ci/build.yml
      - task: find
        file: hello-concourse-repo/ci/find.yml
The build.yml looks like:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: java
    tag: openjdk-8
inputs:
  - name: hello-concourse-repo
outputs:
  - name: output
run:
  path: hello-concourse-repo/ci/build.sh
caches:
  - path: .gradle/
And the build.sh:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo
./gradlew --no-daemon build
mkdir -p output
cp build/libs/*.jar output
cp src/main/docker/* output
ls -l output
And finally find.yml
---
platform: linux
image_resource:
  type: docker-image
  source: {repository: busybox}
inputs:
  - name: output
run:
  path: ls
  args: ['-alR']
The output of ls at the end of the build.sh script shows me that the output folder contains the expected files, but the find task only shows empty folders.
What am I doing wrong that the output folder I'm using as an input in the find task is empty?
The complete example can be found here with the concourse files in the ci subfolder.
You need to remember some things:
There is an initial working directory for your tasks; let's call it '.' (unless you specify 'dir'). In this initial directory you will find a directory for each of your inputs and outputs.
i.e.
./hello-concourse-repo
./output
When you declare an output, there's no need to create the 'output' folder from your script; it will be created automatically.
If you navigate to a different folder in your script, you need to return to the initial working directory or use relative paths to reach the other folders.
Below you will find the updated script with some comments to fix the problem:
#!/bin/bash
export ROOT_FOLDER=$( pwd )
export GRADLE_USER_HOME="${ROOT_FOLDER}/.gradle"
export TERM=${TERM:-dumb}
cd hello-concourse-repo # You changed directory here, so your 'output' folder is now at ../output
./gradlew --no-daemon build
# Either cd back to "$ROOT_FOLDER" here, or refer to the output folder as ../output (as done below).
#mkdir -p output <- This line is not required; you already defined an output with this name
cp build/libs/*.jar ../output
cp src/main/docker/* ../output
ls -l ../output
Since you are defining the ROOT_FOLDER variable, you can use it to navigate.
You are still inside hello-concourse-repo and need to reach the output folder one level up.
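For illustration, the same copies written using ROOT_FOLDER to return to the initial working directory first (a sketch, equivalent to the relative ../output paths above):
cd "$ROOT_FOLDER"
cp hello-concourse-repo/build/libs/*.jar output/
cp hello-concourse-repo/src/main/docker/* output/
ls -l output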