How to extend a template script? - docker-compose

I have the following template in my .gitlab-ci.yml file:
x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/
I want to extend the script part to make actual builds. For example:
android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
It works well, but it has a problem: GitLab launches both jobs instead of ignoring the template.
Is there a way to run only the "android-stage-build" job, pulling in the template only when it is needed?

In order to make GitLab CI ignore the first entry, you need to add a dot (.) in front of the definition:
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
Besides that, based on what I read here, I don't think you want to use after_script.
I think you want to use before_script in the template, and the script key in the build job itself, as sketched below.
The main difference is that after_script also runs even if the script fails, and from what you describe, it does not sound like you want that to happen.
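A minimal sketch of that split, assuming the Gradle setup belongs in the template and the actual build command in the job:
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  before_script:
    # setup shared by all build jobs; runs in the same shell as script
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  script:
    - ./gradlew :app:assembleDebug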

Yes. You're simply missing a . :)
See https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#anchors
It should work if you write it like this:
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
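As an aside, the yaml_optimization docs page linked above also describes the extends keyword, which achieves the same reuse without YAML anchors; a minimal sketch of the equivalent setup:
.x-android-build-tools:
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew

android-stage-build:
  extends: .x-android-build-tools   # merges the hidden job's keys, no anchor needed
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug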


CircleCI "Could not ensure that workspace directory exists"

I am using CircleCI with a GameCI docker image in order to build a Unity project. The build works, but I am trying to make use of the h-matsuo/github-release orb in order to create a release on GitHub for the build. I have created a new separate job for this, so I needed to share data between the jobs. I am using persist_to_workspace in order to do that, as specified in the documentation, but the solution doesn't seem to work. I get the following error:
Could not ensure that workspace directory /root/project/Zipped exists
For the workspace persist logic, I've added the following lines of code in my config.yml file:
working_directory: /root/project - Inside the executor of the main job
persist_to_workspace - As a last command inside my main job's steps
attach_workspace - As a beginning command inside my second job's steps
Here's my full config.yml file:
version: 2.1
orbs:
  github-release: h-matsuo/github-release@0.1.3
executors:
  unity_exec:
    docker:
      - image: unityci/editor:ubuntu-2019.4.19f1-windows-mono-0.9.0
    environment:
      BUILD_NAME: speedrun-circleci-build
    working_directory: /root/project

.build: &build
  executor: unity_exec
  steps:
    - checkout
    - run: mkdir -p /root/project/Zipped
    - run:
        name: Git submodule recursive
        command: git submodule update --init --recursive
    - run:
        name: Remove editor folder in shared project
        command: rm -rf ./Assets/Shared/Movement/Generic/Attributes/Editor/
    - run:
        name: Converting Unity license
        command: chmod +x ./ci/unity_license.sh && ./ci/unity_license.sh
    - run:
        name: Building game binaries
        command: chmod +x ./ci/build.sh && ./ci/build.sh
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r "/root/project/Zipped/build.zip" ./Builds/
    - store_artifacts:
        path: /root/project/Zipped/build.zip
    - run:
        name: Show all files
        command: find "$(pwd)"
    - persist_to_workspace:
        root: Zipped
        paths:
          - build.zip

jobs:
  build_windows:
    <<: *build
    environment:
      BUILD_TARGET: StandaloneWindows64
  release:
    description: Build project and publish a new release tagged `v1.1.1`.
    executor: github-release/default
    steps:
      - attach_workspace:
          at: /root/project/Zipped
      - run:
          name: Show all files
          command: sudo find "/root/project"
      - github-release/create:
          tag: v1.1.1
          title: Version v1.1.1
          description: This release is version v1.1.1.
          file-path: ./build.zip

workflows:
  version: 2
  build:
    jobs:
      - build_windows
      - release:
          requires:
            - build_windows
Can somebody help me with this please?
If somebody ever encounters the same issue: try to avoid using the /root path. I stored the artifacts somewhere inside /tmp/ instead, and before storing them I created the folder manually, using mkdir's -m flag to set chmod 777 permissions.
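A minimal sketch of that fix; the exact /tmp/workspace/Zipped path is an assumption, and everything else in the jobs stays the same:
steps:
  - run: mkdir -m 777 -p /tmp/workspace/Zipped  # -m sets the permissions on the new folder
  # ... build and zip into /tmp/workspace/Zipped/build.zip as before ...
  - persist_to_workspace:
      root: /tmp/workspace/Zipped
      paths:
        - build.zip
and in the release job:
- attach_workspace:
    at: /tmp/workspace/Zipped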

How to trigger pipelines in GitLab CI

I have a problem: I want to trigger another pipeline (B) in another project (B), but only when the deploy job in pipeline (A) has finished. My configuration starts the second pipeline as soon as the deploy job in pipeline (A) starts. How can I make the second pipeline trigger only when the deploy job in pipeline (A) in project (A) has finished?
Here is my gitlab-ci.yml
workflow:
  rules:
    - if: '$CI_COMMIT_BRANCH'

before_script:
  - gem install bundler
  - bundle install

pages:
  stage: deploy
  script:
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'

staging:
  variables:
    ENVIRONMENT: staging
  stage: build
  trigger: example/example

test:
  stage: test
  script:
    - bundle exec jekyll build -d test
  artifacts:
    paths:
      - test
  rules:
    - if: '$CI_COMMIT_BRANCH != "master"'
You don't declare a stage order, so the GitLab pipeline doesn't know what order is expected.
At the beginning of your .gitlab-ci.yml file, add something like this (or whatever order you want):
stages:
  - deploy
  - test
  - build
# rest of your file...
Alternatively, you can use needs to declare relationships between jobs directly; see the sketch below.
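A minimal sketch with needs, keeping the job names from the question and assuming the stage order suggested above (optional is required here because the pages job only exists on master):
staging:
  variables:
    ENVIRONMENT: staging
  stage: build
  trigger: example/example
  needs:
    - job: pages
      optional: true  # pages only runs on master in this config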

How to concatenate commands in a Concourse job?

I have a Concourse job that pulls a repo into a Docker image and then executes a command on it. Now I need to execute a script that comes from the Docker image and, after it is done, execute a command inside the repo; something like this:
run:
  dir: my-repo-resource
  path: /get-git-context.sh && ./gradlew
  args:
    - build
get-git-context.sh is the script coming from my Docker image, and ./gradlew is the standard gradlew inside my repo, invoked with the build argument. I am getting the following error with this approach:
./gradlew: no such file or directory
Meaning the job cd'd into / when executing the first command; executing only one command works just fine.
I've also tried adding two run sections:
run:
  path: /get-git-context.sh
run:
  dir: my-repo-resource
  path: ./gradlew
  args:
    - build
But only the second part is executed. What is the correct way to chain these two commands?
A task's path is executed directly rather than through a shell, so && is not interpreted as a command separator. We usually solve this by wrapping the logic in a shell script and setting path to a shell (here /bin/sh), with the script path as the argument:
run:
  path: /bin/sh
  args:
    - my-repo-resource/some-ci-folder/build_script.sh
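The wrapper script then simply chains the commands in order; a minimal sketch (the some-ci-folder location and the exact contents are assumptions based on the question):
#!/bin/sh
set -e                  # stop at the first failing command
/get-git-context.sh     # script shipped inside the Docker image
cd my-repo-resource     # repo input mounted in the task's working directory
./gradlew build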
The other option would be to define two tasks and pass the resources through the job's workspace, but we usually have more than two steps, and that approach would result in complex pipelines:
plan:
  - task: task1
    config:
      ...
      outputs:
        - name: taskOutput
      run:
        path: /get-git-context.sh
  - task: task2
    config:
      inputs:
        ## directory defined in task1
        - name: taskOutput
      run:
        path: ./gradlew
        args:
          - build

Pass build directory (/dist) from a job to the next job in Concourse

I know it's not quite simple to do this; I've explored many approaches, but either I couldn't understand them properly or they didn't work for me.
I have a Concourse job which runs an Angular build (ng build) and creates a /dist folder. This works well.
jobs:
  - name: cache
    plan:
      - get: source
        trigger: true
      - get: npm-cache
  - name: build
    plan:
      - get: source
        trigger: true
        passed: [cache]
      - get: npm-cache
        passed: [cache]
      - task: run build
        file: source/ci/build.yml
build.yml
---
platform: linux

image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }

inputs:
  - name: source
  - name: npm-cache
    path: /cache

outputs:
  - name: artifact

run:
  path: source/ci/build.sh
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-saas # temporary fix
npm run build_prod
cp -R dist ../artifact/
I have declared an output named artifact, where I am storing the dist content.
But when I try to use this in the next job, it doesn't work; it fails with a missing-input error.
Here is the next job that supposed to consume this dist folder:
jobs:
  ...
  ...
  - name: list
    plan:
      - get: npm-cache
        passed: [cache, test, build]
        trigger: true
      - task: list-files
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: busybox }
          inputs:
            - name: artifact
          run:
            path: ls
            args: ['-la', 'artifact/']
Can anyone please help me with this? How can I use the dist folder in the above job?
I'm not quite sure why you would want to have different plan definitions for each task, but here is the simplest way of doing what you want to do:
jobs:
  - name: deploying-my-app
    plan:
      - get: source
        trigger: true
        passed: []
      - get: npm-cache
        passed: []
      - task: run build
        file: source/ci/build.yml
      - task: list-files
        file: source/ci/list-files.yml
build.yml
---
platform: linux

image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }

inputs:
  - name: source
  - name: npm-cache
    path: /cache

outputs:
  - name: artifact

run:
  path: source/ci/build.sh
list-files.yml
---
platform: linux

image_resource:
  type: registry-image
  source: { repository: busybox }

inputs:
  - name: artifact

run:
  path: ls
  args: ['-la', 'artifact/']
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-saas # temporary fix
npm run build_prod
cp -R dist ../artifact/
Typically you would pass folders as inputs and outputs between TASKS instead of JOBS (although there are some alternatives).
Concourse is stateless, and that is the idea behind it. If you want to pass something between jobs, the only way to do that is to use a Concourse resource; depending on the nature of the project, that could be anything from a git repo to an s3 bucket, a Docker image, etc. You can create your own custom resources too.
Using something like the s3 Concourse resource, for example, you can push your artifact to external storage and then fetch it again in the next job's get step as a resource. But that may just create some unnecessary complexity, given that what you want to do is pretty straightforward.
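For illustration, a hedged sketch of that resource-based approach using the community s3 resource (the bucket name, credentials, and the zipping step are assumptions; versioned_file requires versioning enabled on the bucket):
resources:
  - name: build-artifact
    type: s3
    source:
      bucket: my-artifact-bucket           # placeholder
      versioned_file: build.zip            # needs bucket versioning
      access_key_id: ((aws_access_key))
      secret_access_key: ((aws_secret_key))

jobs:
  - name: build
    plan:
      - get: source
        trigger: true
      - task: run build
        file: source/ci/build.yml          # assumes the build zips dist into artifact/build.zip
      - put: build-artifact                # upload the task output
        params:
          file: artifact/build.zip
  - name: list
    plan:
      - get: build-artifact                # fetched as a resource in the downstream job
        passed: [build]
        trigger: true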
In my experience, the visual aspect of a job plan in the Concourse dashboard sometimes gives the impression that a job plan should be task-atomic, which is not always needed.
Hope that helps.

Is there a way to configure Travis GitHub Pages Deployment within stages?

Expected
To be able to deploy Pages to GitHub within Travis using stages.
Result
Failure; I must not have the proper syntax, or it is not possible.
Reproduction
Source:
https://docs.travis-ci.com/user/deployment/pages/
https://docs.travis-ci.com/user/build-stages/
Project:
https://travis-ci.org/kopax/deleteme
https://github.com/kopax/deleteme
This works for building the page without stages; see the passing Travis job travis-ci.org/kopax/deleteme/builds/380660202:
dist: trusty
# Blocklist
branches:
  except:
    - gh-pages # will be deployed to, no need to build it
cache:
  directories:
    - node_modules
node_js:
  - "10"
before_install:
  - npm install -g npm
  # const
  - export PACKAGE_NAME=$(node -p "require('./package.json').name")
  - export PACKAGE_VERSION=$(node -p "require('./package.json').version")
  - export NODE_VERSION=$(node --version)
  - export NPM_VERSION=$(npm --version)
  # logging
  - npm --version || echo npm not installed
  - node --version || echo node not installed
  - npx rollup-umd-scripts --version || echo npx not installed
  - echo "package version $PACKAGE_VERSION"
language: node_js
sudo: required
script:
  # execute all of the commands which need to be executed
  # before running actual tests
  - npm run styleguide:build
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GH_TOKEN # Set in the settings page of your repository, as a secure variable
  keep_history: true
  local_dir: public/
  on:
    branch: master
But it fails when the Pages deployment is added as a stage; see this failed Travis job travis-ci.org/kopax/deleteme/jobs/380983577:
language: node_js
sudo: required
#env:
#  global:
#    - DISPLAY=:99.0
#    - NODE_ENV=test
dist: trusty
# Blocklist
branches:
  except:
    - gh-pages # will be deployed to, no need to build it
cache:
  directories:
    - node_modules
node_js:
  - "10"
before_install:
  - npm install -g npm
  # const
  - export PACKAGE_NAME=$(node -p "require('./package.json').name")
  - export PACKAGE_VERSION=$(node -p "require('./package.json').version")
  - export NODE_VERSION=$(node --version)
  - export NPM_VERSION=$(npm --version)
  # logging
  - npm --version || echo npm not installed
  - node --version || echo node not installed
  - npx rollup-umd-scripts --version || echo npx not installed
  - echo "package version $PACKAGE_VERSION"
stages:
  - build
  - test
  - release
  - deploy
script:
  # execute all of the commands which need to be executed
  # before running actual tests
  - npm run styleguide:build
jobs:
  include:
    # Job: Build
    - stage: build
      node_js:
        - lts/*
      # - 10
      # - 8
      script:
        - npm run build
      branches:
        only:
          - release
          - dev
          - master
    # Job: Test
    - stage: test
      node_js:
        - lts/*
      # - 10
      # - 8
      script:
        - npm run test
      branches:
        only:
          - release
          - dev
          - master
    # Job: Release
    - stage: release
      node_js:
        - lts/*
      skip_cleanup: true
      script:
        - npx semantic-release
      branches:
        only:
          - master
    # Job: Page
    - stage: deploy
      provider: pages
      skip_cleanup: true
      github_token: $GH_TOKEN # Set in the settings page of your repository, as a secure variable
      keep_history: true
      local_dir: public/
      on:
        branch: master
Does anybody know how I can get the Pages deployment working within stages in Travis?
Your deploy stage had no install/script defined, so it took the default one.
You need to define in the stage what you want to do; you forgot the deploy level.
To have dedicated stage for deployment only, configure it like this:
- stage: deploy
  if: type = push AND branch = master # or whenever you want to deploy
  script: skip # to not run Travis' default script
  deploy: # <-- that was missing !!!
    - provider: pages
      skip_cleanup: true
      github_token: $GH_TOKEN # Set in the settings page of your repository, as a secure variable
      keep_history: true
      local_dir: public/