Gitlab Runner - New folder for each build - deployment

I'm using GitLab CI for my project. When I push to the develop branch, it runs the tests and updates the code on my test environment (a remote server).
But the GitLab runner always reuses the same build folder: builds/a3ac64e9/0/myproject/myproject
I would like it to create a new folder every time:
builds/a3ac64e9/1/myproject/myproject
builds/a3ac64e9/2/myproject/myproject
builds/a3ac64e9/3/myproject/myproject
and so on.
That way, I could update my website simply by changing a symbolic link to point at the latest build directory.
Is there a way to configure GitLab Runner this way?

While it doesn't make sense to use your build directory as your deployment directory, you can set up a custom build directory:
Open config.toml in a text editor (more info on where to find it here).
Set enabled = true under [runners.custom_build_dir] (more info here):
[runners.custom_build_dir]
enabled = true
In your .gitlab-ci.yml file, under variables set GIT_CLONE_PATH. It must start with $CI_BUILDS_DIR/, e.g. $CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME, which will probably give you what you're looking for, although if you have multiple stages, they will have different job IDs. Alternatively, you could try $CI_BUILDS_DIR/$CI_COMMIT_SHA, which would give you a unique folder for each commit. (More info here)
variables:
  GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME'
Unfortunately, there is currently an issue with using $CI_BUILDS_DIR in GIT_CLONE_PATH if you're using Windows and PowerShell, so you may have to use a work-around like the following, provided all your runners have the same build directory: GIT_CLONE_PATH: 'C:\GitLab-Runner/builds/$CI_JOB_ID/$CI_PROJECT_NAME'
You may want to take a look at the variables available to you (predefined variables) to find the most suitable variables for your path.
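Once each build lands in its own directory, the deploy job itself can flip the symbolic link. A minimal sketch, assuming a shell executor on the target server and a web root at /var/www/myproject (both are assumptions; adjust to your setup):

deploy:
  stage: deploy
  script:
    # $CI_PROJECT_DIR is the directory this job was cloned into;
    # ln -sfn replaces the link in a single step (the web root path is an assumption)
    - ln -sfn "$CI_PROJECT_DIR" /var/www/myproject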

You might want to read the following answer Changing the build intermediate paths for gitlab-runner
I'll repost my answer here:
Conceptually, this approach is not the way to go: the build directory is not a deployment directory, it's a temporary directory to build in or to deploy from, even though with a shell executor it happens to be a fixed path.
What you need is to deploy from that directory to the correct deployment directory, with a script in your .gitlab-ci.yml like the one below.
stages:
  - deploy

variables:
  TARGET_DIR: /home/ab12/public_html/$CI_PROJECT_NAME

deploy:
  stage: deploy
  script:
    - mkdir -pv $TARGET_DIR
    - rsync -r --delete ./ $TARGET_DIR
  tags:
    - myrunner
This will copy your project files into /home/ab12/public_html/$CI_PROJECT_NAME.
By naming your projects project1 .. projectn, all of them can use this same .gitlab-ci.yml file.

You cannot achieve this with GitLab CI runner configuration alone, but you can create 2 runners and assign each of them exclusively to one branch by using a combination of the only and tags keywords.
Assuming your two branches are named master and develop and two runners have been tagged with master_runner and develop_runner tags, your .gitlab-ci.yml can look like this:
master_job:
  <<: *your_job
  only:
    - master
  tags:
    - master_runner

develop_job:
  <<: *your_job
  only:
    - develop
  tags:
    - develop_runner
(<<: *your_job merges in your actual job definition, which you can factor out as a YAML anchor.)
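If you haven't used YAML anchors before, a minimal sketch of the factored-out job might look like this (the stage and script are placeholders; the leading dot makes it a hidden job that GitLab won't run directly):

.your_job: &your_job
  stage: deploy
  script:
    - ./deploy.sh   # placeholder for your actual deployment script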

Related

Gitlab CI/CD - How to deploy multiple environments from one branch?

I need to deploy a project to Kubernetes in multiple environments ("development", "staging", "production") and to multiple countries ("Germany", "France", "Spain" as examples).
Until now I have been using a pipeline in the original project that deploys to Germany, done with the GitLab CI "environment" keyword for "development, staging, production". That part is OK.
For the other countries, I mirror the project and add different variables in the mirror project. Then both countries get deployed.
The problem is that each country builds a separate image even though the code base is the same. So for 5 countries, 5 images are built instead of 1 for each environment.
Current solution:
I have decided for now to add an extra_{{country_id}}_{{environment}} environment for each country in the same project.
So, for example, I will build an image for the development environment and deploy it to extra_GER_development, extra_FRA_development and extra_SPA_development.
So far I have found a lot of info on how to deploy for "development, staging, production", but none on how to split it even further.
Question: Is there a better way to deploy a single image to multiple environments?
Any pointers to possible solutions or articles are also appreciated. Thank you.
If by image you mean a Docker image, you can build a single image and set 5 or more tags, one per country, then push the different tags to the repository or deploy each tag to its environment.
Since it's a repetitive process, you can also write a shell script to automate it: pass in the list of environments or countries, iterate over the list, and set the appropriate tags or deploy them one by one.
Assuming an environment variable that contains all the countries, you can write a shell script like the following:
#!/bin/bash
# iterate over the countries passed as arguments
for country in "$@"
do
  # set a tag for this country
  docker tag SOURCE_IMAGE:tag TARGET_IMAGE:"$country"
  # or maybe deploy
  ...
done
and call this script from the GitLab CI script section:
...
deploy:
  stage: deploy
  script:
    - ./set-country-tags ${COUNTRIES}
...
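On recent GitLab versions, an alternative to the shell loop is to let GitLab fan out the deployment itself with the parallel: matrix keyword. A sketch, assuming the image has already been built once per environment and deploy.sh is your own script (both the script name and the environment naming are assumptions):

deploy:
  stage: deploy
  parallel:
    matrix:
      - COUNTRY: [GER, FRA, SPA]
  environment: extra_${COUNTRY}_development
  script:
    # one job per country, all reusing the same image (deploy.sh is a placeholder)
    - ./deploy.sh "$COUNTRY"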

Set working directory of a project in mono repo in Azure Devops

My project uses microservices, and in Azure DevOps we have multiple applications in one repo.
For example, we have a repo named MicroserviceProject containing .NET, AngularUI, and Java project code.
While setting up the CI pipeline, I have included the path like below:
variables:
  - name: working-dir
    value: 'MicroserviceProject/AngularUI/ClientApp/'

trigger:
  branches:
    include:
      - master
  paths:
    include:
      - 'MicroserviceProject/AngularUI/ClientApp/*'
The AngularUI project code doesn't get checked out properly, and I'm encountering an error saying that the package.json file cannot be located.
How can I set the working directory for the different projects in a repo?
Update:
I am able to locate the file, but the build is not giving me any output files.
How I fixed this issue:
Initially I was not sure whether the working directory was set properly, and even if it was, whether the package.json file was read properly. To check that, I added the below script to the Azure CI pipeline, for example:
inputs:
  targetType: 'inline'
  script: dir $(Build.ArtifactStagingDirectory)
displayName: 'Check'
This showed me that after building, the artifacts were not being stored anywhere, hence I had to explicitly specify the output path. For that I ran the build with the command below:
run-script build -- --output-path=$(Build.ArtifactStagingDirectory)
This fixed the issue I was facing.
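For reference, the working directory and the output path can both be set on the npm build step. A sketch using the Npm@1 task and the working-dir variable from above (the display name is illustrative; adjust to your pipeline):

- task: Npm@1
  displayName: 'Build AngularUI'
  inputs:
    command: 'custom'
    workingDir: '$(working-dir)'
    customCommand: 'run-script build -- --output-path=$(Build.ArtifactStagingDirectory)'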

GitHub PR doesn't trigger GitLab pipeline

I'm trying to use GitHub to trigger a GitLab pipeline on PR.
Practically, when a developer creates a PR in GitHub, his/her code should get tested against a GitLab pipeline.
I'm trying to follow this user guide: https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html
and we have a Silver account, but it doesn't work: when creating the PR, the GitLab pipeline is not triggered.
Anyone with this kind of experience who can help?
Thanks
Joe
I've found the cause of the issue.
In order for GitHub to trigger GitLab CI/CD, mostly on PRs, you need to have a Silver/Premium account AND, very important, you must be the root owner.
In any other case, you won't be able to see GitHub in the integration list on GitLab. The people from GitLab had the brilliant idea of hiding it instead of showing it disabled (which would have been a hint that you needed an upgraded license).
This is not explained in the video in the documentation.
Firstly, you need to give us the content of your .gitlab-ci.yaml file. In your question you asked about GitHub, but you're following GitLab documentation, which is completely different. Both use git commands to commit and push repos, but GitHub and GitLab are different platforms.
For GitHub pipelines, you create a repository and then go to Actions. GitHub will propose configuring a .github/workflows directory containing a YAML file, in which you can code your pipelines. Depending on your project, GitHub will propose Linux machines with an adequate configuration to run your files (for a Java project you'll be proposed Maven machines, for Python, Python machines, for React/Angular, machines with npm installed, plus Docker and Kubernetes for deployments), and you're limited to 4 private projects as far as I know (check this last piece of information).
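For illustration, a minimal workflow file could look like the sketch below (the test command is a placeholder for whatever your project uses):

# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: mvn test   # placeholder build/test command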
For GitLab you have two options: you can use preconfigured machines like on GitHub, calling them by adding, for example, a tag: npm to your .gitlab-ci.yaml file to get a machine with npm installed, but you need to pay for them. Or you can configure your own runners by following the GitLab documentation and the gitlab-runner commands (which is the best option), but you'll need good machines and servers to run npm, mvn, python3, ... commands.
Finally, to answer your question, here is an example .gitlab-ci.yaml file with two simple stages, build and test; the only statement specifies that these pipelines will run when there is a merge request (I use the preconfigured machines of GitLab as a sample here). More details in my Python project on GitHub, https://github.com/mehdimaaref7/Scrapping-Sentiment-Analysis, and in the GitLab docs, https://docs.gitlab.com/runner/
stages:
  - build
  - test

build:
  tags:
    - shell
    - linux
  stage: build
  script:
    - echo "Building"
    - mkdir build
    - touch build/info.txt
  artifacts:
    paths:
      - build/
  only:
    - merge_requests

test:
  tags:
    - shell
    - linux
  stage: test
  script:
    - echo "Testing"
    - test -f "build/info.txt"
  only:
    - merge_requests

Deploy individual services from a monorepo using github actions

I have around 10 individual micro-services, mostly cloud functions for various data processing jobs, which all live in a single GitHub repository.
The goal is to trigger the selective deployment of these services to Google Cloud Functions on push to a branch, whenever an individual function has been updated.
I must avoid the situation in which an update to a single service causes the deployment of all the cloud functions.
My current repository structure:
/repo
--/service_A
----/function
----/notebook
--/service_B
----/function
----/notebook
On a side note, what are the pros/cons of using Github Actions VS Google Cloud Build for such automation?
GitHub Actions supports monorepos with path filtering for workflows. You can create a workflow to selectively trigger when files on a specific path change.
https://help.github.com/en/articles/workflow-syntax-for-github-actions#onpushpull_requestpaths
For example, this workflow will trigger on a push when any files under the path service_A/ have changed (note the ** glob to match files in nested directories).
on:
  push:
    paths:
      - 'service_A/**'
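Putting it together, a per-service workflow might look like the sketch below. The gcloud invocation is an assumption (flags such as --runtime and --trigger are omitted), and the job is assumed to already be authenticated against Google Cloud:

# .github/workflows/deploy-service_A.yml
name: Deploy service_A
on:
  push:
    branches:
      - master
    paths:
      - 'service_A/**'
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # deployment command is illustrative; add your runtime/trigger flags
      - run: gcloud functions deploy service_A --source service_A/function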
You could also run a script to discover which services were changed based on git diff, and trigger the corresponding job via the GitHub REST API.
There could be two workflows, main.yml and services.yml.
The main workflow is configured to always run on push, and it only runs a script to find out which services were changed. For each changed service, a repository dispatch event is triggered with the service name in the payload.
The services workflow is configured to run on repository_dispatch and contains one job for each service. Jobs have an additional condition based on the event payload, as in the sketch after the link below.
See showcase with similar setup:
https://github.com/zladovan/monorepo
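A sketch of the receiving services.yml side of that setup (the event type, payload field, and deploy step are illustrative):

# services.yml
on:
  repository_dispatch:
    types: [service-changed]
jobs:
  service_A:
    # only run when the dispatch payload names this service
    if: github.event.client_payload.service == 'service_A'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./deploy-service_A.sh   # placeholder deploy step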
It's not a Monorepo
If you only have apps, then I'm sorry... but all you have is a repo of many apps.
A monorepo is a collection of packages that you can map a graph of dependencies between.
Aha, I have a monorepo
But if you have a collection of packages which depend on each other, then read on.
apps/
  one/
    depends:
      pkg/foo
  two/
    depends:
      pkg/bar
      pkg/foo
pkg/
  foo/
  bar/
  baz/
The answer is that you switch to a tool that can describe which packages have changed between the current git ref and some other git ref.
The following two examples run the release npm script on each package that has changed under apps/* and on all the packages they depend on.
I'm unsure if the pnpm method silently skips packages that don't have a release target/command/script.
Use NX Dev
Using NX.dev, its nx affected command will work it out for you.
You need an nx.json in the root of your monorepo.
This assumes you're using the package.json approach with nx.dev; if you have a project.json in each package, the target would reside there.
Your CI would then run:
pnpx nx affected --target=release
Pnpm Filtering
Your other option is to switch to pnpm and use its filtering syntax:
pnpm --filter "...{apps/**}[origin/master]" release
Naive Path Filtering
If you just try to rely on which paths changed in this git commit, then you miss out on transitive changes that affect the packages you actually want to deploy.
If you have a github action like:
on:
  push:
    paths:
      - 'app/**'
Then you won't ever get any builds when you push commits that only change things in pkg/**.
Other interesting github actions
https://github.com/marketplace/actions/nx-check-changes
https://github.com/marketplace/actions/nx-affected-dependencies-action
https://github.com/marketplace/actions/nx-affected-list (a non-nx alternative here is dorny/paths-filter)
https://github.com/marketplace/actions/nx-affected-matrix
Has Changed Path Action might be worth a try:
name: Conditional Deploy
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 100
      - uses: marceloprado/has-changed-path@v1
        id: service_A_deployment
        with:
          paths: service_A
      - name: Deploy front
        if: steps.service_A_deployment.outputs.changed == 'true'
        run: /deploy-service_A.sh

Rustdoc on gh-pages with Travis

I have generated documentation for my project with cargo doc, and it is made in the target/doc directory. I want to allow users to view this documentation without a local copy, but I cannot figure out how to push this documentation to the gh-pages branch of the repository. Travis CI would help me automatically do this, but I cannot get it to work either. I followed this guide, and set up a .travis.yml file and a deploy.sh script. According to the build logs, everything goes fine but the gh-pages branch never gets updated. My operating system is Windows 7.
It is better to use travis-cargo, which is intended to simplify deploying docs and also has other features. Its readme provides an example .travis.yml file, although in the simplest form it could look like this:
language: rust
sudo: false
rust:
  - nightly
  - beta
  - stable
before_script:
  - pip install 'travis-cargo<0.2' --user && export PATH=$HOME/.local/bin:$PATH
script:
  - |
    travis-cargo build &&
    travis-cargo test &&
    travis-cargo --only beta doc
after_success:
  - travis-cargo --only beta doc-upload
# needed to forbid travis-cargo to pass `--feature nightly` when building with nightly compiler
env:
  global:
    - TRAVIS_CARGO_NIGHTLY_FEATURE=""
It is very self-descriptive, so it is obvious, for example, what to do if you want to use another Rust release train for building docs.
In order for the above .travis.yml to work, you need to set your GH_TOKEN somehow. There are basically two ways to do it: inside .travis.yml via an encrypted string, or by configuring it in Travis itself, in the project options. I prefer the latter way, so I don't need to install the travis command-line tool or pollute my .travis.yml (which is why the above config file does not contain a secure option), but you may choose otherwise.
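If you do prefer the encrypted-string route, the travis command-line tool can generate it for you; a sketch (the token value is a placeholder for a GitHub personal access token):

travis encrypt GH_TOKEN=<your token> --add env.global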
In order for the above .travis.yml to work, you need to set your GH_TOKEN somehow. There are basically two ways to do it: inside .travis.yml via an encrypted string, or by configuring it in the Travis itself, in project options. I prefer the latter way, so I don't need to install travis command line tool or pollute my .travis.yml (and so the above config file does not contain secure option), but you may choose otherwise.