Offline git clone Drone CI using proxy as Nexus - kubernetes

I am running:
drone-server on top of Kubernetes
and drone-kubernetes-runner to dynamically provision runners as pods.
After investigating, I found that the Pod YAML of each runner defines the first step, "git clone", using the image drone/git.
I am running the pipeline in an offline environment, so I have to specify nexus.company.local/drone/git instead of drone/git to avoid fetching from the public registry.
I have searched everywhere, but found no way to do this.
image_pull_secrets only helps for explicit steps that I define myself;
it is NOT applied to implicit steps like the "clone" step.

You could disable the automatic cloning and add an explicit clone step that specifies your own image from the Nexus mirror.
For example:
kind: pipeline
clone:
  disable: true
steps:
- name: clone
  image: nexus.company.local/drone/git
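For illustration, a fuller .drone.yml might look like the sketch below (assuming a Drone 1.x Kubernetes pipeline; the pipeline name, the build step, and the explicit clone commands are only placeholders. If you run the drone/git image with no commands, its default entrypoint performs the clone using the standard DRONE_* environment variables):
kind: pipeline
type: kubernetes
name: default

clone:
  disable: true

steps:
- name: clone
  image: nexus.company.local/drone/git
  commands:
  # DRONE_GIT_HTTP_URL and DRONE_COMMIT are standard Drone variables;
  # spelled out here only to show what the explicit clone does
  - git clone $DRONE_GIT_HTTP_URL .
  - git checkout $DRONE_COMMIT

- name: build
  image: nexus.company.local/library/alpine:3   # placeholder; every image must come from the mirror as well
  commands:
  - echo "build steps go here"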

Alternatively, set the runner's clone image so that every implicit clone step uses the mirror:
DRONE_RUNNER_CLONE_IMAGE=nexus.company.local/drone/git
REF: https://docs.drone.io/runner/docker/configuration/reference/drone-runner-clone-image/
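The reference above documents the Docker runner; assuming the Kubernetes runner honors the same variable, a minimal sketch of wiring it into the runner's Deployment could look like this (the deployment name, labels, image path and RPC settings are placeholders for your own setup):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone-runner-kube            # assumed name of your runner deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drone-runner-kube
  template:
    metadata:
      labels:
        app: drone-runner-kube
    spec:
      containers:
      - name: runner
        image: nexus.company.local/drone/drone-runner-kube:latest   # runner image mirrored in Nexus
        env:
        - name: DRONE_RPC_HOST
          value: drone-server.drone.svc.cluster.local               # placeholder; keep your existing RPC settings
        - name: DRONE_RPC_SECRET
          valueFrom:
            secretKeyRef:
              name: drone-rpc-secret                                # placeholder secret
              key: secret
        - name: DRONE_RUNNER_CLONE_IMAGE
          value: nexus.company.local/drone/git                      # clone image pulled from the Nexus mirror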

Related

How to persistently set a gitlab runner tag?

I have 2 Kubernetes instances:
Production
Testing
Both with a gitlab runner for CI/CD pipelines.
Since some jobs are only for production and others only for testing, I tagged the runners (in values.yaml).
helm get values gitlab-runner -n gitlab-runner for the testing runner shows this:
USER-SUPPLIED VALUES:
gitlabUrl: https://...
runnerRegistrationToken: ...
tags: testing
This is not working, however, and I have to set the tag manually in the UI (Group > CI/CD > Runners).
The problem is that the servers frequently reboot, which resets the tags and requires a lot of manual upkeep.
Why is the setting in values.yaml not working? And is there a way to set tags which persist after reboots?
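For comparison, one commonly used layout (depending on the chart version) nests the tag under the runners block of values.yaml rather than at the top level. This is only a sketch with placeholder values, and since tags are applied when the runner registers, a re-registration may be needed before a change takes effect:
gitlabUrl: https://gitlab.example.com/
runnerRegistrationToken: "REPLACE_ME"    # placeholder token
runners:
  tags: "testing"                        # read by the chart at registration time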

Trigger Gitlab Pipeline Jobs when Merge Succeeds

I have three K8s clusters: staging, sandbox, and production. I would like to:
Trigger a pipeline that builds and deploys an image to staging when a merge request to master is created
Upon a successful deploy to staging, have the branch merged into master
Reuse the image already built in the build job before the staging deploy for the sandbox and production deploys
Something like this:
build:
  ... (stuff that builds and pushes "$CI_REGISTRY_IMAGE:$IMAGE_TAG")
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
staging:
  ...
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
sandbox:
  ...
  ?
production:
  ...
  ?
What I can't figure out is how to end up with a successful MR at the end of the staging job (so that the pipeline merges the branch into master), and then pass whatever $CI_REGISTRY_IMAGE:$IMAGE_TAG was down to the sandbox and production deploy jobs.
Trigger a pipeline to build and deploy an image to staging, if a merge request to master is created
For the first point you can create rules like:
only:
  - merge_requests
except:
  variables:
    - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"
You can run a curl command or hit the API to approve the MR:
https://gitlab.example.com/api/v4/projects/:id/merge_requests/:merge_request_iid/approve
Reference : https://stackoverflow.com/a/58036578/5525824
Document: https://docs.gitlab.com/ee/api/merge_requests.html#accept-mr
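As a rough sketch (the job name, the stage, and the GITLAB_API_TOKEN CI/CD variable are assumptions, not from the question), accepting/merging the MR from a job that runs after the staging deploy could look like:
merge-after-staging:
  stage: deploy
  needs: ["staging"]
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    # accept/merge the MR via the API once the staging deploy succeeded
    - >
      curl --request PUT
      --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN"
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/merge"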
I would like to use the same image I already built in the build job before the staging deploy, to be used to deploy to sandbox and production
You can pass something like TAG_NAME: $CI_COMMIT_REF_NAME across the stages as an environment variable.
You are making it more complicated than it needs to be; ideally you can use a tag to keep things easy to manage and deploy with the CI.
When the MR gets merged, create a tag, build the Docker image with that tag name, and deploy that same tag across the environments. Simple.
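A minimal sketch of the tag-passing idea (the variable name IMAGE_TAG and the deploy command are placeholders; a pipeline-level variable resolves to the same value in every job of the pipeline, so each deploy job reuses the exact image reference built earlier):
variables:
  IMAGE_TAG: $CI_COMMIT_REF_NAME        # same value in every job of this pipeline

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$IMAGE_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$IMAGE_TAG"

sandbox:
  stage: deploy
  script:
    # placeholder deploy command; the point is that it reuses the tag built above
    - kubectl set image deployment/my-app my-app="$CI_REGISTRY_IMAGE:$IMAGE_TAG"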

angular-cli caches bitbucket pipeline

I work with Angular 4 and angular-cli. I'm trying to cache node_modules in Bitbucket Pipelines.
I tried this:
definitions:
  caches:
    nodemodules: ~/node_modules
or this:
definitions:
  caches:
    nodemodules: /opt/atlassian/pipelines/agent/build/node_modules
but they did not work. Any idea?
According to the docs, it’s simply:
pipelines:
  default:
    - step:
        caches:
          - node
In other words: simply use "node". This works for me in my Bitbucket Pipelines.
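For context, a more complete bitbucket-pipelines.yml using the predefined node cache might look like this (the image and the npm scripts are assumptions about the project, not from the answer):
image: node:8                   # pick the Node version your Angular project needs

pipelines:
  default:
    - step:
        caches:
          - node                # predefined cache for node_modules in the clone directory
        script:
          - npm install         # fast on cache hits, node_modules is restored first
          - npm run build       # assumes a "build" script that runs ng build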

Concourse: how to pass job's output to a different job

It's not clear to me from the documentation whether it's even possible to pass one job's output to another job (not from task to task, but from job to job).
I don't know if I'm conceptually doing the right thing; maybe it should be modeled differently in Concourse. What I'm trying to achieve is a pipeline for a Java project split into several granular jobs, which can be executed in parallel and triggered independently if I need to re-run some job.
How I see the pipeline:
First job:
pulls the code from github repo
builds the project with maven
deploys artifacts to the maven repository (mvn deploy)
updates SNAPSHOT versions of the Maven project submodules
copies artifacts (jar files) to the output directory (output of the task)
Second job:
picks up the jars from the output
builds docker containers for all of them (in parallel)
Pipeline goes on
I was unable to pass the output from job 1 to job 2.
Also, I am curious if any changes I introduce to the original git repo resource will be present in the next job (from job 1 to job 2).
So the questions are:
What is a proper way to pass build state from job to job (I know, jobs might get scheduled on different nodes, and definitely in different containers)?
Is it necessary to store the state in a resource (say, S3/git)?
Is the Concourse stateless by design (in this context)?
Where's the best place to get more info? I've tried the manual; it's just not that detailed.
What I've found so far:
outputs are not passed from job to job
Any changes to the resource (put to the github repo) are fetched in the next job, but changes in working copy are not
Minimal example (it fails with the error "missing inputs: gist-upd, gist-out" if the commented lines are uncommented):
---
resources:
- name: gist
  type: git
  source:
    uri: "git@bitbucket.org:snippets/foo/bar.git"
    branch: master
    private_key: {{private_git_key}}
jobs:
- name: update
  plan:
  - get: gist
    trigger: true
  - task: update-gist
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: concourse/bosh-cli}
      inputs:
      - name: gist
      outputs:
      - name: gist-upd
      - name: gist-out
      run:
        path: sh
        args:
        - -exc
        - |
          git config --global user.email "nobody@concourse.ci"
          git config --global user.name "Concourse"
          git clone gist gist-upd
          cd gist-upd
          echo `date` > test
          git commit -am "upd"
          cd ../gist
          echo "foo" > test
          cd ../gist-out
          echo "out" > test
  - put: gist
    params: {repository: gist-upd}
- name: fetch-updated
  plan:
  - get: gist
    passed: [update]
    trigger: true
  - task: check-gist
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: alpine}
      inputs:
      - name: gist
      #- name: gist-upd
      #- name: gist-out
      run:
        path: sh
        args:
        - -exc
        - |
          ls -l gist
          cat gist/test
          #ls -l gist-upd
          #cat gist-upd/test
          #ls -l gist-out
          #cat gist-out/test
To answer your questions one by one.
All build state needs to be passed from job to job in the form of a resource which must be stored on some sort of external store.
It is necessary to store on some sort of external store. Each resource type handles this upload and download itself, so for your specific case I would check out this maven custom resource type, which seems to do what you want it to.
Yes, this statelessness is the defining trait behind Concourse. The only stateful element in Concourse is a resource, which must be strictly versioned and stored on an external data store. When you combine the containerization of tasks with the external store of resources, you get the guaranteed reproducibility that Concourse provides. Each version of a resource is backed up on some sort of data store, so even if the data center your CI runs on were to completely fall down, you would still have strict reproducibility of each of your CI builds.
In order to get more info I would recommend doing a tutorial of some kind to get your hands dirty and build a pipeline yourself. Stark and Wayne have a tutorial that could be useful. To help understand resources there is also a resources tutorial, which might be helpful for you specifically.
Also, to get to your specific error: the reason you are seeing missing inputs is that Concourse looks for directories (made by resource gets) named after each of those inputs. So you would need to get resource instances named gist-upd and gist-out prior to starting the task.
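To make the resource-based hand-off concrete, here is a minimal sketch (the bucket name, credentials and task file are placeholder assumptions, and an S3 resource stands in for whatever external store you pick, e.g. the maven resource type mentioned above):
resources:
- name: artifact-jar
  type: s3
  source:
    bucket: my-ci-artifacts            # placeholder bucket
    regexp: my-app-(.*).jar            # versioned by the filename
    access_key_id: {{aws_access_key}}
    secret_access_key: {{aws_secret_key}}

jobs:
- name: build
  plan:
  - get: gist
    trigger: true
  - task: build-jar
    file: gist/ci/build.yml            # hypothetical task whose output "build" contains my-app-<version>.jar
  - put: artifact-jar                  # the versioned artifact is what crosses the job boundary
    params: {file: build/my-app-*.jar}

- name: dockerize
  plan:
  - get: artifact-jar
    passed: [build]                    # only versions produced by the build job
    trigger: true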

Is it possible to build a docker image without pushing it?

I want to build a docker image in my pipeline and then run a job inside it, without pushing or pulling the image.
Is this possible?
It's by design that you can't pass artifacts between jobs in a pipeline without using some kind of external resource to store it. However, you can pass between tasks in a single job. Also, you specify images on a per-task level rather than a per-job level. Ergo, the simplest way to do what you want may be to have a single job that has a first task to generate the docker-image, and a second task which consumes it as the container image.
In your case, you would build the docker image in the build task and use docker export to export the image's filesystem to a rootfs which you can put into the output (my-task-image). Keep in mind the particular schema the rootfs output needs to match: you will need rootfs/... (the extracted 'docker export') and a metadata.json, which can just contain an empty JSON object. You can look at the in script within the docker-image-resource for more information on how to match the schema: https://github.com/concourse/docker-image-resource/blob/master/assets/in. Then, in the subsequent task, you can add the image parameter in your pipeline yml as such:
- task: use-task-image
  image: my-task-image
  file: my-project/ci/tasks/my-task.yml
in order to use the built image in the task.
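As a rough sketch only (not the author's exact setup), the producing task config could look something like the following; it assumes the task runs somewhere a Docker daemon is actually available (for example a docker-in-docker style image and setup), which is the tricky part in practice:
platform: linux
image_resource:
  type: docker-image
  source: {repository: docker}   # placeholder; in practice you need a docker-in-docker capable setup
inputs:
- name: my-project
outputs:
- name: my-task-image
run:
  path: sh
  args:
  - -exc
  - |
    # build the image from the project's Dockerfile (requires a running Docker daemon)
    docker build -t my-task-image my-project
    # flatten the image filesystem into the rootfs/ layout the task `image:` field expects
    mkdir -p my-task-image/rootfs
    docker export $(docker create my-task-image) | tar -xf - -C my-task-image/rootfs
    # minimal metadata file; an empty JSON object is enough, per the answer above
    echo '{}' > my-task-image/metadata.json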
UPDATE: the PR was rejected
This answer doesn't currently work, as the "dry_run" PR was rejected. See https://github.com/concourse/docker-image-resource/pull/185
I will update here if I find an approach which does work.
The "dry_run" parameter, which was added to the docker resource in Oct 2017, now allows this (github pr).
You need to add a dummy docker resource like:
resources:
- name: dummy-docker-image
  type: docker-image
  icon: docker
  source:
    repository: example.com
    tag: latest
- name: my-source
  type: git
  source:
    uri: git@github.com:me/my-source.git
Then add a build step which pushes to that docker resource but with "dry_run" set so that nothing actually gets pushed:
jobs:
- name: My Job
  plan:
  - get: my-source
    trigger: true
  - put: dummy-docker-image
    params:
      dry_run: true
      build: path/to/build/scope
      dockerfile: path/to/build/scope/path/to/Dockerfile