Different Build steps according to external variable in Drone CI - kubernetes

I use Drone CI to handle my CI/CD process.
I am working on a use case where I take input variables and run different pipeline steps according to the key-value pairs passed as inputs to the deploy pipeline.
Currently my pipeline uses the Ansible plugin to push changes to the destination, something like this:
- name: pipeline1
  image: plugins/ansible:3
  environment:
    <<: *creds
  settings:
    playbook: .ci/.ansible/playbook.yml
    inventory: .ci/.ansible/inventory
    user: admin_user
    private_key:
      from_secret: admin_key
    become: true
    verbosity: 3
  when:
    KEY1 = True

- name: pipeline2
  image: plugins/ansible:3
  environment:
    <<: *creds
  settings:
    playbook: .ci/.ansible/playbook.yml
    inventory: .ci/.ansible/inventory
    user: admin_user
    private_key:
      from_secret: admin_key
    become: true
    verbosity: 3
  when:
    KEY2 = True
.
.
.
How can I deploy such a pipeline? The when keyword documentation does not show any example for this kind of condition.

As per the Drone conditions documentation (https://docs.drone.io/pipeline/conditions/), you can't use environment variables in a when block; only repository and promotion conditions can be used there.
In your case you can try using dependencies between steps via the depends_on parameter, as described for parallel pipeline steps (https://discourse.drone.io/t/how-to-setup-parallel-pipeline-steps-1-0/3251).
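As a rough sketch of what the documented conditions do support, a step can be gated on a promotion target instead of an environment variable (the target name key1-deploy below is purely illustrative, not from the original pipeline):

- name: pipeline1
  image: plugins/ansible:3
  settings:
    playbook: .ci/.ansible/playbook.yml
    inventory: .ci/.ansible/inventory
  when:
    event:
    - promote
    target:
    - key1-deploy

The step would then run only for builds promoted to that target, e.g. via drone build promote <repo> <build-number> key1-deploy.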

Related

GitLab CI job failure but dependent job continues to run

In GitLab, I created a CI pipeline to build the project; each stage has 2 separate jobs:
Build BookProject:
  stage: build
  <<: *dotnetbuild_job
  when: manual

Build ShopProject:
  stage: build
  <<: *dotnetbuild_job
  when: manual

Deploy BookProject:
  stage: Deploy
  needs: ["Build BookProject"]
  <<: *dotnetdeploy_job
  when: on_success

Deploy ShopProject:
  stage: Deploy
  needs: ["Build ShopProject"]
  <<: *dotnetdeploy_job
  when: on_success
I find that when the Build BookProject job fails with ERROR: Job failed: exit code 1 (the job icon shows !), the Deploy BookProject job still continues to run, even though I set when: on_success. How can I prevent this?
When a job specifies when: manual, it implicitly gets allow_failure: true.
To avoid this behavior, specify allow_failure: false on your manual build jobs.
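Applied to the jobs above, that would look roughly like this (reusing the asker's anchors; a sketch, not the exact configuration):

Build BookProject:
  stage: build
  <<: *dotnetbuild_job
  when: manual
  allow_failure: false   # a failed manual build now blocks Deploy BookProject

Build ShopProject:
  stage: build
  <<: *dotnetbuild_job
  when: manual
  allow_failure: false   # same for the other build/deploy pair

With allow_failure: false, a failing manual build job marks the pipeline as failed, so the dependent when: on_success deploy jobs are no longer started.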

Azure yaml pipeline group variables not seen by task in a template file

I have a pipeline stage that is using a template as follows:
# Deploy to AKS
- stage: DeployTEST
  displayName: Test env for my-app
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  variables:
  - group: 'my-app-var-group-test'
  - group: 'package-variables'
  - template: templates/shared-template-vars.yml@templates
  jobs:
  - deployment: TestDeployment
    displayName: Deploy to AKS - Test
    pool:
      vmImage: $(vmImageName)
    environment: env-test
    strategy:
      runOnce:
        deploy:
          steps:
          - template: ./aks/deployment-steps.yml
...and the content of the template deployment-steps.yml is:
steps:
- script: |
    echo AzureSubscription: '$(azureSubscription)'
    echo KubernetesServiceConnection: '$(kubernetesServiceConnection)' # this is working
- task: KubernetesManifest@0
  displayName: Create imagePullSecret
  inputs:
    action: createSecret
    secretName: $(imagePullSecret)
    dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
    kubernetesServiceConnection: $(kubernetesServiceConnection) # this is causing an error
I get an error like this:
There was a resource authorization issue: "The pipeline is not valid. Job TestDeployment: Step input kubernetesServiceConnection references service connection $(kubernetesServiceConnection) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
and like this when I try to select individual stages prior to a manual pipeline run:
Encountered error(s) while parsing pipeline YAML:
Job TestDeployment: Step input kubernetesServiceConnection references service connection $(kubernetesServiceConnection) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz.
The errors above are misleading, because it is not an authorization issue:
- the referenced K8s service connection is authorized
- when I hardcode the value of the $(kubernetesServiceConnection) variable, the pipeline runs just fine - no errors
- the variable group my-app-var-group-test is authorized - IMPORTANT: this is where the $(kubernetesServiceConnection) variable is defined
NOTE: The variable kubernetesServiceConnection is defined in the my-app-var-group-test variable group, and when I comment out the KubernetesManifest task, the value of $(kubernetesServiceConnection) is printed correctly to the pipeline console output and the pipeline runs successfully.
I know I could use parameters to pass values into the template, but this setup is already used by all other pipelines (variable group vars are used/referenced in templates) and this issue appeared on a newly created pipeline. I have used file comparison to compare the YAML of a working pipeline and this one and failed to spot anything...
I might be missing something obvious, but I have spent hours on this without resolving the error...
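For reference, the parameter-based workaround mentioned above would look roughly like this; it is only a sketch, and the parameter name and service connection value are illustrative, not taken from the original setup. Service connection inputs are resolved when the pipeline YAML is compiled, which is likely why a hardcoded value authorizes fine while a runtime $(...) macro from a variable group trips the resource authorization check:

# deployment-steps.yml
parameters:
- name: kubernetesServiceConnection
  type: string

steps:
- task: KubernetesManifest@0
  displayName: Create imagePullSecret
  inputs:
    action: createSecret
    secretName: $(imagePullSecret)
    dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
    kubernetesServiceConnection: ${{ parameters.kubernetesServiceConnection }}

# caller (inside the deployment's steps)
- template: ./aks/deployment-steps.yml
  parameters:
    kubernetesServiceConnection: 'my-k8s-service-connection'  # literal name, known at compile time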

How to access a multi-branch resource attribute in a Concourse job?

I'm using a multi-branch resource in a Concourse pipeline like so:
resources:
- name: my-resource
  type: git-multibranch
  source:
    uri: git@github.com.../my-resource
    branches: 'feature/.*'
    private_key: ...
    ignore-branches: ''
How can I access the branch the resource is on at the time the job runs? Like so:
jobs:
  ...
  outputs:
  - name: my-resource
    params:
      GIT_BRANCH: {BRANCH-GOES-HERE}
I'm looking to access it via something like my-resource.branch, but haven't found anything that works yet.

Is there a way to put a lock on Concourse git-resource?

I have set up a pipeline in Concourse with some jobs that build Docker images.
After the build, I push the image tag to the git repo.
The problem is that when the builds finish at the same time, one job pushes to git while the other has only just pulled, and when the second job tries to push to git it gets this error:
error: failed to push some refs to 'git@github.com:*****/*****'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
So is there any way to prevent concurrent pushes?
So far I've tried applying serial and serial_groups to the jobs.
It helps, but all the jobs get queued up, because we have a lot of builds.
I expect jobs to run concurrently and pause before doing operations on git if some other job holds a lock on it.
resources:
- name: backend-helm-repo
  type: git
  source:
    branch: master
    paths:
    - helm
    uri: git@github.com:******/******
- ...
jobs:
- ...
- name: some-hidden-api-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-hidden-api-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-hidden-api-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-hidden-api-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-hidden-api-status
    params:
      commit: some-hidden-api-repo
      state: success
- name: some-other-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-other-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-other-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-other-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-other-status
    params:
      commit: some-other-repo
      state: success
- ...
So if the jobs finish their image builds at the same time and make git commits in parallel, one pushes faster than the other, and the second one breaks.
Can someone help?
Note that your description is too vague to give a detailed answer.

I expect jobs to run concurrently and stop before pushing to git if some other job has a lock on git.

This will not be enough: if they stop just before pushing, they are already referencing a git commit, which will be stale by the time the lock is released by the other job :-)
The jobs would have to stop, waiting on the lock, before cloning the git repo, so at the very beginning.
All this is speculation on my part, since again it is not clear what you want to do; for these kinds of questions, posting an as-small-as-possible pipeline image and as-small-as-possible configuration code is helpful.
You can consider https://github.com/concourse/pool-resource as a locking mechanism.
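A minimal sketch of how the pool-resource could be wired in, assuming a dedicated locks repository (the repository URI, pool name, and lock resource name below are illustrative, not from the original pipeline):

resource_types:
- name: pool
  type: registry-image
  source:
    repository: concourse/pool-resource

resources:
- name: helm-repo-lock
  type: pool
  source:
    uri: git@github.com:myorg/locks.git   # hypothetical repo holding the lock pool
    branch: master
    pool: backend-helm-repo
    private_key: ((locks-private-key))

jobs:
- name: some-hidden-api-build
  plan:
  - put: helm-repo-lock            # acquire the lock before touching the shared repo
    params: {acquire: true}
  - get: backend-helm-repo         # clone only after the lock is held, so the commit is fresh
  # ... build image, bump the helm tag, commit ...
  - put: backend-helm-repo
    params: {repository: backend-helm-tag-bump}
  - put: helm-repo-lock            # release so the other job can proceed
    params: {release: helm-repo-lock}

With this layout the image builds can still run in parallel; only the clone-commit-push section of each job serializes on the lock.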

Conditionally create CodePipeline actions based on CloudFormation conditions

Enable / disable sections of a CloudFormation template for CodePipeline using conditionals:
This creates a manual notification action once staging has been built and passed Runscope tests:
- InputArtifacts: []
  Name: !Join ["",[!Ref GitHubRepository, "-prd-approval"]]
  ActionTypeId:
    Category: Approval
    Owner: AWS
    Version: '1'
    Provider: Manual
  OutputArtifacts: []
  Configuration:
    NotificationArn: !GetAtt ["SNSApprovalNotification", "Outputs.SNSTopicArn"]
    ExternalEntityLink: OutputTestUrl
  RunOrder: 3
How can I enable/disable this like other CloudFormation resources with a Condition:?
Action steps don't recognize the Condition: parameter.
I could make 2 copies of the whole pipeline, one with and one without the approval action, and toggle which one I create, but it seems like there should be a better way.
You should be able to accomplish this by conditionally inserting the AWS::CodePipeline::Pipeline Resource's Action into the Actions list using the Fn::If Intrinsic Function referencing your Conditions element, returning the Action when the Condition is true and AWS::NoValue (which removes the property, in this case removing the item from the list) when it is not true:
- !If
  - IsProdCondition
  - InputArtifacts: []
    Name: !Join ["",[!Ref GitHubRepository, "-prd-approval"]]
    ActionTypeId:
      Category: Approval
      Owner: AWS
      Version: '1'
      Provider: Manual
    OutputArtifacts: []
    Configuration:
      NotificationArn: !GetAtt ["SNSApprovalNotification", "Outputs.SNSTopicArn"]
      ExternalEntityLink: OutputTestUrl
    RunOrder: 3
  - !Ref AWS::NoValue
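The IsProdCondition referenced above would be declared in the template's Conditions section. A minimal sketch, assuming a hypothetical Environment parameter that controls whether the approval action exists:

Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prd]
    Default: dev

Conditions:
  IsProdCondition: !Equals [!Ref Environment, prd]

Any condition name works, as long as it matches the first argument of the Fn::If above.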