How to access a multi-branch resource attribute in a Concourse job? - concourse

I'm using multi-branch resourcing in a Concourse pipeline like so:
resources:
- name: my-resource
  type: git-multibranch
  source:
    uri: git@github.com.../my-resource
    branches: 'feature/.*'
    private_key: ...
    ignore-branches: ''
How can I access the branch the resource is on at the time the job runs? Like so:
jobs:
...
  outputs:
  - name: my-resource
    params:
      GIT_BRANCH: {BRANCH-GOES-HERE}
I'm looking to access it via something like my-resource.branch but haven't found anything that works yet.
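One workaround, sketched here under assumptions rather than taken from the git-multibranch docs (I'm not aware of any documented my-resource.branch variable), is to read the branch from the fetched repository inside a task and expose it to later steps through an output. The read-branch task and the branch-info/name file below are made-up names, and the git command assumes the resource leaves the branch checked out (if HEAD is detached it prints HEAD instead):

jobs:
- name: build
  plan:
  - get: my-resource
    trigger: true
  # Hypothetical task: read the checked-out branch from the fetched repo
  # and write it to an output for later steps to consume.
  - task: read-branch
    config:
      platform: linux
      image_resource:
        type: registry-image
        source: {repository: alpine/git}
      inputs:
      - name: my-resource
      outputs:
      - name: branch-info
      run:
        path: /bin/sh
        args:
        - -ec
        - |
          git -C my-resource rev-parse --abbrev-ref HEAD > branch-info/name
          cat branch-info/name

A later task can read branch-info/name; whether a put step's params can consume it depends on whether that resource accepts a file path for the parameter.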

Related

JFrog Pipelines - Does the cronTrigger resource support triggering a pipeline with predefined variables?

resources:
  - name: nightly_cron_trigger
    type: CronTrigger
    configuration:
      interval: "30 03 * * *" # Every day at 03:30AM UTC
      branches:
        include: *serviceBranchRegexp

pipelines:
  - name: commons_nightly
    steps:
      - name: prepare_nightly_run
        type: Bash
        configuration:
          nodePool: ci_c5large
          inputResources:
            - name: nightly_cron_trigger
            - name: commons_bitbucket
              trigger: false
          outputResources:
            - name: commons_property_bag
          environmentVariables:
            GIT_REPO_PATH:
              default: *serviceGitRepoPath
        execution:
          onStart:
            - source
Currently we have a pipeline (run by cron each night) where each step triggers an embedded pipeline, and every step does the same thing - only the resources and names change. So I thought maybe the cron could run the main pipeline a few times each night, but with different params on every run.
The cron resource does not support this, meaning you can't trigger a pipeline with predefined variables using the cronTrigger resource.
But maybe you can use a propertyBag resource. You could configure it like this: the input cronTrigger triggers a pipeline step, and that pipeline step updates the output propertyBag resource with different parameters.
cronTrigger -> pipelineStep -> propertyBag
That propertyBag resource can then be an input to a different pipeline.
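A minimal sketch of that wiring, assuming JFrog Pipelines' PropertyBag resource type and its write_output utility function (the property key and value below are made up for illustration):

resources:
  - name: commons_property_bag
    type: PropertyBag
    configuration:
      runProfile: ""   # placeholder property, overwritten by the step

pipelines:
  - name: commons_nightly
    steps:
      - name: prepare_nightly_run
        type: Bash
        configuration:
          inputResources:
            - name: nightly_cron_trigger
          outputResources:
            - name: commons_property_bag
        execution:
          onExecute:
            # write_output updates properties on an output resource; a downstream
            # pipeline can then declare commons_property_bag as an inputResource
            - write_output commons_property_bag "runProfile=nightly-variant-1"

The key point is that the property bag is updated by the step, not by the cron itself, which matches the cronTrigger -> pipelineStep -> propertyBag chain above.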

Azure Pipelines - Handling builds for Dependent downstream pipelines

We have a number of common upstream pipelines - pipeline-a, pipeline-b, pipeline-c, pipeline-d … each in its own repository - repository-a, repository-b, repository-c, repository-d…
My target pipeline, say pipeline-y in repository-y, depends on the artifacts of these upstream pipelines, and it needs to build when there is a change to any of the upstream libraries and the corresponding upstream pipeline builds successfully.
In other words, target pipeline-y needs to be triggered if any of the upstream pipelines completed successfully due to changes in them (CI triggers for upstream libraries work fine in their own pipelines).
We currently achieve this using the resources pipelines trigger in the target pipeline-y, as below:
Upstream Pipeline - pipeline-a.yml
trigger:
- repository-a*

steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    publishJUnitResults: false
    javaHomeOption: 'JDKVersion'
    mavenVersionOption: 'Default'
    mavenAuthenticateFeed: true
    effectivePomSkip: false
    sonarQubeRunAnalysis: false
    goals: 'package deploy'
Target pipeline-y.yml resources section
resources:
  pipelines:
  - pipeline: pipeline-a
    source: pipeline-a
    trigger:
      branches:
      - 'pipeline-a-v1*'
  - pipeline: pipeline-b
    source: pipeline-b
    trigger:
      branches:
      - 'pipeline-b-v1*'
  - pipeline: pipeline-c
    source: pipeline-c
    trigger:
      branches:
      - 'pipeline-c-v1*'
  - pipeline: pipeline-d
    source: pipeline-d
    trigger:
      branches:
      - 'pipeline-d-v1*'
  - pipeline: pipeline-e
    source: pipeline-e
    trigger:
      branches:
      - 'pipeline-e-v1*'
This works fine.
My question is: as we add more upstream common libraries, we have to keep updating the resources section in the downstream target pipeline. And when there are new versions of the upstream libraries, we have to modify the version in resources - pipelines - pipeline - trigger - branches, from 'pipeline-a-v1' to 'pipeline-a-v2'.
Is there a better way to do this? Can a variable be used in resources - pipelines - pipeline - trigger - branches, for example pipeline-a-$(version)? And can version be derived from build system variables as below?
I tried
variables:
  version: $[replace(variables['Build.SourceBranchName'], variables['Build.Repository.Name'], '')]
It did not seem to work.
It's not possible to dynamically specify resources in YAML.
A suggestion would be to use REST API hooks that fire when new pipelines are added, and have them trigger a program that regenerates the YAML for pipeline-y.yml.
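For the version-bump half of the problem, a broader wildcard in the branch filter might avoid per-version edits, assuming the branch names keep following the pipeline-a-v<N> pattern (a sketch, not part of the original answer):

resources:
  pipelines:
  - pipeline: pipeline-a
    source: pipeline-a
    trigger:
      branches:
      - 'pipeline-a-v*'   # matches -v1, -v2, ... so the filter never needs a version bump

Adding new upstream pipelines, however, still requires editing (or regenerating) the resources section by hand.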

How to have event configuration with string for GitHub Actions On?

I have the following GitHub Actions YML file.
name: CI
on:
  - push
  - release:
    - types: [published]
#...
But I'm getting an error: Invalid Workflow File - Invalid type for on.
The only other way to do what I want here is on: [push, release], but then I can't filter by the published type.
How can I fix this error?
The YAML doesn't look valid to me. Try this:
name: CI
on:
  push:
  release:
    types: [published]
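If you also want to restrict when push runs, the mapping form takes the usual filters alongside the release types (the branch name here is just an example):

name: CI
on:
  push:
    branches: [main]   # example filter; use whatever branches you care about
  release:
    types: [published]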

Can CloudFormation Create a PipeLine Manual Approval Action through Template?

Reading through https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html
it sounds like you can only create a manual approval step through the UI console or through the CLI, BUT NOT through a CloudFormation template?
Edgar
Actually, CloudFormation does support this.
You just need to set Provider in the action's ActionTypeId (Pipeline -> Stage -> Action -> ActionTypeId) to Manual, and that's it. More info about the provider type is here.
Example:
DeliveryPipeline:
  Properties:
    ...
    Stages:
      ...
      - Actions:
          - ActionTypeId:
              Category: Approval
              Owner: AWS
              Provider: Manual
              Version: '1'
            Configuration:
              NotificationArn: <<arn>>
            InputArtifacts: []
            Name: TestApproval
            RunOrder: 1
        Name: Development_Approval
      ...
  Type: AWS::CodePipeline::Pipeline

Is there a way to put a lock on Concourse git-resource?

I have set up a pipeline in Concourse with some jobs that build Docker images.
After the build, I push the image tag to the git repo.
The problem is that when the builds finish at the same time, one job pushes to git just after the other has pulled, and when the second job tries to push to git it gets an error.
error: failed to push some refs to 'git@github.com:*****/*****'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
So is there any way to prevent the concurrent push?
So far I've tried applying serial and serial_groups to the jobs.
It helps, but all the jobs get queued up, because we have a lot of builds.
I expect jobs to run concurrently and pause before doing operations on git if some other job holds a lock on it.
resources:
- name: backend-helm-repo
  type: git
  source:
    branch: master
    paths:
    - helm
    uri: git@github.com:******/******
- ...

jobs:
- ...
- name: some-hidden-api-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-hidden-api-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-hidden-api-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-hidden-api-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-hidden-api-status
    params:
      commit: some-hidden-api-repo
      state: success
- name: some-other-build
  serial: true
  serial_groups:
  - build-alone
  plan:
  - get: some-other-repo
    trigger: true
  - get: golang
  - task: build-image
    file: somefile.yaml
  - put: some-other-image
  - get: backend-helm-repo
  - task: update-helm-tag
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: mikefarah/yq
          tag: latest
      run:
        path: /bin/sh
        args:
        - -xce
        - "file manipulations && git commit"
      inputs:
      - name: some-other-repo
      - name: backend-helm-repo
      outputs:
      - name: backend-helm-tag-bump
  - put: backend-helm-repo
    params:
      repository: backend-helm-tag-bump
  - put: some-other-status
    params:
      commit: some-other-repo
      state: success
- ...
So if two jobs finish their image builds at the same time and make their git commits in parallel, one pushes faster than the other, and the second one breaks.
Can someone help?
Note that your description is too vague to give a detailed answer.
"I expect jobs to run concurrently and stop before pushing to git if some other job has a lock on git."
This will not be enough: if they stop just before pushing, they are already referencing a git commit, which will become stale when the lock is released by the other job :-)
The jobs would have to stop and wait on the lock before cloning the git repo, so at the very beginning.
All this is speculation on my part, since again it is not clear what you want to do; for these kinds of questions, posting an as-small-as-possible pipeline image and as-small-as-possible configuration code is helpful.
You can consider https://github.com/concourse/pool-resource as a locking mechanism.
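A rough sketch of how that could slot into the jobs above, following the pool-resource README's acquire/release params (the lock repo URI, branch, and pool name are placeholders; if the pool type is not available on your workers, declare it under resource_types):

resources:
- name: helm-repo-lock
  type: pool
  source:
    uri: git@github.com:******/locks   # separate repo that stores the lock pool
    branch: master
    pool: backend-helm                 # directory in that repo acting as the pool
    private_key: ...

jobs:
- name: some-hidden-api-build
  plan:
  # Acquire the lock before fetching backend-helm-repo, so the commit the
  # push is based on cannot go stale while the lock is held.
  - put: helm-repo-lock
    params: {acquire: true}
  - get: backend-helm-repo
  # ... build the image, bump the helm tag, and put backend-helm-repo here ...
  - put: helm-repo-lock
    params: {release: helm-repo-lock}

You would also want an ensure or on_failure step that releases the lock, otherwise a failed build leaves it claimed.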