Not able to trigger jobs one after the other using gcs-resource in Concourse

I have two jobs, build and publish. I want publish to trigger after build is done, so I am using an external resource, gcs-resource (https://github.com/frodenas/gcs-resource).
Following is my pipeline.yml:
---
resource_types:
- name: gcs-resource
  type: docker-image
  source:
    repository: frodenas/gcs-resource

resources:
- name: proj-repo
  type: git
  source:
    uri: <my uri>
    branch: develop
    username: <username>
    password: <password>
- name: proj-gcr
  type: docker-image
  source:
    repository: asia.gcr.io/myproject/proj
    tag: develop
    username: _json_key
    password: <my password>
- name: proj-build-output
  type: gcs-resource
  source:
    bucket: proj-build-deploy
    json_key: <my key>
    regexp: Dockerfile

jobs:
- name: build
  serial_groups: [proj-build-deploy]
  plan:
  - get: proj
    resource: proj-repo
  - task: build
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: node, tag: 10.13.0}
      inputs:
      - name: proj
      run:
        path: sh
        args:
        - -exc
        - |
          <do something>
  - put: proj-build-output
    params:
      file: proj/Dockerfile
      content_type: application/octet-stream
- name: publish
  serial_groups: [proj-build-deploy]
  plan:
  - get: proj-build-output
    trigger: true
    passed: [build]
  - put: proj-gcr
    params:
      build: proj-build-output
I am using the external resource proj-build-output to trigger the next job. I can run the individual jobs without any problem; however, the publish job doesn't automatically get triggered after completion of the build job.
Am I missing something?

The regexp of the gcs-resource is misconfigured:
...
regexp: Dockerfile
...
while regexp, like in the original S3 resource this one derives from, expects:
regexp: the pattern to match filenames against within GCS. The first grouped match is used to extract the version, or if a group is explicitly named version, that group is used.
The example configuration at https://github.com/frodenas/gcs-resource#example-configuration shows its correct usage:
regexp: directory_on_gcs/release-(.*).tgz
This is not specific to the GCS or S3 resource; Concourse needs a "version" to move artifacts from jobs to storage and back. It is one of the fundamental concepts of Concourse. See https://web.archive.org/web/20171205105324/http://concourse.ci:80/versioned-s3-artifacts.html for an example.
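Applied to the pipeline above, a hedged sketch of a fix (the builds/ prefix and the version suffix in the file name are illustrative, not from the question) would make the first capture group yield a version:

```yaml
- name: proj-build-output
  type: gcs-resource
  source:
    bucket: proj-build-deploy
    json_key: <my key>
    # upload files as e.g. builds/Dockerfile-1.2.3; the first capture
    # group becomes the version Concourse tracks
    regexp: builds/Dockerfile-(.*)
```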

As Marco mentioned, the problem was with versioning.
I solved my issue using these two steps:
1. Enabled versioning on my GCS bucket: https://cloud.google.com/storage/docs/object-versioning#_Enabling
2. Replaced regexp with versioned_file, as mentioned in the docs: https://github.com/frodenas/gcs-resource#file-names
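With bucket versioning enabled, the resource definition from the question can track generations of a single object instead of matching file names; a minimal sketch based on the versioned_file option from the resource's docs:

```yaml
- name: proj-build-output
  type: gcs-resource
  source:
    bucket: proj-build-deploy
    json_key: <my key>
    # track every new generation of this one object instead of
    # matching file names with regexp
    versioned_file: Dockerfile
```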

Related

terraform-cli : Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli error

I've been attempting to use Tekton to deploy some AWS infrastructure via Terraform but not having much success.
The pipeline clones a GitHub repo containing TF code, then attempts to use the terraform-cli task to provision the AWS infrastructure. For initial testing I just want to perform the initial TF init and provision the AWS VPC.
Expected behaviour
Clone Github Repo
Perform Terraform Init
Create the VPC using targeted TF apply
Actual Result
task terraform-init has failed: failed to create task run pod "my-infra-pipelinerun-terraform-init": Pod "my-infra-pipelinerun-terraform-init-pod" is invalid: spec.initContainers[1].name: Duplicate value: "step-init". Maybe missing or invalid Task default/terraform-cli
pod for taskrun my-infra-pipelinerun-terraform-init not available yet
Tasks Completed: 2 (Failed: 1, Cancelled 0), Skipped: 1
Steps to Reproduce the Problem
Prerequisites: install the Tekton command-line tool, and the git-clone and terraform-cli tasks.
Create this pipeline in Minikube:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-infra-pipeline
spec:
  description: Pipeline for TF deployment
  params:
  - name: repo-url
    type: string
    description: Git repository URL
  - name: branch-name
    type: string
    description: The git branch
  workspaces:
  - name: tf-config
    description: The workspace where the tf config code will be stored
  tasks:
  - name: clone-repo
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: tf-config
    params:
    - name: url
      value: $(params.repo-url)
    - name: revision
      value: $(params.branch-name)
  - name: terraform-init
    runAfter: ["clone-repo"]
    taskRef:
      name: terraform-cli
    workspaces:
    - name: source
      workspace: tf-config
    params:
    - name: terraform-secret
      value: "tf-auth"
    - name: ARGS
      value:
      - init
  - name: build-vpc
    runAfter: ["terraform-init"]
    taskRef:
      name: terraform-cli
    workspaces:
    - name: source
      workspace: tf-config
    params:
    - name: terraform-secret
      value: "tf-auth"
    - name: ARGS
      value:
      - apply
      - "-target=aws_vpc.vpc -auto-approve"
Run the pipeline by creating a pipelinerun resource in k8s
Review the logs: tkn pipelinerun logs my-tf-pipeline -a
Additional Information
Pipeline version: v0.35.1
There is a known issue regarding "step-init" in some earlier versions; I suggest you upgrade to the latest version (0.36.0) and try again.

Concourse CI git clone multiple repos to same directory

Can someone help me achieve this? I would like to git clone multiple repos into the same directory where the first git repo is downloaded. Below is my pipeline.yml file. Any help is much appreciated.
resources:
- name: workspace-repo1
  type: git
  source:
    uri: <git-repo1-url>
    branch: master
    private_key: ((publishing-outputs-private-key))
- name: workspace-repo2
  type: git
  source:
    uri: <git-repo2-url>
    branch: master
    private_key: ((publishing-outputs-private-key))
- name: workspace-repo3
  type: git
  source:
    uri: <git-repo3-url>
    branch: master
    private_key: ((publishing-outputs-private-key))

jobs:
- name: job-test
  public: true
  plan:
  - get: workspace-repo1
  - get: workspace-repo2
  - get: workspace-repo3
  - task: app-tests
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: golang, tag: 1.14.15-stretch}
      inputs:
      - name: workspace-repo1 # (git repo1, repo2, repo3 should be downloaded here)
      outputs:
      - name: workspace-repo-deb
      run:
        path: workspace-repo1/local_build.sh
You could potentially do something like:
- task: prep
  file: scripts/merge.yml
  input_mapping: {repo1: workspace-repo1, repo2: workspace-repo2, repo3: workspace-repo3}
where scripts/merge.yml is:
---
platform: linux
image_resource:
  type: docker-image
  source:
    repository: alpine
params:
  # this could be whatever path you want
  SRC_PATH: src
inputs:
- name: ci-scripts
- name: repo1
- name: repo2
- name: repo3
run:
  path: scripts/merge.sh
# this output can be used in the next step; it contains the directories copied in the script below
outputs:
- name: build
and scripts/merge.sh is:
#!/bin/sh
set -e
mkdir -p build
# copy around anything you need
cp -rf repo1/. build/
cp -rf repo2/. build/
cp -rf repo3/. build/
# listing content of build folder, just checking
ls -la build/
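The build output can then feed a later step in the same plan; for example (the task name, task file, and "src" input name here are illustrative):

```yaml
- task: app-tests
  file: ci-scripts/tests.yml
  # the merged build/ directory becomes this task's "src" input
  input_mapping: {src: build}
```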

concourse pipeline - throwing error for sub folders "not a valid repository name"

I have a repository which has two frontend applications and one server folder. I need to create a pipeline for the two frontend (Angular) folders and the one server (Node.js) folder. If I create a pipeline for the main folder (concourse-pipeline) it works fine, but when I try to create a pipeline for a subfolder (frontend) it throws the error "not a valid repository name". I'm not sure what's going wrong here.
- name: repo
  type: git
  source:
    uri: git#github.com:test-repo/concourse-pipeline.git
    branch: master
    private_key: ((repo.private-key))
- name: frontend
  type: git
  source:
    uri: git#github.com:test-repo/concourse-pipeline/frontend.git
    branch: master
    private_key: ((repo.private-key))
- name: version
  type: semver
  source:
    driver: git
    initial_version: 0.0.1
    uri: git#github.com:test-repo/concourse-pipeline.git
    private_key: ((repo.private-key))
    branch: master
    file: version
- name: run-server
  type: git
  source:
    uri: git#github.com:test-repo/concourse-pipeline.git
    branch: master
    private_key: ((repo.private-key))

jobs:
- name: run-server
  build_logs_to_retain: 20
  max_in_flight: 1
  plan:
  - get: run-server
    trigger: true
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: node
      inputs:
      - name: run-server
      run:
        path: /bin/sh
        args:
        - -c
        - |
          echo "Node Version: $(node --version)"
          echo "NPM Version: $(npm --version)"
          cd run-server
          npm install
          npm test
- name: run-frontend
  build_logs_to_retain: 20
  max_in_flight: 1
  plan:
  - get: frontend
    trigger: true
  - task: run-tests
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: node
      inputs:
      - name: frontend
      run:
        path: /bin/sh
        args:
        - -c
        - |
          echo "Node Version: $(node --version)"
          echo "NPM Version: $(npm --version)"
          cd frontend
          npm install
          ng test
- name: bump-version
  plan:
  - get: repo
    trigger: true
  - put: version
    params:
      bump: patch
- name: build-repo
  plan:
  - get: repo
    trigger: true
  - get: version
    params:
      build: repo
      tag_file: version/version
      tag_as_latest: true
Any help would be appreciated
The uri of the frontend resource is invalid:
uri: git#github.com:test-repo/concourse-pipeline/frontend.git
A GitHub address can only take the form git@github.com:(user)/(repository).git; a subfolder of a repository cannot be cloned as if it were its own repository.
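If the intent is to trigger jobs only on changes under the frontend subfolder, the git resource's paths option can scope a resource to that directory instead; a sketch reusing the URI from the question:

```yaml
- name: frontend
  type: git
  source:
    uri: git#github.com:test-repo/concourse-pipeline.git
    branch: master
    private_key: ((repo.private-key))
    # only emit new versions when files under frontend/ change
    paths: [frontend]
```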

How to use a shared file from an azure devops repo within an azure devops pipeline

The issue is that I'm not able to load just a normal file.
The code below works for extending an Azure pipeline file, but I can't use a .runsettings file from the same repository within my vstest step, which is in the extended template. Any ideas how I can share the .runsettings file?
resources:
  repositories:
  - repository: service
    type: git
    name: proj/service
    ref: feature/myfeature

extends:
  template: service-template1.0.yml#service
You need to add a checkout step:
resources:
  repositories:
  - repository: service
    type: git
    name: proj/service
    ref: feature/myfeature

extends:
  template: service-template1.0.yml#service
  parameters:
    repoName: self
and then in template
# File: simple-param.yml
parameters:
- name: repoName
  type: string

steps:
- checkout: ${{ parameters.repoName }}
......
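With the repository checked out, the .runsettings file is just a path on disk and can be passed to the test step. A hedged sketch, assuming a VSTest@2 task and a file named test.runsettings at the repository root (both names illustrative):

```yaml
steps:
- checkout: ${{ parameters.repoName }}
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**/*Tests.dll'
    # path is relative to the checked-out sources
    runSettingsFile: '$(Build.SourcesDirectory)/test.runsettings'
```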

Concourse CI: static_buildpack issue

I am deploying a simple Angular 4 application to Cloud Foundry using the staticfile buildpack. While accessing the application I always get an nginx 403 error.
jobs:
- name: app
  serial: true
  plan:
  - get: develop-repo
  - task: npm-build
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: node
      run:
        path: sh
        args:
        - -exec
        - |
          cd develop-repo
          npm install
          npm run dist
      inputs:
      - name: develop-repo
      outputs:
      - name:
  - put: develop
    params:
      manifest: develop-repo/manifest.yml
      current_app_name: app
      path: develop-repo

resources:
- name: develop-repo
  type: git
- name: develop
  type: cf
manifest.yml:
---
applications:
- name: app
  instances: 1
  memory: 512M
  disk_quota: 512M
  buildpack: staticfile_buildpack
  stack: cflinuxfs2
All I am doing is git clone -> npm build -> cf deploy
Note: all resource variables are set correctly; they are omitted here for readability.
After trying out a couple of options, I found that by publishing the artifacts to an output folder, we can push the app from that folder:
---
jobs:
- name: app
  serial: true
  plan:
  - get: develop
  - task: npm-build
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: node
      inputs:
      - name: develop
      outputs:
      - name: artifacts
      run:
        path: sh
        args:
        - -exec
        - |
          cd develop
          npm install
          npm run dist
          ls
          cp -R dist ../artifacts/
  - put: deploy-cf
    params:
      manifest: develop/ci/manifests/manifest-int.yml
      path: artifacts/dist

resources:
- name: develop
  type: git
  source:
    uri: <<GITHUB-URI>>
    branch: <<GITHUB-BRANCH>>
    username: <<GITHUB-USERNAME>>
    password: <<GITHUB-PASSWORD>>
- name: deploy-cf
  type: cf
  source:
    api: <<CF-API>>
    username: <<CF-USERNAME>>
    password: <<CF-PASSWORD>>
    organization: <<CF-ORG>>
    space: <<CF-SPACE>>
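As a side note on the original 403: the staticfile buildpack's nginx returns 403 when there is no index.html in the pushed directory. If a directory listing is acceptable, a Staticfile in the pushed path can enable it (a sketch; whether this applies depends on what npm run dist produces):

```yaml
# Staticfile (placed at the root of artifacts/dist)
directory: visible
```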