Concourse: What is the difference between "Resource Types" and "Resource"? - concourse

When developing a pipeline, I can't understand the difference between "Resource Types" and "Resources".
According to the documentation, a resource type is there only to provide the type of a resource and to check for tags, as in the example below:
---
resource_types:
- name: rss
  type: docker-image
  source:
    repository: suhlig/concourse-rss-resource
    tag: latest
resources:
- name: booklit-releases
  type: rss
  source:
    url: http://www.qwantz.com/rssfeed.php
jobs:
- name: announce
  plan:
  - get: booklit-releases
    trigger: true
Why do we need both of them? Isn't it enough to just use resources?

I'm also new to this project, so please correct me if I am wrong.
I think of it in terms of containers:
A resource type is an image: we configure the repository and tag in its source so that Concourse can locate and download it.
A resource is a container, an instance of that image, which can be used in jobs while the pipeline is running. The source we configure holds the common parameters that are passed on stdin to the check, in, and out scripts when the resource is used in a get or put step.
I think it's a little like the comparison between a class and an object.
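To stretch that analogy, here is a sketch (the feed names and the second URL are made up) where one resource type is "instantiated" twice with different sources, the way one class can back many objects:
resource_types:
- name: rss                 # the "class": one image definition
  type: docker-image
  source:
    repository: suhlig/concourse-rss-resource
    tag: latest
resources:
- name: comic-feed          # two "objects" of that class
  type: rss
  source:
    url: http://www.qwantz.com/rssfeed.php
- name: other-feed          # hypothetical second instance
  type: rss
  source:
    url: http://example.com/feed.xml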

Related

403 forbidden when trying to create a bucket using Deployment Manager

I am trying to create a GCS bucket with Deployment Manager, using the following resource config:
resources:
- type: storage.v1.bucket
  name: upload-bucket
  properties:
    project: <project-id>
    name: <unique-bucket-name>
However, I get the following error:
- code: RESOURCE_ERROR
  location: /deployments/the-bucket/resources/upload-bucket
  message: '{"ResourceType":"storage.v1.bucket","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"205531008256@cloudservices.gserviceaccount.com does not have storage.buckets.get access to upload-bucket.","reason":"forbidden"}],"message":"205531008256@cloudservices.gserviceaccount.com does not have storage.buckets.get access to upload-bucket.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/storage/v1/b/upload-bucket","httpMethod":"GET","suggestion":"Consider granting permissions to 205531008256@cloudservices.gserviceaccount.com"}}'
The role of 205531008256@cloudservices.gserviceaccount.com is Project Editor by default (which surely has enough permissions?), but I've also tried adding Storage Admin and Project Owner; neither seems to help.
My two questions are:
Why is it trying to use this service account?
How can I get Deployment Manager to be able to create a bucket?
Thanks
I ran into the exact same problem. Allow me to restate Andres S's answer more clearly.
When you wrote
resources:
- type: storage.v1.bucket
  name: upload-bucket
  properties:
    project: <project-id>
    name: <unique-bucket-name>
you probably intended to create a bucket called <unique-bucket-name> and figured that upload-bucket would just be a name to refer to this bucket in Deployment Manager. What GCP actually did was attempt to use upload-bucket as the actual bucket name. As far as I can tell, <unique-bucket-name> is never used. This caused the problem, since someone else already owns the bucket upload-bucket.
Try this instead; I think you are specifying the name twice:
resources:
- type: storage.v1.bucket
  name: <unique-bucket-name>
  properties:
    project: <project-id>
I recently ran into a similar issue, where Deployment Manager failed to create the bucket.
I verified that:
permissions were not the issue, as the same deployment contained another bucket that was created successfully;
the bucket name was not the issue, as I was able to create the bucket manually.
After some googling I found there is another way to create the bucket: instead of type: storage.v1.bucket you can use type: gcp-types/storage-v1:buckets.
So my final solution was to create the bucket like this:
- name: images-bucket
  type: gcp-types/storage-v1:buckets
  properties:
    name: images-my-project-name
    location: "eu"

Create CloudFormation resource multiple times

I've just moved to CloudFormation, and I am starting with creating ECR repositories for Docker.
I need all repositories to have the same properties except the repository name.
Since these are microservices, I will need at least 40 repos, so I want a stack that creates the repos for me in a loop, changing just the name.
I started looking at nested stacks, and this is what I have so far:
ecr-root.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: ECR Docker repository
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Type: String
Resources:
  ECRStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://cloudformation.s3.amazonaws.com/ecr-stack.yaml
      TimeoutInMinutes: '20'
      Parameters:
        ECRRepositoryName: !Ref ECRRepositoryName  # pass the root parameter through to the child stack
And ecr-stack.yaml:
---
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ECRRepositoryName:
    Description: ECR repository name
    Default: panpwr-mysql-base
    Type: String
Resources:
  MyRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Ref ECRRepositoryName
      RepositoryPolicyText:
        Version: "2012-10-17"
        Statement:
        - Sid: AllowPushPull
          Effect: Allow
          Principal:
            AWS:
            - "arn:aws:iam::123456789012:user/Bob"
            - "arn:aws:iam::123456789012:user/Alice"
          Action:
          - "ecr:GetDownloadUrlForLayer"
          - "ecr:BatchGetImage"
          - "ecr:BatchCheckLayerAvailability"
          - "ecr:PutImage"
          - "ecr:InitiateLayerUpload"
          - "ecr:UploadLayerPart"
          - "ecr:CompleteLayerUpload"
Outputs:
  RepositoryNameExport:
    Description: RepositoryName for export
    Value: !Ref ECRRepositoryName
    Export:
      Name: !Sub "ECRRepositoryName"
Everything is working fine,
but when I run the stack it asks me for the repository name and creates a single repository.
I could then create as many stacks as I want, each with a different name, but that is not my purpose.
How do I get it all in one stack that creates as many repositories as I want?
Sounds like you want to loop over a given list of parameters. Looping is not possible in a CloudFormation template, but there are a few things you can try:
You could programmatically generate the template. The troposphere Python library provides a nice abstraction for generating templates.
Write a custom resource backed by AWS Lambda, and handle your custom logic in the Lambda function.
Use the AWS Cloud Development Kit (AWS CDK), an open-source software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation; you can write a short CDK script for your use case.
If you would rather stay in plain CloudFormation, see the sketch below.
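For illustration, a minimal sketch of the plain-CloudFormation route, with no loop at all: instantiate the child template once per repository inside a single parent stack (the template URL and ecr-stack.yaml are reused from the question; the service names are made up):
---
AWSTemplateFormatVersion: '2010-09-09'
Description: One parent stack that stamps out one ECR child stack per repository
Resources:
  ServiceARepo:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://cloudformation.s3.amazonaws.com/ecr-stack.yaml
      Parameters:
        ECRRepositoryName: service-a  # hypothetical repository name
  ServiceBRepo:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://cloudformation.s3.amazonaws.com/ecr-stack.yaml
      Parameters:
        ECRRepositoryName: service-b
  # ...repeat for each of the ~40 repositories
Note that the child template's Export name would have to be made unique per instance (for example Fn::Sub: "${ECRRepositoryName}-export") before this deploys, since CloudFormation export names must be unique per region.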

Triggering tasks on semver change: triggers jobs out of order

Here's what I'm trying to achieve:
I have a project with a build job for a binary release. The binary takes a while to cross-compile for each platform, so I only want the release build to run when a release is tagged, but I want the local-native version to build, and the tests to run, for each checked-in version.
Based on the flight-school demo... so far, my pipeline configuration looks like this:
resources:
- name: flight-school
  type: git
  source:
    uri: https://github.com/nbering/flight-school
    branch: master
- name: flight-school-version
  type: semver
  source:
    driver: git
    uri: https://github.com/nbering/flight-school
    branch: master
    file: version
jobs:
- name: test-app
  plan:
  - get: flight-school
    trigger: true
  - task: tests
    file: flight-school/build.yml
- name: release-build
  plan:
  - aggregate:
    - get: flight-school-version
      trigger: true
    - get: flight-school
      passed: [test-app]
  - task: release-build
    file: flight-school/ci/release.yml
This produces a pipeline in the Web UI that looks like this:
The problem is that when I update the "release" file in the git repository, the semver resource "flight-school-version" can check before the git resource "flight-school" does, causing the release build to run against the git version of the previous check-in.
I'd like a way to work around this so that the release build appears as a separate task, but only triggers when the version is bumped.
Some things I've thought of so far:
Create a separate git resource with a tag_filter set so that it only runs when a semver tag has been pushed to master
Pro: Jobs only run when a tag is pushed
Con: Has the same disconnected-inheritance problem for tests as the semver-based example above
Add a conditional check for a semver tag (or a change diff on a file), using the git history in the checkout, as part of the build script
Pro: Will do basically what I want without too much wrestling with Concourse
Con: Can't see the difference in the UI without actually reading the build output
Con: Difficult to compose with other tasks and resource types to do something with the binary release
Manually trigger the release build
Pro: Simple to set up
Con: Requires manual intervention
Use the API to trigger a paused build step on completion of tests when a version change is detected
Con: Haven't seen any examples of others doing this; seems really complicated
I haven't found a way to trigger a task when both the git resource and semver resource change.
I'm looking for either an answer to solve the concurrency problem in my above example, or an alternative pattern that would produce a similar release workflow.
Summary
Here's what I came up with as a solution, based on suggestions from the Concourse CI Slack channel.
I added a parallel "release" track, which filters on tags resembling semantic version numbers. The two tracks share task configuration files and build scripts.
Tag Filtering
The git resource supports a tag_filter option. From the README:
tag_filter: Optional. If specified, the resource will only detect commits that have a tag matching the expression that have been made against the branch. Patterns are glob(7) compatible (as in, bash compatible).
I used a simple glob pattern to match my semver tags (like v0.0.1):
v[0-9]*
At first I tried an "extglob" pattern, matching semantic versions exactly, like this:
v+([0-9]).+([0-9]).+([0-9])?(\-+([-A-Za-z0-9.]))?(\++([-A-Za-z0-9.]))
That didn't work, because the git resource isn't using the extglob shell option.
The end result is a resource that looks like this:
resources:
- name: flight-school-release
  type: git
  source:
    uri: https://github.com/nbering/flight-school
    branch: master
    tag_filter: 'v[0-9]*'
Re-Using Task Definitions
The next challenge I faced was avoiding rewriting my test definition file for the release track. I would have had to do this because all the file paths use the resource name, and I now have one resource for release and one for development. My solution is to override the resource with the resource: option on the get step.
jobs:
- name: test-app-release
  plan:
  - get: flight-school
    resource: flight-school-release
    trigger: true
  - task: tests
    file: flight-school/build.yml
The build.yml above is the standard example from the flight-school tutorial.
Putting It All Together
My resulting pipeline looks like this:
My complete pipeline config looks like this:
resources:
- name: flight-school-master
  type: git
  source:
    uri: https://github.com/nbering/flight-school
    branch: master
- name: flight-school-release
  type: git
  source:
    uri: https://github.com/nbering/flight-school
    branch: master
    tag_filter: 'v[0-9]*'
jobs:
- name: test-app-dev
  plan:
  - get: flight-school
    resource: flight-school-master
    trigger: true
  - task: tests
    file: flight-school/build.yml
- name: test-app-release
  plan:
  - get: flight-school
    resource: flight-school-release
    trigger: true
  - task: tests
    file: flight-school/build.yml
- name: build-release
  plan:
  - get: flight-school
    resource: flight-school-release
    trigger: true
    passed: [test-app-release]
  - task: release-build
    file: flight-school/ci/release.yml
In my opinion you should manually click the release-build button, and let everything else be automated. I'm assuming you are manually bumping your version number; it seems better to move that manual intervention to releasing.
What I would do is have a put at the end of release-build that bumps your minor version. Something like:
- put: flight-school-version
  params:
    bump: minor
That way you will always be on the correct version; once you release 0.0.1, you are done with it forever, and you can only go forward.
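For concreteness, a minimal sketch of the release-build job from the question with that put appended (this assumes the semver resource's source is configured with push credentials so Concourse can write the bumped version back):
jobs:
- name: release-build
  plan:
  - aggregate:
    - get: flight-school-version
      trigger: true
    - get: flight-school
      passed: [test-app]
  - task: release-build
    file: flight-school/ci/release.yml
  - put: flight-school-version  # bumps and pushes the new version after a successful build
    params:
      bump: minor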

How to configure a custom resource type in a concourse pipeline?

I've already done a Google search to find a way to set up a custom resource type in a Concourse pipeline, but the answers/documentation do not work.
Can someone provide a working example of a custom resource type that is pulled from a local registry and used in a build plan?
For example, say I were to clone the git resource and slightly modify it and pushed it to my local registry.
The git resource image would be named localhost:5000/local_git:latest.
How would you be able to use this custom resource (local_git:latest) in a pipeline definition?
There are two main settings to consider here when running a local registry:
You must use insecure_registries:
insecure_registries: ["my.local.registry:8080"]
If you are running your registry on "localhost", you shouldn't use localhost as the address for your registry: if you do, the container will try to resolve localhost to itself instead of to your local machine. To avoid this problem, use the IP address of your local machine (DON'T use 127.0.0.1).
You can define your custom resource type in your pipeline under the resource_types key in the pipeline yml.
Eg:
resource_types:
- name: custom-git
  type: docker-image
  source:
    repository: localhost:5000/local_git
An important note is that custom resource type images are fetched exactly the same way as base resources in your pipeline, so for your case of a private Docker registry you just need to configure the necessary source: on the docker-image resource type (see the docs for the docker-image-resource).
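For example, a sketch of what that might look like if the registry also requires credentials (username, password, and insecure_registries are all standard source options of the docker-image resource; the credential names are made up):
resource_types:
- name: custom-git
  type: docker-image
  source:
    repository: my.local.registry:8080/local_git
    tag: latest
    username: ((registry-user))  # hypothetical credential names
    password: ((registry-pass))
    insecure_registries: ["my.local.registry:8080"]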
You can then use the type for resources as you would any of the base types:
resources:
- name: some-custom-git-resource
  type: custom-git
  source: ...
Note the type: key of the resource matches the name: on the resource type.
Take a look at the Concourse Documentation for Configuring Resource Types for more information on how to use custom types in your pipeline.

How to parameterise concourse task files

I'm pretty impressed by the power and simplicity of Concourse. Since my pipelines keep growing, I decided to move the tasks to separate files. One of the tasks uses a custom Docker image from our own private registry. So, in that task file I have:
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    username: {{dckr-user}}
    password: {{dckr-pass}}
When I do a set-pipeline, I pass the --load-vars-from argument to load credentials etc. from a separate file.
Now here's my problem: I notice that the vars in my pipeline files are replaced with the actual correct values, but once the task runs, the aforementioned {{dckr-user}} and {{dckr-pass}} are not replaced.
How do I achieve this?
In addition to what was provided in this answer: if you are specifically looking to use private images in a task, you can do the following in your pipeline.yml:
resources:
- name: some-private-image
  type: docker-image
  source:
    repository: ...
    username: {{my-username}}
    password: {{my-password}}
jobs:
- name: foo
  plan:
  - get: some-private-image
  - task: some-task
    image: some-private-image
Because this is in your pipeline, you can use --load-vars-from: the image is first fetched as a resource and then used for the subsequent task.
You can also see this article on pre-fetching ruby gems in test containers on Concourse
The only downside to this is that you cannot use this technique when running fly execute.
As of Concourse v3.3.0, you can set up Credential Management in order to pull variables from one of the supported credential managers, which are currently Vault, CredHub, Amazon SSM, and Amazon Secrets Manager. So you no longer have to keep parts of your task files in the pipeline.yml; the values you set in Vault will also be accessible from the task.yml files.
And since v3.2.0, {{foo}} is deprecated in favor of ((foo)).
Using the Credential Manager you can parameterize:
source under resources in a pipeline
source under resource_types in a pipeline
webhook_token under resources in a pipeline
image_resource.source under image_resource in a task config
params in a pipeline
params in a task config
For setting up Vault with Concourse you can refer to:
https://concourse-ci.org/creds.html
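For example, the task file from the original question could then read the registry credentials straight from the credential manager; a sketch (the var names mirror the question, and the run step is a placeholder):
# task.yml -- ((dckr-user)) and ((dckr-pass)) are resolved by the
# configured credential manager when the task runs
platform: linux
image_resource:
  type: docker-image
  source:
    repository: docker.mycomp.com:443/app-builder
    tag: latest
    username: ((dckr-user))
    password: ((dckr-pass))
run:
  path: sh
  args: ["-c", "echo build goes here"]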
You can always define tasks in a pipeline.yml...
For example:
jobs:
- name: dotpersecond
  plan:
  - task: dotpersecond
    config:
      platform: linux  # task configs require a platform
      image_resource:
        type: docker-image
        source:
          repository: docker.mycomp.com:443/app-builder
          tag: latest
          username: {{dckr-user}}
          password: {{dckr-pass}}
      run:
        path: sh
        args:
        - "-c"
        - |
          for i in `seq 1000`; do echo hi; sleep 2; done