How to configure a custom resource type in a concourse pipeline? - concourse

I've already done a Google search to find a way to set up a custom resource type in a Concourse pipeline, but the answers/documentation do not work.
Can someone provide a working example of custom resource type that is pulled from a local registry and used in a build plan?
For example, say I were to clone the git resource, slightly modify it, and push it to my local registry.
The image would be named localhost:5000/local_git:latest.
How would you use this custom resource type (local_git:latest) in a pipeline definition?

There are two main settings to consider here when running a local registry:
You must use insecure_registries:
insecure_registries: ["my.local.registry:8080"]
If you are running your registry on "localhost", you shouldn't use localhost as the address for your registry. If you do, the resource container will resolve localhost to itself instead of to your local machine. To avoid this problem, use the IP address of your local machine instead (DON'T use 127.0.0.1).
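Putting both settings together, a minimal sketch of what the resource type definition could look like (the IP address and port are placeholders for your machine's; insecure_registries is a source option of the docker-image resource type):
resource_types:
- name: custom-git
  type: docker-image
  source:
    repository: 192.168.1.50:5000/local_git   # placeholder host IP and port
    tag: latest
    insecure_registries: ["192.168.1.50:5000"]   # whitelist the registry for insecure access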

You can define your custom resource type in your pipeline under the resource_types key in the pipeline yml.
Eg:
resource_types:
- name: custom-git
  type: docker-image
  source:
    repository: localhost:5000/local_git
An important note: custom resource type images are fetched in a manner identical to using a base resource in your pipeline, so for your case of a private Docker registry you just need to configure the necessary source: on the docker-image resource type (see the docs for the docker-image-resource).
You can then use the type for resources as you would any of the base types:
resources:
- name: some-custom-git-resource
  type: custom-git
  source: ...
Note the type: key of the resource matches the name: on the resource type.
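For completeness, a minimal sketch of a job consuming that resource (the job and task names here are only illustrative):
jobs:
- name: build
  plan:
  - get: some-custom-git-resource   # fetched via your custom-git type's check/in scripts
    trigger: true
  - task: list-files
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: busybox}
      inputs:
      - name: some-custom-git-resource
      run:
        path: ls
        args: ["-la", "some-custom-git-resource"]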
Take a look at the Concourse Documentation for Configuring Resource Types for more information on how to use custom types in your pipeline.

Related

vault-secrets-provider alias not recognized with docker-kaniko

I'm having some issues when trying to use the HashiCorp Vault template (Kubernetes with Google Kubernetes Engine) with to.be.continuous.
When I use it with the Google Docker Kaniko layer I get an error message: ... wget: bad address 'vault-secrets-provider'.
It seems that Kaniko doesn't recognize the vault-secrets-provider layer. Would you please help me with this? Or perhaps tell me where I can ask for help?
This is a summary of .gitlab-ci.yml
# Kubernetes template
- project: 'to-be-continuous/kubernetes'
  ref: '2.0.4'
  file: '/templates/gitlab-ci-k8s.yml'
- project: "to-be-continuous/kubernetes"
  ref: "2.0.4"
  file: "templates/gitlab-ci-k8s-vault.yml"
...
K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
VAULT_BASE_URL: "http://myvault.myserver.com/v1"
Error Message:
[ERROR] Failed getting secret K8S_DEFAULT_KUBE_CONFIG:
... wget: bad address 'vault-secrets-provider'
I tried many times without the Vault layer, I mean without Vault secrets, and Kaniko works OK.
How can I accomplish this? I tried modifying the Kaniko template but without success.
I will appreciate any help with this.
To fix your issue, first upgrade the docker template to its latest version (2.3.0 at the time this response was written).
Then, depending on your case, you have 2 options:
- Docker needs to handle some of your secrets managed by Vault: then you shall also activate the Vault variant for Docker.
- Docker doesn't need to handle any secret managed by Vault: don't use the Vault variant for Docker; you'll get a warning message from Docker not being able to decode the secret (basically the same as the one you had, but not failing the build).
You shall then simply use it in your .gitlab-ci.yml file:
include:
  # Docker template
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker.yml'
  # Vault variant for Docker (depending on your above case)
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker-vault.yml'
  # Kubernetes template
  - project: 'to-be-continuous/kubernetes'
    ref: '2.0.4'
    file: '/templates/gitlab-ci-k8s.yml'
  - project: "to-be-continuous/kubernetes"
    ref: "2.0.4"
    file: "/templates/gitlab-ci-k8s-vault.yml"

variables:
  K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
  VAULT_BASE_URL: "http://myvault.myserver.com/v1"

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an Environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions Environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to dynamically set the url based on a deployment step output? I have a dynamic URL resulting from the deployment process to AWS which I can't define up-front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we used a tool like Pulumi to dynamically create an S3 Bucket inside our AWS account. We can read the dynamically created S3 Bucket name using the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are 2 crucial points here: First, we should use id inside the deployment step to define a step name we can easily reference via steps.<id> inside our environment.url. Second, we need to define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)". In this example I create an output variable called s3_url. You could replace pulumi stack output bucketUrl with any other command or tool you'd like that returns your dynamic environment url.
Also be sure to add an http:// or https:// prefix in order to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
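As a side note, newer GitHub Actions runners have deprecated the ::set-output command in favor of appending to the GITHUB_OUTPUT file; a sketch of the equivalent deploy step (same Pulumi commands as above) would be:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    # write the step output to GITHUB_OUTPUT instead of using ::set-output
    echo "s3_url=http://$(pulumi stack output bucketUrl)" >> "$GITHUB_OUTPUT"
  working-directory: ./deployment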
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
      - name: Checkout
      ...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then directly references the variable containing our S3 url. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job.
Here's also a fully working workflow embedded inside an example project.

Concourse: What is the difference between "Resource Types" and "Resource"?

When developing a pipeline I can't understand the difference between "Resource Types" and "Resources".
According to the documentation, a resource type is there only to provide the type of the resource and check for the tags, like in the example below:
---
resource_types:
- name: rss
  type: docker-image
  source:
    repository: suhlig/concourse-rss-resource
    tag: latest

resources:
- name: booklit-releases
  type: rss
  source:
    url: http://www.qwantz.com/rssfeed.php

jobs:
- name: announce
  plan:
  - get: booklit-releases
    trigger: true
Why do we need both of them? Isn't it enough just to use resources?
I'm also new to this project. Please correct me if I am wrong.
I think of it in terms of containers:
A resource type is an image: we configure the repository and tag in its source so that Concourse can locate and download it.
A resource is a container, an instance of that image, which is used in jobs while the pipeline runs. The source we configure on the resource holds the common parameters that are passed on stdin to the check, in, and out scripts when the resource is used in a get or put step.
I think it's a little similar to the comparison between a class and an object.
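To make that concrete, here is a sketch of the JSON payload Concourse pipes on stdin to the image's /opt/resource/check script for the rss resource above (version is null on the very first check; its shape afterwards is defined by the resource itself):
{
  "source": {
    "url": "http://www.qwantz.com/rssfeed.php"
  },
  "version": null
}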

How to set API Server parameters on kubespray deployment

I am using kubespray for the deployment of a Kubernetes cluster and want to set some API server parameters for the deployment. Specifically, I want to configure authentication via OpenID Connect (e.g. set the oidc-issuer-url parameter). I saw that kubespray has some vars to set (https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vars.md), but not the ones I am looking for.
Is there a way to set these parameters via kubespray? I don't want to configure each master manually (e.g by editing the /etc/kubernetes/manifests/kube-apiserver.yaml files).
Thanks for your help
At the bottom of the page you are referring to, there is a description of how to define custom flags for various components of k8s:
kubelet_custom_flags:
- "--eviction-hard=memory.available<100Mi"
- "--eviction-soft-grace-period=memory.available=30s"
- "--eviction-soft=memory.available<300Mi"
The possible vars are:
apiserver_custom_flags
controller_mgr_custom_flags
scheduler_custom_flags
kubelet_custom_flags
kubelet_node_custom_flags
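For the OIDC flags asked about, a hedged sketch using apiserver_custom_flags (the issuer URL and the inventory file location are assumptions):
# e.g. in inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
apiserver_custom_flags:
  - "--oidc-issuer-url=https://accounts.example.com"   # placeholder issuer
  - "--oidc-client-id=kubernetes"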
The k8s-cluster.yml file also has some parameters which allow you to set the OIDC configuration directly:
kube_oidc_auth: true
...
kube_oidc_url: https:// ...
kube_oidc_client_id: kubernetes
kube_oidc_ca_file: "{{ kube_cert_dir }}/ca.pem"
kube_oidc_username_claim: sub
kube_oidc_username_prefix: oidc:
kube_oidc_groups_claim: groups
kube_oidc_groups_prefix: oidc:
These parameters are the counterparts of the oidc-* API server parameters.

Is it possible to use variables in a codeship-steps.yml file?

We currently use Codeship Pro to push Docker images to a private registry on AWS, as well as to deploy those images to an ECS cluster.
However, the codeship-steps.yml file includes a hard-coded name for the AWS region I'm pushing to. For example:
- name: push_production
  service: app
  type: push
  image_name: 123456789012.dkr.ecr.us-east-1.amazonaws.com/project/app-name
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: https://123456789012.dkr.ecr.us-east-1.amazonaws.com
  dockercfg_service: aws_generator
I would like to be able to fairly easily switch this to deploy to a different AWS region. Thus the question:
Is it possible to use variables in a codeship-steps.yml file?
I know some of the properties can use a handful of built-in variables provided by Codeship (such as the {{.Timestamp}} value used for the image_tag property), but I don't know if, for example, values from an env_file can be used in the image_name, registry, and/or command properties of a step.
I'm imagining something like this...
codeship-steps.yml:
- name: push_production
  service: app
  type: push
  image_name: "123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com/project/app-name"
  image_tag: "{{.Timestamp}}"
  tag: master
  registry: "https://123456789012.dkr.ecr.{{.AWS_REGION}}.amazonaws.com"
  dockercfg_service: aws_generator
... but that results in an "error parsing image name during push step: invalid reference format" on the push step.
I've tried simply not specifying the registry in the image_name...
image_name: project/app-name
... but I get a "Build Error: no basic auth credentials" on the push step. At this point, I'm running out of ideas.
Is it possible to use [environment] variables in a codeship-steps.yml file?
While the image_tag can take advantage of Go templates, the same is not the case for image_name, registry, or anything else. This is a separate set of templating variables that are accessible only to the image_tag generation.
As for environment variables in general (CI environment variables or those defined in the service configs), these values can be used in codeship-steps.yml on the command step when passed through a shell command. For example:
- service: app
  command: echo The branch name is: $CI_BRANCH
Results in:
The branch name is: $CI_BRANCH
- service: app
  command: /bin/sh -c 'echo The branch name is: $CI_BRANCH'
Results in:
The branch name is: master
As for your 'no basic auth credentials' error message, it's possible that there's an issue with how you are retrieving the basic auth credentials for access to your image registry. If you are on a macOS device, I would recommend that you review our documentation on how to generate Docker credentials.
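For reference, a minimal sketch of the codeship-services.yml entry backing a dockercfg_service such as aws_generator, assuming Codeship's documented AWS ECR credential generator image (the encrypted env file name is an assumption):
aws_generator:
  image: codeship/aws-ecr-dockercfg-generator   # Codeship's ECR dockercfg generator image
  add_docker: true
  encrypted_env_file: aws.env.encrypted   # assumed file holding the AWS credentials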