Using k8s secrets in concourse pipeline

This question is similar to Concourse CI can't find kubernetes secrets. However, the marked solution there did not work for me.
I have setup concourse using this helm chart https://github.com/helm/charts/tree/master/stable/concourse
My release name is concourse-ci. So, my namespace prefix is concourse-ci- and the team name is main.
So, following the documentation https://github.com/helm/charts/tree/master/stable/concourse#kubernetes-secrets I created my secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: git
  namespace: concourse-ci-main
type: Opaque
data: # Base64 (can be removed)
  username: <base64 encoded username>
  password: <base64 encoded password>
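For reference, the same secret can also be created imperatively with kubectl, which does the base64 encoding for you (a sketch with placeholder values):

kubectl create secret generic git \
  --namespace concourse-ci-main \
  --from-literal=username=<username> \
  --from-literal=password=<password>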
In my pipeline, I have the following:
resources:
- name: my-repo
  type: git
  source:
    uri: <my-uri>
    branch: develop
    username: ((git.username))
    password: ((git.password))
When I execute the pipeline with the above code, it gets stuck. However, if I replace ((git.username)) and ((git.password)) with the actual values, it works perfectly fine.
Am I missing something? I tried creating the secret in concourse-ci instead of concourse-ci-main, but I still get the same error.
I have the following in values.yml
kubernetes:
  enabled: true
and
rbac:
  # true here enables creation of rbac resources
  create: false
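One way to check whether this is an RBAC problem is to ask the API server whether the Concourse web component is even allowed to read secrets in the team namespace. This is only a diagnostic sketch; the service account name and its namespace below are assumptions for a default chart install, so adjust them to whatever your deployment actually uses:

# Check whether the web pod's service account may read secrets in the team namespace
# (service account name and namespace are assumptions for a default chart install)
kubectl auth can-i get secrets \
  --namespace concourse-ci-main \
  --as system:serviceaccount:<release namespace>:concourse-ci-web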

Related

How to add protocol prefix in Kubernetes ConfigMap

In my Kubernetes cluster, I have a ConfigMap object containing the address of my Postgres pod. It was created with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: postgres-service
Now I reference this value in the configuration of one of my Deployments:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
This deployment is a Spring Boot application that intends to communicate with the database. Thus it reads the database's URL from the DB_ADDRESS environment variable. (ignore the default values, those are used only during development)
datasource:
  url: ${DB_ADDRESS:jdbc:postgresql://localhost:5432/users}
  username: ${POSTGRES_USER:postgres}
  password: ${POSTGRES_PASSWORD:mysecretpassword}
So, according to the logs, the problem is that the address has to have the jdbc:postgresql:// prefix. Either in the ConfigMap's YAML or in the application.yml I would need to concatenate the protocol prefix with the variable. Any idea how to do that in YAML, or a suggestion for some other workaround?
If you create a Service, that will provide you with a hostname (the name of the service) that you can then use in the ConfigMap. E.g., if you create a service named postgres, then your ConfigMap would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: jdbc:postgresql://postgres:5432/users
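For completeness, a minimal Service named postgres, as assumed above, could look like this (the selector label and port are assumptions for a typical Postgres pod):

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres # assumes the Postgres pod is labelled app: postgres
  ports:
    - port: 5432
      targetPort: 5432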
Kubernetes environment variable declarations can embed the values of other environment variables. This is the only string manipulation that Kubernetes supports, and it pretty much only works in env: blocks.
For this setup, once you've retrieved the database hostname from the ConfigMap, you can then embed it into a more complete SPRING_DATASOURCE_URL environment variable:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):5432/users'
You might similarly parameterize the port (though it will almost always be the standard port 5432) and database name. Avoid putting these settings in a Spring profile YAML file, where you'll have to rebuild your application if any of the deploy-time settings change.
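If you do parameterize the port and database name as well, the same embedding pattern extends naturally. This is only a sketch; the database_port and database_name keys are hypothetical additions to the ConfigMap:

env:
  # DB_ADDRESS is defined earlier in the env list, exactly as shown above
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_port # hypothetical key
  - name: DB_NAME
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_name # hypothetical key
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):$(DB_PORT)/$(DB_NAME)'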

How to restart Tekton PipelineRun, having a pipeline-run.yml defined in git (e.g. using Cloud Native Buildpacks)?

We want to use the official Tekton buildpacks task from Tekton Hub to run our builds using Cloud Native Buildpacks. The buildpacks documentation for Tekton tells us to install the buildpacks and git-clone Tasks from Tekton Hub, create a Secret, a ServiceAccount, a PersistentVolumeClaim and a Tekton Pipeline.
As the configuration is parameterized, we don't want to start our Tekton pipelines using a huge kubectl command but instead configure the PipelineRun using a separate pipeline-run.yml YAML file (as also stated in the docs) containing the references to the ServiceAccount, workspaces, image name and so on:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: buildpacks-test-pipeline-run
spec:
  serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
    - name: cache-workspace
      subPath: cache
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
  params:
    - name: image
      value: <REGISTRY/IMAGE NAME, eg gcr.io/test/image> # This defines the name of output image
Now running the Tekton pipeline once is no problem using kubectl apply -f pipeline-run.yml. But how can we restart or reuse this YAML-based configuration for all the other pipeline runs?
There are some discussions about that topic in the Tekton GitHub project - see tektoncd/pipeline/issues/664 and tektoncd/pipeline/issues/685. Since Tekton is heavily based on Kubernetes, all Tekton objects are Kubernetes custom resources, and a PipelineRun is treated as immutable once created, so re-running an already executed PipelineRun is intentionally not possible.
But as also discussed in tektoncd/pipeline/issues/685, we can simply use the generateName field of the metadata block like this:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: buildpacks-test-pipeline-run-
spec:
  serviceAccountName: buildpacks-service-account # Only needed if you set up authorization
  pipelineRef:
    name: buildpacks-test-pipeline
  workspaces:
    - name: source-workspace
      subPath: source
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
    - name: cache-workspace
      subPath: cache
      persistentVolumeClaim:
        claimName: buildpacks-source-pvc
  params:
    - name: image
      value: <REGISTRY/IMAGE NAME, eg gcr.io/test/image> # This defines the name of output image
Running kubectl create -f pipeline-run.yml will now work multiple times and kind of "restart" our Pipeline, creating a new PipelineRun object like buildpacks-test-pipeline-run-dxcq6 every time the command is issued.
Keep in mind, though, to delete old PipelineRun objects once in a while.
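A hedged example of such a cleanup, assuming the default tekton.dev/pipeline label that Tekton attaches to the PipelineRuns it runs for a Pipeline:

# Delete every PipelineRun that was created for the buildpacks-test-pipeline Pipeline
kubectl delete pipelinerun -l tekton.dev/pipeline=buildpacks-test-pipeline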
The tkn CLI has the switch --use-pipelinerun for the command tkn pipeline start. What it does is reuse the params/workspaces from that PipelineRun and create a new one, effectively "restarting" it.
So to 'restart' the PipelineRun pr1, which belongs to the Pipeline p1, you would do:
tkn pipeline start p1 --use-pipelinerun pr1
Maybe we should have a command with an easier name; I kicked off a discussion some time ago, so feel free to contribute feedback:
https://github.com/tektoncd/cli/issues/1091
You cannot restart a PipelineRun.
In Tekton, a PipelineRun is a one-time execution of a Pipeline (which you can treat as a template), so it is not meant to be restarted; another kubectl apply of a PipelineRun is simply another execution...

How to deploy helm charts which are stored in AWS ECR using argoCD

I want to deploy Helm charts, which are stored in a repository in AWS ECR, into the Kubernetes cluster using ArgoCD, but I am getting a 401 Unauthorized error. I have pasted the full error below:
Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Unknown desc = `helm chart pull <aws account id>.dkr.ecr.<region>.amazonaws.com/testrepo:1.1.0` failed exit status 1: Error: unexpected status code [manifests 1.1.0]: 401 Unauthorized
Yes, you can use ECR for storing helm charts (https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html)
I have managed to add the repo to ArgoCD, but the token expires so it is not a complete solution.
argocd repo add XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com --type helm --name some-helmreponame --enable-oci --username AWS --password $(aws ecr get-login-password --region us-east-1)
Using the declarative repository definition (see https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories, or just override .argo-cd.configs.repositories in the Helm chart), it is actually quite easy to create a CronJob that updates the ECR credentials:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: argocd-ecr-credentials
spec:
  schedule: '0 */6 * * *' # every 6 hours, since credentials expire every 12 hours
  jobTemplate:
    metadata:
      name: argocd-ecr-credentials
    spec:
      template:
        spec:
          serviceAccountName: argocd-server
          restartPolicy: OnFailure
          containers:
            - name: update-secret
              image: alpine/k8s # Anything that contains kubectl + aws cli
              command:
                - /bin/bash
                - "-c"
                - |
                  PASSWORD=$(aws ecr get-login-password --region [your aws region] | base64 -w 0)
                  kubectl patch secret -n argocd argocd-repo-[name of your repository] --type merge -p "{\"data\": {\"password\": \"$PASSWORD\"}}"
ArgoCD repository secrets are usually named argocd-repo-*, suffixed with the key of the repository entry in the values.yaml.
This will start a pod every 6 hours that performs an ECR login and updates the Kubernetes secret containing the repository definition for ArgoCD.
Make sure to use the argocd-server service account (or create your own) since the container will not be able to modify the secret otherwise.
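For reference, the declarative repository secret that this job patches typically looks roughly like the following; the secret name and exact key set here are assumptions, so check what your chart actually created (for example with kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=repository):

apiVersion: v1
kind: Secret
metadata:
  name: argocd-repo-my-ecr # assumed name, derived from the repository key in values.yaml
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: some-helmreponame
  type: helm
  enableOCI: "true"
  url: XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com
  username: AWS
  password: <rotated by the CronJob above>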
I'm experimenting with the following (Not yet complete)
Create a secret for an AWS IAM role that allows you to get an ECR login password.
apiVersion: v1
kind: Secret
metadata:
  name: aws-ecr-get-login-password-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  AWS_ACCESS_KEY_ID: <Fill In>
  AWS_SECRET_ACCESS_KEY: <Fill In>
Now create an Argo Workflow that either runs every 12 hours or runs on a PreSync hook (completely untested; I will try to keep this updated, and anyone is welcome to update it for me).
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: aws-ecr-get-login-password-
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  entrypoint: update-ecr-login-password
  templates:
    # This is what will run.
    # First the awscli,
    # then the resource creation using the stdout of the previous step.
    - name: update-ecr-login-password
      steps:
        - - name: awscli
            template: awscli
        - - name: argocd-ecr-credentials
            template: argocd-ecr-credentials
            arguments:
              parameters:
                - name: password
                  value: "{{steps.awscli.outputs.result}}"
    # Create a container that has awscli in it
    # and run it to get the password using `aws ecr get-login-password`
    - name: awscli
      script:
        image: amazon/aws-cli:latest
        command: [bash]
        source: |
          aws ecr get-login-password --region us-east-1
        # We need aws secrets that can run `aws ecr get-login-password`
        envFrom:
          - secretRef:
              name: aws-ecr-get-login-password-creds
    # Now we can create the secret that has the password in it
    - name: argocd-ecr-credentials
      inputs:
        parameters:
          - name: password
      resource:
        action: create
        manifest: |
          apiVersion: v1
          kind: Secret
          metadata:
            name: argocd-ecr-credentials
            namespace: argocd
            labels:
              argocd.argoproj.io/secret-type: repository
          stringData:
            url: 133696059149.dkr.ecr.us-east-1.amazonaws.com
            username: AWS
            password: {{inputs.parameters.password}}

How to connect to Kubernetes using Ansible?

I want to connect to Kubernetes using Ansible. I want to run some Ansible playbooks to create Kubernetes objects such as Roles and RoleBindings using the Ansible k8s module. I want to know if the Ansible k8s module is a standard Kubernetes client that can use a kubeconfig in the same way as helm and kubectl do.
Please let me know how to configure a kubeconfig for Ansible to connect to the K8s cluster.
You basically specify the kubeconfig parameter in the Ansible YAML file. (It defaults to ~/.kube/config.json). For example:
---
- hosts: localhost
  gather_facts: false
  vars_files:
    - vars/main.yml
  tasks:
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '~/.kube/config'
        state: present
      loop: "{{ lookup('template', 'myapp/mysql-pass.yml') | from_yaml_all | list }}"
      no_log: k8s_no_log
...
You can also make it a variable:
...
- name: Deploy my app secrets.
  k8s:
    definition: '{{ item }}'
    kubeconfig: '{{ k8s_kubeconfig }}'
...
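The variable itself can then live in the vars file the play already loads (a minimal sketch, assuming vars/main.yml):

# vars/main.yml
k8s_kubeconfig: ~/.kube/config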
Thank you, it worked for me. I tried the below:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a k8s namespace
      k8s:
        kubeconfig: '~/Documents/sample-project/eks-kubeconfig'
        name: testing1
        api_version: v1
        kind: Namespace
        state: present

Ansible: Obtain api_token from gce_container_cluster

I launch the GCP cluster with no problem, but I do not know how to get the k8s Ansible module to work. I would prefer to obtain the api_key to authenticate the k8s module.
My playbook is the following.
- name: Hello k8s
  hosts: all
  tasks:
    - name: Create a cluster
      register: cluster
      gcp_container_cluster:
        name: thecluster
        initial_node_count: 1
        master_auth:
          username: admin
          password: TheRandomPassword
        node_config:
          machine_type: g1-small
          disk_size_gb: 10
          oauth_scopes:
            - "https://www.googleapis.com/auth/compute"
            - "https://www.googleapis.com/auth/devstorage.read_only"
            - "https://www.googleapis.com/auth/logging.write"
            - "https://www.googleapis.com/auth/monitoring"
        zone: europe-west3-c
        project: second-network-255214
        auth_kind: serviceaccount
        service_account_file: "{{ lookup('env', 'GOOGLE_CREDENTIALS') }}"
        state: present
    - name: Show results
      debug: var=cluster
    - name: Create temporary file for CA
      tempfile:
        state: file
        suffix: build
      register: ca_crt
    - name: Save content to file
      copy:
        content: "{{ cluster.masterAuth.clusterCaCertificate | b64decode }}"
        dest: "{{ ca_crt.path }}"
    - name: Create a k8s namespace
      k8s:
        host: "https://{{ cluster.endpoint }}"
        ca_cert: "{{ ca_crt.path }}"
        api_key: "{{ cluster.HOW_I_GET_THE_API_KEY }}" # <<<-- Here is what I want!!!
        name: testing
        api_version: v1
        kind: Namespace
        state: present
Any idea?
I found a workaround, which is to call gcloud directly:
- name: Get JWT
  command: gcloud auth application-default print-access-token
  register: api_key
Obviously, I needed to:
Install GCloud
Point the GOOGLE_APPLICATION_CREDENTIALS environment variable at the auth.json file.
The task calls gcloud directly to obtain the token, so there is no need to generate it separately. I will try to add this feature as an Ansible module for better interoperability with Kubernetes.
Once the token is obtained, it is possible to call the k8s module like this:
- name: Create ClusterRoleBinding
  k8s:
    state: present
    host: "https://{{ cluster.endpoint }}"
    ca_cert: "{{ ca_crt.path }}"
    api_version: rbac.authorization.k8s.io/v1
    api_key: "{{ api_key.stdout }}"
    definition:
      kind: ClusterRoleBinding
      metadata:
        name: kube-system_default_cluster-admin
      subjects:
        - kind: ServiceAccount
          name: default # Name is case sensitive
          namespace: kube-system
      roleRef:
        kind: ClusterRole
        name: cluster-admin
        apiGroup: rbac.authorization.k8s.io
According to the fine manual, masterAuth contains two other fields, clientCertificate and clientKey that correspond to the client_cert: and client_key: parameters, respectively. From that point, you can authenticate to your cluster's endpoint as cluster-admin using the very, very strong credentials of the private key, and from that point use the same k8s: task to provision yourself a cluster-admin ServiceAccount token if you wish to do that.
You can also apparently use masterAuth.username and masterAuth.password in the username: and password: parameters of k8s:, too, which should be just as safe since the credentials travel over HTTPS, but you seemed like you were more interested in a higher entropy authentication solution.
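A minimal sketch of the client-certificate variant (the file locations are assumptions, and the copy task mirrors the ca_crt handling above):

- name: Save client certificate and key from masterAuth
  copy:
    content: "{{ cluster.masterAuth[item.field] | b64decode }}"
    dest: "/tmp/{{ item.file }}"
    mode: '0600'
  loop:
    - { field: clientCertificate, file: client.crt }
    - { field: clientKey, file: client.key }
- name: Create a k8s namespace using the client certificate
  k8s:
    host: "https://{{ cluster.endpoint }}"
    ca_cert: "{{ ca_crt.path }}"
    client_cert: /tmp/client.crt
    client_key: /tmp/client.key
    name: testing
    api_version: v1
    kind: Namespace
    state: present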