How do I access a private Container Registry from IBM Cloud Delivery Pipeline (Tekton) - ibm-cloud

I am trying to use a container image from a private container registry in one of my tasks.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
    - name: echo
      image: de.icr.io/reporting/status:latest
      command:
        - echo
      args:
        - "Hello World"
But when I run this task within an IBM Cloud Delivery Pipeline (Tekton), the image cannot be pulled:
message: 'Failed to pull image "de.icr.io/reporting/status:latest": rpc error: code = Unknown desc = failed to pull and unpack image "de.icr.io/reporting/status:latest": failed to resolve reference "de.icr.io/reporting/status:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized'
I read several tutorials and blogs, but so far couldn't find a solution. This is probably what I need to accomplish, so that the IBM Cloud Delivery Pipeline (Tekton) can access my private container registry: https://tekton.dev/vault/pipelines-v0.15.2/auth/#basic-authentication-docker
So far I have created a secret.yaml file in my .tekton directory:
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    tekton.dev/docker-0: https://de.icr.io # Described below
type: kubernetes.io/basic-auth
stringData:
  username: $(params.DOCKER_USERNAME)
  password: $(params.DOCKER_PASSWORD)
I am also creating a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default-runner
secrets:
  - name: basic-user-pass
And in my trigger definition I am telling the pipeline to use the default-runner ServiceAccount:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        serviceAccountName: default-runner
        pipelineRef:
          name: hello-goodbye

I found a way to pass my API key to my IBM Cloud Delivery Pipeline (Tekton) and the tasks in my pipeline are now able to pull container images from my private container registry.
This is my working trigger template:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  params:
    - name: pipeline-dockerconfigjson
      description: dockerconfigjson for images used in .pipeline-config.yaml
      default: "eyJhdXRocyI6e319" # i.e. {"auths":{}} base64 encoded
  resourcetemplates:
    - apiVersion: v1
      kind: Secret
      data:
        .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
      metadata:
        name: pipeline-pull-secret
      type: kubernetes.io/dockerconfigjson
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        pipelineRef:
          name: hello-goodbye
        podTemplate:
          imagePullSecrets:
            - name: pipeline-pull-secret
It first defines a parameter called pipeline-dockerconfigjson:
params:
  - name: pipeline-dockerconfigjson
    description: dockerconfigjson for images used in .pipeline-config.yaml
    default: "eyJhdXRocyI6e319" # i.e. {"auths":{}} base64 encoded
The second part turns the value passed into this parameter into a Kubernetes secret:
- apiVersion: v1
  kind: Secret
  data:
    .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
  metadata:
    name: pipeline-pull-secret
  type: kubernetes.io/dockerconfigjson
This secret is then referenced in the imagePullSecrets field of the PipelineRun's podTemplate.
The last step is to populate the parameter with a valid dockerconfigjson, and this can be accomplished within the Delivery Pipeline UI (IBM Cloud UI).
To create a valid dockerconfigjson for my registry de.icr.io I had to use the following kubectl command:
kubectl create secret docker-registry mysecret \
  --dry-run=client \
  --docker-server=de.icr.io \
  --docker-username=iamapikey \
  --docker-password=<MY_API_KEY> \
  --docker-email=<MY_EMAIL> \
  -o yaml
and then within the output there is a valid base64 encoded .dockerconfigjson field.
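For reference, the value kubectl emits can be reproduced in a few lines of Python. This is a minimal sketch, assuming placeholder credentials (the API key and email below are illustrative, not real values):

```python
import base64
import json

# Hypothetical placeholder credentials -- substitute your own values.
registry = "de.icr.io"
username = "iamapikey"
password = "MY_API_KEY"
email = "me@example.com"

# kubectl stores base64("username:password") in the per-registry "auth" field.
auth = base64.b64encode(f"{username}:{password}".encode()).decode()

docker_config = {
    "auths": {
        registry: {
            "username": username,
            "password": password,
            "email": email,
            "auth": auth,
        }
    }
}

# The Secret's data field must itself be base64-encoded JSON.
dockerconfigjson = base64.b64encode(json.dumps(docker_config).encode()).decode()
print(dockerconfigjson)
```

The printed string is what goes into the pipeline-dockerconfigjson parameter (or the .dockerconfigjson data field of the Secret).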

Please also note that there is a public catalog of sample Tekton tasks:
https://github.com/open-toolchain/tekton-catalog/tree/master/container-registry
More on IBM Cloud Continuous Delivery Tekton:
https://www.ibm.com/cloud/blog/ibm-cloud-continuous-delivery-tekton-pipelines-tools-and-resources
Tektonized Toolchain Templates: https://www.ibm.com/cloud/blog/toolchain-templates-with-tekton-pipelines

The secret you created (type basic-auth) would not allow the kubelet to pull your Pods' images.
The docs mention that those secrets are meant to provision some configuration inside your task containers' runtime, which may then be used during your build jobs when pulling or pushing images to registries.
The kubelet, however, needs a different configuration (e.g. a secret of type dockercfg or dockerconfigjson) to authenticate when pulling images / starting containers.
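To make the distinction concrete, here is a sketch of the two payloads side by side (all credential values are placeholders): the basic-auth secret only carries a raw username and password that Tekton's credential initializer injects into task containers, while the kubelet expects a dockerconfigjson layout keyed by registry host.

```python
import base64
import json

# Tekton-style basic-auth secret: plain username/password, consumed by
# Tekton's credential init step inside task containers, not by the kubelet.
basic_auth_secret = {
    "type": "kubernetes.io/basic-auth",
    "stringData": {"username": "iamapikey", "password": "MY_API_KEY"},
}

# Kubelet-style pull secret: a base64-encoded config.json keyed by registry
# host, which the kubelet consults before starting the container.
config = {
    "auths": {
        "de.icr.io": {
            "auth": base64.b64encode(b"iamapikey:MY_API_KEY").decode(),
        }
    }
}
pull_secret = {
    "type": "kubernetes.io/dockerconfigjson",
    "data": {
        ".dockerconfigjson": base64.b64encode(json.dumps(config).encode()).decode(),
    },
}
```

Only the second shape, referenced via imagePullSecrets, influences image pulls.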

Related

How to use Kubernetes Secret to pull a private docker image from docker hub?

I'm trying to run my Kubernetes app using minikube on Ubuntu 20.04 and applied a secret to pull a private Docker image from Docker Hub, but it doesn't seem to work correctly.
Failed to pull image "xxx/node-graphql:latest": rpc error: code
= Unknown desc = Error response from daemon: pull access denied for xxx/node-graphql, repository does not exist or may require
'docker login': denied: requested access to the resource is denied
Here's the secret generated by
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<pathtofile>.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
And here's the secret yaml file I have created
apiVersion: v1
data:
  .dockerconfigjson: xxx9tRXpNakZCSTBBaFFRPT0iCgkJfQoJfQp9
kind: Secret
metadata:
  name: node-graphql-secret
  uid: xxx-2e18-44eb-9719-xxx
type: kubernetes.io/dockerconfigjson
Did anyone try to pull a private docker image into Kubernetes using a secret? Any kind of help would be appreciated. Thank you!
I managed to add the secrets config in the following steps.
First, you need to log in to Docker Hub using:
docker login
Next, you create a k8s secret by running:
kubectl create secret generic <your-secret-name> \
  --from-file=.dockerconfigjson=<pathtoyourdockerconfigfile>.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
And then get the secret in yaml format
kubectl get secret -o yaml
It should look like this:
apiVersion: v1
items:
  - apiVersion: v1
    data:
      .dockerconfigjson: xxxewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tl
    kind: Secret
    metadata:
      creationTimestamp: "2022-10-27T23:06:01Z"
      name: <your-secret-name>
      namespace: default
      resourceVersion: "513"
      uid: xxxx-0f12-4beb-be41-xxx
    type: kubernetes.io/dockerconfigjson
kind: List
metadata:
  resourceVersion: ""
I then copied the content of the secret into my secret yaml file:
apiVersion: v1
data:
  .dockerconfigjson: xxxewoJImF1dGhzIjogewoJCSJodHRwczovL2luZGV4LmRvY2tlci
kind: Secret
metadata:
  creationTimestamp: "2022-10-27T23:06:01Z"
  name: <your-secret-name>
  namespace: default
  resourceVersion: "513"
  uid: xxx-0f12-4beb-be41-xxx
type: kubernetes.io/dockerconfigjson
It works! This is a simple approach to using a Secret to pull a private Docker image in K8s.
As a side note, to apply the secret, run kubectl apply -f secret.yml
Hope it helps

Creating "imagePullSecret" to work with the deployment.yaml file which contains an image from the private repository

I have created a deployment yaml file with an image from my private Docker repository.
When running the command:
kubectl get pods
I see that the status of the pod created from that deployment file is ImagePullBackOff. From what I have read, this is because I am pulling the image from a private registry without an imagePullSecret.
How do I create an "imagePullSecret" as a yaml file to work with the deployment.yaml file which contains an image from the private repository? Or is it a field which should be part of the deployment file?
There is a field in the pod spec called imagePullSecrets, which you can define in your deployment yaml:
imagePullSecrets:
  - name: registry-secret
Then you can define the missing "imagePullSecret" as a yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: registry-secret
  namespace: xxx-namespace
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
The data field .dockerconfigjson can be created as described in the docs.
First you create the imagePullSecret from your docker
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
  --type=kubernetes.io/dockerconfigjson
Alternatively, you can create the imagePullSecret by providing the required arguments on the command line:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-name> \
  --docker-password=<your-pword> \
  --docker-email=<your-email>
Then use the imagePullSecret name in the pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
Reference:
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

kubectl Secret - passing service account ( json ) file in ansible k8s module

I am trying to use the Kubernetes Ansible module to create the kubectl secret; below is my command:
kubectl create secret generic -n default test --from-file=gcp=serviceaccount.json
Is there a way to pass the service account JSON file (--from-file=gcp=serviceaccount.json) in the Ansible k8s module?
How do I pass this --from-file in the module below?
- name: CREATE SECRET
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: test
        namespace: default
      data:
        ?? : ??
I was able to resolve the issue with the stringData option.
I passed the content of the file with stringData:
- name: CREATE SECRET
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Secret
      type: Opaque
      metadata:
        name: test
        namespace: default
      stringData:
        gcp: <content of the file>
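The reason stringData works here is that it accepts raw text, whereas data requires you to base64-encode the value yourself. A small sketch of the difference (the file content below is a hypothetical stand-in for serviceaccount.json):

```python
import base64

# Hypothetical service-account JSON content (stands in for serviceaccount.json).
file_content = '{"type": "service_account", "project_id": "demo"}'

# With "data:" you must base64-encode the value yourself...
secret_with_data = {"data": {"gcp": base64.b64encode(file_content.encode()).decode()}}

# ...while "stringData:" takes the raw text; the API server encodes it on write.
secret_with_stringdata = {"stringData": {"gcp": file_content}}

# Both end up storing the same bytes in etcd.
assert base64.b64decode(secret_with_data["data"]["gcp"]).decode() == \
    secret_with_stringdata["stringData"]["gcp"]
```

In an Ansible playbook, the raw content can be supplied to stringData with a file lookup, avoiding a manual base64 step.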

How to pass a Secret in Vault to imagePullSecrets to access an image in private registry in Kubernetes

I created a Secret in Vault in Kubernetes instead of using the K8s API, and I would like to use that Secret in Vault to pull images from a private registry, but I am unable to find a way to do so. The following is example code, assuming I used all the labels for Vault access in the Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      containers:
        - name: test-app-exporter
          image: index.docker.io/xxxxx/test-app-exporter:3
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - name: http
              containerPort: 5000
      imagePullSecrets:
        - name: [myregistrykey--Secret From Vault]
Try using a kubelet credential provider (https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/), so the kubelet can run a Vault executable to get the image pull credentials, and the kubelet/container runtime can then pull the image using those credentials.
I had the same issue. You can use an external secret with the External Secrets Operator; more info here.
If you create an external secret, you can reference the Kubernetes secret in imagePullSecrets, while the actual value is stored in Vault and synced. I found these links very useful:
this one about creating an external secret linked with Vault: https://www.digitalocean.com/community/tutorials/how-to-access-vault-secrets-inside-of-kubernetes-using-external-secrets-operator-eso
this one about configuring the secret as a dockerconfigjson secret type: https://external-secrets.io/v0.7.1/guides/common-k8s-secret-types/#dockerconfigjson-example
You can use an external secret, or write a custom CRD solution which ideally should sync or copy the secret from Vault to a Kubernetes secret.
Option 1:
You have multiple options to use an external secret.
Steps for the external secret:
helm repo add external-secrets
helm install k8s-external-secrets external-secrets/kubernetes-external-secrets
Example ref
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: imagepull-secret
  namespace: default
spec:
  kvVersion: 1
  backendType: vault
  vaultMountPoint: kubernetes
  vaultRole: user-role
  data:
    - name: IMAGE_PULL
      ...
Option 2:
I have used this CRD personally; it is an easy way to integrate Vault-CRD.
Installation guide & Dockercfg
Create a secret in vault first
Create vault-crd YAML file
Example
apiVersion: "koudingspawn.de/v1"
kind: Vault
metadata:
  name: image-secret
spec:
  path: "secret/docker-hub"
  type: "DOCKERCFG"
Apply the YAML file to the cluster.
It will fetch the secret from Vault and create a new secret in Kubernetes, which you can then use directly in your Deployment.

how does K8S handle multiple remote docker registries in a POD definition using the imagePullSecrets list

I would like to access multiple remote registries to pull images.
In the k8s documentation they say:
(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)
and so the POD definition should be something like this:
apiVersion: v1
kind: Pod
spec:
containers:
- name: ...
imagePullSecrets:
- name: secret1
- name: secret2
- ....
- name: secretN
Now I am not sure how K8S will pick the right secret for each image. Will all secrets be tried one by one each time? How will K8S handle failed retries, and could a certain number of unauthorized retries lead to some lock state in k8s or the docker registries?
/ Thanks
You can use the following script to add two authentications in one secret:
#!/bin/bash
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)
u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)
cat <<EOF > docker_config.json
{
  "auths": {
    "repo1_name_here": {
      "auth": "$auth1"
    },
    "repo2_name_here": {
      "auth": "$auth2"
    }
  }
}
EOF
base64 -w0 docker_config.json > docker_config_b64.json
cat <<EOF | kubectl apply -f -
apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
EOF
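The same combined secret can be sketched in Python, for readers who prefer building the JSON programmatically (registry names and credentials below are placeholders):

```python
import base64
import json

def auth_entry(username: str, password: str) -> dict:
    """Build the per-registry entry: base64("user:pass") in the "auth" field."""
    return {"auth": base64.b64encode(f"{username}:{password}".encode()).decode()}

# Two registries combined in a single config.json (placeholder values).
docker_config = {
    "auths": {
        "repo1.example.com": auth_entry("user_1_here", "password_1_here"),
        "repo2.example.com": auth_entry("user_2_here", "password_2_here"),
    }
}

# The Secret manifest, with the whole config.json base64-encoded into data.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "type": "kubernetes.io/dockerconfigjson",
    "metadata": {"name": "multi-registry-pull-secret"},
    "data": {
        ".dockerconfigjson": base64.b64encode(
            json.dumps(docker_config).encode()
        ).decode(),
    },
}
print(json.dumps(secret, indent=2))
```

The printed manifest can be written to a file and applied with kubectl apply -f.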
Kubernetes isn't going to try all secrets until it finds the correct one. When you create the secret, you are declaring that it is for a docker registry:
$ kubectl create secret docker-registry user1-secret --docker-server=https://index.docker.io/v1/ --docker-username=user1 --docker-password=PASSWORD456 --docker-email=user1@email.com
$ kubectl create secret docker-registry user2-secret --docker-server=https://index.docker.io/v1/ --docker-username=user2 --docker-password=PASSWORD123 --docker-email=user2@email.com
$ kubectl get secrets user1-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-01-13T13:15:52Z"
  name: user1-secret
  namespace: default
  resourceVersion: "1515301"
  selfLink: /api/v1/namespaces/default/secrets/user1-secret
  uid: d2f3bb0c-3606-11ea-a202-42010a8000ad
type: kubernetes.io/dockerconfigjson
As you can see, the type kubernetes.io/dockerconfigjson tells Kubernetes to treat this secret differently.
So, when you reference your container image as magic.example.com/magic-image in your yaml, Kubernetes has enough information to connect the dots and use the right secret to pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: busyboxes
  namespace: default
spec:
  imagePullSecrets:
    - name: user1-secret
    - name: user2-secret
  containers:
    - name: jenkins
      image: user1/jenkins
      imagePullPolicy: Always
    - name: busybox
      image: user2/busybox
      imagePullPolicy: Always
So, as this example describes, it's possible to have two or more docker registry secrets with the same --docker-server value; Kubernetes manages to take care of it seamlessly.
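The lookup the kubelet performs can be approximated like this. This is a simplified sketch of how the registry host in an image reference selects an entry from the merged auths map; the real kubelet also normalizes ports, the legacy index.docker.io forms, and more:

```python
def registry_host(image: str) -> str:
    """Extract the registry host from an image reference.

    The first path component counts as a registry only if it looks like a
    host (contains "." or ":") or is "localhost"; otherwise Docker Hub is
    assumed. A simplification of the real normalization rules.
    """
    first = image.split("/", 1)[0]
    if "/" in image and ("." in first or ":" in first or first == "localhost"):
        return first
    return "index.docker.io"

# Merged auths from several imagePullSecrets (placeholder entries).
auths = {
    "magic.example.com": {"auth": "c2VjcmV0MQ=="},
    "index.docker.io": {"auth": "c2VjcmV0Mg=="},
}

def pick_credentials(image: str):
    """Select the credentials whose key matches the image's registry host."""
    return auths.get(registry_host(image))
```

So magic.example.com/magic-image resolves to the magic.example.com entry, while user2/busybox (no host component) falls back to the Docker Hub entry; there is no trial-and-error over all secrets.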