How to pass a Secret in Vault to imagePullSecrets to access an image in private registry in Kubernetes

I created a Secret in Vault in Kubernetes instead of using the K8s API, and I would like to use that Vault secret to pull images from a private registry. I am unable to find a way to do so. The following is the example code; assume I have set all the labels needed for Vault access by the Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      containers:
        - name: test-app-exporter
          image: index.docker.io/xxxxx/test-app-exporter:3
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - name: http
              containerPort: 5000
      imagePullSecrets:
        - name: [myregistrykey--Secret From Vault]

Try using the kubelet credential provider (https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/) so that the kubelet can run a Vault executable to get the image pull credentials, and the kubelet/container runtime can then pull the image using those credentials.
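A hedged sketch of what the kubelet side of that could look like — the provider name vault-credential-provider, the image match pattern, and the cache duration are all assumptions, and the actual executable that talks to Vault would have to be written and installed on each node:

```yaml
# Hypothetical CredentialProviderConfig handed to the kubelet via
# --image-credential-provider-config; the named binary must live in the
# directory given by --image-credential-provider-bin-dir on every node.
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: vault-credential-provider   # assumed name of the Vault helper binary
    matchImages:
      - "index.docker.io"             # registries this provider serves
    defaultCacheDuration: "12h"       # how long kubelet caches the credentials
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```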

I had the same issue. You can use an external secret with the External Secrets Operator; more info here.
If you create an external secret, you can reference the Kubernetes Secret in imagePullSecrets, while the actual value is stored in Vault and kept in sync. I found these links very useful:
this one about the creation of an external secret linked with Vault:
https://www.digitalocean.com/community/tutorials/how-to-access-vault-secrets-inside-of-kubernetes-using-external-secrets-operator-eso
this one about the configuration of the secret as a dockerconfigjson secret type:
https://external-secrets.io/v0.7.1/guides/common-k8s-secret-types/#dockerconfigjson-example
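Following the second link, the dockerconfigjson case with the current External Secrets Operator API could look roughly like this — a sketch only, where the SecretStore name vault-backend and the Vault key docker-hub are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myregistrykey
spec:
  secretStoreRef:
    name: vault-backend          # assumed SecretStore pointing at Vault
    kind: SecretStore
  target:
    name: myregistrykey          # the Secret you reference in imagePullSecrets
    template:
      type: kubernetes.io/dockerconfigjson
  data:
    - secretKey: .dockerconfigjson
      remoteRef:
        key: docker-hub          # assumed Vault path holding the config
        property: dockerconfigjson
```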

You can use an external secret, or write a custom CRD solution, which should ideally sync or copy the secret from Vault into a Kubernetes Secret.
Option 1: External Secret
You have multiple options to use an External Secret. Steps for the external secret:
helm repo add external-secrets
helm install k8s-external-secrets external-secrets/kubernetes-external-secrets
Example ref:
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: imagepull-secret
  namespace: default
spec:
  kvVersion: 1
  backendType: vault
  vaultMountPoint: kubernetes
  vaultRole: user-role
  data:
    - name: IMAGE_PULL
      ...
Option 2: Vault-CRD
I have used this CRD personally and it is an easy way to integrate: Vault-CRD
Installation guide & Dockercfg
Create a secret in Vault first.
Create a vault-crd YAML file.
Example:
apiVersion: "koudingspawn.de/v1"
kind: Vault
metadata:
  name: image-secret
spec:
  path: "secret/docker-hub"
  type: "DOCKERCFG"
Apply the YAML file to the cluster.
It will fetch the secret from Vault and create a new Secret in Kubernetes, which you can use directly in your Deployment.
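Once Vault-CRD has created the Kubernetes Secret (image-secret above), referencing it is the usual imagePullSecrets wiring, e.g. in the pod spec of the Deployment from the question:

```yaml
spec:
  containers:
    - name: test-app-exporter
      image: index.docker.io/xxxxx/test-app-exporter:3
  imagePullSecrets:
    - name: image-secret   # Secret synced from Vault by Vault-CRD
```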

Related

Add ExternalSecret to Yaml file deploying to K8s

I'm trying to deploy a Kubernetes processor to a cluster on GCP GKE but the pod fails with the following error:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
This is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbt-core-processor
  namespace: prod
  labels:
    app: dbt-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbt-core
  template:
    metadata:
      labels:
        app: dbt-core
    spec:
      containers:
        - name: dbt-core-processor
          image: IMAGE
          resources:
            requests:
              cpu: 50m
              memory: 1Gi
            limits:
              cpu: 1
              memory: 2Gi
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              valueFrom:
                secretKeyRef:
                  name: service-account-credentials-dbt-test
                  key: service-account-credentials-dbt-test
---
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  backendType: gcpSecretsManager
  data:
    - key: service-account-credentials-dbt-test
      name: service-account-credentials-dbt-test
      version: latest
When I run kubectl apply -f deployment.yml I get the following error:
deployment.apps/dbt-core-processor created
error: unable to recognize "deployment.yml": no matches for kind "ExternalSecret" in version "kubernetes-client.io/v1"
This creates my processor but the pod fails to spin up the secrets:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
How do I add the secrets from my secrets manager in GCP to this deployment?
ExternalSecret is a custom resource definition (CRD) and it looks like it is not installed on your cluster.
I googled kubernetes-client.io/v1 and it looks like you may be following instructions from the old, archived project that first provided this CRD? The GitHub repo pointed me to a maintained project that has replaced it.
The good news is that the current project has what looks like comprehensive documentation, including a guide to how to install the CRDs on your cluster and the proper configuration for the External secret.

How to get or call secrets from hashicorp vault using Helm

I have a use case.
Currently, I'm using Helm on an on-premises Kubernetes cluster where all of my environment variables and secrets are stored in Helm itself, but now I want to store them in HashiCorp Vault.
This is totally new to me and I'm having a hard time making it work.
The use case is something like:
How can we use HashiCorp Vault to store the values that Helm currently uses?
Once we store the values we want, how can we reference them using Helm itself?
Any help will be greatly appreciated.
To use external-secrets.io to sync secrets from HashiCorp Vault into Kubernetes Secrets and consume them as environment variables in a pod, you can follow these steps:
Install the External Secrets Operator into your Kubernetes cluster:
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace
Create a Kubernetes Secret backed by Vault (this assumes a SecretStore named vault-backend has already been configured against your Vault server, with the KV mount holding a myapp entry):
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mysecret
spec:
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: mysecret
  data:
    - secretKey: USERNAME
      remoteRef:
        key: myapp
        property: username
    - secretKey: PASSWORD
      remoteRef:
        key: myapp
        property: password
Create a Kubernetes deployment that consumes the secret:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          env:
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: USERNAME
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: PASSWORD
Apply the deployment:
kubectl apply -f deployment.yaml
This syncs the secrets from Vault into a Kubernetes Secret and exposes them as environment variables in the pod. You can read the values in your application with os.getenv("USERNAME") and os.getenv("PASSWORD").
Note: the operator needs network access to the Vault server and a configured auth method in order for the sync to work.

docker desktop kubernetes can't pull from private registry

I'm running Docker Desktop with the built-in Kubernetes cluster. I've got an image in an on-prem GitLab instance. I created a project API key, and on the local machine I can do a docker push gitlab.myserver.com/group/project:latest, and similarly pull the image, after doing a docker login gitlab.myserver.com with the project bot username and the API key.
I create a kubernetes secret with kubectl create secret docker-registry myserver --docker-server=gitlab.myserver.com --docker-username=project_42_bot --docker-password=API_KEY
I then create a pod:
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - args:
        - data_generator.py
      image: gitlab.myserver.com/group/project:latest
      imagePullPolicy: Always
      name: foo
  imagePullSecrets:
    - name: myserver
but I get an access forbidden on the pull.
Try creating a secret for registry login by running the command below:
kubectl create secret docker-registry <secret_name> --docker-server=<your.registry.domain.name> --docker-username=<user> --docker-password=<password> --docker-email=<your_email>
And then use this secret for deployment:
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: server
    spec:
      imagePullSecrets:
        - name: <secret_name>
Refer to the documentation and github link for more information.

How do I access a private Container Registry from IBM Cloud Delivery Pipeline (Tekton)

I am trying to use a container image from a private container registry in one of my tasks.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello-world
spec:
  steps:
    - name: echo
      image: de.icr.io/reporting/status:latest
      command:
        - echo
      args:
        - "Hello World"
But when I run this task within an IBM Cloud Delivery Pipeline (Tekton), the image cannot be pulled:
message: 'Failed to pull image "de.icr.io/reporting/status:latest": rpc error: code = Unknown desc = failed to pull and unpack image "de.icr.io/reporting/status:latest": failed to resolve reference "de.icr.io/reporting/status:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized'
I read several tutorials and blogs, but so far couldn't find a solution. This is probably what I need to accomplish, so that the IBM Cloud Delivery Pipeline (Tekton) can access my private container registry: https://tekton.dev/vault/pipelines-v0.15.2/auth/#basic-authentication-docker
So far I have created a secret.yaml file in my .tekton directory:
apiVersion: v1
kind: Secret
metadata:
  name: basic-user-pass
  annotations:
    tekton.dev/docker-0: https://de.icr.io # Described below
type: kubernetes.io/basic-auth
stringData:
  username: $(params.DOCKER_USERNAME)
  password: $(params.DOCKER_PASSWORD)
I am also creating a ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default-runner
secrets:
  - name: basic-user-pass
And in my trigger definition I am telling the pipeline to use the default-runner ServiceAccount:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        serviceAccountName: default-runner
        pipelineRef:
          name: hello-goodbye
I found a way to pass my API key to my IBM Cloud Delivery Pipeline (Tekton) and the tasks in my pipeline are now able to pull container images from my private container registry.
This is my working trigger template:
apiVersion: tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: theTemplateTrigger
spec:
  params:
    - name: pipeline-dockerconfigjson
      description: dockerconfigjson for images used in .pipeline-config.yaml
      default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
  resourcetemplates:
    - apiVersion: v1
      kind: Secret
      data:
        .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
      metadata:
        name: pipeline-pull-secret
      type: kubernetes.io/dockerconfigjson
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        name: pipelinerun-$(uid)
      spec:
        pipelineRef:
          name: hello-goodbye
        podTemplate:
          imagePullSecrets:
            - name: pipeline-pull-secret
It first defines a parameter called pipeline-dockerconfigjson:
params:
  - name: pipeline-dockerconfigjson
    description: dockerconfigjson for images used in .pipeline-config.yaml
    default: "eyJhdXRocyI6e319" # ie. {"auths":{}} base64 encoded
The second part turns the value passed into this parameter into a Kubernetes secret:
- apiVersion: v1
  kind: Secret
  data:
    .dockerconfigjson: $(tt.params.pipeline-dockerconfigjson)
  metadata:
    name: pipeline-pull-secret
  type: kubernetes.io/dockerconfigjson
And this secret is then pushed into the imagePullSecrets field of the PodTemplate.
The last step is to populate the parameter with a valid dockerconfigjson and this can be accomplished within the Delivery Pipeline UI (IBM Cloud UI).
To create a valid dockerconfigjson for my registry de.icr.io I had to use the following kubectl command:
kubectl create secret docker-registry mysecret \
--dry-run=client \
--docker-server=de.icr.io \
--docker-username=iamapikey \
--docker-password=<MY_API_KEY> \
--docker-email=<MY_EMAIL> \
-o yaml
and then within the output there is a valid base64 encoded .dockerconfigjson field.
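The same base64 value can also be produced without kubectl; a minimal shell sketch, assuming placeholder credentials (the inner auth field is just base64 of username:password):

```shell
#!/bin/sh
# Placeholder values; substitute the real API key before use.
REGISTRY=de.icr.io
USERNAME=iamapikey
PASSWORD=MY_API_KEY

# Inner "auth" field: base64("username:password")
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64 | tr -d '\n')

# Full config JSON, base64-encoded for the pipeline-dockerconfigjson parameter
printf '{"auths":{"%s":{"auth":"%s"}}}' "$REGISTRY" "$AUTH" | base64 | tr -d '\n'
echo
```

The printed value can be pasted straight into the Delivery Pipeline UI parameter.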
Please also note that there is a public catalog of sample Tekton tasks:
https://github.com/open-toolchain/tekton-catalog/tree/master/container-registry
More on IBM Cloud Continuous Delivery Tekton:
https://www.ibm.com/cloud/blog/ibm-cloud-continuous-delivery-tekton-pipelines-tools-and-resources
Tektonized Toolchain Templates: https://www.ibm.com/cloud/blog/toolchain-templates-with-tekton-pipelines
The secret you created (type kubernetes.io/basic-auth) will not allow the kubelet to pull your Pods' images.
The docs mention those secrets are meant to provision configuration inside your tasks' container runtime, which may then be used during your build jobs when pulling or pushing images to registries.
The kubelet, however, needs a different secret type (e.g. kubernetes.io/dockercfg or kubernetes.io/dockerconfigjson) to authenticate when pulling images and starting containers.
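For comparison, a sketch of what the kubelet does understand — a registry secret of type kubernetes.io/dockerconfigjson attached to the ServiceAccount's imagePullSecrets. The secret name here is an assumption, and the base64 value is just the empty {"auths":{}} placeholder from earlier in this thread:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret          # assumed name
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319 # {"auths":{}} placeholder, base64
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default-runner
imagePullSecrets:                     # consulted by the kubelet for image pulls
  - name: registry-pull-secret
```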

Are multiple imagePullSecrets allowed and used by Kubernetes to pull an image from a private registry?

I have a private registry (gitlab) where my docker images are stored.
For deployment a secret is created that allows GKE to access the registry. The secret is called deploy-secret.
The secret's login information expires after short time in the registry.
I additionally created a second, permanent secret that allows access to the docker registry, named permanent-secret.
Is it possible to specify the Pod with two secrets? For example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: deploy-secret
    - name: permanent-secret
Will Kubernetes, when trying to re-pull the image later, recognize that the first secret does not work (does not allow authentication to the private registry) and then fallback successfully to the second secret?
Surprisingly, this works! I just tried it on my cluster: I added a fake registry-credentials secret with wrong values, put both secrets in my YAML as you did (below), and the pod was created and its container is running successfully:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector:
      containers:
        - image: gitlab.myapp.com/my-image:tag
          name: test
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred-test
        - name: regcred
The regcred secret has the correct values and regcred-test is just gibberish, so we can see that Kubernetes ignores the incorrect secret.