How to inject secrets from Google Secret Manager into K8s pod?

What is the best practice for injecting a secret from Google Secret Manager into a Kubernetes deployment? I have stored the admin password for my Grafana instance in Google Secret Manager. The Grafana instance was deployed using a Helm chart on Google Kubernetes Engine. I did try using kube-secrets-init, a Kubernetes mutating admission webhook that mutates any K8s Pod referencing a secret in Google Secret Manager. I followed the instructions, but when I deploy my Grafana instance I get the following error:
Internal error occurred: failed calling webhook "secrets-init.doit-intl.com": expected webhook response of admission.k8s.io/v1, Kind=AdmissionReview, got /, Kind=
This is the file used to deploy the mutating webhook:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-secrets-init-webhook-cfg
  labels:
    app: secrets-init-webhook
webhooks:
  - name: secrets-init.doit-intl.com
    clientConfig:
      service:
        name: secrets-init-webhook-svc
        namespace: default
        path: "/pods"
      caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lSQVAyQ3BnQjlEVGpZbk5xSVBlM01aTTB3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa09ETTVPVFptTW1ZdE1qSmtPQzAwT0RaakxUazNaVGt0TXpsbE0yWXlObUV5T0RaagpNQjRYRFRJeE1ETXhNREF5TWpZMU0xb1hEVEkyTURNd09UQXpNalkxTTFvd0x6RXRNQ3NHQTFVRUF4TWtPRE01Ck9UWm1NbVl0TWpKa09DMDBPRFpqTFRrM1pUa3RNemxsTTJZeU5tRXlPRFpqTUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeTVRU2ZDTitoWERoTG04ZGRrN2Zzbk1HMG84bm9ZeVhUaC9ZeW1UZApiUGRrRGFTT3g1eU9weWtGb1FHV3RNaWR6RTNzd2FWd0x6WjFrdkpCaHZIWm43YzBsRDBNKytJbmNGV2dqZjEzCjdUS2RYZjI1TEFDNkszUVl3MmlzMTc5L1U1U2p2TUVCUFdzMkpVcFZ1L2s2Vm50UmZkMWtLSmpST2tVVTVMWlgKajVEZncyb2prNlZTeSs3MDh4aTBlZU14bjNzUzU1Q3hUSGJzNkdBWTRlOXVRUVhpT2dCWXl4UG90Nlk2Vk9WSApDcW1yTXQ3V1ZTQ0xvOVJDb1V3NjlLSnQ5aWVDdW13QnpWMW4xNXF5bExhNXF0MWdWa3h2RkF3MDRweUxWMnBCCmowSFNXdVQ3L2w4Q1dYeXhMdnlNNFoxeEc3VFQva3FSMElHRyt5YWI4Snk3cFFJREFRQUJvMEl3UURBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVRFSjFJN3phSGJkRQp0amxTUEtJdGU2VlhXbTB3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1YaU9BbGcweDlqTE5Zbm1IS3MrNW1ECmVhbnhUdmZxWHFCSlphK1ZPZnFYNm4xMzBncGk2cnM2K2lxUnh6bkVtVUJVZWJGM2s4c2VSUFBzenFvRzh4OFMKOXNHaE1idlBkVjZleTByMUR0VGFQQnk2blFXUUdqTzhXV2JNd21uakRlODhCQzZzckp1UEpCM0ZLVVYwSWF1NQo5bHhKaW5OamtoMXp4OHpYNVV2anNXeXg2dk5Ha0lZQy9KR2pTem5NRGhzZEVEbmE0R2FqOHR0TUlPWjduRG9JCkdkeWxCNnFwVUgvZmVsMURoTGlRWFIvL0cyRkdHRERQc3BUc1ZmczV5N2p3d2NURGgwYnpSZmpjdjhBRXR1cDQKWjlQSU9hNUdhN0NXbVJIY0FHSXBnSGdzUTZ5VC95N3krVVluM1pmVW44NEYwdERFMi9HbnN5ekRWZDM4cHZBPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - operations: [ "CREATE" ]
        apiGroups: ["*"]
        apiVersions: ["*"]
        resources: ["pods"]

If I understood everything correctly, the current problem in your case is that there is no v1 AdmissionReview support.
There has been a related open GitHub issue since last year: Add support for v1 AdmissionReview.
If we dig deeper, we can see that kube-secrets-init uses slok/kubewebhook as its upstream,
and kubewebhook has its own open issue: Add support for v1 AdmissionReview #72.
Current stage:
As per the author's comment, a new v2.0.0-beta.1 has been released, but it seems no one has tested it yet.
In the kube-secrets-init issue there is a proposal to update and release v2 as well.
So,
kube-secrets-init is not compatible with the v1 AdmissionReview API yet. The fix is pending until someone gives feedback to the upstream project (slok/kubewebhook) on version v2.0.0-beta.1.
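For completeness, once the webhook itself is admitting pods, kube-secrets-init works by resolving secret references placed in container environment variables. This is only a minimal sketch assuming the gcp:secretmanager: reference convention from the secrets-init README; the project and secret names are hypothetical, and the pod's service account still needs IAM access to Secret Manager (for example via Workload Identity):
containers:
  - name: grafana
    image: grafana/grafana
    env:
      - name: GF_SECURITY_ADMIN_PASSWORD   # Grafana reads its admin password from this variable
        value: gcp:secretmanager:projects/my-project/secrets/grafana-admin-password   # hypothetical project and secret names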

Related

Validating Webhook for configmap

I have a Validating Webhook that triggers when certain CRD resources get [CREATE, UPDATE] operations.
I want to extend it so that a specific ConfigMap also triggers that validating webhook.
Under the same namespace I have multiple CRDs and ConfigMaps, but I want the webhook to trigger for only one of the ConfigMaps.
These are the ValidatingWebhook properties (v1beta1 admissionregistration.k8s.io).
I guess the namespaceSelector is not the right match for my needs, since it triggers for any ConfigMap under that namespace. I also tried to work out whether objectSelector is a good solution, but I couldn't fully understand it.
This is the relevant part of my webhook configuration:
webhooks:
  - name: myWebhook.webhook
    clientConfig:
      ***
    failurePolicy:
      ***
    rules:
      - operations: ['CREATE', 'UPDATE']
        apiGroups: ***
        apiVersion: ***
        resources: [CRD_resource_1, CRD_resource_2]
So I guess my question is: how can I pick one of the multiple ConfigMaps to trigger my validating webhook?
Many thanks.
You definitely should use objectSelector in order to act only on specific configMaps.
You can make sure you put some specific label on those configMaps and configure your webhook:
objectSelector:
  matchLabels:
    myCoolConfigMaps: "true"
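For the selector to match, the ConfigMap needs to carry that label, and configmaps also has to be listed under the webhook's rules.resources, since objectSelector only filters objects that already match a rule. A minimal sketch with a hypothetical ConfigMap name:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-special-config        # hypothetical name
  labels:
    myCoolConfigMaps: "true"     # the label the objectSelector matches on
data:
  some-key: some-value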

Automatically create Kubernetes resources after namespace creation

I have 2 teams:
devs: they create a new Kubernetes namespace each time they deploy a branch/tag of their app
ops: they manage access control to the cluster with (cluster)roles and (cluster)rolebindings
The problem is that 'devs' cannot use kubectl in their namespaces until 'ops' have created the RBAC resources. And 'devs' cannot create the RBAC resources themselves, as they don't have the list of subjects to put in the RoleBinding resource (sharing the list is not an option).
I have read the official documentation about Admission webhooks but what I understood is that they only act on the resource that triggered the webhook.
Is there a native and/or simple way in Kubernetes to apply resources whenever a new namespace is created?
I've come up with a solution by writing a custom controller.
With the following custom resource deployed, the controller injects the Role and RoleBinding into namespaces matching dev-.* and fix-.*:
kind: NamespaceResourcesInjector
apiVersion: blakelead.com/v1alpha1
metadata:
  name: nri-test
spec:
  namespaces:
    - dev-.*
    - fix-.*
  resources:
    - |
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: dev-role
      rules:
        - apiGroups: [""]
          resources: ["pods","pods/portforward", "services", "deployments", "ingresses"]
          verbs: ["list", "get"]
        - apiGroups: [""]
          resources: ["pods/portforward"]
          verbs: ["create"]
        - apiGroups: [""]
          resources: ["namespaces"]
          verbs: ["list", "get"]
    - |
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: dev-rolebinding
      subjects:
        - kind: User
          name: dev
      roleRef:
        kind: Role
        name: dev-role
        apiGroup: rbac.authorization.k8s.io
The controller is still in the early stages of development, but I'm using it successfully in more and more clusters.
Here it is for those interested: https://github.com/blakelead/nsinjector
Yes, there is a native way, but it is not an out-of-the-box feature.
You can do what you described by using or creating an operator, essentially extending the Kubernetes APIs for your need.
Since the operator pattern is open and can be implemented in many ways, one possible control flow for your scenario could look like this:
An operator with privileges to create RBAC resources is deployed and subscribes to changes on the Namespace object kind
Devs create a namespace containing an agreed label (a sketch follows after this list)
The operator is notified about the change to the cluster
The operator validates the namespace (this can also be done by a separate admission webhook)
The operator creates the RBAC resources in the newly created namespace
If the RBAC resources are cluster-wide, the same operator can clean them up once the namespace is deleted
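A minimal sketch of the "agreed label" step from the list above; the label key and value are hypothetical, whatever the operator is configured to watch for:
apiVersion: v1
kind: Namespace
metadata:
  name: dev-feature-x            # hypothetical namespace name
  labels:
    rbac-bootstrap: "enabled"    # hypothetical agreed label the operator subscribes to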
It is also related to how users authenticate to the cluster and how they get a kubeconfig file. You can put a group in the client certificate or in the bearer token that kubectl uses from the kubeconfig. Ahead of time you can define a ClusterRole with a ClusterRoleBinding to that group, giving its members permission to certain verbs on certain resources (for example the ability to create namespaces); see the sketch below.
Additionally, you can use an admission webhook to validate whether the user is supposed to be part of that group or not.
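A sketch of that group-based setup, assuming a hypothetical group name devs carried in the client certificate's organization field (or in the token's group claims):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-creator
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-creator-devs
subjects:
  - kind: Group
    name: devs                          # hypothetical group name from the cert/token
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-creator
  apiGroup: rbac.authorization.k8s.io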

GitOps (Flux) install of standard Jenkins Helm chart in Kubernetes via HelmRelease operator

I've just started working with Weavework's Flux GitOps system in Kubernetes. I have regular deployments (deployments, services, volumes, etc.) working fine. I'm trying for the first time to deploy a Helm chart.
I've followed the instructions in this tutorial: https://github.com/fluxcd/helm-operator-get-started and have its sample service working after making a few small changes. So I believe that I have all the right tooling in place, including the custom HelmRelease K8s operator.
I want to deploy Jenkins via Helm, which if I do manually is as simple as this Helm command:
helm install --set persistence.existingClaim=jenkins --set master.serviceType=LoadBalancer jenkins stable/jenkins
I want to convert this to a HelmRelease object in my Flux-managed GitHub repo. Here's what I've got, per what documentation I can find:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  updating-applications/
  fluxcd.io/ignore: "false"
spec:
  releaseName: jenkins
  chart:
    git: https://github.com/helm/charts/tree/master
    path: stable/jenkins
    ref: master
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
I have this in the file 'jenkins/jenkins.yaml' at the root of the location in my git repo that Flux is monitoring. Adding this file does nothing: I get no new K8s objects, no HelmRelease object, and no new Helm release when I run "helm list -n jenkins".
I see some mention of having to have 'image' tags in my 'values' section, but since I don't need to specify any images in my manual call to Helm, I'm not sure what I would add in terms of 'image' tags. I've also seen examples of HelmRelease definitions that don't have 'image' tags, so it seems that they aren't absolutely necessary.
I've played around with adding a few annotations to my 'metadata' section:
annotations:
  # fluxcd.io/automated: "true"
  # per: https://blog.baeke.info/2019/10/10/gitops-with-weaveworks-flux-installing-and-updating-applications/
  fluxcd.io/ignore: "false"
But none of that has helped get things rolling. Can anyone tell me what I have to do to get the equivalent of the simple Helm command at the top of this post working with Flux/GitOps?
Have you tried checking the logs on the fluxd and flux-helm-operator pods? I would start there to see what error message you're getting. One thing I'm seeing is that you're using HTTPS for git. You may want to double check, but I don't recall ever seeing documentation that configures chart pulls via git with anything other than SSH. Moreover, I'd recommend just pulling that chart from the stable Helm repository anyhow:
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: jenkins
  namespace: jenkins
  annotations: # not sure what updating-applications/ was?
    fluxcd.io/ignore: "false" # pretty sure this is false by default and can be omitted
spec:
  releaseName: jenkins
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: jenkins
    version: 1.9.16
  values:
    persistence:
      existingClaim: jenkins
    master:
      serviceType: LoadBalancer
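As a starting point for the log check suggested above, something like the following should surface the Helm operator's error messages (assuming Flux and the helm-operator were installed into the flux namespace with their default deployment names):
kubectl -n flux logs deploy/flux
kubectl -n flux logs deploy/flux-helm-operator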

Why does my kubernetes webhook only get called on create and not on update?

I have a working mutating admission hook for Kubernetes. It is called when I first deploy an app using Helm, but it is not called when I update using Helm. It will, in fact, call the hook if I change the version number for the deployment; if only the content changed, it skips calling the hook.
How can I make it always call the hook for any deployment?
Here is my hook config:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: appcfg-mutator
webhooks:
  - name: appcfg-mutator.devops.primerica.com
    clientConfig:
      service:
        name: appcfg-mutator
        namespace: appcfg-mutator
        path: "/"
      caBundle: {{ .Values.webhook.caBundle }}
    rules:
      - operations: ["*"]
        apiGroups: [""]
        apiVersions: ["v1","v1beta1","v1beta2"]
        resources: ["pod","deployments","namespaces","services"]
    failurePolicy: Fail
I log all requests as soon as they arrive, before deserializing the HTTP request body, so I can see that it is not getting called on each update; only on create, delete, or when the version field in the YAML is changed.
Turns out I had a typo in my mutator config: "pod" instead of "pods". Plus, I was misunderstanding and expecting to see "deployments" updates since I was actually changing a "Deployment" kind YAML; it's just "pods" that I needed.
Here is the correction:
resources: ["pods","deployments","namespaces","services"]

How do you get Jinja templates into spinnaker/echo for webhook processing?

I have Spinnaker 1.10.5 deployed to Azure Kubernetes Service using Halyard.
I am trying to get Azure Container Registry webhooks to trigger a pipeline. I found that you can set up echo to allow artifact webhooks using an echo-local.yml like this:
webhooks:
  artifacts:
    enabled: true
    sources:
      - source: azurecr
        templatePath: /path/to/azurecr.jinja
However, I'm stuck on the templatePath value. Since I'm deploying with Halyard into Kubernetes, all the configuration files get mounted as volumes from Kubernetes secrets.
How do I get my Jinja template into my Halyard-deployed echo so it can be used in a custom webhook?
As of Halyard 1.13 there will be the ability to custom-mount secrets in Kubernetes.
Create a Kubernetes secret with your Jinja template.
apiVersion: v1
kind: Secret
metadata:
  name: echo-webhook-templates
  namespace: spinnaker
type: Opaque
data:
  mytemplate: [base64-encoded-contents-of-template]
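Equivalently, the secret can be created directly from the template file, which takes care of the base64 encoding (the local file name here is hypothetical):
kubectl -n spinnaker create secret generic echo-webhook-templates \
  --from-file=mytemplate=azurecr.jinja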
Set the templatePath in the ~/.hal/default/profiles/echo-local.yml to the place you're mounting the secret.
webhooks:
  artifacts:
    enabled: true
    sources:
      - source: mysource
        templatePath: /mnt/webhook-templates/mytemplate
Add the mount to ~/.hal/default/service-settings/echo.yml
kubernetes:
  volumes:
    - id: echo-webhook-templates
      type: secret
      mountPath: /mnt/webhook-templates
Since Halyard 1.13 hasn't actually been released yet, I obviously haven't tried this, but this is how it should work. Also... I guess I may be stuck until then.