How to inject secrets from Vault into Kubernetes pods

All,
I have all my secrets stored in Vault. How can I fetch secrets from Vault and inject them into pods?
Do I have to use a sidecar for this, or is there an easier way?

There is a great project on GitHub, Vault-CRD, written in Java: https://github.com/DaspawnW/vault-crd
Vault-CRD shares Vault secrets with Kubernetes: it injects and syncs values from Vault into Kubernetes Secrets. You can then use these Secrets as environment variables inside a pod.
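A Vault-CRD resource looks roughly like the sketch below; the secret path and names here are illustrative, so check the project's README for the exact schema:

```yaml
# Hypothetical Vault-CRD custom resource: asks the vault-crd controller to
# mirror the Vault path "secret/myapp" into a Kubernetes Secret named "myapp".
apiVersion: "koudingspawn.de/v1"
kind: Vault
metadata:
  name: myapp
spec:
  path: "secret/myapp"   # Vault path to sync (placeholder)
  type: "KEYVALUE"       # KV secrets engine; see the README for other types
```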

The sidecar pattern is common in Kubernetes applications and can be applied to accessing secrets from Vault.
There is a great step-by-step walkthrough, hands-on-with-vault-on-kubernetes, on GitHub. It answers all the basic questions about how to do this, with examples.
One more for your reference: Injecting Vault Secrets Into Kubernetes Pods via a Sidecar
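With the sidecar approach, HashiCorp's Vault Agent Injector is driven by pod annotations. A minimal sketch, assuming a Vault Kubernetes-auth role named "myapp" and a KV-v2 secret at secret/data/myapp/config (both placeholders):

```yaml
# Deployment pod-template annotations for the Vault Agent Injector.
# The injector webhook adds a Vault Agent sidecar that renders the secret
# to a file under /vault/secrets/ inside the pod.
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "myapp"                                           # placeholder role
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp/config"  # placeholder path
```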

Related

Inject a Vault secret into a K8s ConfigMap

I have deployed Vault in K8s. I would like to know how to inject a Vault secret into the ConfigMap of an application, which holds all of the application's configuration.
It's not possible; you cannot mount a Vault secret into a ConfigMap. But you can inject a ConfigMap and a Vault secret into a single deployment in parallel.
If you mount the ConfigMap as a file, you can mount the Vault secret as a file into the same directory or a different one.
If you inject the ConfigMap as environment variables, you can do the same with the Vault secret.
In that case I would suggest checking out https://github.com/DaspawnW/vault-crd
vault-crd syncs a Vault secret to a Kubernetes Secret, and you can easily inject a Kubernetes Secret into a deployment, although this is not ideal from a security perspective.
There are many different methods for injecting a Vault secret into a deployment.
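A minimal sketch of injecting both side by side, with placeholder names (app-config for the ConfigMap, vault-synced-secret for the synced Secret):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx                      # placeholder image
      envFrom:
        - secretRef:
            name: vault-synced-secret   # Secret synced from Vault, injected as env vars
      volumeMounts:
        - name: config
          mountPath: /etc/app           # ConfigMap mounted as files
  volumes:
    - name: config
      configMap:
        name: app-config
```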

Update a Kubernetes Secret programmatically

Is there a way to programmatically update a Kubernetes Secret from a pod, that is, without using kubectl?
I have a Secret mounted in a pod and also exposed via an environment variable. I would like to modify it from my service, but it appears to be read-only by default.
You can use the Kubernetes REST API with the pod's service account token as credentials (found at /var/run/secrets/kubernetes.io/serviceaccount/token inside the pod); you just need to allow the service account to edit Secrets in the namespace via a Role.
See Secret for the API docs.
The API server is reachable from inside the cluster at https://kubernetes.default
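A sketch of that PATCH call from inside a pod, assuming a Secret named my-secret in the default namespace and a Role that allows patching Secrets (these names are placeholders; Secret data values must be base64-encoded):

```shell
# Build a strategic-merge-patch body; Secret data values must be base64-encoded.
NEW_VALUE=$(printf '%s' 's3cr3t' | base64)
PATCH="{\"data\":{\"password\":\"$NEW_VALUE\"}}"
echo "$PATCH"

# From inside the pod (cluster-only; requires RBAC to patch Secrets):
# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
#      -H "Authorization: Bearer $TOKEN" \
#      -H "Content-Type: application/strategic-merge-patch+json" \
#      -X PATCH -d "$PATCH" \
#      https://kubernetes.default/api/v1/namespaces/default/secrets/my-secret
```

Note that an environment variable sourced from the Secret will not pick up the change until the pod restarts; file mounts of the Secret are eventually refreshed by the kubelet.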

Can we use Vault with Kubernetes without a sidecar or init container?

Our current model is to use init containers to fetch secrets from Vault. But when the application crashes due to OOM issues, the pod goes into a CrashLoopBackOff state. Also, we don't want to overload the pod with a sidecar container. Is there any other way to use Vault with Kubernetes?
Yes, you can use Vault without a sidecar container.
You can create a path in Vault and store key-value pairs under it.
Use the KV version 1 or KV version 2 secrets engine as required.
To sync Vault values to a Kubernetes Secret you can use:
https://github.com/DaspawnW/vault-crd
Vault-CRD is a custom resource that syncs your Vault values to a Kubernetes Secret at the interval you define.
Each time a value is updated in Vault, it is synced back to the Secret, and you can inject that Secret into a Deployment or StatefulSet as needed.
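Once Vault-CRD has produced a Kubernetes Secret, injecting it is ordinary Kubernetes; a sketch with placeholder names (vault-synced-secret, db-password):

```yaml
# Container spec fragment of a Deployment/StatefulSet consuming the synced Secret.
containers:
  - name: app
    image: myorg/app:latest             # placeholder
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: vault-synced-secret   # Secret kept up to date by vault-crd
            key: db-password
```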

Kubernetes, deploy from within a pod

We have an AWS EKS Kubernetes cluster with two-factor authentication for all kubectl commands.
Is there a way to deploy an app into this cluster using a pod running inside the cluster?
Can I deploy using Helm charts, or by specifying a service account instead of a kubeconfig file?
Can I specify a service account (using the one assigned to the pod) for all kubectl actions?
All of this is meant to bypass two-factor authentication for continuous deployment via Jenkins, by deploying a Jenkins agent into the cluster and using it for deployments. Thanks.
You can use a supported Kubernetes client library, kubectl, or curl directly to call the REST API exposed by the Kubernetes API server from within a pod.
You can use Helm as well, as long as you install it in the pod.
When you call the Kubernetes API from within a pod, the pod's service account is used by default. The service account mounted in the pod needs a Role and a RoleBinding associated with it to be able to call the Kubernetes API.
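For example, a Role and RoleBinding letting the Jenkins agent's service account manage Deployments in one namespace could look like this (all names here are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: ci
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: ci
subjects:
  - kind: ServiceAccount
    name: jenkins-agent     # the service account assigned to the deploying pod
    namespace: ci
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```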

Kubeconfig for deploying to all namespaces in a k8s cluster

I am looking for instructions on how to generate a kubeconfig file that can deploy and delete my k8s deployment in all namespaces, and that also has permissions to create, delete, and view Secrets in all namespaces.
The use case for this kubeconfig is to use it in Jenkins for performing deployments to a Kubernetes cluster.
I am aware of k8s service accounts with Roles and RoleBindings; however, it appears they can be scoped only to specific namespace(s).
Thanks
You should create a ClusterRole and a ClusterRoleBinding to grant access at the cluster level. Then, using a service account that has cluster-level access, you should be able to perform these operations across all namespaces.
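A sketch of such a ClusterRole and ClusterRoleBinding, matching the permissions asked about (deployments plus Secrets); the service account name and namespace are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-deployer-binding
subjects:
  - kind: ServiceAccount
    name: jenkins      # placeholder service account for the Jenkins agent
    namespace: ci
roleRef:
  kind: ClusterRole
  name: jenkins-deployer
  apiGroup: rbac.authorization.k8s.io
```

A kubeconfig can then be generated from that service account's token and the cluster's CA certificate.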