How to create a Kubernetes secret using Spinnaker and HashiCorp Vault

We have a whole bunch of secrets on our HashiCorp Vault server. We have started testing out Spinnaker for deploying on Kubernetes, but I do not see any documentation about how to create a secret on Kubernetes by reading from HashiCorp Vault.
Can someone point me in the right direction for this? Is it even advisable to create secrets using Spinnaker, or should we just use it strictly for deployments?

The problem with creating a secret via Spinnaker is: where do you keep the content of the secret in the first place so that you can create the secret from it? Wherever you keep it, it introduces a risk of compromise. So I would suggest creating the secret dynamically at runtime using a sidecar injector.
The HashiCorp Vault Agent Sidecar Injector is a tool that can be used for this purpose. The injector is a Kubernetes mutating webhook controller: it intercepts pod events and applies mutations to the pod if the relevant annotations exist on the request.
Since the secret gets injected directly into the pod as volume mounts from the Vault server, the chance of compromise is lower compared to creating a secret via Spinnaker.
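For reference, a minimal sketch of what the injector looks like in practice, using the annotation keys from the Vault Agent Injector documentation; the role name, secret path, service account, and image below are placeholder assumptions:

```yaml
# Pod template annotated for the Vault Agent Injector (sketch).
# Assumes the injector is installed, a Kubernetes auth role named "myapp"
# exists in Vault, and a secret is stored at secret/data/myapp/config.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        vault.hashicorp.com/agent-inject: "true"        # ask the webhook to add the sidecar
        vault.hashicorp.com/role: "myapp"                # Vault Kubernetes auth role (assumed name)
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/myapp/config"
    spec:
      serviceAccountName: myapp                          # service account bound to the Vault role (assumed)
      containers:
        - name: myapp
          image: myapp:1.0.0                             # placeholder image
```

The sidecar renders the secret into the pod at /vault/secrets/config, so the application reads it from the filesystem and the secret material never passes through Spinnaker.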

Related

How to manage Kubernetes secrets?

Can anyone suggest how I should manage my Kubernetes secrets? Until now I have used kubectl or Helm to apply secrets from my local system, but I guess this is not the right way to do it. I have also referred to a few docs about managing Kubernetes secrets; some of them are mentioned below:
I found HashiCorp Vault, which is also used to manage secrets: https://www.vaultproject.io/use-cases/kubernetes
AWS Secrets Manager: https://aws.amazon.com/secrets-manager/
But I am still looking for other available options for managing secrets. Please suggest the best and most secure way to store and manage secrets in Kubernetes. For your information, I am using an AWS EKS cluster, so please help me out.
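For context, the "apply it from my local system" approach the question describes looks roughly like the manifest below; the values are only base64-encoded, not encrypted, and the file has to live somewhere on disk or in Git, which is exactly the concern (the name and values here are placeholders):

```yaml
# A plain Kubernetes Secret applied by hand, e.g. kubectl apply -f db-credentials.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=    # "admin", base64-encoded (not encrypted)
  password: czNjcjN0    # "s3cr3t", base64-encoded (not encrypted)
```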

How to configure an AKS cluster to use secrets from external Vault installed on different AKS Cluster

I have two kubernetes clusters running on Azure AKS.
One cluster, named APP-Cluster, which hosts the application pods.
One cluster, named Vault-Cluster, on which HashiCorp Vault is installed.
I have installed HashiCorp Vault with Consul in HA mode according to the official document below. The installation was successful.
https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
But I am quite lost on how to connect to the Vault cluster and retrieve its secrets from the other cluster. I would like to use Vault's sidecar injection method for my app cluster to communicate with the Vault cluster. I tried to follow the steps in the official document below, but the document uses minikube instead of a public cloud Kubernetes service. How do I define the "EXTERNAL_VAULT_ADDR" variable for AKS, as described in the document for minikube? Is it the API server DNS address, which I can get from the Azure portal?
https://learn.hashicorp.com/tutorials/vault/kubernetes-external-vault?in=vault/kubernetes
You interact with Vault via its HTTP(S) API. That means you need to expose the vault service running in your Vault-Cluster using one of the usual methods.
As an example you could:
use a Service of type LoadBalancer (this works because you are running Kubernetes on a cloud provider that supports this feature);
install an ingress controller, expose it (again with a load balancer), and define an Ingress resource for your vault service;
use a NodePort service.
The EXTERNAL_VAULT_ADDR value depends on which strategy you want to use.
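As a sketch of the first option, assuming the default Helm installation (a vault service in the vault namespace listening on port 8200; the selector label below follows the chart's usual defaults and may differ in your setup):

```yaml
# LoadBalancer Service exposing the Vault server outside the Vault-Cluster (sketch).
apiVersion: v1
kind: Service
metadata:
  name: vault-external
  namespace: vault
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vault    # label used by the Vault Helm chart (assumed)
  ports:
    - name: http
      port: 8200
      targetPort: 8200
```

EXTERNAL_VAULT_ADDR would then be http://<external-ip>:8200, where <external-ip> is whatever `kubectl get svc vault-external -n vault` reports once Azure has provisioned the load balancer (use HTTPS and proper TLS for anything beyond a test setup).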

Vault deployment in a live cluster using Terraform

I want to deploy Vault into a cluster that contains microservices; my Vault should not have external access, and everything should be done using Terraform. Does anyone know how to do this?
Please read the Terraform Getting Started guide on how to write Terraform code.
You will need to use the Google provider to deploy your resources. On that page you can view resources such as Kubernetes, Vault, and many others.

Dynamic token generation before deployment in Kubernetes

I am fairly new to Kubernetes and learning Kubernetes deployments from scratch. In a microservice-based project that I am working on, each microservice has to authenticate to the auth server with its own client-id and client-secret before requesting any information (JWT). These IDs and secrets are required by each service and need to be in its environment variables. Initially the auth service will generate those IDs and secrets via database seeds. What is the best way, in the world of Kubernetes, to automatically set these values in the environment of a pod deployment before pod creation?
It depends on how automatic you want it to be. A simple approach would be an initContainer that provisions a new token and puts it in a shared volume file, plus an entrypoint script in the main container that reads the file and sets the env var.
The problem with that is that authenticating the initContainer is hard. The big-hammer solution would be to write a custom operator to manage this, but if you're new to Kubernetes that's going to be super hard and probably overkill anyway.
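A rough sketch of that initContainer approach, assuming a plain HTTP endpoint on the auth service that hands out the client credentials; the endpoint, JSON field names, and images are all hypothetical placeholders:

```yaml
# Pod that fetches credentials in an initContainer and exports them as env vars (sketch).
apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  volumes:
    - name: creds
      emptyDir: {}                       # shared scratch volume, lives only as long as the pod
  initContainers:
    - name: fetch-credentials
      image: curlimages/curl:8.5.0
      command: ["sh", "-c"]
      args:
        - curl -sf http://auth-service/internal/credentials -o /creds/client.json  # hypothetical endpoint
      volumeMounts:
        - name: creds
          mountPath: /creds
  containers:
    - name: orders
      image: example/orders:latest       # hypothetical image; must contain sh and jq
      command: ["sh", "-c"]
      args:
        - |
          export CLIENT_ID=$(jq -r .client_id /creds/client.json)
          export CLIENT_SECRET=$(jq -r .client_secret /creds/client.json)
          exec ./orders-service
      volumeMounts:
        - name: creds
          mountPath: /creds
```

The hard part, as noted above, is how the initContainer proves to the auth service that it is allowed to ask for those credentials in the first place.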

App in its own namespace with a service account available in any namespace

I have a very specific scenario I'm trying to solve for:
Using Kubernetes (single cluster)
Installing Vault on that cluster
Sending GitLab containers to the same cluster.
I need to install Vault in such a way that:
Vault lives in its own namespace (easy/solved)
Vault's service account (vault-auth) is available to all other namespaces (unsolved)
GitLab's default behavior is to put all apps/services into their own namespaces named with the project ID, e.g. repo_name+project_id. It's predictable, but the two options are:
When the app is in its own namespace it cannot access the Vault service account in the 'vault' namespace. This requires you to create a Vault service account in each application namespace; hot garbage, or...
Put ALL apps + Vault in the default namespace and applications can easily find the 'vault-auth' service account. Messy but totally works.
To use GitLab the way it is intended (and I don't disagree) is to leave each app in its own namespace. The question then becomes:
How would one create the Kubernetes service account for Vault (vault-auth) so that Vault the application is in its own namespace but the service account itself is available to ALL namespaces?
Then, no matter which namespace GitLab creates, the containers have equal access to the 'vault-auth' service account.
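One common arrangement, borrowed from the standard Vault Kubernetes auth setup (a sketch, not a GitLab-specific answer): keep the vault-auth ServiceAccount in the vault namespace and bind it with a ClusterRoleBinding, which is cluster-scoped by definition. Vault then uses that single account to validate service account tokens coming from any application namespace, so nothing has to be copied into the per-project namespaces GitLab creates.

```yaml
# vault-auth stays in the vault namespace...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth
  namespace: vault
---
# ...but the binding is cluster-scoped, so token review works for every namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-tokenreview
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator    # built-in role allowing TokenReview / SubjectAccessReview
subjects:
  - kind: ServiceAccount
    name: vault-auth
    namespace: vault
```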