How can we store Kubernetes secrets in GitHub Secrets?

Hi, I'm working on a task to decide which one we should implement for Kubernetes secrets: Vault or GitHub Secrets.
I'm still very new to Kubernetes, so I need help with this. Can anyone point me to some references that explain how we can store secrets and credentials in GitHub Secrets and then use those credentials in Kubernetes as Secrets?
We are running:
on-prem Kubernetes
GitHub Enterprise
I have configured secrets through GitHub and am trying to use them in Kubernetes, but I have no idea how to do that; I'm just drawing a blank here.

You can use Sealed Secrets to manage your k8s secrets in GitHub.
Sealed Secrets is composed of two parts:
A cluster-side controller / operator
A client-side utility: kubeseal
The kubeseal utility uses asymmetric crypto to encrypt secrets that only the controller can decrypt.
To learn more, head over to the sealed-secrets GitHub repo.
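As a rough sketch of the workflow (the secret names and values are placeholders, and this assumes the controller is already installed in the cluster):

# Build a plain Secret manifest locally; never commit this file
$ kubectl create secret generic my-creds \
    --from-literal=password=s3cr3t \
    --dry-run=client -o yaml > secret.yaml

# Encrypt it against the controller's public key; the output is safe to commit
$ kubeseal --format yaml < secret.yaml > sealed-secret.yaml

# Apply the SealedSecret; the controller decrypts it into a regular Secret
$ kubectl apply -f sealed-secret.yaml

Only sealed-secret.yaml goes into your GitHub repo; the plaintext secret.yaml never leaves your machine.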

Related

Enabling a Secrets Engine in Hashicorp Vault upon installation via Helm chart

I installed a HashiCorp Vault server via Helm with my custom values.yaml file (I used this as a reference: https://developer.hashicorp.com/vault/docs/platform/k8s/helm/configuration).
I know I can enable different secrets engines after I initialize and unseal Vault (via the UI, CLI or API).
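For example, after unsealing, an engine can be enabled from the CLI like this (the path and engine type here are just illustrations):

$ vault secrets enable -path=kv kv-v2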
However, I am wondering whether it is possible to enable secrets engines via values.yaml before initializing and unsealing Vault, i.e., when I open the UI after initializing and unsealing Vault, I would like to see these engines already enabled and on the list of secrets engines (without enabling them manually).
I searched online for a way to do this but my efforts were in vain. I would really appreciate any answer on this subject.
Thanks in advance!

How to write secrets to HashiCorp Vault or Azure Key Vault from Kubernetes?

I have come across injectors/drivers/et cetera for Kubernetes for most major secret providers, but the common theme with those solutions is that they only sync one way, i.e., only from the vault to the cluster. I want to be able to update the secrets too, from my Kubernetes cluster.
What is the recommended pattern for doing this? (Apart from the obvious solution of writing a custom service that communicates with the vault)
I'd say that this is an anti-pattern, meaning you shouldn't do that.
If you create your secret in k8s from a file, that means you either have it in version control, which you should never do, or you don't have it in version control and create it from a literal, which is good, but then you have neither a change history/log nor real documentation of your secret. I guess that explains why the major secret providers don't support it.
You should set up the secret using the key vault and apply it to your cluster using Terraform, for example.
Terraform supports both Azure Key Vault secrets (https://www.terraform.io/docs/providers/azurerm/r/key_vault_secret.html) and Kubernetes secrets (https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
You can simply read the key vault secret and use it in the k8s secret. Every time you update the key vault secret, you apply the changes with Terraform.
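A minimal sketch of that pattern in HCL (all names, and the azurerm_key_vault.example reference, are placeholders):

# Read an existing secret out of Azure Key Vault
data "azurerm_key_vault_secret" "db_password" {
  name         = "db-password"
  key_vault_id = azurerm_key_vault.example.id
}

# Mirror it into a Kubernetes Secret (the provider base64-encodes data values)
resource "kubernetes_secret" "db_password" {
  metadata {
    name = "db-password"
  }
  data = {
    password = data.azurerm_key_vault_secret.db_password.value
  }
}

After changing the value in Key Vault, running terraform apply re-reads the data source and updates the Kubernetes Secret.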

Best practice for shared K8s Secrets in Helm 3?

I have a couple of Charts that all need access to the same Kubernetes Secret. My initial plan was to create a Chart just for those Secrets, but it seems Helm doesn't like that. I am thinking this must be a common problem and am wondering what folks generally do to solve it?
Thanks!
Best practice is: don't store any sensitive secrets in Kubernetes clusters. A Kubernetes Secret is only base64-encoded, not encrypted.
You can reference the secret via AWS SSM/Secrets Manager, HashiCorp Vault, or similar services.
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/04-path-security-and-networking/401-configmaps-and-secrets
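To see why encoding is not encryption, note that anyone who can read the Secret can decode it (the secret and key names here are placeholders):

$ kubectl get secret my-secret -o jsonpath='{.data.password}' | base64 -d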
Most charts that follow the common chart development practices allow you to use an existing secret instead of having the chart create one for you. This way, you can create your common secrets normally (without Helm) and refer to them from the charts that need them, via a config key like existingSecret.
Take the minio helm chart, for example: it accepts an existingSecret key as an alternative to passing an accessKey and a secretKey.
As you can see in the main charts repo, this is a pretty common practice.
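A sketch of that flow with minio (the secret name and chart reference are illustrative, and the key names the chart expects inside the secret depend on the chart version, so check its README):

# Create the shared secret once, outside of Helm
$ kubectl create secret generic shared-minio-creds \
    --from-literal=accesskey=myaccesskey \
    --from-literal=secretkey=mysecretkey

# Point the chart at the existing secret instead of letting it create one
$ helm install my-minio stable/minio --set existingSecret=shared-minio-creds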

Airflow KubernetesPodOperator pull image from private repository

How can Apache Airflow's KubernetesPodOperator pull docker images from a private repository?
The KubernetesPodOperator has an image_pull_secrets parameter to which you can pass a Secrets object to authenticate with the private repository. But the Secrets object can only represent an environment variable or a volume, neither of which fits my understanding of how Kubernetes uses secrets to authenticate with private repos.
Using kubectl you can create the required secret with something like
$ kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
But how can you create the authentication secret in Airflow?
There is a Secret object with the docker-registry type, according to the Kubernetes documentation, which can be used to authenticate to a private repository.
As you mentioned in your question, you can use kubectl to create a secret of the docker-registry type, which you can then pass to the operator with image_pull_secrets, as sketched below.
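For illustration, in an Airflow 1.10-era DAG this might look roughly like the following (the image and secret names are placeholders; in those versions image_pull_secrets is a comma-separated string of secret names, while newer provider releases take a list of V1LocalObjectReference objects instead):

from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator

pull_private = KubernetesPodOperator(
    task_id="pull-private-image",
    name="pull-private-image",
    namespace="default",
    image="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    # Name of the docker-registry Secret created with kubectl above
    image_pull_secrets="my-registry-secret",
)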
However, depending on the platform you are using, this might be of limited or no use at all, according to the Kubernetes documentation:
Configuring Nodes to Authenticate to a Private Registry
Note: If you are running on Google Kubernetes Engine, there will already be a .dockercfg on each node with credentials for Google Container Registry. You cannot use this approach.
Note: If you are running on AWS EC2 and are using the EC2 Container Registry (ECR), the kubelet on each node will manage and update the ECR login credentials. You cannot use this approach.
Note: This approach is suitable if you can control node configuration. It will not work reliably on GCE, and any other cloud provider that does automatic node replacement.
Note: Kubernetes as of now only supports the auths and HttpHeaders section of docker config. This means credential helpers (credHelpers or credsStore) are not supported.
Making this work on the mentioned platforms is possible, but it would require automated scripts and third-party tools.
In the Amazon ECR example, the Amazon ECR Docker Credential Helper would be needed to periodically pull AWS credentials into the docker registry configuration, and then another script would be needed to update the Kubernetes docker-registry secrets.
As for Airflow itself, I don't think it has functionality to create its own docker-registry secrets.
You can request functionality like that in the Apache Airflow JIRA.
P.S.
If you still have issues with your K8s cluster, you might want to create a new question on Stack Overflow addressing them.

How to use Secrets and ConfigMaps from Spinnaker

I'm deploying my application to Google Kubernetes Engine, but I need to use Secrets/ConfigMaps in the configuration of ADD SERVER GROUP in Spinnaker.
How can I do that?
Should I keep the keys or the secret data on the Spinnaker machine and give the path to that directory, or should I give the GitHub URL?