Generate dynamic secret name inside a Kubernetes deployment file

I have two secrets, one for production and one for staging. I want to dynamically load the right secret in the deployment file using an environment variable that is set in the same file:
env:
  - name: NODE_ENV
    value: "production"
  - name: general-secret
    secret:
      secretName: general-production-secret
I want to load environment-specific secrets, something like:
secretName: general-{{env.NODE_ENV}}-secret
Is this possible?

As far as I know, this is not possible with plain manifests. If you have a Helm chart for your application, you can make it work.
Here is a Helm-based solution close to what you need:
Dynamically accessing values depending on variable values in a Helm chart
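As a rough illustration only, here is a minimal Helm sketch of the idea; the value name environment and the secret naming pattern are assumptions, not taken from the linked answer:
values.yaml
environment: production
templates/deployment.yaml (excerpt)
env:
  - name: NODE_ENV
    value: {{ .Values.environment | quote }}
envFrom:
  - secretRef:
      name: general-{{ .Values.environment }}-secret
Rendering with --set environment=staging would then point the workload at general-staging-secret.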

Same secret for multiple deployments in Helm chart

I am using supabase-community/supabase-kubernetes to deploy Supabase in Kubernetes.
For the Studio, Storage, Kong, Realtime, Rest and Auth services, you need to define at least the JWT secret, and in some cases the anon or service key.
However, I have two problems with this kind of configuration:
You need to configure the same secret information multiple times in values.yaml
The secrets won't be stored in a K8s Secret
To improve these two aspects, I propose configuring those values in a dedicated section, e.g.:
jwtSecrets:
  anonKey: "JWT_ANON_KEY"
  serviceKey: "JWT_SERVICE_KEY"
  key: "YOUR_SUPER_SECRET_JWT_TOKEN_WITH_AT_LEAST_32_CHARACTERS_LONG"
When rendered with the templates, a "global" secret gets created and every service (Studio, Storage, Kong, etc.) references this secret in its configuration:
env:
  ...
  - name: SUPABASE_ANON_KEY
    valueFrom:
      secretKeyRef:
        name: my-jwt-secret
        key: anonKey
However, I am unsure whether having such global configuration sections is best practice for Helm charts. Besides, I would like to know where to define the creation of this global secret: in _helpers.tpl?
Any help is appreciated! :)
As pointed out by @David Maze, there is no single best practice regarding one secret for multiple deployments in the values.yaml of a Helm chart.
For convenience, the secret name can be referenced in values.yaml like this:
jwtSecretName: my-secret
The secret itself must be created by the user beforehand:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  jwtSecret: YWRtaW4=
  serviceKey: MWYyZDFlMmU2N2Rm
  anonKey: MWYyZDFlMmU2N2Rm
This allows the secret data to be stored according to Kubernetes best practices and simplifies the configuration of the Helm chart.
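To sketch how a chart template could consume that value (illustrative only, assuming the key names from the secret above rather than the actual supabase-kubernetes templates):
templates/deployment.yaml (excerpt)
env:
  - name: SUPABASE_ANON_KEY
    valueFrom:
      secretKeyRef:
        name: {{ .Values.jwtSecretName }}
        key: anonKey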

Substitute env variable in PATH in ConfigMap in Kubernetes

In Kubernetes, I have the following section in deployment.yaml. I am using a ConfigMap and I want to set the path dynamically based on pod metadata, a label, or an environment variable in the pod. Does ConfigMap support setting the path dynamically?
spec:
  volumes:
    - name: configmap
      configMap:
        name: devconfig
        items:
          - key: config
            path: $(ENVIRONMENT)
        defaultMode: 420
This is called substitution, which kubectl does not support out of the box. However, you can easily achieve what you want with the envsubst command, which will substitute $ENVIRONMENT in your YAML with the environment variable set in your current shell.
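A minimal sketch of that workflow, assuming the manifest above is saved as deployment.yaml; note that envsubst replaces shell-style references such as ${ENVIRONMENT}, so the $(ENVIRONMENT) placeholder in the manifest would need to be written in that form first:
export ENVIRONMENT=staging
envsubst < deployment.yaml | kubectl apply -f -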
As an alternative to envsubst, which was answered absolutely correctly by @gohm's, you may want to try a Job that checks your ConfigMap and passes the proper values to your path.
Take a look: Kubernetes: use environment variable/ConfigMap in PersistentVolume host path

Using the same spec across different deployments in ArgoCD

I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and refer to them across all the deployment files?
Helm is a good fit for this.
However, since we were already using Kustomize and a migration to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize, as sketched below.
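A minimal sketch of that Kustomize setup, assuming a shared base/ directory holding the common Deployment and Service specs (all names here are illustrative):
overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namePrefix: staging-
commonLabels:
  app.kubernetes.io/instance: staging
Each overlay reuses the same base resources and only changes the prefix and labels.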
Use Helm. In ArgoCD, create a pipeline with a helm:3 container and create a helm-chart directory or repository. Pull the chart repository and deploy with Helm. Use values.yaml for the dynamic values you want to use. You will also need to add a kubeconfig file to your pipeline, but that is another issue.
This is the best suggestion I can give; for more detail I would need to inspect ArgoCD.
I was faced with this problem and I resolved it using Helm 3 charts:
A Chart.yaml file where I indicate my release name and version.
A values.yaml file where I define all the variables to use for a specific environment.
A values-test.yaml file to use, for example, in a test environment, containing only the variables that must change from one environment to another.
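As a rough usage sketch of that layout (the chart path and release name are placeholders), the environment-specific file is layered on top of the defaults at install time, with later -f files overriding earlier ones:
helm upgrade --install my-release ./my-chart -f values.yaml -f values-test.yaml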
I hope that helps you resolve your issue.
I would also suggest using Helm. However, a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699), which can be very annoying, especially for multi-environment setups. Fortunately, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD Helm chart is my-argocd-application-helm, then the following are an example values.yaml and the template which generates your Vault application:
values.yaml
server: 1.2.3.4 # Target Kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
                ...
These values will then override any base configuration residing in the values.yaml of the chart at {{ .Values.vault.repoURL }}, which can hold the config that doesn't change between environments.
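A sketch of how this parent chart could then be installed once per environment (the per-environment values file name is an assumption):
helm upgrade --install argocd-apps ./my-argocd-application-helm -n argocd -f values-dev.yaml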

Is there a way in a Helm chart to take secrets stored as text strings and write them out to a file while installing the chart?

I am working on my first project using AWS EKS. My company has their own proprietary workflow for deploying apps to EKS which includes the requirement to use Helm.
I need to take Terraform and K8s YAML code provided by a vendor to stand up their app and convert it to fit my company's proprietary standards. One of those standards requires that secrets not be stored with the code. I have to store secrets as text strings in a specific secrets.yaml file, which is kept in a secure location and brought into the chart on the fly while the chart is being installed.
That's the background, now here's the question...
The vendor-provided app is designed to ingest credentials in the form of a text file. This text file is expected to live alongside the code, which I can't do.
Is there a way to embed a script or something within my Helm chart that can take these credentials, which I have stored in the secrets file as a text string, and output them to a temporary text file while the chart is being installed, so that the file the app needs exists?
I assume the secrets.yaml is added as a Kubernetes Secret in your environment?
What you can do is mount that secret as a file into your app:
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-example
spec:
  containers:
    - name: provider-app
      image: provider-app
      volumeMounts:
        - name: foo
          mountPath: "/path/to/provider/expected/location"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
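If the chart itself should create that Secret from the text string held in secrets.yaml, a minimal template sketch could look like the following; the value name credentialsFile and the key credentials.txt are assumptions, and the secret volume above would then surface the key as a file under the mount path:
templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  credentials.txt: {{ .Values.credentialsFile | quote }}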

Standard way of keeping Dockerhub credentials in Kubernetes YAML resource

I am currently implementing a CI/CD pipeline using Docker, Kubernetes and Jenkins for my microservices deployment, and I am testing the pipeline using a public repository that I created on Docker Hub. When I tried the deployment using a Kubernetes Helm chart, I was able to add all my credentials in the values.yaml file, the default file for configuration when creating a Helm chart.
Confusion
Now I have removed my Helm chart and I am only using plain deployment and service YAML files. So how can I add my Docker Hub credentials here?
Do I need to use an environment variable? Or do I need to create a separate YAML file for the credentials and reference it in deployment.yaml?
If I use the imagePullSecrets way, how can I create a separate YAML file for the credentials?
From the Kubernetes point of view (see Pull an Image from a Private Registry), you can create a secret and add the necessary information to your YAML (Pod/Deployment).
Steps:
1. Create a Secret by providing credentials on the command line:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
2. Create a Pod that uses your Secret (example pod):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
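Since the question asks about plain deployment files, the same reference goes under the pod template of a Deployment; a minimal sketch with placeholder names:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: myapp
          image: <your-private-image>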
You can pass the Docker Hub credentials as environment variables in Jenkins only, and create the imagePullSecrets as per the Kubernetes docs; since they are one-time things, you can add them directly to the required clusters.