kubernetes config map data value externalisation

I'm installing fluent-bit in our k8s cluster. I have the helm chart for it on our repo, and argo is doing the deployment.
Among the resources in the Helm chart is a ConfigMap with data values as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit
  labels:
    app: fluent-bit
data:
  ...
  output-s3.conf: |
    [OUTPUT]
        Name s3
        Match *
        bucket bucket/prefix/random123/test
        region ap-southeast-2
  ...
My question is: how can I externalize the value for the bucket so it's not hardcoded (please note that the bucket value contains random numbers)? The S3 bucket is created by a separate app that runs on the same master node, and the randomly generated S3 bucket name is available as an environment variable, i.e. running "echo $s3bucketName" on the node gives the actual value.
I have tried the below on the config map, but it didn't work; the value just gets set as-is when inspected on the pod:
bucket $(echo $s3bucketName)
Using Helm, I know something like the below can be achieved, and the value can then be populated from the environment variable via scripting, e.g. helm --set. But the deployment happens automatically through ArgoCD, so there isn't really a place to run a helm --set command, or please let me know if otherwise.
bucket {{.Values.s3.bucket}}
TIA

Instead of using helm install, you can use helm template ... --set ... > out.yaml to locally render your chart into a YAML file. This file can then be processed by Argo.
Docs
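For example, a rough sketch of that approach (the chart path and the s3.bucket value name are assumptions about your chart, not taken from it):
# Render the chart locally, injecting the bucket name from the node's environment
helm template fluent-bit ./charts/fluent-bit \
  --set s3.bucket="$s3bucketName" > out.yaml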

With FluentBit you should be able to use environment variables such as:
output-s3.conf: |
  [OUTPUT]
      Name s3
      Match *
      bucket ${S3_BUCKET_NAME}
      region ap-southeast-2
You can then set the environment variable in your Helm values. Depending on the chart you are using and how values are passed, you may have to perform a different setup, but for example, using the official Fluent Bit chart with a values-prod.yml like:
env:
  - name: S3_BUCKET_NAME
    value: "bucket/prefix/random123/test"
Using ArgoCD, you probably have a Git repository where Helm values files are defined (like values-prod.yml) and/or an ArgoCD application defining values directly. For example, if you have an ArgoCD application defined such as:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # [...]
spec:
  source:
    # ...
    helm:
      # Helm values files for overriding values in the helm chart
      valueFiles:
        # You can update this file
        - values-prod.yaml
      # Helm values
      values: |
        # Or update values here
        env:
          - name: S3_BUCKET_NAME
            value: "bucket/prefix/random123/test"
  # ...
You should be able to either update values-prod.yml in the repository used by ArgoCD or update values: directly with your environment variable.
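Alternatively, the ArgoCD CLI can override individual Helm parameters without editing files; a hedged example (the application name and value path below are assumptions):
argocd app set my-fluent-bit-app --helm-set "env[0].value=bucket/prefix/random123/test"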

Related

How to set shared home for JIRA Data Center when it is installed using Helm chart?

I installed the JIRA Data Center Helm chart and had Kubernetes provision the shared home dynamically by setting the create flag to true, as shown in this link.
https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/values.yaml#L219
This created the AWS EFS access point as expected. Also, the pod shows the "JIRA_SHARED_HOME" environment variable for the Jira container. However, the JIRA software doesn't seem to honor the environment variable. The log (/var/atlassian/jira/logs/atlassian-jira.log) shows that jira.shared.home is set to the same value as jira.local.home, which is /var/atlassian/application-data/jira.
I expected the jira.shared.home to be set as /var/atlassian/application-data/shared-home as the below link would indicate.
https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/values.yaml#L249
I found a workaround based on the following documentation, which requires a cluster.properties file in the Jira local home.
https://developer.atlassian.com/server/jira/platform/configuring-a-jira-cluster/#jira-setup
The Helm chart allows creating config/properties files from Kubernetes ConfigMaps.
https://github.com/atlassian/data-center-helm-charts/blob/main/src/main/charts/jira/values.yaml#L765
Create a ConfigMap k8s object
apiVersion: v1
kind: ConfigMap
metadata:
  name: jira-configs
data:
  cluster.properties: |
    jira.shared.home=/var/atlassian/application-data/shared-home
Add the following to the JIRA data center helm chart values.yaml.
additionalFiles:
  - name: jira-configs
    type: configMap
    key: cluster.properties
    mountPath: /var/atlassian/application-data/jira
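To check that the file actually lands where Jira expects it, something like the following should work (the pod name is a placeholder):
kubectl exec -it <jira-pod-name> -- cat /var/atlassian/application-data/jira/cluster.properties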

How to reference a pod's shell env variable in the configmap data section

I have a configmap.yaml file as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: abc
  namespace: monitoring
  labels:
    app: abc
    version: 0.17.0
data:
  application.yml: |-
    myjava:
      security:
        enabled: true
    abc:
      server:
        access-log:
          enabled: ${myvar}  ## this is not working
"myvar" value is available in pod as shell environment variable from secretkeyref field in deployment file.
Now I want to replace myvar shell environment variable in configmap above i.e before application.yml file is available in pod it should have replaced myvar value. which is not working i tried ${myvar} and $(myvar) and "#{ENV['myvar']}"
Is that possible in kubernetes configmap to reference with in data section pod's environment variable if yes how or should i need to write a script to replace with sed -i application.yml etc.
Is it possible in a Kubernetes ConfigMap to reference a pod's environment variable within the data section?
That's not possible. A ConfigMap is not associated with a particular pod, so there's no way to perform the sort of variable substitution you're asking about. You would need to implement this logic inside your containers (fetch the ConfigMap, perform variable substitution yourself, then consume the data).
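As a minimal sketch of that container-side substitution, assuming the image ships envsubst (from the gettext package) and that the ConfigMap is mounted as a template file (the image, Secret, and paths below are hypothetical):
spec:
  containers:
    - name: abc
      image: my-app:latest                # hypothetical image that includes envsubst
      command: ["/bin/sh", "-c"]
      args:
        # render the template with the pod's env vars, then start the app
        - envsubst < /config-template/application.yml > /config/application.yml && exec /app/start
      env:
        - name: myvar
          valueFrom:
            secretKeyRef:
              name: my-secret             # hypothetical Secret holding myvar
              key: myvar
      volumeMounts:
        - name: config-template
          mountPath: /config-template
        - name: rendered-config
          mountPath: /config
  volumes:
    - name: config-template
      configMap:
        name: abc
    - name: rendered-config
      emptyDir: {}
The application then reads the rendered /config/application.yml instead of the ConfigMap mount.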

Using the same spec across different deployments in ArgoCD

I am currently using Kustomize. We have multiple deployments and services that share the same spec but have different names. Is it possible to store the spec in individual files and reference it across all the deployment files?
Helm is a good fit for this.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix and label modifiers in Kustomize, as sketched below.
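For reference, a minimal sketch of that Kustomize setup (the overlay layout and names below are hypothetical):
# kustomization.yaml in an overlay directory
resources:
  - ../../base                        # shared Deployment/Service specs
namePrefix: billing-                  # gives this variant its own resource names
commonLabels:
  app.kubernetes.io/instance: billing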
Use Helm: in ArgoCD, create a pipeline with a helm:3 container and create a helm-chart directory or repository. Pull the chart repository and deploy with Helm. Use values.yaml for the dynamic values you want to use. Also, you will need to add a kubeconfig file to your pipeline, but that is another issue.
This is the best suggestion I can give; for further information I would need to inspect ArgoCD.
I was faced with this problem and resolved it using Helm 3 charts:
A Chart.yaml file where I indicate my release name and version.
A values.yaml file where I define all the variables to use for a specific environment.
A values-test.yaml file to use, for example, in a test environment, where you should only put the variables that must change from one environment to another.
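As a rough illustration of how those files layer together (the release and chart names are hypothetical), later -f files override earlier ones:
helm upgrade --install my-release ./my-chart \
  -f values.yaml \
  -f values-test.yaml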
I hope that can help you to resolve your issue.
I would also suggest using Helm. However a restriction of Helm is that you cannot create dynamic values.yaml files (https://github.com/helm/helm/issues/6699) - this can be very annoying, especially for multi-environment setups. However, ArgoCD provides a very nice way to do this with its Application type.
The solution is to create a custom Helm chart for generating your ArgoCD applications (which can be called with different config for each environment). The templates in this helm chart will generate ArgoCD Application types. This type supports a source.helm.values field where you can dynamically set the values.yaml.
For example, the values.yaml for HashiCorp Vault can be highly complex and this is a scenario where a dynamic values.yaml per environment is highly desirable (as this prevents having multiple values.yaml files for each environment which are large but very similar).
If your custom ArgoCD helm chart is my-argocd-application-helm, then the following are an example values.yaml and the template which generates your Vault application:
values.yaml
server: 1.2.3.4 # Target kubernetes server for all applications
vault:
  name: vault-dev
  repoURL: https://git.acme.com/myapp/vault-helm.git
  targetRevision: master
  path: helm/vault-chart
  namespace: vault
  hostname: 5.6.7.8 # target server for Vault
...
templates/vault-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: {{ .Values.vault.name }}
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: 'vault'
    server: {{ .Values.server }}
  project: 'default'
  source:
    path: '{{ .Values.vault.path }}'
    repoURL: {{ .Values.vault.repoURL }}
    targetRevision: {{ .Values.vault.targetRevision }}
    helm:
      # Dynamically generate `values.yaml`
      values: |
        vault:
          server:
            ingress:
              activeService: true
              hosts:
                - host: {{ required "Please set 'vault.hostname'" .Values.vault.hostname | quote }}
                  paths:
                    - /
            ha:
              enabled: true
              config: |
                ui = true
        ...
These values will then override any base configuration residing in the values.yaml of the chart at {{ .Values.vault.repoURL }}, which can hold config that doesn't change between environments.

Standard way of keeping Dockerhub credentials in Kubernetes YAML resource

I am currently implementing a CI/CD pipeline using Docker, Kubernetes and Jenkins for my microservices deployment, and I am testing the pipeline using a public repository that I created on Dockerhub.com. When I tried the deployment using a Kubernetes Helm chart, I was able to add all my credentials in the values.yaml file, the default file for adding all configuration when creating a Helm chart.
Confusion
Now I have removed my Helm chart and I am only using plain deployment and service YAML files. So how can I add my Dockerhub credentials here?
Do I need to use an environment variable? Or do I need to create a separate YAML file for the credentials and reference it in the Deployment.yaml file?
If I am using the imagePullSecrets way, how can I create a separate YAML file for the credentials?
From the Kubernetes point of view (see Pull an Image from a Private Registry), you can create a Secret and add the necessary information to your YAML (Pod/Deployment).
Steps:
1. Create a Secret by providing credentials on the command line:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
2. Create a Pod that uses your Secret (example pod):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: regcred
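If you would rather keep the credentials as a separate YAML file instead of running the kubectl create secret command, an equivalent manifest looks roughly like this (the data value is your base64-encoded ~/.docker/config.json):
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>
You can also generate this YAML from the same kubectl create secret docker-registry command by adding --dry-run=client -o yaml.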
You can pass the Dockerhub creds as environment variables in Jenkins only, and imagePullSecrets should be created as per the Kubernetes docs; since they are one-time things, you can add them directly to the required clusters.

k8s - trigger new pod creation via config map update

I have a deployment for which the env variables for the pod are set via a config map.
envFrom:
  - configMapRef:
      name: map
My config map looks like this:
apiVersion: v1
data:
  HI: HELLO
  PASSWORD: PWD
  USERNAME: USER
kind: ConfigMap
metadata:
  name: map
All the pods have these env variables set from map. Now if I change the config map file and apply it with kubectl apply -f map.yaml, I get confirmation that map is configured. However, it does not trigger creation of new pods with the updated env variables.
Interestingly, this one works:
kubectl set env deploy/mydeploy PASSWORD=NEWPWD
But not this one:
kubectl set env deploy/mydeploy --from=cm/map
But I am looking for a way to get new pods created with the updated env variables via the config map!
Interestingly, this one works:
kubectl set env deploy/mydeploy PASSWORD=NEWPWD
But not this one:
kubectl set env deploy/mydeploy --from=cm/map
This is expected behavior. Your pod manifest hasn't changed with the second command (when you use the cm), which is why Kubernetes is not recreating the pods.
There are several ways to deal with that. Basically, what you can do is artificially change the Pod manifest every time the ConfigMap changes, e.g. by adding an annotation to the Pod with the sha256sum of the ConfigMap content. This is actually what Helm suggests you do. If you are using Helm it can be done as:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    [...]
From here: https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change
Just make sure you add the annotation to the Pod (template) object, not the Deployment itself.
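If you are not using Helm, a rough equivalent of the same idea (assuming your ConfigMap is named map and your deployment mydeploy) is to hash the ConfigMap yourself and patch the result into the pod template annotations, which changes the template and triggers a rollout:
CONFIG_SHA=$(kubectl get configmap map -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deployment mydeploy -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$CONFIG_SHA\"}}}}}"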
The simple answer is NO.
In case you are not using Helm and are looking for a hack: after updating the ConfigMap, just use a dummy env variable and keep updating its value to trigger a rolling update.
kubectl set env deploy/mydeploy DUMMY_ENV_FOR_ROLLING_UPDATE=dummyval