using generateName for yaml file to be installed by helm - kubernetes

I have an upload.yaml file, packaged with Helm, which uploads a script to Mongo.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: upload-strategy-to-mongo-v2
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      name: upload-strategy-to-mongo
    spec:
      volumes:
      - name: upload-strategy-to-mongo-scripts-volume
        configMap:
          name: upload-strategy-to-mongo-scripts-v3
      containers:
      - name: upload-strategy-to-mongo
        image: mongo
        env:
          - name: MONGODB_URI
            value: ####
          - name: MONGODB_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongodb-user
                key: ####
          - name: MONGODB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb-user
                key: #####
        volumeMounts:
        - mountPath: /scripts
          name: upload-strategy-to-mongo-scripts-volume
        command: ["mongo"]
        args:
          - $(MONGODB_URI)/ravnml
          - --username
          - $(MONGODB_USERNAME)
          - --password
          - $(MONGODB_PASSWORD)
          - --authenticationDatabase
          - admin
          - /scripts/upload.js
      restartPolicy: Never
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: upload-strategy-to-mongo-scripts-v3
data:
  upload.js: |
    // Read the object from file and parse it
    var data = cat('/scripts/strategy.json');
    var obj = JSON.parse(data);
    // Upsert strategy
    print(db.strategy.find());
    db.strategy.replaceOne(
      { name : obj.name },
      obj,
      { upsert: true }
    )
    print(db.strategy.find());
  strategy.json: {{ .Files.Get "strategy.json" | quote }}
Now I am using generateName to generate a custom name every time I install it. I need to install multiple instances of this package, so I need the name to be dynamic.
Error
When I install this chart with helm install <name> <tar.gz file> -n <namespace> I get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: resource name may not be empty
but I am able to install if I don't use generateName. Any ideas?
I looked at various resources, but they don't seem to answer how to install via Helm.
References I looked at:
Add random string on Kubernetes pod deployment name
https://github.com/kubernetes/kubernetes/issues/44501
https://zknill.io/posts/kubernetes-generated-names/

This seems to be a known issue: Helm doesn't work with generateName. For unique names, you can use Helm's built-in properties like Revision or Name. See the following link for reference:
https://github.com/helm/helm/issues/3348#issuecomment-482369133
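For example, a minimal sketch of that approach (assuming upload.yaml lives in the chart's templates/ directory) would replace generateName with a templated name:
apiVersion: batch/v1
kind: Job
metadata:
  # unique per release; append {{ .Release.Revision }} as well if the same release gets upgraded repeatedly
  name: upload-strategy-to-mongo-{{ .Release.Name }}
Each helm install <name> then produces a distinct Job name without relying on generateName.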

Related

Kubernetes - Use values from Secret in multiline configmap

I am relatively new to Kubernetes and I have the following problem: we use Grafana in our Kubernetes cluster, but the way our template.yaml file is currently built does not allow using a secret for a password.
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: ${APP}
    name: "${APP}-ldap-file"
  data:
    ldap.toml: |-
      [[servers]]
      ....
      # Search user bind dn
      bind_dn = "uid=tu0213,cn=users,o=company,c=de"
      bind_password = ${BIND_PASSWORD}
parameters:
- name: BIND_PASSWORD
parameters:
- name: BIND_PASSWORD
Just using the password this way works fine, but it's in plain text in a params file in our CI/CD pipeline.
In a different repository I found this:
spec:
  containers:
  - name: nginx-auth-ldap
    image: ${REGISTRY}/${NAMESPACE}/nginx-auth-ldap:6
    imagePullPolicy: Always
    env:
    - name: LDAP_BIND_DN
      valueFrom:
        secretKeyRef:
          name: ldap-bind-dn
          key: dn
Is this valueFrom approach also possible in my use case?
You can use a secret like that, but you have to split the data into separate keys, like this:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: ${APP}
  name: "${APP}-ldap-file"
stringData:
  dn: "uid=tu0213,cn=users,o=company,c=de"
The format you specified is correct. Just create a secret named "ldap-bind-dn" and provide your password as its value there.
Path for the secret: in the OpenShift console, go to Resources -> Secrets -> Create Secret.
spec:
  containers:
  - name: nginx-auth-ldap
    image: ${REGISTRY}/${NAMESPACE}/nginx-auth-ldap:6
    imagePullPolicy: Always
    env:
    - name: LDAP_BIND_DN
      valueFrom:
        secretKeyRef:
          name: ldap-bind-dn
          key: dn
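If you prefer the CLI over the console, a roughly equivalent command (oc accepts the same syntax as kubectl here; the dn key matches the snippet above, while bind_password is an assumed key name for the password) would be:
$ kubectl create secret generic ldap-bind-dn \
    --from-literal=dn="uid=tu0213,cn=users,o=company,c=de" \
    --from-literal=bind_password="<your-password>"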

How to supply the value of a server for an NFS mount in a k8s Deployment via a ConfigMap

I'm writing a Helm chart where I need to supply an nfs.server value for the volume mount from a ConfigMap (efs-url in the example below).
There are examples in the docs on how to pass values from a ConfigMap to env variables or even mount ConfigMaps. I understand how I can pass this value from values.yaml, but I just can't find an example of how it can be done using a ConfigMap.
I have control over this ConfigMap so I can reformat it as needed.
Am I missing something very obvious?
Is it even possible to do?
If not, what are the possible workarounds?
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-url
data:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: <<< VALUE SHOULD COME FROM THE CONFIG MAP >>>
            path: /
Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, as a ConfigMap
is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
To read more about ConfigMaps and how they can be used, see the "ConfigMaps" and "Configure a Pod to Use a ConfigMap" sections of the documentation.
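Since the chart is rendered by Helm anyway, the workaround the question already hints at (passing the value from values.yaml) could look roughly like this; efs.server and efs.path are hypothetical value names:
# values.yaml
efs:
  server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
  path: /

# templates/deployment.yaml (volumes section only)
      volumes:
        - name: pv-volume
          nfs:
            server: {{ .Values.efs.server }}
            path: {{ .Values.efs.path }}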

Kubernetes Kustomize: replace variable in patch file

Given the following kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        args:
        - --some-key=some-value
        ...
        - --git-url=https://user:${PASSWORD}#domain.de
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
The only solution I got to work so far was replacing the ${PASSWORD} with sed, but I would prefer a kustomize solution.
As @Jonas already suggested, you should consider using a Secret. It's nicely described in this article.
I want to use kubectl apply -k and somehow pass a value for
${PASSWORD} which I can set from my build script.
I guess your script can store the generated password as a variable or save it to some file. You can easily create a Secret as follows:
$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
or from a file:
$ kustomize edit add secret sl-demo-app --from-file=file/path
As you can read in the mentioned article:
These commands will modify your kustomization.yaml and add a
SecretGenerator inside it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- custom-env.yaml
- replica-and-rollout-strategy.yaml
secretGenerator:
- literals:
  - db-password=12345
  name: sl-demo-app
  type: Opaque
kustomize build, run in your project directory, will create (among others) the following Secret:
apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
You can find more details in the article.
If we want to use this secret from our deployment, we just have, like
before, to add a new layer definition which uses the secret.
For example, this file will mount the db-password value as an
environment variable
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: "DB_PASSWORD"
          valueFrom:
            secretKeyRef:
              name: sl-demo-app
              key: db.password
In your Deployment definition file it may look similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        env:
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              name: git-secret
              key: git.password
        args:
        - --some-key=some-value
        ...
        - --git-url=https://user:${PASSWORD}#domain.de
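The git-secret referenced above isn't created anywhere in the snippet; following the same pattern as earlier, it could be generated from your build script (git-secret and git.password are simply the names assumed in the example):
$ kustomize edit add secret git-secret --from-literal=git.password=$PASSWORD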

Setting Postgres environment variables when running the image

As the documentation shows, you should be setting the env vars when doing a docker run like the following:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar'
This sets the superuser and password to access the database instead of the defaults of POSTGRES_PASSWORD='' and POSTGRES_USER='postgres'.
However, I'm using Skaffold to spin up a k8s cluster and I'm trying to figure out how to do something similar. How does one go about doing this for Kubernetes and Skaffold?
@P Ekambaram is correct, but I would like to go further into this topic and explain the "whys and hows".
When passing passwords on Kubernetes, it's highly recommended to keep them out of plain-text manifests, and you can do this by using Secrets.
Creating your own Secrets (Doc)
To be able to use the secrets as described by @P Ekambaram, you need to have a Secret in your Kubernetes cluster.
To easily create a Secret, you can also create it from generators and then apply it to create the object on the API server. The generators should be specified in a kustomization.yaml inside a directory.
For example, to generate a Secret from literals username=admin and password=secret, you can specify the secret generator in kustomization.yaml as
# Create a kustomization.yaml file with SecretGenerator
$ cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=secret
EOF
Apply the kustomization directory to create the Secret object.
$ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
Using Secrets as Environment Variables (Doc)
This is an example of a pod that uses secrets from environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password
  restartPolicy: Never
Source: here and here.
Use the below YAML
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11.2
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          value: "sampledb"
        - name: POSTGRES_USER
          value: "postgres"
        - name: POSTGRES_PASSWORD
          value: "secret"
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql
      volumes:
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    name: postgres
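If you want to combine this with the Secret approach from the first part of the answer, the plain-text POSTGRES_PASSWORD entry could be swapped for a secretKeyRef; postgres-secret and its password key are assumed names here:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret   # assumed Secret name
              key: password           # assumed key inside that Secret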

Use Kubernetes secrets as environment variables inside a config map

I have an application in a container that reads certain data from a ConfigMap, which goes like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: hello123
Now I created a secret for the password and mounted it as an env variable when starting the container.
apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
type: Opaque
stringData:
  password: hello123
My pod looks like:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.pod.name }}
spec:
  containers:
  - name: {{ .Values.container.name }}
    image: {{ .Values.image }}
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    env:
    - name: password
      valueFrom:
        secretKeyRef:
          name: appdbpassword
          key: password
    volumeMounts:
    - name: config-volume
      mountPath: /app/app-config/application.yaml
      subPath: application.yaml
  volumes:
  - name: config-volume
    configMap:
      name: app-config
I tried using this env variable inside the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: ${password}
But my application is unable to read this password. Am I missing something here?
EDIT:
I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the (Helm) values.yaml file and use it in the ConfigMap?
Your ${password} variable will not be replaced by its value, because application.yaml is a static file. If you use this YAML file in some configuration, then it is possible that it will get replaced by its value.
Consider a scenario where, instead of application.yaml, you pass this file:
application.sh: |
  echo "${password}"
Now go inside /app/app-config and you will see the application.sh file. Run sh application.sh and you will see the value of the environment variable.
I hope this clears things up.
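If the application strictly needs a plain application.yaml, one possible workaround (a sketch only, not part of the answer above) is to mount the file as a template and render it when the container starts; this assumes envsubst is available in the image and that the ConfigMap key is renamed to application.yaml.tpl:
    command: [ "/bin/sh", "-c", "--" ]
    args:
      - |
        # substitute ${password} from the environment, then keep the container running
        envsubst < /app/app-config/application.yaml.tpl > /app/application.yaml
        while true; do sleep 30; done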
You cannot use a Secret in a ConfigMap, as ConfigMaps are intended for non-sensitive data (see here).
Also, you should not pass Secrets using env variables, as that creates a potential risk (read more here about why env variables shouldn't be used).
Applications usually dump env variables in error reports or even write them to the app logs at startup, which could lead to exposing Secrets.
The best way would be to mount the Secret as a file.
Here's a simple example of how to mount it as a file:
spec:
  template:
    spec:
      containers:
      - image: "my-image:latest"
        name: my-app
        ...
        volumeMounts:
          - mountPath: "/var/my-app"
            name: ssh-key
            readOnly: true
      volumes:
        - name: ssh-key
          secret:
            secretName: ssh-key
Kubernetes documentation explains well how to use and mount secrets.
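With that mount in place, each key of the ssh-key Secret shows up as a file under /var/my-app, which can be verified with something like (the pod name is a placeholder):
$ kubectl exec <pod-name> -- ls /var/my-app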