helm error on a value in the values.yaml file - kubernetes-helm

I am trying to replace a value in my ConfigMap yaml with a value from the values.yaml file.
My values.yaml file includes fields that are referenced from my deployment.yaml file, and they all work with no problem.
I then tried to replace a hard-coded value in my configMap.yaml with a value taken from the values.yaml file.
When I run the "helm template" command, everything is OK and no error is displayed.
However, when I run the "helm upgrade" command, it fails with this error:
Error: UPGRADE FAILED: cannot patch "oct-2-files" with kind ConfigMap:
"" is invalid: patch: Invalid value: "{\"apiVersion\":\"v1\",\"data\":{\"keys\":\"\\n\\n##server.http.port=4442\\n\\n\\nbase.request.host=https://localhost\\n\\nbase.request.port=8060\\n\\nlte.device.provisioning.simulate.mode=false\\n\\n\\npostgres.cluster=${POSTGRES_CLUSTER}\\n\\npostgres.host=${POSTGRES_HOST}\\n\\npostgres.port=${POSTGRES_PORT}\\n\\npostgres.dbname=${POSTGRES_DB}\\n\\n\\n\\njdbc.user=${POSTGRES_USER}\\n\\njdbc.pass=${POSTGRES_PASS}\\n\\n\\n\\n#spring.data.mongodb.host=${MONGO_HOST}\\n\\nspring.data.mongodb.uri=${MONGO_URI}\\n\\n\\nspring.redis.password=${REDIS_PASS}\\n\\n\\nspring.redis.host=${REDIS_HOST}\\n\\nspring.redis.port=${REDIS_PORT}\\n\\n\\n\\n\\nactivemq.cluster=${AMQ_CLUSTER}\\n\\nactivemq.server.address=${AMQ_HOST}\\n\\nactivemq.server.port=${AMQ_PORT}\\n\\n\\n\\nlocation.mapitem.image_path=/usr/src/octopus/storage/\\n\\n\\n\\n## OCR\\n\\n\\nocr.tesseract.data.path=/usr/src/octopus/backend/tesseract\\n\\n\\n\\n# define if image saving is enabled\\n\\nocr.image.saving.enabled=true\\n\\n\\n# the path for the image storing folder\\n\\nocr.image.path=/usr/src/octopus/storage/\\n\\n\\n# cron pattern for image clean process\\n\\nocr.image.clean.cron.expression=0 0/10 * * * *\\n\\n\\nocr.image.engine=fe\\n\\n\\n\\n## Debugging\\n\\n# To enable debugging activate the desired properties below and restart the\\nbackend container, e.g.:\\n\\n# docker restart octopus-be-dev-1\\n\\n\\n# logging.level.org.springframework.web=DEBUG\\n\\nlogging.level.org.springframework.web=INFO\\n\\n\\n# logging.level.com.mot.nsa=DEBUG\\n\\nlogging.level.com.mot.nsa=INFO\\n\\n\\n# logging.level.root=DEBUG\\n\\nlogging.level.root=INFO\\n\\n\\n# logging.level.org.apache.camel=on\\n\\nlogging.level.org.apache.camel=off\\n\\n\\n# log4j.logger.org.springframework=DEBUG\\n\\nlog4j.logger.org.springframework=INFO\\n\\n\\n# logging.level.org.hibernate=DEBUG\\n\\nlogging.level.org.hibernate=INFO\\n\\n\\n# hibernate.show_sql=true\\n\\nhibernate.show_sql=false\\n\",\"server.http.port\":4442},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{\"meta.helm.sh/release-name\":\"helmoct\",\"meta.helm.sh/release-namespace\":\"default\"},\"creationTimestamp\":\"2022-06-23T07:05:20Z\",\"labels\":{\"app.kubernetes.io/managed-by\":\"Helm\"},\"managedFields\":[{\"manager\":\"kubectl-create\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2022-03-14T15:50:44Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:data\":{}}},{\"manager\":\"helm\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2022-06-27T04:35:27Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:data\":{\"f:keys\":{}}}}],\"name\":\"octopus-2-files\",\"namespace\":\"default\",\"resourceVersion\":\"738993\",\"uid\":\"e43715b4-cdb9-4c13-94a4-2877ec975597\"}}":
**json: cannot unmarshal number into Go struct field ConfigMap.data of type string**
This is my values.yaml file:
version: 164
dummyhostval: 1.13.19.196
mainfeport: 9090
proxyhost: 1.13.19.73
httpport: 4442 # this is the only value referenced from the configMap yaml
And this is my configMap yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: octopus-2-files
data:
  server.http.port: {{ .Values.httpport }}
  keys: |
    ##server.http.port=4442
    base.request.host=https://localhost
    base.request.port=8060
    lte.device.provisioning.simulate.mode=false
    postgres.cluster=${POSTGRES_CLUSTER}
    postgres.host=${POSTGRES_HOST}
    postgres.port=${POSTGRES_PORT}
    postgres.dbname=${POSTGRES_DB}
Why does helm think it is a number, and how can I overcome this?

The error is telling you that you can't use a number as a ConfigMap data value; they all need to be strings.
cannot unmarshal number into Go struct field ConfigMap.data of type string
So quote the server port:
server.http.port: {{ .Values.httpport | quote }}
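For example, a minimal sketch of the corrected data entry (only this line changes; quoting the value in values.yaml itself, e.g. httpport: "4442", would work as well):

data:
  # renders as "4442", satisfying the string-only requirement for ConfigMap data values
  server.http.port: {{ .Values.httpport | quote }}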

Related

Approach for configmap and secret for a yaml file

I have a yaml file which needs to be loaded into my pods. This yaml file will have both sensitive and non-sensitive data, and it needs to be present at a path which I have included as an env in the containers:
env:
  - name: CONFIG_PATH
    value: /myapp/config/config.yaml
If my understanding is right, a ConfigMap is the right choice, but then I am forced to put sensitive data like the password as plain text in the values.yaml of the Helm chart.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  labels:
    app: {{ .Release.Name }}-config
data:
  config.yaml: |
    configuration:
      settings:
        Password: "{{.Values.config.password}}"
        Username: myuser
Values.yaml
config:
  password: "mypassword"
I mounted the above ConfigMap as follows:
volumeMounts:
  - name: {{ .Release.Name }}-config
    mountPath: /myapp/config/
So I wanted to try a Secret. If I use a Secret, it is loaded as environment variables inside the pod, but it does not go into this config.yaml file.
If I convert the above yaml file into a Secret instead of a ConfigMap, should I convert the entire config.yaml into a base64 Secret? My yaml file has many more entries, it would look cumbersome, and I don't consider that a solution.
If I put it under stringData in the Secret, the content is taken as it is.
How do I make sure that config.yaml is loaded into the pods without the passwords being exposed in values.yaml? Is there a way to combine a ConfigMap and a Secret?
I read about projected volumes, but I don't see a use case for merging a ConfigMap and Secrets into a single config.yaml.
Any help would be appreciated.
Kubernetes has no real way to construct files out of several parts. You can embed an entire (small) file in a ConfigMap or a Secret, but you can't ask the cluster to assemble a file out of parts in multiple places.
In Helm, one thing you can do is to put the configuration-file data into a helper template
{{- define "config.yaml" -}}
configuration:
  settings:
    Password: "{{.Values.config.password}}"
    Username: myuser
{{ end -}}
In the ConfigMap you can use this helper template rather than embedding the content directly
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  config.yaml: |
{{ include "config.yaml" . | indent 4 }}
If you move it to a Secret you do in fact need to base64 encode it. But with the helper template that's just a matter of invoking the template and encoding the result.
apiVersion: v1
kind: Secret
metadata: { ... }
data:
  config.yaml: {{ include "config.yaml" . | b64enc }}
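If you go the Secret route, it can still be mounted as a file rather than consumed as environment variables. A sketch of the volume definition (the Secret name is assumed; match it to your Secret's metadata.name), used together with the volumeMounts entry you already have pointing at /myapp/config/:

volumes:
  - name: {{ .Release.Name }}-config
    secret:
      secretName: {{ .Release.Name }}-config   # assumed name, match your Secret's metadata.name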
If it's possible to set properties in this file directly via environment variables (like Spring properties) or to insert environment-variable references in the file (like a Ruby ERB file) that could let you put the bulk of the file into a ConfigMap, but use a Secret for specific values; you would need a little more wiring to also make the environment variables available.
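As a rough sketch of that extra wiring (my own illustration; "app-credentials" is a hypothetical Secret holding only the sensitive keys), the container could pull those values in as environment variables alongside the mounted ConfigMap:

envFrom:
  - secretRef:
      name: app-credentials    # hypothetical Secret with only the sensitive values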
You briefly note a concern around passing the credential as a Helm value. This does in fact require having it in plain text at deploy time, and an operator could helm get values later to retrieve it. If this is a problem, you'll need some other path to inject or retrieve the secret value.

ArgoCD multiple files into argocd-rbac-cm configmap data

Is it possible to pass a csv file to the data of the "argocd-rbac-cm" ConfigMap? Since I've deployed Argo CD through GitOps (with the official argo-cd Helm chart), I would not like to hardcode a large csv file inside the ConfigMap itself; I'd prefer instead to reference a csv file directly from the git repository where the Helm chart is located.
Also, is it possible to pass more than one file-like key?
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    <<< something to have this append many files >>
    <<< https://gitlab.custom.net/proj_name/-/blob/master/first_policy.csv # URL from the first csv file in the git repository >>
    <<< https://gitlab.custom.net/proj_name/-/blob/master/second_policy.csv # URL from the second csv file in the git repository >>
Thanks in advance!
Any external evaluation in a policy.csv would lead to unpredictable behaviour in the cluster and would complicate the argocd codebase without obvious gains. That's why this ConfigMap should be set statically before deploying anything.
You basically have two options:
Correctly set .server.rbacConfig as per https://github.com/argoproj/argo-helm/blob/master/charts/argo-cd/templates/argocd-configs/argocd-rbac-cm.yaml - create your configuration with some bash scripts, assign it to a variable, e.g. RBAC_CONFIG, then pass it in your CI/CD pipeline as helm upgrade ... --set "server.rbacConfig=$RBAC_CONFIG"
Extend the chart with your own template and use the .Files.Get function to create the ConfigMap from files that already exist in your repository (see https://helm.sh/docs/chart_template_guide/accessing_files/), with something like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |-
{{- $files := .Files }}
{{- range tuple "first_policy.csv" "second_policy.csv" }}
{{ $files.Get . | indent 4 }}
{{- end }}
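If you'd rather not list each file by name, a variant of the same idea (my own sketch, assuming the csv files live in a hypothetical policies/ directory inside the chart) can use .Files.Glob:

data:
  policy.csv: |-
{{- range $path, $_ := .Files.Glob "policies/*.csv" }}
{{ $.Files.Get $path | indent 4 }}
{{- end }}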

validation error in config.yml of kibana kubernetes

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
    env: dev
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |- server.name: kib.the-project.d4ldev.txn2.com server.host: "0" elasticsearch.url: http://elasticsearch:9200
This is my config.yml file. When I try to create this project, I get this error:
error: error parsing configmap.yml: error converting YAML to JSON: yaml: line 13: did not find expected comment or line break
I can't get rid of the error, even after removing the space at line 13, column 17.
The YAML content can be put directly on multiple lines, formatted like real YAML; take a look at the following example:
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |-
    server:
      name: kib.the-project.d4ldev.txn2.com
      host: "0"
    elasticsearch.url: http://elasticsearch:9200
This works when put in a ConfigMap, and it should work even when provided through a Helm chart (depending on how the Helm templates are written).
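Note that nesting isn't strictly required; the flat dotted keys from the original file are also fine (this variant is my own illustration). The error came from having everything on one line rather than one setting per line:

  kibana.yml: |-
    server.name: kib.the-project.d4ldev.txn2.com
    server.host: "0"
    elasticsearch.url: http://elasticsearch:9200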

How to reference a pod's shell env variable in the configmap data section

I have a configmap.yaml file as below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: abc
  namespace: monitoring
  labels:
    app: abc
    version: 0.17.0
data:
  application.yml: |-
    myjava:
      security:
        enabled: true
    abc:
      server:
        access-log:
          enabled: ${myvar}. ## this is not working
The myvar value is available in the pod as a shell environment variable, coming from a secretKeyRef field in the deployment file.
Now I want myvar replaced in the ConfigMap above, i.e. before application.yml is available in the pod it should already have the myvar value substituted. This is not working; I tried ${myvar}, $(myvar) and "#{ENV['myvar']}".
Is it possible in a Kubernetes ConfigMap to reference a pod's environment variable within the data section? If yes, how? Or do I need to write a script that replaces it with sed -i on application.yml, etc.?
Is it possible in a Kubernetes ConfigMap to reference a pod's environment variable within the data section?
That's not possible. A ConfigMap is not associated with a particular pod, so there's no way to perform the sort of variable substitution you're asking about. You would need to implement this logic inside your containers (fetch the ConfigMap, perform variable substitution yourself, then consume the data).
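One common way to implement that substitution yourself (my own sketch, not from the answer; it assumes envsubst from the gettext package is available in the image, and the image/entrypoint names are illustrative) is to render the templated file when the container starts:

containers:
  - name: abc
    image: myapp:latest                       # hypothetical image
    command: ["sh", "-c"]
    args:
      # expand ${myvar} in the mounted template, then start the app
      - envsubst < /config-template/application.yml > /config/application.yml && exec /app/start.sh
    env:
      - name: myvar
        valueFrom:
          secretKeyRef:
            name: mysecret                    # hypothetical Secret
            key: myvar
    volumeMounts:
      - name: config-template
        mountPath: /config-template
      - name: config
        mountPath: /config
volumes:
  - name: config-template
    configMap:
      name: abc                               # the ConfigMap from the question
  - name: config
    emptyDir: {}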

helm not creating the resources

I have tried to run Helm for the first time. I have deployment.yaml, service.yaml and ingress.yaml files, along with values.yaml and Chart.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  namespace: xyz
  labels:
    app: abc
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: abc
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: abc
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  namespace: xyz
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
  selector:
    app: abc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "haproxy-ingress"
  namespace: xyz
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: alb
From what I can see, I do not think I have missed putting app.kubernetes.io/managed-by, but I still keep getting this error:
rendered manifests contain a resource that already exists. Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
It renders the file locally correctly.
helm list --all --all-namespaces returns nothing.
Please help.
You already have some resources (e.g. the Service abc in the given namespace xyz) that you're now trying to install via a Helm chart.
Delete those and install them via helm install:
$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>
Once you have these resources deployed via Helm, you will be able to perform helm upgrade to change properties.
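For example (placeholders only, substitute your own names):
$ helm install <release-name> <chart-folder> -n <namespace>
$ helm upgrade <release-name> <chart-folder> -n <namespace>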
Remove the "app.kubernetes.io/managed-by" label from your YAML files; this will be added by Helm.
The error below is quite common:
label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to ..
So I'll provide a slightly longer explanation and also some context on the topic.
What happened?
It seems that you tried to create resources that already exist and were created outside of Helm (probably with kubectl).
Why does Helm throw the error?
Helm doesn't allow a resource to be owned by more than one
deployment.
It is the responsibility of the chart creator to ensure that the chart
produces unique resources only.
How can you solve this?
Option 1 - Follow the error message and add the meta.helm.sh annotations:
As described in this PR: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that
already exists in the target cluster if the existing resource has the
correct meta.helm.sh/release-name and
meta.helm.sh/release-namespace annotations, and matches the label
selector app.kubernetes.io/managed-by=Helm. This facilitates
zero-downtime migrations to Helm 3 for managing existing deployments,
and allows Helm to "adopt" existing resources that it previously
created.
(*) I think that the meta.helm.sh scope is a less common approach today.
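As a sketch of that adoption for the Service from the question (the annotation values are taken from the error message; adjust the resource kind/name, release name and namespace to your case):
kubectl -n xyz annotate service abc \
  meta.helm.sh/release-name=abc \
  meta.helm.sh/release-namespace=default
kubectl -n xyz label service abc app.kubernetes.io/managed-by=Helm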
Option 2 - Add the app.kubernetes.io/instance label:
As can be seen in different Helm charts (Bitnami, the NGINX ingress controller and ExternalDNS, for example), the common approach is the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
(*) Notice: There are some CD tools like ArgoCD that automatically sets the app.kubernetes.io/instance label and uses it to determine which resources form the app.
Option 3 - Delete old resources.
This might apply in your specific case, where the old resources may no longer be needed.
For those who need some context
What are those labels?
Shared labels and annotations share a common prefix: app.kubernetes.io. Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.
In order to take full advantage of using these labels, they should be applied on every resource object.
The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.
Read more on the Recommended Labels section.
Are they added by Helm?
No.
First of all, as mentioned before, those labels are not specific to Helm and Helm itself never requires that a particular label be present.
On the other hand, the Helm docs recommend using the following Standard Labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.
So it is the role of the chart maintainer to add those labels.
What is the best way to add them?
Many Helm chart providers add them in the _helpers.tpl file and have all resources include them:
labels: {{ include "my-chart.labels" . | nindent 4 }}
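A sketch of what such a helper might define in _helpers.tpl (the name "my-chart.labels" matches the include above; the exact label set is up to the chart maintainer):

{{/* Common labels shared by all resources in the chart */}}
{{- define "my-chart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}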
The trick here is to chase the error message.
For example, in the case below the error message points at something wrong with the Service "abc" in namespace "xyz":
Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
Simply delete that Service from the mentioned namespace:
kubectl -n xyz delete svc abc
And then try the installation/deployment again. It may happen that a similar issue appears, but for a different resource, as shown in the example below:
Release "nok-sec-sip-tls-crd" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Role "nok-sec-sip-tls-crd-role" in namespace "debu" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nok-sec-sip-tls-crd": current value is "nok-sec-sip"
Again, use kubectl to delete the resource mentioned in the error message. For example, in the above case the offending resource should be deleted with the command below:
kubectl delete role nok-sec-sip-tls-crd-role -n debu
I was getting this error because I was trying to upgrade the Helm chart with the wrong release name, so it conflicted with the existing resources in the same namespace.
I was running this command with the wrong release name:
helm upgrade --install --namespace <namespace> wrong-releasename <chart-folder>
and got similar errors:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap \"cmname\" in namespace \"namespace\" exists and cannot be imported into the current release
invalid ownership metadata; label validation error: missing key \"app.kubernetes.io/managed-by\": must be set to \"Helm\"; annotation validation error: missing key \"meta.helm.sh/release-name\": must be set to \"wrong-releasename\"; annotation validation error: missing key \"meta.helm.sh/release-namespace\": must be set to \"namespace\"
I checked the existing Helm releases in the same namespace and used the listed release name to upgrade my Helm chart:
helm ls -n <namespace>
helm upgrade --install --namespace <namespace> releasename <chart-folder>
Here's a faster and more thorough way to get rid of Argo CD so it can be reinstalled:
helm list -A # see argocd in namespace argocd
helm uninstall argocd -n argocd
kubectl delete namespace argocd
The last line gets rid of all Secrets and other resources not cleaned up by uninstalling the Helm chart, and it was needed in my environment; otherwise I got the same sorts of errors about duplicate resources that you were seeing.
We use GitOps via Flux, and I was getting the same rendered manifests contain a resource that already exists error. For me the problem was I accidentally defined a resource with the same name in two different files, so it was trying to create it twice. I removed the duplicate resource definition from one of the files to fix it up.