Is there any option to reference service's property from another entity, Like Config Map or Deployment? To be more specific I want to put service's name in ConfigMap, not by myself, but rather to link it programmatically.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
  namespace: configmap-namespace
data:
  ServiceName: <referenced-service-name>
---
apiVersion: v1
kind: Service
metadata:
  name: service-name   # that name I want to put in ConfigMap
  namespace: configmap-namespace
spec:
  ....
Thanks...
Using plain kubectl, there is no way to dynamically fill in content like this. There are very limited exceptions around injecting values into environment variables in Pods (and PodSpecs inside other objects) but a ConfigMap must contain fixed content.
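For example, one of those exceptions is the downward API, which can inject a Pod's own fields (though not a Service's name) into environment variables. A minimal sketch, with hypothetical names:
apiVersion: v1
kind: Pod
metadata:
  name: downward-example
spec:
  containers:
    - name: app
      image: busybox
      env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace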
In this example, the Service object name is fixed, and I'd just embed the same fixed string into the ConfigMap.
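Using the names from your question, that would just be:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
  namespace: configmap-namespace
data:
  ServiceName: service-name   # the same fixed string as the Service's metadata.name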
If you were using a templating engine like Helm then you could call the same template code to render the Service name in both places. For example:
{{- define "service.name" -}}
{{ .Release.Name }}-service
{{- end -}}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "service.name" . }}
...
---
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
  serviceName: {{ include "service.name" . }}
Related
I'm trying to find a way to optionally install a manifest based on a list or a map (really don't mind which) in the values file.
In the values file I have:
provisioners: ["gp","test"]
and in the manifest I have:
{{- if has "test" .Values.provisioners }}
I've also tried
provisioners:
  - "gp"
  - "test"
and put this in the YAML:
{{- if hasKey .Values.provisioners "test" }}
but I can't get either way to work; the chart never installs anything.
I feel like I'm missing something pretty basic, but I can't figure out what. Can someone point me in the right direction?
I don't think you shared everything in your template, so there might be something else going on. What you already did is correct, as you can see in my example below. (For reference: has checks membership in a list, while hasKey checks whether a dict contains a key, so the has form is the right one for your list of strings.)
# templates/configmap.yaml
{{- if has "test" .Values.provisioners }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
data:
  config.yaml: |
    attr=content
{{- end }}
{{- if has "gp" .Values.provisioners }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gp-config
  namespace: default
data:
  config.yaml: |
    attr=content
{{- end }}
{{- if has "unknown" .Values.provisioners }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: not-templated-config
  namespace: default
data:
  config.yaml: |
    attr=content
{{- end }}
Output of helm template . against the local chart:
---
# Source: chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-config
  namespace: default
data:
  config.yaml: |
    attr=content
---
# Source: chart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gp-config
  namespace: default
data:
  config.yaml: |
    attr=content
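For completeness, the values.yaml used for this render (assumed to match yours) was simply:
provisioners: ["gp", "test"]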
I have a file like this
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{...}
---
apiVersion: v1
kind: Service
{...}
---
apiVersion: v1
kind: ConfigMap
{...}
This has 3 objects separated by ---. I want to reference the ConfigMap object inside the Deployment to use it with the checksum annotation. Is it possible to do so?
You will have to use a template system like this one, which will process your YAMLs and generate the desired manifests for your resources.
I suspect you will have to declare your ConfigMap as a variable that can be substituted in your deployment.yaml by the template system.
Alternatively, you can look at the kustomize system which also provides templatized generation of manifests. An example on how to deal with annotations with kustomize can be found here.
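If the template system is Helm, which the {{ include ... | sha256sum }} syntax in your Deployment suggests, the usual approach is to keep the ConfigMap in its own template file and hash that file from the Deployment. A minimal sketch under that assumption (file names are illustrative):
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    attr=content

# templates/deployment.yaml (fragment)
spec:
  template:
    metadata:
      annotations:
        # re-renders the ConfigMap template and hashes it, so Pods roll when it changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}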
I've created a helm chart which contains some resources, which are reused in several other Helm charts:
base/templates/base.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: {{ .Chart.Name }}
Then I've created a helm chart which inherits the base chart and contains some special resources:
sub1/templates/sub1.yaml
...
name: {{ .Chart.Name }}
Actual Output
In the actual output the resources of the base chart use always the chart name of the base chart.
---
# Source: sub1/templates/sub1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub1
---
# Source: sub1/charts/base/templates/base.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: base
Wanted output
But I want the chart name of the sub chart to be used in the base chart resources.
# Source: sub1/charts/base/templates/base.yaml
...
kind: SecretProviderClass
metadata:
  name: sub1
How can I achieve this?
A solution is to reuse the resources via named templates:
base/templates/base.yaml
{{- define "base-lib.secret-provider-class" -}}
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: {{ .Chart.Name }}
{{- end -}}
sub1/templates/sub1.yaml
{{ include "base-lib.secret-provider-class" . }}
---
...
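Because the template is now rendered from the sub chart with the sub chart's context (the trailing .), .Chart.Name resolves to sub1, and the output matches the wanted one:
# Source: sub1/templates/sub1.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: sub1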
I am writing Helm charts, and they create one Deployment and one StatefulSet component.
Now I want to generate a UUID and send the value to both k8s components.
I am using the uuid function to generate the UUID, but I need help with how I can send this value to both components.
Here is my chart folder structure:
projectdir
  chart1
    templates
      statefulset.yaml
  chart2
    templates
      deployment.yaml
  helperchart
    templates
      _helpers.tpl
I have to write the logic to generate the uuid in _helpers.tpl.
Edit: It seems defining it in the _helpers.tpl does not work - thank you for pointing it out.
I have looked it up a bit, and it seems currently the only way to achieve that is to put both of the manifests, separated by ---, into the same file under templates/. See the following example, where the UUID is defined in the first line and then used in both the Deployment and the StatefulSet:
{{- $mySharedUuid := uuidv4 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "uuid-test.fullname" . }}-1
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "uuid-test.fullname" . }}-2
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
After templating, the output is:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uuid-test-app-1
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: uuid-test-app-2
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
See the same issue: https://github.com/helm/helm/issues/6456
Note that this approach will still cause the UUID to be regenerated when you do a helm upgrade. To circumvent that, you would need to use another workaround along with this one.
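One such workaround (a sketch, assuming Helm 3 and a Secret named my-app-uuid with key uuid that this chart itself created on a previous install) is to use the lookup function to reuse the value stored by the previous release. Note that lookup returns an empty result during helm template and --dry-run, so the value is only stable for real installs and upgrades:
{{- /* reuse the UUID from the previous release if present, otherwise generate a new one */ -}}
{{- $uuid := uuidv4 -}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-app-uuid" -}}
{{- if $existing }}
{{- $uuid = index $existing.data "uuid" | b64dec }}
{{- end }}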
You should explicitly pass the value in as a Helm value; don't try to generate it in the chart.
The other answers to this question highlight a couple of the issues you'll run into. @Utku Özdemir notes that every time you call the Helm uuidv4 function it will create a new random UUID, so you can only call that function once in the chart ever; and @srr further notes that there's no way to persist a generated value like this, so if you helm upgrade the chart the UUID value will be regenerated, which will cause all of the involved Kubernetes objects to be redeployed.
The Bitnami RabbitMQ chart has an interesting middle road here. One of its configuration options is an "Erlang cookie", also a random string that needs to be consistent across all replicas and upgrades. On an initial install it generates a random value if one isn't provided, and tells you how to retrieve it from a Secret; but if .Release.IsUpgrade then you must provide the value directly, and the error message explains how to get it from your existing deployment.
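A rough sketch of that install-versus-upgrade pattern (illustrative only, not Bitnami's actual code; erlangCookie is a hypothetical value name):
{{- /* generate on first install, require the existing value on upgrade */ -}}
{{- $cookie := "" -}}
{{- if .Release.IsUpgrade }}
{{- $cookie = required "set erlangCookie to the value from the existing Secret" .Values.erlangCookie }}
{{- else }}
{{- $cookie = .Values.erlangCookie | default (randAlphaNum 32) }}
{{- end }}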
You may be able to get around the "only call uuidv4 once ever" problem by putting the value into a ConfigMap or Secret, and then referencing it from elsewhere. This works only if the only place you use the UUID value is in an environment variable, or something else that can have a value injected from a secret; it won't help if you need it in an annotation or label.
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.name" . }}
data:
  the-uuid: {{ .Values.theUuid | default uuidv4 | b64enc }}
  {{- /* this is the only place uuidv4 is called at all */}}

env:
  - name: THE_UUID
    valueFrom:
      secretKeyRef:
        name: {{ template "chart.name" . }}
        key: the-uuid
As suggested in the Helm issue tracker (https://github.com/helm/helm/issues/6456), we have to put both components in the same file; it looks like that's the only solution right now.
It's a surprise that Helm doesn't support caching a value to share it across charts/components. I hope Helm supports this feature in the future.