Helm chart: referencing a secret gives name: %!s(<nil>)-%!s(<nil>)

I am creating a Helm chart. When doing a dry run I get an error:
Error: YAML parse error on vstsagent/templates/vsts-buildrelease-agent.yaml: error converting YAML to JSON: yaml: line 28: found character that cannot start any token
The dry run also outputs the Secret and Deployment YAML files which I created. The part where it goes wrong in the Deployment shows:
- name: ACCOUNT
  valueFrom:
    secretKeyRef:
      name: %!s(<nil>)-%!s(<nil>)
      key: ACCOUNT
- name: TOKEN
  valueFrom:
    secretKeyRef:
      name: %!s(<nil>)-%!s(<nil>)
      key: TOKEN
The output from the dry run for the secret looks fine.
The templates I created:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.fullname" . }}
type: Opaque
data:
  ACCOUNT: {{ .Values.chart.secret.account }}
  TOKEN: {{ .Values.chart.secret.token }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "chart.fullname" . }}
  labels:
    app: {{ template "chart.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        release: {{ .Release.Name }}
        app: {{ template "chart.name" . }}
      annotations:
        agentVersion: {{ .Values.chart.image.tag }}
    spec:
      containers:
      - name: {{ template "chart.name" . }}
        image: {{ .Values.chart.image.name }}
        imagePullPolicy: {{ .Values.chart.image.pullPolicy }}
        env:
        - name: ACCOUNT
          valueFrom:
            secretKeyRef:
              name: {{ template "chart.fullname" }}
              key: ACCOUNT
        - name: TOKEN
          valueFrom:
            secretKeyRef:
              name: {{ template "chart.fullname" }}
              key: TOKEN
The _helper.tpl looks like this:
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "chart.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
Where am I going wrong in this?

I missed two dots: template "chart.fullname" was called without passing the context (the trailing .), so inside the helper .Release.Name and .Chart.Name were nil and printf "%s-%s" rendered %!s(<nil>)-%!s(<nil>). The corrected block:
- name: ACCOUNT
  valueFrom:
    secretKeyRef:
      name: {{ template "chart.fullname" . }}
      key: ACCOUNT
- name: TOKEN
  valueFrom:
    secretKeyRef:
      name: {{ template "chart.fullname" . }}
      key: TOKEN
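With the context passed, the helper resolves to <release>-<chart> again. As an illustration, assuming a release named my-agent and the chart name vstsagent (both placeholders), the rendered env entries come out as:

- name: ACCOUNT
  valueFrom:
    secretKeyRef:
      name: my-agent-vstsagent
      key: ACCOUNT
- name: TOKEN
  valueFrom:
    secretKeyRef:
      name: my-agent-vstsagent
      key: TOKEN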

Related

Helm: iterate over a nested list and add output in YAML with decoded values

I have a number of secretKey entries in values.yaml, as shown below. I need to add each secretKey value as a key under template.data, with its decoded value as the value, like the example below.
How can I achieve this?
{{- range $externalSecretName, $externalSecret := .Values.externalSecrets }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ $externalSecretName }}
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: secret
    kind: SecretStore
  target:
    name: {{ $externalSecretName }}
    creationPolicy: Owner
    template:
      data:
        ## Needs to insert/add each secretKey value here like below
        {
          keyname1: "{{ .keyname1 | b64dec }}".
          keyname2: "{{ .keyname2 | b64dec }}".
        }
  data:
    {{- toYaml $externalSecret.data | nindent 4 }}
---
{{- end }}
values.yaml:
===========
extraEnvSecret:
  fromSecret:
    name: master-tf-address-handling
    data:
      PREFIX_KEYNAME1: keyname1
      PREFIX_KEYNAME2: keyname2
externalSecrets:
  demo-app:
    data:
      - secretKey: keyname1
        remoteRef:
          key: value1
      - secretKey: keyname2
        remoteRef:
          key: value1
{{- range $externalSecretName, $externalSecret := .Values.externalSecrets }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ $externalSecretName }}
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: secret
    kind: SecretStore
  target:
    name: {{ $externalSecretName }}
    creationPolicy: Owner
    template:
      data:
        {
        {{- range $externalSecret.data }}
          {{ .secretKey }}: "{{ .remoteRef.key | b64enc }}",
        {{- end }}
        }
  data:
    {{- toYaml $externalSecret.data | nindent 4 }}
{{- end }}
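One way to generate those template.data entries is to range over $externalSecret.data and emit the operator's own {{ ... | b64dec }} placeholders as literal text. Helm would otherwise try to evaluate those placeholders itself, so they are built with printf here. A rough sketch, assuming the values.yaml layout above (only the target section is shown, the rest of the ExternalSecret stays unchanged):

  target:
    name: {{ $externalSecretName }}
    creationPolicy: Owner
    template:
      data:
        {{- range $externalSecret.data }}
        {{ .secretKey }}: {{ printf "{{ .%s | b64dec }}" .secretKey | quote }}
        {{- end }}

For the demo-app entry this renders keyname1: "{{ .keyname1 | b64dec }}" and keyname2: "{{ .keyname2 | b64dec }}", which the External Secrets operator then resolves with its own templating.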

How to add a PersistentVolumeClaim to a deployment running GitLab AutoDevops?

What am I trying to achieve?
We are using a self-hosted GitLab instance and use GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) is able to access (read only) files stored on the Kubernetes cluster's node itself.
What have I tried?
Created a (hostPath) PersistentVolume (PV) on our cluster
Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim")
Now GitLab AutoDevops uses a default Helm chart to deploy the applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if found). So my line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.
Therefore I modified the deployment.yaml file that can be found here. As you can see in the following code snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "trackableappname" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "appname" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "appname" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "appname" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: {{ template "imagename" . }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
{{- if .Values.postgresql.managed }}
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: password
        - name: POSTGRES_HOST
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: privateIP
{{- end }}
        - name: DATABASE_URL
          value: {{ .Values.application.database_url | quote }}
        - name: GITLAB_ENVIRONMENT_NAME
          value: {{ .Values.gitlab.envName | quote }}
        - name: GITLAB_ENVIRONMENT_URL
          value: {{ .Values.gitlab.envURL | quote }}
        ports:
        - name: "{{ .Values.service.name }}"
          containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
        volumeMounts:
        - mountPath: /data
          name: test-pvc
      volumes:
      - name: test-pvc
        persistentVolumeClaim:
          claimName: test-api-claim
What is the problem?
Now when I trigger the Pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:
Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key
Line 71 in the script refers to the valueFrom.secretKeyRef.name in the yaml:
- name: POSTGRES_HOST
  valueFrom:
    secretKeyRef:
      name: app-postgres
      key: privateIP
The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble).
I am not using tabs in the yaml file and I double checked the indentation.
Two questions
Could there be something wrong with my yaml?
Does anyone know of another solution to achieve my desired behavior? (adding PVC to the deployment so that pods actually use it?)
General information
We use GitLab EE 13.12.11
For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7
Thanks in advance and have a nice day!
It seems that adding persistence is now supported in the default Helm chart.
Check the pvc.yaml and deployment.yaml.
Given that, it should be enough to edit the values in .gitlab/auto-deploy-values.yaml to meet your needs. Check the defaults in values.yaml for more context.
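For reference, the persistence values in recent auto-deploy-app charts look roughly like the block below. The exact keys are an assumption here; verify them against the values.yaml of the auto-deploy-image version you are actually running, and note that the chart creates its own PVC via pvc.yaml rather than reusing a manually created claim.

# .gitlab/auto-deploy-values.yaml (keys approximate; check your chart version)
persistence:
  enabled: true
  volumes:
    - name: data
      mount:
        path: /data            # where the volume appears in the container
      claim:
        accessMode: ReadWriteOnce
        size: 1Gi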

Helm: referring to Kubernetes secrets in environment variables

I have some environment variables that I'm using in a helm installation and want to hide the password using a k8s secret.
values.yaml
env:
  USER_EMAIL: "test#test.com"
  USER_PASSWORD: "p8ssword"
I want to add the password via a Kubernetes secret mysecrets, created using:
# file: mysecrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecrets
type: Opaque
data:
  test_user_password: cGFzc3dvcmQ=
and then add this to values.yaml
- name: TEST_USER_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysecrets
      key: test_user_password
I then use the following in the deployment
env:
  {{- range $key, $value := $.Values.env }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
Is it possible to mix formats for environment variables in values.yaml i.e.,
env:
  USER_EMAIL: "test#test.com"
  - name: USER_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysecrets
        key: test_user_password
Or is there a way of referring to the secret inline in the original format?
Plan 1:
One of the simplest approaches.
Inject the YAML directly: write the env section in values.yaml exactly as it should appear in the manifest, so plain key/value entries and ref-style entries can both be expressed in the required format.
As follows:
values.yaml
env:
  - name: "USER_EMAIL"
    value: "test#test.com"
  - name: "USER_PASSWORD"
    valueFrom:
      secretKeyRef:
        name: mysecrets
        key: test_user_password
deployment.yaml
containers:
  - name: {{ .Chart.Name }}
    env:
      {{ toYaml .Values.env | nindent xxx }}
(ps: xxx --> the actual indent)
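Concretely, if env: sits at ten spaces inside the pod spec, nindent 12 lands the injected list items in the right column; the leading dash on the tag trims the newline after env: so the block starts cleanly. The indent number is the only thing you need to adapt to your own layout:

    spec:
      containers:
        - name: {{ .Chart.Name }}
          env:
            {{- toYaml .Values.env | nindent 12 }}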
Plan 2:
Distinguish the cases by declaring a type for each entry.
As follows:
values.yaml
env:
  USER_EMAIL:
    type: "kv"
    value: "test#test.com"
  USER_PASSWORD:
    type: "secretRef"
    name: mysecrets
    key: test_user_password
  USER_CONFIG:
    type: "configmapRef"
    name: myconfigmap
    key: mycm
deployment.yaml
containers:
  - name: {{ .Chart.Name }}
    env:
      {{- range $k, $v := .Values.env }}
      - name: {{ $k | quote }}
        {{- if eq $v.type "kv" }}
        value: {{ $v.value | quote }}
        {{- else if eq $v.type "secretRef" }}
        valueFrom:
          secretKeyRef:
            name: {{ $v.name | quote }}
            key: {{ $v.key | quote }}
        {{- else if eq $v.type "configmapRef" }}
        valueFrom:
          configMapKeyRef:
            name: {{ $v.name | quote }}
            key: {{ $v.key | quote }}
        {{- end }}
      {{- end }}
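With the values above, this loop renders entries like the following (range walks map keys in sorted order, hence the alphabetical ordering; indentation shown normalized under env:):

env:
  - name: "USER_CONFIG"
    valueFrom:
      configMapKeyRef:
        name: "myconfigmap"
        key: "mycm"
  - name: "USER_EMAIL"
    value: "test#test.com"
  - name: "USER_PASSWORD"
    valueFrom:
      secretKeyRef:
        name: "mysecrets"
        key: "test_user_password"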

Replacing a property in the data section of a ConfigMap at runtime with environment variables in Kubernetes

My current setup involves Helm charts and Kubernetes.
I have a requirement where I have to replace a property in the configMap.yaml file with an environment variable declared in the deployment.yaml file.
Here is a section of my configMap.yaml which declares a property file:
data:
  rest.properties: |
    map.dirs=/data/maps
    catalog.dir=/data/catalog
    work.dir=/data/tmp
    map.file.extension={{ .Values.rest.mapFileExtension }}
    unload.time=1
    max.flow.threads=10
    max.map.threads=50
    trace.level=ERROR
    run.mode={{ .Values.runMode }}
    {{- if eq .Values.cache.redis.location "external" }}
    redis.host={{ .Values.cache.redis.host }}
    {{- else if eq .Values.cache.redis.location "internal" }}
    redis.host=localhost
    {{- end }}
    redis.port={{ .Values.cache.redis.port }}
    redis.stem={{ .Values.cache.redis.stem }}
    redis.database={{ .Values.cache.redis.database }}
    redis.logfile=redis.log
    redis.loglevel=notice
    exec.log.dir=/data/logs
    exec.log.file.count=5
    exec.log.file.size=100
    exec.log.level=all
    synchronous.timeout=300
    {{- if .Values.global.linkIntegration.enabled }}
    authentication.enabled=false
    authentication.server=https://{{ .Release.Name }}-product-design-server:443
    config.dir=/opt/runtime/config
    {{- end }}
    {{- if .Values.keycloak.enabled }}
    authentication.keycloak.enabled={{ .Values.keycloak.enabled }}
    authentication.keycloak.serverUrl={{ .Values.keycloak.serverUrl }}
    authentication.keycloak.realmId={{ .Values.keycloak.realmId }}
    authentication.keycloak.clientId={{ .Values.keycloak.clientId }}
    authentication.keycloak.clientSecret=${HIP_KEYCLOAK_CLIENT_SECRET}
    {{- end }}
I need to replace ${HIP_KEYCLOAK_CLIENT_SECRET}, which is defined in the deployment.yaml file as shown below:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    env:
      {{- if .Values.keycloak.enabled }}
      - name: HIP_KEYCLOAK_CLIENT_SECRET
        valueFrom:
          secretKeyRef:
            name: {{ .Values.keycloak.secret }}
            key: clientSecret
      {{ end }}
The idea is to have the property file in the deployed pod under /opt/runtime/rest.properties.
Here is my complete deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "lnk-service.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "lnk-service.name" . }}
    helm.sh/chart: {{ include "lnk-service.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "lnk-service.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "lnk-service.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      {{- if .Values.global.hImagePullSecret }}
      imagePullSecrets:
        - name: {{ .Values.global.hImagePullSecret }}
      {{- end }}
      securityContext:
        runAsUser: 998
        runAsGroup: 997
        fsGroup: 997
      volumes:
        - name: configuration
          configMap:
            name: {{ include "lnk-service.fullname" . }}-server-config
        - name: core-configuration
          configMap:
            name: {{ include "lnk-service.fullname" . }}-server-core-config
        - name: hch-configuration
          configMap:
            name: {{ include "lnk-service.fullname" . }}-hch-config
        - name: data
          {{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            {{- if .Values.global.linkIntegration.enabled }}
            claimName: lnk-shared-px
            {{- else }}
            claimName: {{ include "pvc.name" . }}
            {{- end }}
          {{- else }}
          emptyDir: {}
          {{- end }}
        - name: hch-data
          {{- if .Values.global.linkIntegration.enabled }}
          persistentVolumeClaim:
            claimName: {{ include "unicapvc.fullname" . }}
          {{- else }}
          emptyDir: {}
          {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.global.hImageRegistry }}/{{ include "image.runtime.repo" . }}:{{ .Values.image.tag }}"
          #command: ['/bin/sh']
          #args: ['-c', 'echo $HIP_KEYCLOAK_CLIENT_SECRET']
          #command: [ "/bin/sh", "-c", "export" ]
          #command: [ "/bin/sh", "-ce", "export" ]
          command: [ "/bin/sh", "-c", "export --;trap : TERM INT; sleep infinity & wait" ]
          #command: ['sh', '-c', 'sed -i "s/REPLACEME/$HIP_KEYCLOAK_CLIENT_SECRET/g" /opt/runtime/rest.properties']
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: "HIP_CLOUD_LICENSE_SERVER_URL"
              value: {{ include "license.url" . | quote }}
            - name: "HIP_CLOUD_LICENSE_SERVER_ID"
              value: {{ include "license.id" . | quote }}
            {{- if .Values.keycloak.enabled }}
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
            {{ end }}
          envFrom:
            - configMapRef:
                name: {{ include "lnk-service.fullname" . }}-server-env
            {{- if .Values.rest.extraEnvConfigMap }}
            - configMapRef:
                name: {{ .Values.rest.extraEnvConfigMap }}
            {{- end }}
            {{- if .Values.rest.extraEnvSecret }}
            - secretRef:
                name: {{ .Values.rest.extraEnvSecret }}
            {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.image.port }}
              protocol: TCP
            - name: https
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: configuration
              mountPath: /opt/runtime/rest.properties
              subPath: rest.properties
          resources:
{{ toYaml .Values.resources | indent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}
I have tried init containers and replacing the string in rest.properties, which works, but it involves creating volumes with emptyDir.
Can someone kindly tell me if there is a simpler way to do this?
confd will give you a solution: you can tell it to watch the file from the ConfigMap and replace the environment variables the file expects with the env values that have been set.
Change your ConfigMap to create the file rest.properties.template.
Use an InitContainer that runs cat rest.properties.template | envsubst > rest.properties. The InitContainer can use any Docker image that includes envsubst.
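A minimal sketch of that approach, with hypothetical names for the rendered volume, the init container and its image (any image that ships envsubst from gettext will do): the ConfigMap provides rest.properties.template, the init container substitutes the variables, and the result is written to a shared emptyDir that the main container mounts at /opt/runtime/rest.properties.

      volumes:
        - name: rest-properties-template
          configMap:
            name: {{ include "lnk-service.fullname" . }}-server-config
        - name: rendered-config
          emptyDir: {}
      initContainers:
        - name: render-rest-properties
          image: registry.example.com/envsubst:latest   # placeholder: any image providing envsubst
          env:
            - name: HIP_KEYCLOAK_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.keycloak.secret }}
                  key: clientSecret
          command:
            - /bin/sh
            - -c
            - envsubst < /config/rest.properties.template > /out/rest.properties
          volumeMounts:
            - name: rest-properties-template
              mountPath: /config
            - name: rendered-config
              mountPath: /out
      containers:
        - name: {{ .Chart.Name }}
          volumeMounts:
            - name: rendered-config
              mountPath: /opt/runtime/rest.properties
              subPath: rest.properties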
Thanks for the responses.
Solution 1: use init containers.
Solution 2: change the code to read the value from environment variables.
We chose Solution 2.
Thank you all for your responses.

Issues in Deployment.yaml file

I get an error in my Deployment.yaml file. I have defined env entries in this file and assigned their values in the values file. I get a syntax error in this file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/name: {{ include "name" . }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/name: {{ include "name" . }}
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/name: {{ include "name" . }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          resources: {}
          env:
            - name: MONGODB_ADDRESS
              value: {{ .Values.mongodb.db.address }}
            - name: MONGODB
              value: "akira-article"
            - name: MONGODB_USER
              value: {{ .Values.mongodb.db.user | quote }}
            - name: MONGODB_PASS
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: mongodb-password
            - name: MONGODB_AUTH_DB
              value: {{ .Values.mongodb.db.name | quote }}
            - name: DAKEN_USERID
              value: {{ .Values.mongodb.db.userId | quote }}
            - name: DAKEN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: daken-pass
            - name: JWT_PRIVATE_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "name" . }}
                  key: jwt-Privat-Key
            - name: WEBSITE_NAME
              value: {{ .Values.website.Name }}
            - name: WEBSITE_SHORT_NAME
              value: {{ .Values.website.shortName }}
            - name: AKIRA_HTTP_PORT
              value: {{ .Values.website.port }}
          ports:
            - containerPort: {{ .Values.service.port }}
I got this error:
Error: Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container:
v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects "
or n, but found 8, error found in #10 byte of
...|,"value":8080}],"ima|..., bigger context
...|,"value":"AA"},{"name":"AKIRA_HTTP_PORT","value":8080}],"image":"dr.xenon.team/websites/akira-fronte|...
The answer to your problem is in the Helm documentation: QUOTE STRINGS, DON'T QUOTE INTEGERS.
When you are working with string data, you are always safer quoting the strings than leaving them as bare words:
name: {{ .Values.MyName | quote }}
But when working with integers do not quote the values. That can, in many cases, cause parsing errors inside of Kubernetes.
port: {{ .Values.Port }}
This remark does not apply to env variable values, which are expected to be strings even if they represent integers:
env:
  - name: HOST
    value: "http://host"
  - name: PORT
    value: "1234"
I'm assuming you have put the port value of AKIRA_HTTP_PORT inside quotes; that's why you are getting the error.
You can read the docs about Template Functions and Pipelines.
With AKIRA_HTTP_PORT: "8080" in values.yaml, in the env variables write:
env:
  - name: AKIRA_HTTP_PORT
    value: {{ .Values.website.port | quote }}
It should work.
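Putting it together: keep the value as a string in values.yaml, or pipe it through quote in the template, so the rendered manifest contains a quoted string, which is what the EnvVar value field expects. Assuming the values layout from the question:

# values.yaml
website:
  port: "8080"

# template
- name: AKIRA_HTTP_PORT
  value: {{ .Values.website.port | quote }}

# rendered
- name: AKIRA_HTTP_PORT
  value: "8080"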