I spent some time looking into how to pass parameters to helm in order to configure the nodeSelector properly.
Different attempts led to different errors, such as:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.nodeSelector.kubernetes): invalid type for io.k8s.api.core.v1.PodSpec.nodeSelector: got "map", expected "string"
coalesce.go:196: warning: cannot overwrite table with non table for nodeSelector (map[])
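Those errors typically appear when Helm splits the label key on its dots and nests maps under nodeSelector; a reconstruction of the kind of command that produces them (not necessarily the exact one I ran):
helm install nginx-ingress stable/nginx-ingress \
--namespace $NAMESPACE \
--set controller.nodeSelector.kubernetes.io/hostname=$LOADBALANCER_NODE
Here the unescaped dots make Helm treat kubernetes and io/hostname as nested keys instead of one label key.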
Reference: https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
In the link above, we can see how it should be used:
helm install nginx-ingress stable/nginx-ingress \
--namespace $NAMESPACE \
--set controller.replicaCount=1 \
--set controller.nodeSelector."kubernetes\.io/hostname"=$LOADBALANCER_NODE \
--set controller.service.loadBalancerIP="$LOADBALANCER_IP" \
--set controller.extraArgs.default-ssl-certificate="$NAMESPACE/$LOADBALANCER_NODE-ssl"
In general, the Helm docs on --set are a good reference: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
All the nginx-ingress chart parameters are listed here: https://github.com/helm/charts/tree/master/stable/nginx-ingress
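If the escaped --set syntax feels awkward, the same nodeSelector can also go into a values file; a minimal sketch, with a placeholder hostname:
controller:
  nodeSelector:
    kubernetes.io/hostname: aks-nodepool1-00000000-0
and then be passed with -f values.yaml instead of --set.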
Related
Does anyone know how to properly use the controller.affinity parameter? The docs list only the parameter name, without a type or any example. https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/
I tried to use it the same way as I did for tolerations:
--set controller.affinity[0].key=key_here \
--set controller.affinity[0].operator=In \
--set controller.affinity[0].values[0]=value_here
This gives a validation error: expected map instead of array...
Next few tries:
--set controller.affinity[0].key=key_here \
--set controller.affinity[0].operator=In \
--set controller.affinity[0].value=value_here
Same validation error: expected map instead of array.
--set controller.affinity.key=key_here \
--set controller.affinity.operator=In \
--set controller.affinity.values=value_here
error validating data: [ValidationError(Deployment.spec.template.spec.affinity): unknown field "key" in io.k8s.api.core.v1.Affinity
Any idea how to properly use it?
For comparison, controller.tolerations worked on the first try:
--set controller.tolerations[0].key=taint \
--set controller.tolerations[0].value=node_taint_here \
--set controller.tolerations[0].effect=NoSchedule
There is no "key" field in affinity; you can easily check that with:
kubectl explain deployment.spec.template.spec.affinity
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: affinity <Object>

DESCRIPTION:
     If specified, the pod's scheduling constraints
     Affinity is a group of affinity scheduling rules.

FIELDS:
   nodeAffinity <Object>
     Describes node affinity scheduling rules for the pod.

   podAffinity <Object>
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).

   podAntiAffinity <Object>
     Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod
     in the same node, zone, etc. as some other pod(s)).
So if you want to add nodeAffinity, similar to the official docs https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity , you can specify the key like this:
--set 'controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key=topology.kubernetes.io/zone'
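For completeness, the matching operator and values would also have to be set; a sketch, reusing the values from that Kubernetes docs example:
--set 'controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator=In' \
--set 'controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0]=antarctica-east1'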
As you can see, this can get quite tedious, so generally you should use (or generate) a values file, like:
controller:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - antarctica-east1
            - antarctica-west1
and pass it with -f instead of adding four big --set arguments.
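For example, reusing the chart and namespace from the nginx-ingress example above (a sketch, not a complete command):
helm install nginx-ingress stable/nginx-ingress --namespace $NAMESPACE -f values.yaml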
I'm trying to add some additional annotations to the pods that are created by the istiod deployment.
I'm using the istioctl install documentation, which suggests that I can use the podAnnotations field from the istio operator documentation, but I can't see how to structure the argument correctly.
The docs say it is of type map<string, string>. How do you express that?
I've tried a few variations, e.g.:
./istioctl install --set profile=minimal --set components.pilot.k8s.hpaSpec.minReplicas=2 --set components.pilot.k8s.podAnnotations={"foo":"bar"} -y
I ended up solving this using an IstioOperator resource, which matches what Chris suggested in his comment.
To achieve the desired effect I used a resource which looks like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  components:
    pilot:
      k8s:
        podAnnotations:
          co.elastic.logs/enabled: "true"
          co.elastic.logs/processors.dissect.field: "message"
          co.elastic.logs/processors.dissect.tokenizer: "%{#timestamp} %{level} %{message}"
          co.elastic.logs/processors.dissect.overwrite_keys: "true"
          co.elastic.logs/processors.dissect.target_prefix: ""
        hpaSpec:
          minReplicas: 2
This file is passed to the installer on the command line:
istioctl install -f ${path.root}/k8s/istio/istioctl-config.yaml -y
This seems to be the way to provide podAnnotations. It also matches the recommendation made in the docs themselves to avoid using --set arguments on the install command in production.
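To confirm the annotations actually landed on the pods, something like this should work (assuming the default istio-system namespace and the app=istiod label):
kubectl get pods -n istio-system -l app=istiod -o jsonpath='{.items[0].metadata.annotations}'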
I am trying to install some helm charts into custom namespaces on a Microk8s cluster but get the following errors. What am I doing wrong? The cluster is fresh and none of these namespaces or resources exist.
> helm install loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki --create-namespace
W0902 08:08:35.878632 1610435 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Error: rendered manifests contain a resource that already exists. Unable to continue with install:
PodSecurityPolicy "loki-grafana" in namespace "" exists and cannot be imported into the current release:
invalid ownership metadata; annotation validation error:
key "meta.helm.sh/release-namespace" must equal "loki": current value is "default"
> helm install traefik traefik/traefik -f values/traefik.yml --namespace traefik --create-namespace
Error: rendered manifests contain a resource that already exists. Unable to continue with install:
ClusterRole "traefik" in namespace "" exists and cannot be imported into the current release:
invalid ownership metadata; annotation validation error:
key "meta.helm.sh/release-namespace" must equal "traefik": current value is "default"
The resources to be deployed already exist in the cluster and are owned by an earlier release (the error shows their meta.helm.sh/release-namespace annotation is "default").
Please check with helm list -n loki (and helm list -n default).
So before you install, you need to uninstall the old release first:
helm uninstall loki -n loki
helm install loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki
Or you can try to update it directly:
helm upgrade loki grafana/loki-stack -f values/loki-stack.yaml --namespace loki
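If helm list shows no release owning them, you can inspect the conflicting cluster-scoped resources directly and check their ownership annotations; a sketch using the resource names from the errors above:
kubectl get podsecuritypolicy loki-grafana -o jsonpath='{.metadata.annotations}'
kubectl get clusterrole traefik -o jsonpath='{.metadata.annotations}'
If they turn out to be leftovers from a deleted release, removing them lets the install proceed.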
I'm trying to use the extraEnvs property in order to add additional environment variables to the pod using Helm charts.
I have a values.yaml file that includes:
controller:
  service:
    loadBalancerIP:
  extraEnvs:
    - name: ev1
      value:
    - name: ev2
      value:
First, I set the loadBalancerIP the following way:
helm template path/charts/ingress --set nginx-ingress.controller.service.loadBalancerIP=1.1.1.1 --output-dir .
In order to set the extraEnvs values, I tried to use the same logic:
helm template path/charts/ingress --set nginx-ingress.controller.service.loadBalancerIP=1.1.1.1 --set nginx-ingress.controller.extraEnvs[0].value=valueEnv1 --set nginx-ingress.controller.extraEnvs[1].value=valueEnv2 --output-dir .
But it doesn't work. I looked for the right way to set those variables but couldn't find anything.
Helm --set has some limitations.
Your best option is to avoid using --set, and use the --values flag with your values.yaml file instead:
helm template path/charts/ingress \
--values=values.yaml
If you want to use --set anyway, the equivalent command should have this notation:
helm template path/charts/ingress \
--set=controller.service.loadBalancerIP=1.1.1.1 \
--set=controller.extraEnvs[0].name=ev1,controller.extraEnvs[0].value=valueEnv1 \
--set=controller.extraEnvs[1].name=ev2,controller.extraEnvs[1].value=valueEnv2 \
--output-dir .
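One thing to watch out for: in some shells (zsh in particular) the [0] index is expanded by the shell itself, so --set arguments containing brackets may need to be quoted, for example:
--set 'controller.extraEnvs[0].name=ev1' --set 'controller.extraEnvs[0].value=valueEnv1'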
Version of Helm and Kubernetes: Helm v2.14.1 and Kubernetes 1.13.7-gke.24
Which chart: stable/nginx-ingress [v0.24.1]
What happened: Trying to override headers using --set-string, but it does not work as expected. It always gives parsing issues:
/usr/sbin/helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
Error: release cx-nginx-1 failed: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|","data":["\"X-Diffe|..., bigger context ...|{"apiVersion":"v1","data":["\"X-Different-Name\":\"true\"","\"X-Request-Start|...
What you expected to happen: I want to override the headers that are there by default in values.yaml with custom headers.
How to reproduce it (as minimally and precisely as possible):
I have provided the command to reproduce:
helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
I tried to run in debug mode (--dry-run --debug); it shows me a ConfigMap like below:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1
    component: "cx-nginx-1"
    heritage: Tiller
    release: foiled-coral
  name: foiled-coral-nginx-ingress-custom-headers
  namespace: cx-ingress
data:
  - X-Different-Name:true
  - X-Request-Start:test-header
  - X-Using-Nginx-Controller:true
It seems like it's adding indent 4 (a list) instead of indent 2 (a map). I'm also getting the warning below:
Warning: Merging destination map for chart 'nginx-ingress'. Cannot overwrite table item 'headers', with non table value: map[X-Different-Name:true X-Request-Start:test-header X-Using-Nginx-Controller:true]
Kindly help me to pass the headers in the right way.
Note: controller.headers is deprecated; make sure to use controller.proxySetHeaders instead.
Helm --set has some limitations.
Your best option is to avoid using --set, and use the --values flag instead.
You can declare all your custom values in a file like this:
# values.yaml
controller:
  name: "cx-nginx-1"
  kind: "Deployment"
  service:
    loadBalancerIP: ""
  metrics:
    enabled: true
  proxySetHeaders:
    X-Different-Name: "true"
    X-Request-Start: "true"
    X-Using-Nginx-Controller: "true"
Then use it on install:
helm install --name cx-nginx-1 stable/nginx-ingress \
--values=values.yaml
If you want to use --set anyway, you should use this notation:
helm install --name cx-nginx-1 stable/nginx-ingress \
--set controller.name=cx-nginx-1 \
--set controller.kind=Deployment \
--set controller.service.loadBalancerIP= \
--set controller.metrics.enabled=true \
--set-string controller.proxySetHeaders.X-Different-Name="true" \
--set-string controller.proxySetHeaders.X-Request-Start="true" \
--set-string controller.proxySetHeaders.X-Using-Nginx-Controller="true"
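Either way, a --dry-run --debug render should now show the custom-headers ConfigMap data as a proper string map (indent 2) rather than a list, roughly:
data:
  X-Different-Name: "true"
  X-Request-Start: "true"
  X-Using-Nginx-Controller: "true"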