How to properly use the controller.affinity parameter? - kubernetes-helm

Does anyone know how to properly use the controller.affinity parameter? The docs list only the parameter name, without a type or any example: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/
I was trying to use it the same way I did for tolerations:
--set controller.affinity[0].key=key_here \
--set controller.affinity[0].operator=In \
--set controller.affinity[0].values[0]=value_here
and got a validation error: expected map, instead of array.
My next few tries:
--set controller.affinity[0].key=key_here \
--set controller.affinity[0].operator=In \
--set controller.affinity[0].value=value_here
validation error expected map instead of array
--set controller.affinity.key=key_here \
--set controller.affinity.operator=In \
--set controller.affinity.values=value_here
error validating data: [ValidationError(Deployment.spec.template.spec.affinity): unknown field "key" in io.k8s.api.core.v1.Affinity
Any idea how to properly use it?
For comparison, controller.tolerations worked on the first try:
--set controller.tolerations[0].key=taint \
--set controller.tolerations[0].value=node_taint_here \
--set controller.tolerations[0].effect=NoSchedule

There is no "key" field directly under affinity; you can easily check that with
kubectl explain deployment.spec.template.spec.affinity
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: affinity <Object>

DESCRIPTION:
     If specified, the pod's scheduling constraints
     Affinity is a group of affinity scheduling rules.

FIELDS:
   nodeAffinity <Object>
     Describes node affinity scheduling rules for the pod.

   podAffinity <Object>
     Describes pod affinity scheduling rules (e.g. co-locate this pod in the
     same node, zone, etc. as some other pod(s)).

   podAntiAffinity <Object>
     Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod
     in the same node, zone, etc. as some other pod(s)).
So if you want to add nodeAffinity, following the official docs https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity , you can specify the full match expression (key, operator, and values) like this:
--set 'controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key=topology.kubernetes.io/zone' \
--set 'controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].operator=In' \
--set 'controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[0]=antarctica-east1' \
--set 'controller.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].values[1]=antarctica-west1'
As you can see, this can be quite tedious, and generally you should use (or generate) a values file, like:
controller:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - antarctica-east1
                  - antarctica-west1
and pass it with -f values.yaml instead of adding four big --set arguments.
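If you want to keep everything scriptable, the values file can be generated inline with a heredoc and then passed to helm with -f (a sketch; the file name and zone values are placeholders taken from the Kubernetes docs example):

```shell
# Generate the affinity values file; it can then be passed with -f.
cat > affinity-values.yaml <<'EOF'
controller:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                  - antarctica-east1
                  - antarctica-west1
EOF
# Sanity check: both zone values made it into the file.
grep -c 'antarctica' affinity-values.yaml
```

Then install with something like helm install my-release <chart> -f affinity-values.yaml (release and chart names are placeholders here).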

Related

Syntax for passing podAnnotations map to Istioctl installer

I'm trying to add some additional annotations to the pods that are created by the istiod deployment.
I'm using the istioctl install documentation, which suggests that I can use the podAnnotations field from the istio operator documentation, but I can't see how to structure the argument correctly.
The docs say it is of type map<string, string>. How do you express that?
I've tried a few variations, e.g.:
./istioctl install --set profile=minimal --set components.pilot.k8s.hpaSpec.minReplicas=2 --set components.pilot.k8s.podAnnotations={"foo":"bar"} -y
I ended up solving this using an IstioOperator resource, which matches what Chris suggested in his comment.
To achieve the desired effect I used a resource which looks like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  components:
    pilot:
      k8s:
        podAnnotations:
          co.elastic.logs/enabled: "true"
          co.elastic.logs/processors.dissect.field: "message"
          co.elastic.logs/processors.dissect.tokenizer: "%{#timestamp} %{level} %{message}"
          co.elastic.logs/processors.dissect.overwrite_keys: "true"
          co.elastic.logs/processors.dissect.target_prefix: ""
        hpaSpec:
          minReplicas: 2
This file is passed to the installer on the command line:
istioctl install -f ${path.root}/k8s/istio/istioctl-config.yaml -y
This seems to be the way to provide podAnnotations. It also matches the recommendation made in the docs themselves to avoid using --set arguments on the install command in production.

Helm - documentation specification for very unusual and not regular Kubernetes YAML format

On the Microsoft docs I found that I should define this Helm YAML file for creating a Kubernetes ingress controller:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
So you can easily notice it DOES NOT have the usual Kubernetes apiVersion and kind specification.
And after that, the same link says I need to execute this helm command to create the ingress:
helm install nginx-ingress ingress-nginx/ingress-nginx \
-f internal-ingress.yaml \
..............
As you can see, the suggested Helm file is not very usual, but I would like to stick to those official Microsoft instructions!
Again, it does not have a specification for creating the ingress controller using the regular apiVersion and kind notation like in many links and examples that can be found on the internet:
https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
Can I set custom ports for a Kubernetes ingress to listen on besides 80 / 443?
So I feel really confused! Can you please help me with this? I need to set its port, but I really could not find specification documentation or examples for this Microsoft YAML example, which actually works!
Where and how can I find the correct syntax for "translating" the regular "apiVersion" and "kind" specification into this?
Why are they creating confusion this way with this different format?
Please help! Thanks
The -f does not set a "manifest" as stated in the Microsoft docs. As per helm install --help:
-f, --values strings   specify values in a YAML file or a URL (can specify multiple)
The default values file contains the values to be passed into the chart.
User-supplied values with -f are merged with the default value files to generate the final manifest. The precedence order is:
1. The values.yaml file in the chart
2. If this is a subchart, the values.yaml file of a parent chart
3. A values file if passed into helm install or helm upgrade with the -f flag (helm install -f myvals.yaml ./mychart)
4. Individual parameters passed with --set (such as helm install --set foo=bar ./mychart)
The list above is in order of specificity: values.yaml is the default, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
What you are doing is overriding the controller value on top of the default values file. You can find the original/default values for the ingress-nginx chart here.
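That precedence can be illustrated with a toy shell model (this is not Helm's real merge code, just the "later source wins" idea for a single key):

```shell
# Toy model of Helm value precedence for one key:
# each later source overrides the previous one, if it sets the key at all.
chart_default="LoadBalancer"   # chart's values.yaml
parent_value="NodePort"        # parent chart's values.yaml
user_file=""                   # a -f file that did not set this key
set_flag="ClusterIP"           # --set on the command line

result="$chart_default"
[ -n "$parent_value" ] && result="$parent_value"
[ -n "$user_file" ] && result="$user_file"
[ -n "$set_flag" ] && result="$set_flag"
echo "$result"
```

Here the --set value wins because it is the most specific source that actually set the key; the empty -f entry is simply skipped.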

Provide nodeSelector to nginx ingress using helm

I spent some time looking into how to pass the parameters to helm in order to configure the nodeSelector properly.
Different tries led to different errors like:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.nodeSelector.kubernetes): invalid type for io.k8s.api.core.v1.PodSpec.nodeSelector: got "map", expected "string"
coalesce.go:196: warning: cannot overwrite table with non table for nodeSelector (map[])
Reference: https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
In the link above, we can see how it should be used:
helm install nginx-ingress stable/nginx-ingress \
--namespace $NAMESPACE \
--set controller.replicaCount=1 \
--set controller.nodeSelector."kubernetes\.io/hostname"=$LOADBALANCER_NODE \
--set controller.service.loadBalancerIP="$LOADBALANCER_IP" \
--set controller.extraArgs.default-ssl-certificate="$NAMESPACE/$LOADBALANCER_NODE-ssl"
In general, the Helm docs are a good source to look into: https://helm.sh/docs/intro/using_helm/#the-format-and-limitations-of---set
Here you can find all the nginx parameters: https://github.com/helm/charts/tree/master/stable/nginx-ingress
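The easy-to-miss part in that command is the escaping: a dot inside a map key (like kubernetes.io/hostname) must be backslash-escaped, otherwise --set parses each dot as a nesting level. A quick local illustration of the flag string being built, with a hypothetical node name:

```shell
# The backslash keeps "kubernetes.io/hostname" together as ONE map key;
# without it, --set would split the key on every dot.
node="aks-nodepool1-12345"   # hypothetical node name
flag="controller.nodeSelector.\"kubernetes\\.io/hostname\"=${node}"
echo "--set ${flag}"
```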

Set extraEnvs ingress variable

I'm trying to use the extraEnvs property in order to add additional environment variables to set in the pod using Helm charts.
I have a values.yaml file that includes:
controller:
  service:
    loadBalancerIP:
  extraEnvs:
    - name: ev1
      value:
    - name: ev2
      value:
First I set the loadBalancerIP the following way:
helm template path/charts/ingress --set nginx-ingress.controller.service.loadBalancerIP=1.1.1.1 --output-dir .
In order to set extraEnvs values I've tried to use the same logic by doing:
helm template path/charts/ingress --set nginx-ingress.controller.service.loadBalancerIP=1.1.1.1 --set nginx-ingress.controller.extraEnvs[0].value=valueEnv1 --set nginx.controller.extraEnvs[1].value=valueEnv2 --output-dir .
But it doesn't work. I looked for the right way to set those variables but couldn't find anything.
Helm --set has some limitations.
Your best option is to avoid using the --set, and use the --values flag with your values.yaml file instead:
helm template path/charts/ingress \
--values=values.yaml
If you want to use --set anyway, the equivalent command should have this notation:
helm template path/charts/ingress \
--set=controller.service.loadBalancerIP=1.1.1.1 \
--set=controller.extraEnvs[0].name=ev1,controller.extraEnvs[0].value=valueEnv1 \
--set=controller.extraEnvs[1].name=ev2,controller.extraEnvs[1].value=valueEnv2 \
--output-dir .
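For reference, a values file equivalent to those --set flags could look like the fragment below (whether these keys sit at the top level or under a nginx-ingress: subchart block depends on your chart layout):

```yaml
controller:
  service:
    loadBalancerIP: 1.1.1.1
  extraEnvs:
    - name: ev1
      value: valueEnv1
    - name: ev2
      value: valueEnv2
```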

Helm [stable/nginx-ingress] Getting issue while passing headers

Version of Helm and Kubernetes: Client: &version.Version{SemVer:"v2.14.1" and 1.13.7-gke.24
Which chart: stable/nginx-ingress [v0.24.1]
What happened: Trying to override headers using --set-string, but it does not work as expected. It always gives issues with the parsing:
/usr/sbin/helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
Error: release cx-nginx-1 failed: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|","data":["\"X-Diffe|..., bigger context ...|{"apiVersion":"v1","data":["\"X-Different-Name\":\"true\"","\"X-Request-Start|...
What you expected to happen: I want to override the headers which are there by default in values.yaml with custom headers.
How to reproduce it (as minimally and precisely as possible):
I have provided the command to reproduce:
helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
I tried to run in debug mode (--dry-run --debug), It shows me configmap like below,
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1
    component: "cx-nginx-1"
    heritage: Tiller
    release: foiled-coral
  name: foiled-coral-nginx-ingress-custom-headers
  namespace: cx-ingress
data:
  - X-Different-Name:true
  - X-Request-Start:test-header
  - X-Using-Nginx-Controller:true
It seems like it's rendering data as a list (indent 4 with dashes) instead of a map (indent 2 with key/value pairs). I'm also getting the warning below:
Warning: Merging destination map for chart 'nginx-ingress'. Cannot overwrite table item 'headers', with non table value: map[X-Different-Name:true X-Request-Start:test-header X-Using-Nginx-Controller:true]
Kindly help me to pass the headers in the right way.
Note: controller.headers is deprecated; make sure to use controller.proxySetHeaders instead.
Helm --set has some limitations.
Your best option is to avoid using --set and use the --values flag instead.
You can declare all your custom values in a file like this:
# values.yaml
controller:
  name: "cx-nginx-1"
  kind: "Deployment"
  service:
    loadBalancerIP: ""
  metrics:
    enabled: true
  proxySetHeaders:
    X-Different-Name: "true"
    X-Request-Start: "true"
    X-Using-Nginx-Controller: "true"
Then use it on install:
helm install --name cx-nginx-1 stable/nginx-ingress \
--values=values.yaml
If you want to use --set anyway, you should use this notation:
helm install --name cx-nginx-1 stable/nginx-ingress \
--set controller.name=cx-nginx-1 \
--set controller.kind=Deployment \
--set controller.service.loadBalancerIP= \
--set controller.metrics.enabled=true \
--set-string controller.proxySetHeaders.X-Different-Name="true" \
--set-string controller.proxySetHeaders.X-Request-Start="true" \
--set-string controller.proxySetHeaders.X-Using-Nginx-Controller="true"
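With proxySetHeaders given as a map, the rendered custom-headers ConfigMap data should come out as plain key/value pairs instead of the list shown in the question, roughly like this (a sketch; the generated names vary per release):

```yaml
data:
  X-Different-Name: "true"
  X-Request-Start: "true"
  X-Using-Nginx-Controller: "true"
```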