Helm - documentation/specification for an unusual, non-standard Kubernetes YAML format

On the Microsoft docs I found that I should define this Helm YAML file for creating a Kubernetes ingress controller:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
As you can easily notice, it DOES NOT have the usual Kubernetes apiVersion and kind specification.
And further on, the same page says that I need to execute this helm command to create the ingress controller:
helm install nginx-ingress ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  ..............
As you see, the suggested Helm file is not the usual format, but I would like to stick to those official Microsoft instructions!
Again, it does not use the regular apiVersion and kind notation for creating an ingress controller, like the many links and examples that can be found on the internet:
https://github.com/helm/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
Can I set custom ports for a Kubernetes ingress to listen on besides 80 / 443?
So I feel very confused! Can you please help me with this? I need to set its port, but I could not find specification documentation or examples for this one Microsoft YAML example, which actually works!
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Where and how can I find the correct syntax to "translate" the regular "apiVersion" and "kind" specification into this format?
Why do they create confusion with this different format?
Please help! Thanks

The -f does not set a "manifest" as stated in the Microsoft docs. As per helm install --help:
  -f, --values strings   specify values in a YAML file or a URL (can specify multiple)
The default values file contains the values to be passed into the chart.
User-supplied values with -f are merged with the default value files to generate the final manifest. The precedence order is:
1. The values.yaml file in the chart
2. If this is a subchart, the values.yaml file of a parent chart
3. A values file if passed into helm install or helm upgrade with the -f flag (helm install -f myvals.yaml ./mychart)
4. Individual parameters passed with --set (such as helm install --set foo=bar ./mychart)
The list above is in order of specificity: values.yaml is the default, which can be overridden by a parent chart's values.yaml, which can in turn be overridden by a user-supplied values file, which can in turn be overridden by --set parameters.
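For example, a minimal sketch of that precedence using a hypothetical replicaCount value:

# the chart's values.yaml defines:  replicaCount: 1
# myvals.yaml defines:              replicaCount: 2
helm install demo ./mychart -f myvals.yaml --set replicaCount=3
# the final rendered value is replicaCount=3: --set beats -f, which beats the chart default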
What you are doing is overriding the controller value on top of the default values file. You can find the original/default values for the ingress-nginx chart in its values.yaml here.
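As for the port question: the same override mechanism applies. Assuming the chart version you install still exposes controller.service.ports in its default values.yaml (check there first - these keys are an assumption about your chart version), a sketch of an extended internal-ingress.yaml could look like this:

controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    ports:
      http: 8080    # instead of the default 80
      https: 8443   # instead of the default 443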

Related

Syntax for passing podAnnotations map to Istioctl installer

I'm trying to add some additional annotations to the pods that are created by the istiod deployment.
I'm using the istioctl install documentation, which suggests that I can use the podAnnotations field from the istio operator documentation, but I can't see how to structure the argument correctly.
The docs say it is of type map<string, string>. How do you express that?
I've tried a few variations, e.g.
./istioctl install --set profile=minimal --set components.pilot.k8s.hpaSpec.minReplicas=2 --set components.pilot.k8s.podAnnotations={"foo":"bar"} -y
I ended up solving this using an IstioOperator resource, which matches what Chris suggested in his comment.
To achieve the desired effect I used a resource which looks like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  components:
    pilot:
      k8s:
        podAnnotations:
          co.elastic.logs/enabled: "true"
          co.elastic.logs/processors.dissect.field: "message"
          co.elastic.logs/processors.dissect.tokenizer: "%{#timestamp} %{level} %{message}"
          co.elastic.logs/processors.dissect.overwrite_keys: "true"
          co.elastic.logs/processors.dissect.target_prefix: ""
        hpaSpec:
          minReplicas: 2
This file is passed to the installer on the command line:
istioctl install -f ${path.root}/k8s/istio/istioctl-config.yaml -y
This seems to be the way to provide podAnnotations. It also matches the recommendation made in the docs themselves to avoid using --set arguments on the install command in production.
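To verify the annotations actually landed on the pods, a quick check (a sketch, assuming the default istio-system namespace and the usual app=istiod pod label):

# print the annotations of the first istiod pod
kubectl -n istio-system get pods -l app=istiod \
  -o jsonpath='{.items[0].metadata.annotations}'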

Reassign a value in helm chart

Following is the values.yaml in a Helm chart:
global:
  namespace: istio
chart-1:
  istioNamespace: istio
chart-2:
  targetNamespace: istio
Is there a way for istioNamespace and targetNamespace to refer to global.namespace?
Since this is YAML, you can make use of its anchors, aliases, and merge keys to re-use the values/data.
In your case, you could do something like this.
In a YAML document, you can refer to a previously defined anchor with an alias:
global:
  namespace: &ns "istio"
chart-1:
  istioNamespace: *ns
chart-2:
  targetNamespace: *ns
NOTE: if you try to override this with another values.yaml, it might not work as expected.
This is useful when you are doing it in the same YAML file.
Here is a reference link, and it is covered in the official docs as well.
I have tried a few of these in my day-to-day work with Helm charts, and they work pretty well with values.yaml.
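A Helm-specific alternative (a sketch, assuming you control the templates of chart-1 and chart-2): global values are visible to every subchart, so the templates can read the shared value directly instead of repeating it in values.yaml:

# in a template of chart-1 or chart-2
metadata:
  namespace: {{ .Values.global.namespace }}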

kube helm charts - multiple values files

It might be a simple question, but I can't find anywhere whether it's doable:
Is it possible to have two different values files for a Helm chart (stable/jenkins, let's say)?
I would like values_a.yaml to have some values like these:
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"
  ...
  password: {{ .Values.secrets.masterPassword }}
and in values_b.yaml - which will be encrypted with AWS KMS:
secrets:
  masterPassword: xxx
The above code doesn't work, and I wanted to know whether it can, since you can use those variables in kube manifests like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.config.name }}
  namespace: {{ .Values.config.namespace }}
...
Can they somehow be passed to other values files?
EDIT:
If it were possible, I would just put
master:
  password: xxx
in values_b.yaml, but vars cannot be duplicated, and the official Helm chart expects the master.password value from that file - so I have to somehow pass it there, but in an encrypted way.
I'm not quite sure but this feature of helm might help you.
Helm gives you the ability to pass a custom values.yaml whose fields take precedence over the fields of the chart's main values.yaml when performing helm install or helm upgrade.
For Helm 3
$ helm install <name> ./mychart -f myValues.yaml
For Helm 2
$ helm install --name <name> ./mychart --values myValues.yaml
The valid answer is from David Maze, in the comments on Kamol Hasan's response:
You can use multiple -f or --values options: helm install ... -f values_a.yaml -f values_b.yaml. But you can't use templating in any Helm values file unless the chart specifically supports it (using the tpl function).
If you use multiple -f options, later values files override earlier ones.
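So for this case, a sketch (assuming the stable/jenkins chart reads master.password, as the question states): drop the templating from values_a.yaml, put the secret directly into values_b.yaml, and let Helm deep-merge the two files - maps are merged key by key, so splitting master across both files is fine:

# values_a.yaml
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"

# values_b.yaml (decrypted by your KMS tooling just before install)
master:
  password: xxx

helm install jenkins stable/jenkins -f values_a.yaml -f values_b.yaml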

Register new istioctl profile

I want to install Istio with istioctl. My customizations will be numerous, so I don't think it would be helpful to use this syntax: istioctl manifest apply --set addonComponents.grafana.enabled=true
I figured that the best way to do it is to copy a profile from istio-1.5.1/install/kubernetes/operator/profiles
How can I add a custom profile to:
[root@localhost profiles]# istioctl profile list
Istio configuration profiles:
remote
separate
default
demo
empty
minimal
So that I can: istioctl manifest apply --set profile=mynewprofile
istioctl manifest apply --set addonComponents.grafana.enabled=true would translate to:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  addonComponents:
    grafana:
      enabled: true
Istio uses the CR IstioOperator to pass values for installation. All customization options listed here are available. Some examples can be found in their documentation under Installation Configuration Profiles and within their releases under install/kubernetes/operator/profiles.
You can make custom profiles by referencing a path to another CR. Let's say you turned the file above into a-new-profile.yaml.
istioctl manifest apply --set installPackagePath=< path to istio releases >/istio-1.5.1/install/kubernetes/operator/charts \
  --set profile=path/to/a-new-profile.yaml
After that, you would have your new profile set and ready to use. See their section install-from-external-charts for more information.
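A convenient way to bootstrap such a file (a sketch, assuming an istioctl version where the profile dump subcommand is available, e.g. 1.5.x) is to dump an existing profile and edit it:

istioctl profile dump default > a-new-profile.yaml
# edit a-new-profile.yaml, then point --set profile= at its path as shown above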

no matches for kind "Deployment" in version "extensions/v1beta1"

While deploying mojaloop, Kubernetes responds with the following errors:
Error: validation failed: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2", unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1"]
My Kubernetes version is 1.16.
How can I fix the problem with the API version?
From investigating, I have found that Kubernetes 1.16 doesn't support apps/v1beta2 or apps/v1beta1.
How can I make Kubernetes use a non-deprecated, supported version?
I am new to Kubernetes, so I would be happy for any support.
In Kubernetes 1.16 some APIs have been changed.
You can check which API group serves a given Kubernetes object using:
$ kubectl api-resources | grep deployment
deployments   deploy   apps   true   Deployment
This means that only an apiVersion from the apps group is correct for Deployments (extensions no longer serves Deployment). The same applies to StatefulSet.
You need to change Deployment and StatefulSet apiVersion to apiVersion: apps/v1.
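A minimal sketch of the corrected header (note that apps/v1 also requires spec.selector.matchLabels matching the pod template labels; the name, labels, and image below are placeholders):

apiVersion: apps/v1        # was: extensions/v1beta1 or apps/v1beta2
kind: Deployment
metadata:
  name: my-deployment      # placeholder
spec:
  selector:
    matchLabels:
      app: my-app          # must match the template labels below
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx       # placeholder image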
If this does not help, please add your YAML to the question.
EDIT
As the issue is caused by Helm templates that include old apiVersions for Deployments, which are not supported in version 1.16, there are 2 possible solutions:
1. git clone the whole repo and replace the apiVersion with apps/v1 in all templates/deployment.yaml files using a script (a sketch follows below).
2. Use an older version of Kubernetes (1.15), whose validator accepts extensions as an apiVersion for Deployment and StatefulSet.
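For option 1, one possible sketch of such a script (assuming GNU sed and that the affected manifests live under templates/):

# rewrite the removed apiVersions to apps/v1 across all chart templates
find . -path '*/templates/*.yaml' -exec \
  sed -i 's|extensions/v1beta1|apps/v1|g; s|apps/v1beta2|apps/v1|g; s|apps/v1beta1|apps/v1|g' {} +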
To convert an older Deployment to apps/v1, you can run:
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
As an alternative, you can make the change manually. Fetch the Helm chart:
helm fetch --untar stable/metabase
Access the chart folder:
cd ./metabase
Change API version:
sed -i 's|extensions/v1beta1|apps/v1|g' ./templates/deployment.yaml
Add spec.selector.matchLabels:
spec:
  [...]
  selector:
    matchLabels:
      app: {{ template "metabase.name" . }}
  [...]
Finally install your altered chart:
helm install ./ \
  -n metabase \
  --namespace metabase \
  --set ingress.enabled=true \
  --set ingress.hosts={metabase.$(minikube ip).nip.io}
Enjoy!
I prefer kubectl explain.
# kubectl explain deploy
KIND:     Deployment
VERSION:  apps/v1

DESCRIPTION:
     Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object metadata.

   spec <Object>
     Specification of the desired behavior of the Deployment.

   status       <Object>
     Most recently observed status of the Deployment.
With kubectl explain you can also see specific parameters of an object:
# kubectl explain Service.spec.externalTrafficPolicy
KIND:     Service
VERSION:  v1

FIELD:    externalTrafficPolicy <string>

DESCRIPTION:
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and Nodeport type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.
To put it simply, you don't force the current installation to use an outdated version of the API; you fix the version in your config files.
If you want to check which versions your current cluster supports, run:
root@ubn64:~# kubectl api-versions | grep -i apps
apps/v1
I was getting the below error:
error: unable to recognize "deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
The solution that worked for me: I modified the line apiVersion: extensions/v1beta1 to apiVersion: apps/v1 in deployment.yaml.
Reason: we had upgraded the K8s cluster, hence this error occurred.
This was annoying me because I am testing lots of Helm packages, so I wrote a quick script - which could perhaps be modified to suit your workflow.
see below
New workflow
First fetch the chart as a tgz to your working directory
helm fetch repo/chart
Then, in your working directory, run the bash script below - which I named helmk:
helmk myreleasename mynamespace chart.tgz [any parameters for kubectl create]
Contents of helmk - you will need to edit your kubeconfig cluster name for it to work:
#!/bin/bash
echo "usage: $0 releasename namespace chart.tgz [createparameter1] [createparameter2] ... [createparameter n]"
echo "This will switch to your namespace and then shift back to default, so be careful!!"
kubectl create namespace $2   # produces a harmless error if the namespace already exists - ignore it
kubectl config set-context MYCLUSTERNAME --namespace $2
helm template -n $1 --namespace $2 $3 | kubectl convert -f /dev/stdin | kubectl create --save-config=true ${@:4} -f /dev/stdin
# note: the --namespace parameter in helm template above seems to be ignored, so we have to manually switch context
kubectl config set-context MYCLUSTERNAME --namespace default
It's a slightly dangerous hack, since it manually switches to the new desired namespace context and then back again, so it should really only be used by single-user devs - or comment those lines out.
You will get a warning about using the kubectl convert facility when you run it like this.
If you need to edit the YAML to customise, just replace one of the /dev/stdin occurrences with intermediate files. But it's probably better to get it up using "create" with --save-config as I have, and then simply "apply" your changes, which means they will be recorded in Kubernetes too.
Good luck
I was facing the same issue on a cluster that was upgraded to a version (v1.17) that no longer serves certain API versions, such as apps/v1beta2.
$ helm get manifest some-deployment
...
# Source: some-deployment/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-deployment
  labels:
    ...
Looking at the helm docs, it seems that the manifest is stored in the cluster for helm to reference, and it may include invalid api versions, leading to errors.
The 2 proposed methods are to either manually edit the manifest (a rather tedious multi-stage process), or use a helm plugin called mapkubeapis that does it automatically.
$ helm plugin install https://github.com/helm/helm-mapkubeapis
It can be run with the --dry-run flag to simulate the effects:
$ helm mapkubeapis --dry-run some-deployment
2021/02/15 09:33:29 NOTE: This is in dry-run mode, the following actions will not be executed.
2021/02/15 09:33:29 Run without --dry-run to take the actions described below:
2021/02/15 09:33:29
2021/02/15 09:33:29 Release 'some-deployment' will be checked for deprecated or removed Kubernetes APIs and will be updated if necessary to supported API versions.
2021/02/15 09:33:29 Get release 'some-deployment' latest version.
2021/02/15 09:33:30 Check release 'some-deployment' for deprecated or removed APIs...
2021/02/15 09:33:30 Found deprecated or removed Kubernetes API:
"apiVersion: apps/v1beta2
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
2021/02/15 09:33:30 Finished checking release 'some-deployment' for deprecated or removed APIs.
2021/02/15 09:33:30 Deprecated or removed APIs exist, updating release: some-deployment.
2021/02/15 09:33:30 Map of release 'some-deployment' deprecated or removed APIs to supported versions, completed successfully.
Then run it without the flag to apply the changes:
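$ helm mapkubeapis some-deployment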