helm deployments over kustomize - kubernetes

Say, for example, I have rendered all the manifests of a tool called cert-manager and deployed them to a Kubernetes environment using kustomize. If I now want to upgrade cert-manager through helm, how do I do that?

You can just run:
helm repo add jetstack https://charts.jetstack.io
Adjust any variable or value you would like to change, then run:
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.7.1 \
  --set installCRDs=true  # optional: omit if you manage the CRDs yourself
Make sure the Helm release uses the same Deployment/StatefulSet names, so the existing Pods get rolled out accordingly.
I think cert-manager does not use any PVC or PV, so there is nothing to worry about regarding ReadWriteMany.
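One caveat: resources applied with kustomize carry no Helm ownership metadata, so helm install may refuse to take them over. A minimal sketch of how Helm 3.2+ can adopt an existing resource, assuming a release named cert-manager in the cert-manager namespace (the Deployment name is illustrative; repeat for each resource the chart renders):
kubectl -n cert-manager annotate deployment cert-manager --overwrite \
  meta.helm.sh/release-name=cert-manager \
  meta.helm.sh/release-namespace=cert-manager
kubectl -n cert-manager label deployment cert-manager --overwrite \
  app.kubernetes.io/managed-by=Helm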
Ref doc : https://cert-manager.io/docs/installation/helm/

Related

Create namespace and secret, do patch only if not existing

In my CI I'm running a helm upgrade command to release an app.
But if the app does not exist yet, I have to create the namespace and a secret, and patch the serviceaccount. So I came up with this:
kubectl create namespace ${namespace} --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret docker-registry gitlab-registry --namespace ${namespace} --docker-server="${CI_REGISTRY}" --docker-username="${CI_DEPLOY_USER}" --docker-password="${CI_DEPLOY_PASSWORD}" --docker-email="${GITLAB_USER_EMAIL}" -o yaml --dry-run=client | kubectl apply -f -
kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"gitlab-registry"}]}' --namespace ${namespace}
This works, but I don't think it is the ideal way, as these three steps should only run once, i.e. only if the app/namespace/secret does not already exist.
Helm provides the --create-namespace switch that will create the namespace of the release if it does not already exist.
The secret can be added to your helm chart, and you can pass the variables (CI_REGISTRY, CI_DEPLOY_USER, etc.) in as chart values, either with --set or via a values file and --values; see the sketch after this answer.
The serviceaccount patching can be done as a post-install and/or post-upgrade job (https://helm.sh/docs/topics/charts_hooks/)
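A minimal sketch of such a secret template (e.g. templates/registry-secret.yaml; the registry.* value names are assumptions, and the .dockerconfigjson construction follows the pattern shown in the Helm and Kubernetes docs):
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-registry
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .Values.registry.server .Values.registry.username .Values.registry.password .Values.registry.email (printf "%s:%s" .Values.registry.username .Values.registry.password | b64enc) | b64enc }}
The CI variables are then passed at deploy time (release and chart names here are placeholders):
helm upgrade --install myapp ./chart --namespace ${namespace} --create-namespace \
  --set registry.server="${CI_REGISTRY}" \
  --set registry.username="${CI_DEPLOY_USER}" \
  --set registry.password="${CI_DEPLOY_PASSWORD}" \
  --set registry.email="${GITLAB_USER_EMAIL}"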

Istio installation failed with private docker registry

Bug description
Installation times out, and kubectl get pods -n istio-system shows ImagePullBackOff.
kubectl describe pod istiod-xxx-xxx -n istio-system
Failed to pull image "our-registry:5000/pilot:1.10.3": rpc error: code = Unknown desc = Error response from daemon: Head https://our-registry:5000/v2/pilot/manifests/1.10.3: no basic auth credentials
Affected product area (please put an X in all that apply)
[x] Installation
Expected behavior
Successful installation with istioctl install --set profile=demo --set hub=our-registry:5000
Steps to reproduce the bug
Create istio-system namespace.
Set docker-registry user credentials for istio-system namespace.
istioctl manifest generate --set profile=demo --set hub=our-registry:5000 > new-generated-manifest.yaml
Verify it has proper images with our-registry:5000
Pull and push required images to our-registry:5000
istioctl install --set profile=demo --set hub=our-registry:5000
Version
Kubernetes : v1.21
Istio : 1.10.3 / 1.7.3
How was Istio installed?
istioctl install --set profile=demo --set hub=our-registry:5000
[References]
1. Tried to set up imagePullSecrets as described here, but it gives a JSON object error.
2. Here it describes using it in charts, but I don't know how they applied it.
Originally posted as an issue.
There are two ways to circumvent this issue.
If installing with istioctl install
When using istioctl install, provide a secret with the docker-registry auth details via --set values.global.imagePullSecrets, like this:
istioctl install [other options] --set values.global.imagePullSecrets[0]=<auth-secret>
Where <auth-secret> is the secret created prior on the cluster.
You can read more about using secrets with docker repository here
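The secret itself can be created up front with kubectl; a sketch using the registry from the bug report (credentials are placeholders):
kubectl create secret docker-registry <auth-secret> \
  --namespace istio-system \
  --docker-server=our-registry:5000 \
  --docker-username=<user> \
  --docker-password=<password>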
If installing using the Istio operator
When installing Istio from a private registry with the operator, you have to pass the proper YAML:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
...
spec:
  profile: demo # as an example
  values:
    global:
      imagePullSecrets:
        - <auth-secret>
...
Again, <auth-secret> must be created prior.
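As a usage sketch (the filename is hypothetical; the operator controller itself can be installed with istioctl operator init if it isn't already):
istioctl operator init
kubectl apply -f istio-operator.yaml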

Error: error installing: the server could not find the requested resource HELM Kubernetes

What I Did:
I installed Helm with
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
helm init --history-max 200
Getting an error:
$HELM_HOME has been configured at /root/.helm.
Error: error installing: the server could not find the requested resource
what does that error mean?
How should I install Helm and tiller?
Ubuntu version: 18.04
Kubernetes version: 1.16
Helm version:
helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
Update:
I tried #shawndodo's answer, but tiller was still not installed:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
Update 2:
helm init --history-max 200 works in Kubernetes version 1.15.
I met the same problem, then found this reply here:
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
It works for me. You can see the detail in this issue.
Unfortunately, Helm does not work with the current version of Kubernetes (1.16.0), as we can see in issue #6374.
For now, we can work around the incompatibility by selecting an older version of Kubernetes.
Starting minikube with a previous Kubernetes version
To solve this issue, start minikube with the version set via the --kubernetes-version param (Ref.):
minikube delete
minikube start --kubernetes-version=1.15.4
Try re-initializing Helm too with the following command:
helm init
After that, you will be able to use Helm without problems.
So tiller is the server-side component that your helm client talks to (tiller is due to be removed in Helm 3 because of various security issues). When you run helm init, the helm client installs tiller on the cluster that your kubectl is currently set up to connect to. Keep in mind that to install tiller you need admin access to the cluster, as tiller needs cluster-wide admin access. However, there are several strategies for working with tiller:
tiller per namespace: install tiller in a single namespace and only give it access to that namespace (vastly more secure than giving it cluster-wide admin); you can find an article on how to do this here
tillerless: run tiller locally; export HELM_HOST to point at this tiller, and tiller will use the kube config configured at KUBECONFIG; more information here, and a sketch below
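A minimal sketch of the tillerless setup, assuming the tiller binary is installed locally (it listens on localhost:44134 by default):
export KUBECONFIG=~/.kube/config   # cluster that tiller should talk to
tiller --storage=secret &          # run tiller locally, storing release data in secrets
export HELM_HOST=localhost:44134   # point the helm client at the local tiller
helm ls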
I ran into the same issue - exactly the same configuration as initial question:
Ubuntu version: 18.04
Kubernetes version: 1.16
#shawndodo's answer didn't work for me. There were some issues with the tiller deployment and the tiller pod was not getting created at all!
I tried installing from the canary build as described in the Helm docs - https://helm.sh/docs/using_helm/#from-canary-builds
helm init --canary-image --upgrade
This didn't work a couple of days ago, but I tried again (with a newer canary build) and it worked today (20191005).
Whether I run into other issues using the canary build remains to be seen, but I got past the initialisation issue...
I tried all the suggestions about changing the API version manually to fix this issue; that got rid of the errors, but things didn't work properly afterwards. So in my case I removed my latest minikube installation and installed an old one on my Mac using the command below (change minikube-darwin-amd64 to minikube-linux-amd64 if needed):
curl -LO https://storage.googleapis.com/minikube/releases/v1.3.0/minikube-darwin-amd64 \
&& sudo install minikube-darwin-amd64 /usr/local/bin/minikube
This downgraded my kubernetes to v1.15.2 which helm currently supports.
kubectl version: v1.16.0
helm version: v2.14.3
minikube start --memory=16384 --cpus=4
helm init --service-account tiller --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | sed 's# replicas: 1# replicas: 1\n selector: {"matchLabels": {"app": "helm", "name": "tiller"}}#' | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
helm template istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system | kubectl apply -f -
We need to have tiller installed in the cluster before we start using helm. The helm init command installs tiller in the cluster, and we also need RBAC configured in the cluster for tiller. Here you'll find the RBAC rules required, depending on your needs, for your k8s cluster; a minimal sketch follows.
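A minimal sketch of the usual cluster-admin setup from the Helm v2 docs (fine for a dev cluster; scope the role down for production):
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller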
Try
apt-get upgrade helm
In my case it worked.

How to support node selection when using helm install

Using helm 2.7.3. New to helm and kubernetes. I have two worker nodes, and I want to deploy to a specific node. I've assigned unique labels to each node and added nodeSelector to deployment.yaml. When I run helm install, it appears to ignore the node selection and deploys randomly to either of the two worker nodes.
Would like to understand the best approach to node selection when deploying with helm.
You can use something like this:
helm install --name elasticsearch elastic/elasticsearch --set \
nodeSelector."beta\\.kubernetes\\.io/os"=linux
Note the escaped . characters! Hope this helps.
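For reference, once the escaping is resolved, that --set produces a plain nodeSelector map in the rendered values:
nodeSelector:
  beta.kubernetes.io/os: linux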
See the example:
kubectl label nodes <your desired node> databases=mysql --overwrite
Check the label:
kubectl get nodes --show-labels
Run the following command:
helm create test-chart && cd test-chart
helm install . --set nodeSelector.databases=mysql
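For the nodeSelector value to have any effect, the chart's templates/deployment.yaml must reference it; the scaffold generated by helm create contains a block along these lines (exact form varies by Helm version):
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}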
In an Ansible task:
- name: install etcd middleware
  command:
    chdir: /var/lib/kube/controlpanel/component
    cmd: "{{tools.helm.path}} upgrade etcd ./etcd --install --namespace=middleware --set replicaCount=3 --set nodeSelector.\"xxx\\.yyy\\.local/node-role-middleware\"="
You can use this example (note that nodeSelector expects a label key and value, e.g. the node's kubernetes.io/hostname label, not a bare node name):
helm upgrade --install airflow apache-airflow/airflow --namespace airflow --create-namespace --set "nodeSelector.kubernetes\.io/hostname=your-worker-node-name"
--set allows setting the required parameters.
The following link provides the supported parameters of the mentioned example:
https://airflow.apache.org/docs/helm-chart/stable/parameters-ref.html#kubernetes
You can set many parameters by repeating --set; when the same key is set twice, the last (right-most) value wins. Look at the link: https://helm.sh/docs/helm/helm_install/. This is the example from the docs:
helm install --set foo=bar --set foo=newbar myredis ./redis
Just check that the nodeSelector parameter is supported by your chart.
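One way to check (Helm 3 syntax; the Helm 2 equivalent is helm inspect values):
helm show values apache-airflow/airflow | grep -A2 nodeSelector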

Enable Istio in fission

I have a Kubernetes (v1.10) cluster with Istio installed. I'm trying to install fission following the Enabling Istio on Fission guide. When I run
helm install --namespace $FISSION_NAMESPACE --set enableIstio=true --name istio-demo \
  https://github.com/fission/fission/releases/download/0.9.1/fission-all-0.9.1.tgz
it throws an error saying
Error: the server has asked for the client to provide credentials
(My cluster has two nodes and one master, created using kubespray; all Ubuntu 16.04 machines)
I think that error is probably an authentication failure between helm and the cluster. Are you able to run kubectl version? How about helm ls?
If you have follow up questions, could you ask them on the fission slack? You'll get quicker answers there.
I think the problem is with helm.
Solution
Remove the .helm folder:
rm -rf ~/.helm
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account=tiller
kubectl get pods -n kube-system
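Once the tiller pod is Running, retry the original install:
helm install --namespace $FISSION_NAMESPACE --set enableIstio=true --name istio-demo \
  https://github.com/fission/fission/releases/download/0.9.1/fission-all-0.9.1.tgz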