I want to know if helm has an option similar to kubectl config set-context --current --namespace test-namespace, so I wouldn't have to pass the namespace every time.
Helm is a tool for deploying templated Kubernetes resource packages (charts). Under the hood it uses Go templates and the same kubeconfig as kubectl, so outside of Helm's own "-n" argument it simply targets the current namespace of your kubeconfig context.
By default (i.e. without the "-n" option of the Helm CLI), the release will be deployed into the current namespace.
So you can skip this option if you are already in the right namespace; you can switch to it with one of the following commands:
kubens test-namespace # Install by krew plugin or binary release
OR
kubectl config set-context --current --namespace test-namespace # kubectl native subcommand
OR
kubie ns test-namespace # (recommended) Install by binary release or rust cargo
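Whichever tool you pick, you can verify which namespace the current context points at before running helm (the output is empty when no namespace is set, which means default):
kubectl config view --minify --output 'jsonpath={..namespace}'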
I want to install the helm chart stable/prometheus-operator on a GKE cluster. I'm aware that either firewall rules need to be adjusted or hooks need to be disabled by setting prometheusOperator.admissionWebhooks.enabled=false (for details see the README of the chart).
However, if I install the chart with
- wget -qq https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz && tar xf helm-v3.0.0-linux-amd64.tar.gz && mv linux-amd64/helm /usr/local/bin
- helm repo add stable https://kubernetes-charts.storage.googleapis.com/
- helm repo update
- kubectl create ns monitoring
- kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/alertmanager.crd.yaml
- kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheus.crd.yaml
- kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheusrule.crd.yaml
- kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/servicemonitor.crd.yaml
- kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/podmonitor.crd.yaml
- helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false
in GitLab CI, the prometheus-operator pod has two containers which remain in the "Pending" state for 5 minutes. I expect this rather simple setup to be available within one minute.
You can inspect the cluster setup at https://gitlab.com/krichter/prometheus-operator-503/-/jobs/358887366.
The approach shown in Installing Prometheus on GKE + istio doesn't apply because I didn't install istio.
This is caused by a known issue in the helm chart. According to https://github.com/helm/charts/issues/19147 the issue can be avoided by setting prometheusOperator.tlsProxy.enabled=false.
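Applied to the install command from the question, that just means appending one more --set flag:
helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tlsProxy.enabled=false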
I am trying to deploy my microservice on a Kubernetes cluster in two different environments, dev and test, and I am using a Helm chart to deploy my Kubernetes service. I deploy the chart from a Jenkinsfile, where I added the helm command within a stage like the following:
stage ('helmchartinstall')
{
    steps
    {
        sh 'helm upgrade --install kubekubedeploy --namespace test pipeline/spacestudychart'
    }
}
Here I am passing the --namespace test parameter, but when it deploys, the console output shows the default namespace. I have already created the test and prod namespaces.
When I checked the Helm version, I got a response like the following:
docker#mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.1",
GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0",
GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Have I made a mistake in how I define the namespace here?
The most likely issue here is that the chart already specifies default as metadata.namespace, which in Helm 2 is not overwritten by the --namespace parameter.
If this is the cause, the solution is to remove the namespace from metadata.namespace, or to make it a template parameter (i.e. a release value).
Also see https://stackoverflow.com/a/51137448/1977182.
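For illustration, here is a minimal sketch of the template fix (templates/deployment.yaml is a hypothetical file in your chart; the point is that a hard-coded namespace: default in metadata would win over --namespace in Helm 2):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-spacestudychart
  # either omit namespace entirely, or derive it from the release:
  namespace: {{ .Release.Namespace }}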
Approach 1:
export TILLER_NAMESPACE=your_namespace
helm upgrade -i release_name chart.tgz
Approach 2:
helm upgrade -i release_name chart.tgz --namespace your_namespace
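For comparison: in Helm 3 Tiller is gone, so there is no TILLER_NAMESPACE; the target namespace is just a flag on the command itself. A sketch with placeholder names (--create-namespace requires Helm 3.2+):
helm upgrade -i release_name chart.tgz --namespace your_namespace --create-namespace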
I installed istio using these commands:
VERSION = 1.0.5
GCP = gcloud
K8S = kubectl
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
#$(K8S) get pods -n istio-system
#$(K8S) label namespace default istio-injection=enabled
#$(K8S) get svc istio-ingressgateway -n istio-system
Now, how do I completely uninstall it, including all containers/ingress/egress etc. (everything installed by istio-demo-auth.yaml)?
Thanks.
If you used istioctl, it's pretty easy:
istioctl x uninstall --purge
Of course, it would be easier if that command were listed in istioctl --help...
Reference: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
Based on their documentation, you can generate all specs as a YAML file and then pipe it to a simple kubectl delete operation:
istioctl manifest generate <your original installation options> | kubectl delete -f -
Here's an example:
istioctl manifest generate --set profile=default | kubectl delete -f -
A drawback of this approach, though, is that you have to remember all the options you used when you installed Istio, which can be quite hard, especially if you enabled specific components.
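One way to sidestep that drawback (a workflow suggestion, not something the docs require) is to keep your installation options in an IstioOperator spec file from the start, so the very same file drives both install and delete; istio-operator.yaml here is a name you choose:
istioctl manifest generate -f istio-operator.yaml | kubectl apply -f -
istioctl manifest generate -f istio-operator.yaml | kubectl delete -f -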
If you installed Istio using its Helm charts, you can uninstall it easily.
First, list all installed releases:
helm list -n istio-system
NAME NAMESPACE REVISION UPDATED STATUS
istiod istio-system 1 2020-03-07 15:01:56.141094 -0500 EST deployed
and then delete/uninstall each release using the following syntax (the -n flag is Helm 3 syntax, where deletes purge by default, so no --purge flag is needed):
helm delete -n istio-system istiod
helm delete -n istio-system istio-init
...
Check their website for more information on how to do this.
If you installed Istio using istioctl or helm into its own separate namespace, you can simply delete that namespace, which will in turn delete all the resources created inside it.
kubectl delete namespace istio-system
Just run kubectl delete for the files you applied.
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
You can find this in the docs as well.
If you have installed it as described, then you will need to delete it in the same way.
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
Then manually delete the extracted folder, and the istioctl binary if you moved it anywhere.
IMPORTANT: Deleting a namespace is very convenient for cleaning up, but it doesn't fit every scenario. In this situation, if you delete only the namespace, you are leaving all the permissions and credentials intact. Now, say you want to update Istio, and the Istio team has made some security changes in their RBAC rules but has not changed the name of the object. You would deploy the new yaml file, and it would throw an error saying the object (for example a clusterrolebinding) already exists. If you don't pay attention to what that error was, you can end up with the worst kind of errors (when there is no error message, but something goes wrong).
Cleaning up Istio is a bit tricky, because of all the things it adds: CustomResourceDefinitions, ConfigMaps, MutatingWebhookConfigurations, etc. Just deleting the istio-system namespace is not sufficient. The safest bet is to use the uninstall instructions from istio.io for the method you used to install.
Kubectl: https://istio.io/docs/setup/kubernetes/install/kubernetes/#uninstall
Helm: https://istio.io/docs/setup/kubernetes/install/helm/#uninstall
When performing these steps, use the version of Istio you are attempting to remove. So if you are trying to remove Istio 1.0.2, grab that release from istio.io.
Don't forget to disable the injection:
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
kubectl label namespace your-namespace istio-injection=disabled --overwrite
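To double-check, you can list namespaces with the injection label shown as a column (-L):
kubectl get namespaces -L istio-injection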
Using the profile you chose at installation time, for example demo, run the following command:
istioctl manifest generate --set profile=demo | kubectl delete -f -
After a normal Istio uninstall (depending on whether Istio was installed via helm or istioctl), the following steps can be performed:
Check whether anything still exists in the istio-system namespace; if so, delete it manually, then remove the istio-system namespace itself.
Check whether a sidecar is still associated with any pod (sidecars sometimes don't get cleaned up after a failed uninstallation):
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.namespace}{"\t"}{..image}{"\n\n"}{end}' -A | grep 'istio/proxyv' | grep -v istio-system
Get the CRDs that are still in use, review their associated resources, and remove them:
kubectl get crds | grep 'istio.io' | cut -f1-1 -d "." | xargs -n1 -I{} bash -c " echo {} && kubectl get --all-namespaces {} -o wide && echo -e '---'"
Delete all the CRDs:
kubectl get crds | grep 'istio.io' | xargs -n1 -I{} sh -c "kubectl delete crd {}"
Set the injection label back (optional):
kubectl label namespace <namespace-name> istio-injection=disabled --overwrite
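As a final check, re-run the CRD query from above; if the cleanup worked, it should print nothing:
kubectl get crds | grep 'istio.io'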
Just delete the namespace:
kubectl delete ns istio-system
Deleting CRDs without needing to find the helm charts:
kubectl delete crd -l chart=istio
If you installed via helm template, you can use these commands:
For CRDs:
$ helm template ${ISTIO_BASE_DIR}/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl delete -f -
$ kubectl delete crd $(kubectl get crd | grep istio | awk '{print $1}')
For Deployments, namespaces, and other resources:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--values install/kubernetes/helm/istio/values-istio-demo.yaml \
--set global.controlPlaneSecurityEnabled=true \
--set global.mtls.enabled=true | kubectl delete -f -
I'm using helm 2.7.3 and am new to Helm and Kubernetes. I have two worker nodes, and I want to deploy to a specific node. I've assigned unique labels to each node and then added nodeSelector to deployment.yaml. When I run helm install, it appears to ignore the node selection and deploys randomly onto either of the two worker nodes.
I'd like to understand the best approach to node selection when deploying with Helm.
You can use something like this:
helm install --name elasticsearch elastic/elasticsearch --set \
nodeSelector."beta\\.kubernetes\\.io/os"=linux
Note the escaped "." characters! Hope this helps.
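An alternative that avoids the shell escaping entirely is a small values file (a sketch; values-override.yaml is a name chosen here, and it assumes the chart reads .Values.nodeSelector as a plain label map, which most charts, including this one, do):
# values-override.yaml
nodeSelector:
  beta.kubernetes.io/os: linux
Then pass it with -f:
helm install --name elasticsearch elastic/elasticsearch -f values-override.yaml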
See the example:
kubectl label nodes <your desired node> databases=mysql --overwrite
Check the label:
kubectl get nodes --show-labels
Run the following commands:
helm create test-chart && cd test-chart
helm install . --set nodeSelector.databases=mysql
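For that --set to have any effect, the chart's pod template must actually read .Values.nodeSelector. The scaffold generated by helm create includes a passthrough along these lines (a sketch of the relevant fragment; older Helm 2 scaffolds use toYaml ... | indent instead of with/nindent):
# templates/deployment.yaml (fragment)
    spec:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}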
In an Ansible task:
- name: install etcd middleware
  command:
    chdir: /var/lib/kube/controlpanel/component
    cmd: "{{tools.helm.path}} upgrade etcd ./etcd --install --namespace=middleware --set replicaCount=3 --set nodeSelector.\"xxx\\.yyy\\.local/node-role-middleware\"="
You can use this example (nodeSelector takes label key/value pairs, so to pin the release to a named node, use its kubernetes.io/hostname label):
helm upgrade --install airflow apache-airflow/airflow --namespace airflow --create-namespace --set nodeSelector."kubernetes\.io/hostname"=your-worker-node-name
--set allows you to set the parameters you need.
The following link provides the supported parameters of the mentioned example:
https://airflow.apache.org/docs/helm-chart/stable/parameters-ref.html#kubernetes
You can set many parameters by passing --set multiple times; see https://helm.sh/docs/helm/helm_install/. This is the example from the docs (when the same key is set twice, the last value wins):
helm install --set foo=bar --set foo=newbar myredis ./redis
Just check that the nodeSelector parameter is supported by your chart.