Uninstall Istio (all components) completely from a Kubernetes cluster

I installed istio using these commands:
VERSION = 1.0.5
GCP = gcloud
K8S = kubectl
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
#$(K8S) get pods -n istio-system
#$(K8S) label namespace default istio-injection=enabled
#$(K8S) get svc istio-ingressgateway -n istio-system
Now, how do I completely uninstall it, including all containers/ingress/egress etc. (everything installed by istio-demo-auth.yaml)?
Thanks.

If you used istioctl, it's pretty easy:
istioctl x uninstall --purge
Of course, it would be easier if that command were listed in istioctl --help...
Reference: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio

Based on their documentation here, you can generate all the specs as a YAML file and then pipe it to a plain kubectl delete operation:
istioctl manifest generate <your original installation options> | kubectl delete -f -
Here's an example:
istioctl manifest generate --set profile=default | kubectl delete -f -
A drawback of this approach is that you have to remember all the options you used when you installed Istio, which can be hard, especially if you enabled specific components.
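If you installed with a recent istioctl (an assumption; this does not apply to older Helm-based installs such as 1.0.x), the applied configuration is usually recorded in the cluster as an IstioOperator resource, which you can inspect instead of trying to remember the options:
kubectl -n istio-system get istiooperator installed-state -o yaml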
If you installed Istio using its Helm charts, you can uninstall it easily.
First, list all installed charts:
helm list -n istio-system
NAME NAMESPACE REVISION UPDATED STATUS
istiod istio-system 1 2020-03-07 15:01:56.141094 -0500 EST deployed
and then delete/uninstall each chart by its release name (with Helm 3, --purge is no longer needed; with Helm 2, use helm delete --purge <release> and drop the -n flag):
helm delete -n istio-system istiod
helm delete -n istio-system istio-init
...
Check their website for more information on how to do this.
If you installed Istio with istioctl or Helm into its own separate namespace, you can simply delete that namespace, which will in turn delete all resources created inside it.
kubectl delete namespace istio-system

Just run kubectl delete for the files you applied.
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
You can find this in docs as well.

If you have installed it as described, then you will need to delete it in the same way.
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
Then manually delete the downloaded istio folder, and the istioctl binary if you moved it anywhere.
IMPORTANT: Deleting a namespace is a convenient way to clean up, but it does not cover all scenarios. In this situation, if you delete only the namespace, you leave all the cluster-scoped permissions and credentials intact. Now say you want to update Istio, and the Istio team has made some security changes to their RBAC rules but has not changed the name of the object. You would deploy the new YAML file, and it would throw an error saying the object (for example, a ClusterRoleBinding) already exists. If you don't pay attention to what that error was, you can end up with the worst kind of error: no error message, but something silently going wrong.
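As a quick sanity check (a rough sketch that assumes the leftover objects have "istio" in their names), you can list cluster-scoped RBAC objects that survive a namespace deletion:
kubectl get clusterroles,clusterrolebindings -o name | grep istio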

Cleaning up Istio is a bit tricky, because of all the things it adds: CustomResourceDefinitions, ConfigMaps, MutatingWebhookConfigurations, etc. Just deleting the istio-system namespace is not sufficient. The safest bet is to use the uninstall instructions from istio.io for the method you used to install.
Kubectl: https://istio.io/docs/setup/kubernetes/install/kubernetes/#uninstall
Helm: https://istio.io/docs/setup/kubernetes/install/helm/#uninstall
When performing these steps, use the version of Istio you are attempting to remove. So if you are trying to remove Istio 1.0.2, grab that release from istio.io.
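To verify nothing was left behind after following those instructions (again a sketch assuming the objects have "istio" in their names), check the usual cluster-scoped leftovers:
kubectl get crds,mutatingwebhookconfigurations,validatingwebhookconfigurations -o name | grep istio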

Don't forget to disable the injection:
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
kubectl label namespace default istio-injection=disabled --overwrite
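Alternatively, you can remove the injection label entirely (the trailing dash removes a label):
kubectl label namespace default istio-injection-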

Using the profile you used for installation, demo for example, run the following command:
istioctl manifest generate --set profile=demo | kubectl delete -f -

After a normal Istio uninstall (whichever way Istio was installed, via Helm or istioctl), the following steps can be performed:
Check if anything still exists in the istio-system namespace; if so, delete it manually, and then remove the istio-system namespace itself.
Check if there is a sidecar still associated with any pod (sidecars sometimes do not get cleaned up after a failed uninstallation):
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.namespace}{"\t"}{..image}{"\n\n"}{end}' -A | grep 'istio/proxyv' | grep -v istio-system
List any CRDs that are still in use and remove the associated resources:
kubectl get crds | grep 'istio.io' | cut -f1-1 -d "." | xargs -n1 -I{} bash -c " echo {} && kubectl get --all-namespaces {} -o wide && echo -e '---'"
Delete all the CRDs:
kubectl get crds | grep 'istio.io' | xargs -n1 -I{} sh -c "kubectl delete crd {}"
Set the injection label back (optional):
kubectl label namespace <namespace name> istio-injection=disabled --overwrite
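To find which namespaces still carry the injection label (an extra check, not part of the original steps):
kubectl get namespaces -l istio-injection=enabled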

Just delete the namespace:
kubectl delete ns istio-system

Deleting CRDs without needing to find the helm charts:
kubectl delete crd -l chart=istio
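To preview what that selector matches before deleting:
kubectl get crd -l chart=istio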

If you installed via helm template, you can use these commands:
For CRD's:
$ helm template ${ISTIO_BASE_DIR}/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl delete -f -
$ kubectl delete crd $(kubectl get crd | grep istio | awk '{print $1}')
For Deployment/NS..etc other resources:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--values install/kubernetes/helm/istio/values-istio-demo.yaml \
--set global.controlPlaneSecurityEnabled=true \
--set global.mtls.enabled=true | kubectl delete -f -

Related

Rancher helm chart, cannot find secret bootstrap-secret

So I am trying to deploy rancher on my K3S cluster.
I installed it using the documentation and helm: Rancher documentation
While I can access it using my load balancer, I cannot find the secret to insert into the setup.
They describe the following command for getting the token:
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
When I run this I get the following error
Error from server (NotFound): secrets "bootstrap-secret" not found
And also I cannot find the bootstrap-secret inside the namespace cattle-system.
So can somebody help me out where I need to look?
I had the same problem and figured it out with the following commands:
I installed the Helm chart with "--set bootstrapPassword=Changeme123!", for example:
helm upgrade --install \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3 \
  --set bootstrapPassword=Changeme123! \
  rancher rancher-stable/rancher
I forced a hard reset, because even though I had set the bootstrap password in the Helm install command, I was not able to log in. So I used the following command to hard reset:
kubectl -n cattle-system exec $(kubectl -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
So, I hope that can help you.

Create namespace and secret, do patch only if not existing

In my CI I'm running a helm upgrade command to release an app.
But if the app does not exist yet, I have to create the namespace and a secret, and patch the serviceaccount. So I came up with this:
kubectl create namespace ${namespace} --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret docker-registry gitlab-registry --namespace ${namespace} --docker-server="\${CI_REGISTRY}" --docker-username="\${CI_DEPLOY_USER}" --docker-password="\${CI_DEPLOY_PASSWORD}" --docker-email="\${GITLAB_USER_EMAIL}" -o yaml --dry-run=client | kubectl apply -f -
kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"gitlab-registry"}]}' --namespace ${namespace}
This works, but I don't think it is the ideal way, as these three steps should only be done once: only if the app/namespace/secret does not exist yet.
Helm provides the --create-namespace switch that will create the namespace of the release if it does not already exist.
The secret can be added to your Helm chart, and you can pass the variables (CI_REGISTRY, CI_DEPLOY_USER, etc.) in as chart values, either via --set or via a values.yaml file with --values (see the sketch below).
The service account patching you can do as a post-install and/or a post-upgrade job (https://helm.sh/docs/topics/charts_hooks/)
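Putting the first two points together, a minimal sketch of what the single Helm command could look like (the release name, chart path, and the imageCredentials.* value names are hypothetical and depend on how your chart templates the registry secret):
helm upgrade --install my-app ./chart \
  --namespace ${namespace} \
  --create-namespace \
  --set imageCredentials.registry="${CI_REGISTRY}" \
  --set imageCredentials.username="${CI_DEPLOY_USER}" \
  --set imageCredentials.password="${CI_DEPLOY_PASSWORD}" \
  --set imageCredentials.email="${GITLAB_USER_EMAIL}"
The serviceaccount patch would then move into a post-install/post-upgrade hook inside the chart instead of a separate kubectl call.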

Clean up Traefik CRDs

I've run a helm delete for my Traefik install on Kubernetes; however, I'm still seeing CRDs in the cluster.
How do you get rid of these?
CRDs can be deleted just like any other object in Kubernetes: https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#delete-a-customresourcedefinition
kubectl get crd <crd-name> -o yaml > crd.yaml
kubectl delete -f crd.yaml
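If you want to remove all of Traefik's CRDs in one pass instead (a sketch assuming the CRD names contain "traefik"; review the list first, since this also deletes every custom resource of those types):
kubectl get crd -o name | grep traefik | xargs kubectl delete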

Error: template: inject:469: function "appendMultusNetwork" not defined

istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
While trying to inject the Istio sidecar container manually into a pod, I got this error:
Error: template: inject:469: function "appendMultusNetwork" not defined
https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/
As mentioned in the comments, I tried to reproduce your issue on GKE with Istio 1.7.4 installed.
I followed the documentation you mentioned and it worked without any issues.
1. Install istioctl and the Istio default profile:
curl -sL https://istio.io/downloadIstioctl | sh -
export PATH=$PATH:$HOME/.istioctl/bin
istioctl install
2. Create the samples/sleep directory and create sleep.yaml, for example with vi.
3. Create local copies of the configuration:
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.config}' > inject-config.yaml
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.values}' > inject-values.yaml
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
4. Apply it with istioctl kube-inject:
istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
5. Verify that the sidecar has been injected:
kubectl get pods
NAME READY STATUS RESTARTS AGE
sleep-5768c96874-m65bg 2/2 Running 0 105s
So there are a few things worth checking, as they might cause this issue:
Could you please check that you executed all of the commands correctly?
Maybe you are running an older version of Istio and should follow the older documentation (see the version check below)?
Maybe you changed something in the local copies of the configuration above and that caused the issue? If so, what exactly did you change?
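A common cause of template errors like this is a version mismatch between the istioctl binary and the control plane (and therefore the injection config pulled from it), so it is worth comparing the two:
istioctl version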

List all the kubernetes resources related to a helm deployment or chart

I deployed a Helm chart using helm install, and afterwards I want to see whether the pods/services/configmaps related to just this deployment have come up or failed. Is there a way to see this?
Using kubectl get pods and grepping for the name works, but it does not show the services and other resources that got deployed with this Helm chart.
helm get manifest RELEASE_NAME
helm get all RELEASE_NAME
https://helm.sh/docs/helm/helm_get_manifest/
If you are using Helm3:
To list all resources managed by Helm, use a label selector with the label app.kubernetes.io/managed-by=Helm:
$ kubectl get all --all-namespaces -l='app.kubernetes.io/managed-by=Helm'
To list all resources managed by Helm that are part of a specific release (edit release-name):
kubectl get all --all-namespaces -l='app.kubernetes.io/managed-by=Helm,app.kubernetes.io/instance=release-name'
Update:
Label keys may vary over time; follow the official documentation for the latest labels.
I couldn't find anywhere that gave me what I wanted, so I wrote this one-liner using yq. It prints out all objects in Kind/name format. You might get some blank space if any manifests are nothing but comments.
helm get manifest $RELEASE_NAME | yq -N eval '[.kind, .metadata.name] | join("/")' - | sort
Published here: https://gist.github.com/bioshazard/e478d118fba9e26314bffebb88df1e33
By issuing:
kubectl get all -n <namespace> | grep ...
You will only query for the following resources:
pod
service
daemonset
deployment
replicaset
statefulset
job
cronjobs
I encourage you to follow this article for more explanation:
Studytonight.com: How to list all resources in a Kubernetes namespace
Using the example from the above link you can query the API for all resources by issuing:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind -l LABEL=VALUE --ignore-not-found -o name
This command will query the API for all the resource types in the cluster and then query for each of the resources separately by label.
You can create resources in a Helm chart with labels and then query the API by specifying: -l LABEL=VALUE.
EXAMPLE
Assuming that you provisioned the following Helm chart:
$ helm install awesome-nginx stable/nginx-ingress
This chart is deprecated, but it is used here only for example purposes.
You can query the API for all resources with:
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get --show-kind -l release=awesome-nginx --ignore-not-found -o name
where:
LABEL <- release
VALUE <- awesome-nginx (release name)
After that you should be able to see:
endpoints/awesome-nginx-nginx-ingress-controller
endpoints/awesome-nginx-nginx-ingress-default-backend
pod/awesome-nginx-nginx-ingress-controller-86b9c7d9c7-wwr8f
pod/awesome-nginx-nginx-ingress-default-backend-6979c95c78-xn9h2
serviceaccount/awesome-nginx-nginx-ingress
serviceaccount/awesome-nginx-nginx-ingress-backend
service/awesome-nginx-nginx-ingress-controller
service/awesome-nginx-nginx-ingress-default-backend
deployment.apps/awesome-nginx-nginx-ingress-controller
deployment.apps/awesome-nginx-nginx-ingress-default-backend
replicaset.apps/awesome-nginx-nginx-ingress-controller-86b9c7d9c7
replicaset.apps/awesome-nginx-nginx-ingress-default-backend-6979c95c78
podmetrics.metrics.k8s.io/awesome-nginx-nginx-ingress-controller-86b9c7d9c7-wwr8f
podmetrics.metrics.k8s.io/awesome-nginx-nginx-ingress-default-backend-6979c95c78-xn9h2
rolebinding.rbac.authorization.k8s.io/awesome-nginx-nginx-ingress
role.rbac.authorization.k8s.io/awesome-nginx-nginx-ingress
You can modify the output by changing the -o parameter.
Additional resources:
Github.com: Kubectl get all does not list all resources in a namespace #151
Stackoverflow.com: Questions: Listing all resources in a namespace
$ helm get manifest RELEASE-NAME
helm status RELEASE_NAME
This command shows the status of a named release. The status consists of:
last deployment time
k8s namespace in which the release lives
state of the release (can be: unknown, deployed, uninstalled, superseded, failed, uninstalling, pending-install, pending-upgrade or pending-rollback)
list of resources that this release consists of, sorted by kind
details on last test suite run, if applicable
additional notes provided by the chart
Usage: helm status RELEASE_NAME [flags]
Official docs
Also note that Helm places some known labels/annotations on the resources it manages (see here). You can use them with kubectl get ... -l ...
kubectl get all -n <namespace> | grep <helm chart keyword, ex: kibana, elasticsearch>
This should list all resources created by the Helm chart in a particular namespace.