Istio upgrade from 1.4 to 1.5 - kubernetes

We have installed istio-1.4.0 from the istio-demo.yml file by running the following command on a k8s 1.15.1 cluster:
kubectl apply -f istio-demo.yml
Now we need to upgrade our Istio from 1.4.0 to 1.5.0, and as per my understanding it's not straightforward, due to changes in Istio components (the introduction of istiod and the removal of Citadel, Galley, Policy & Telemetry).
How can I move from kubectl to istioctl so that my future Istio upgrades are in line with it?

As I mentioned in the comments, I have followed a topic on Istio Discuss about upgrading, created by @laurentiuspurba.
I have changed it a little for your use case, so it is an upgrade from 1.4 to 1.5.
Take a look at the steps below.
1.Follow the Istio documentation and install istioctl 1.4 and 1.5 with:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.0 sh -
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh -
2.Add istioctl 1.4 to your path
cd istio-1.4.0
export PATH=$PWD/bin:$PATH
3.Install istio 1.4
istioctl manifest generate > $HOME/generated-manifest.yaml
kubectl create namespace istio-system
kubectl apply -f generated-manifest.yaml
4.Check if everything works correctly.
kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
5.Add istioctl 1.5 to your path
cd istio-1.5.0
export PATH=$PWD/bin:$PATH
6.Install the Istio operator for future upgrades.
istioctl operator init
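(Optionally, before moving on, you can check that the operator controller is running; the namespace and deployment names below assume the defaults created by istioctl operator init.)
kubectl get pods -n istio-operator
kubectl logs -n istio-operator deployment/istio-operator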
7.Prepare IstioOperator.yaml
nano IstioOperator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  tag: 1.5.0
8.Before the upgrade, run the commands below
kubectl -n istio-system delete service/istio-galley deployment.apps/istio-galley
kubectl delete validatingwebhookconfiguration.admissionregistration.k8s.io/istio-galley
9.Upgrade from 1.4 to 1.5 with istioctl upgrade and the prepared IstioOperator.yaml
istioctl upgrade -f IstioOperator.yaml
10.After the upgrade, run the commands below
kubectl -n istio-system delete deployment istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete service istio-citadel istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete horizontalpodautoscaler.autoscaling/istio-pilot horizontalpodautoscaler.autoscaling/istio-telemetry
kubectl -n istio-system delete pdb istio-citadel istio-galley istio-pilot istio-policy istio-sidecar-injector istio-telemetry
kubectl -n istio-system delete deployment istiocoredns
kubectl -n istio-system delete service istiocoredns
11.Check if everything works correctly.
kubectl get pod -n istio-system
kubectl get svc -n istio-system
istioctl version
12.I have deployed the bookinfo app to check if everything works correctly.
kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
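(The xx.xx.xxx.xxx in the curl below is the external IP of the ingress gateway; assuming the default istio-ingressgateway LoadBalancer service, you can look it up with:)
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'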
13.Results
curl -v xx.xx.xxx.xxx/productpage | grep HTTP
HTTP/1.1 200 OK
istioctl version
client version: 1.5.0
control plane version: 1.5.0
data plane version: 1.5.0 (8 proxies)
Hope you find this useful. If you have any questions let me know.

Related

update manifest of a kubernetes object

I have a k8s cluster and I have to update metrics-server (in the kube-system namespace). I've tried:
kubectl apply -n kube-system -f my-updated-metrics-server.yaml
and
kubectl replace -n kube-system -f my-updated-metrics-server.yaml
without success. What happens is that the deployment gets updated, but then after a while (10-15 min) it reverts to its previous state (before the apply/replace commands).
any thoughts?
UPDATED (as requested)
$ kubectl get ns |grep -iE 'argo|flux'
$
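(One more thing worth checking, purely as an assumption since the question does not confirm the cluster type: on managed clusters an addon manager will reconcile any resource labelled addonmanager.kubernetes.io/mode=Reconcile back to its bundled manifest, which would explain a revert after 10-15 minutes. You can inspect the labels with:)
kubectl -n kube-system get deployment metrics-server --show-labels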

Error: template: inject:469: function "appendMultusNetwork" not defined

istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
While trying to inject the Istio sidecar container manually into a pod, I got this error:
Error: template: inject:469: function "appendMultusNetwork" not defined
https://istio.io/latest/docs/setup/additional-setup/sidecar-injection/
As mentioned in the comments, I have tried to reproduce your issue on GKE with Istio 1.7.4 installed.
I've followed the documentation you mentioned and it worked without any issues.
1.Install istioctl and istio default profile
curl -sL https://istio.io/downloadIstioctl | sh -
export PATH=$PATH:$HOME/.istioctl/bin
istioctl install
2.Create samples/sleep directory and create sleep.yaml, for example with vi.
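(If you prefer not to type the manifest by hand, you can pull the sample from the Istio repository instead; the branch in this URL is an assumption and should match your installed Istio version.)
mkdir -p samples/sleep
curl -sL https://raw.githubusercontent.com/istio/istio/release-1.7/samples/sleep/sleep.yaml -o samples/sleep/sleep.yaml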
3.Create local copies of the configuration.
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.config}' > inject-config.yaml
kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.values}' > inject-values.yaml
kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
4.Apply it with istioctl kube-inject
istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--meshConfigFile mesh-config.yaml \
--valuesFile inject-values.yaml \
--filename samples/sleep/sleep.yaml \
| kubectl apply -f -
5.Verify that the sidecar has been injected
kubectl get pods
NAME READY STATUS RESTARTS AGE
sleep-5768c96874-m65bg 2/2 Running 0 105s
So there are a few things worth checking, as they might be causing this issue:
Could you please check if you executed all of your commands correctly?
Maybe you are running an older version of Istio and should follow the older documentation?
Maybe you changed something in the above local copies of the configuration and that caused the issue? If you did, what exactly did you change?

Uninstall istio (all components) completely from kubernetes cluster

I installed istio using these commands:
VERSION = 1.0.5
GCP = gcloud
K8S = kubectl
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
#$(K8S) apply -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
#$(K8S) get pods -n istio-system
#$(K8S) label namespace default istio-injection=enabled
#$(K8S) get svc istio-ingressgateway -n istio-system
Now, how do I completely uninstall it, including all containers/ingress/egress etc. (everything installed by istio-demo-auth.yaml)?
Thanks.
If you used istioctl, it's pretty easy:
istioctl x uninstall --purge
Of course, it would be easier if that command were listed in istioctl --help...
Reference: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
Based on their documentation here, you can generate all specs as a YAML file and then pipe it to a simple kubectl delete operation.
istioctl manifest generate <your original installation options> | kubectl delete -f -
here's an example:
istioctl manifest generate --set profile=default | kubectl delete -f -
A drawback of this approach, though, is that you need to remember all the options you used when you installed Istio, which might be quite hard, especially if you enabled specific components.
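(As a side note, and only an option on newer versions: istioctl 1.6 and later store the applied configuration as an IstioOperator resource called installed-state in istio-system, so on those clusters you can recover the original installation options instead of remembering them. This does not apply to the old 1.0.x manifests from the question.)
kubectl -n istio-system get istiooperator installed-state -o yaml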
If you have installed Istio using its Helm charts, you can uninstall it easily.
First, list all installed charts:
helm list -n istio-system
NAME NAMESPACE REVISION UPDATED STATUS
istiod istio-system 1 2020-03-07 15:01:56.141094 -0500 EST deployed
and then delete/uninstall each chart by release name (with Helm 3 a purge is the default, so no extra flag is needed):
helm delete -n istio-system istiod
...
Check their website for more information on how to do this.
If you have installed Istio using istioctl or Helm in its own separate namespace, you can simply delete that namespace completely, which will in turn delete all the resources created inside it.
kubectl delete namespace istio-system
Just run kubectl delete for the files you applied.
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
You can find this in docs as well.
If you have installed it as described, then you will need to delete it in the same way.
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
Then manually delete the folder, and istioctl, if you moved it anywhere.
IMPORTANT: Deleting a namespace is a super convenient way to clean up, but you can't do it in all scenarios. In this situation, if you delete the namespace only, you are leaving all the permissions and credentials intact. Now, say you want to update Istio, and the Istio team has made some security changes in their RBAC rules but has not changed the name of the object. You would deploy the new yaml file, and it will throw an error saying the object (for example a clusterrolebinding) already exists. If you don't pay attention to what that error was, you can end up with the worst type of errors (when there is no error message, but something goes wrong).
Cleaning up Istio is a bit tricky, because of all the things it adds: CustomResourceDefinitions, ConfigMaps, MutatingWebhookConfigurations, etc. Just deleting the istio-system namespace is not sufficient. The safest bet is to use the uninstall instructions from istio.io for the method you used to install.
Kubectl: https://istio.io/docs/setup/kubernetes/install/kubernetes/#uninstall
Helm: https://istio.io/docs/setup/kubernetes/install/helm/#uninstall
When performing these steps, use the version of Istio you are attempting to remove. So if you are trying to remove Istio 1.0.2, grab that release from istio.io.
Don't forget to disable the injection:
kubectl delete -f istio-$(VERSION)/install/kubernetes/helm/istio/templates/crds.yaml
kubectl delete -f istio-$(VERSION)/install/kubernetes/istio-demo-auth.yaml
kubectl label namespace your-namespace istio-injection=disabled
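(To drop the injection label entirely rather than setting it to disabled, kubectl's label-removal syntax with a trailing dash also works:)
kubectl label namespace your-namespace istio-injection-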
Using the profile you used in installation, demo for example, run the following command
istioctl manifest generate --set profile=demo | kubectl delete -f -
After a normal Istio uninstall (depending on whether Istio was installed with helm or istioctl), the following steps can be performed:
Check if anything still exists in the istio-system namespace; if so, delete it manually, then remove the istio-system namespace itself.
Check if there is a sidecar associated with any pod (sometimes sidecars do not get cleaned up after a failed uninstallation):
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.namespace}{"\t"}{..image}{"\n\n"}{end}' -A | grep 'istio/proxyv' | grep -v istio-system
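(If that query turns up leftover sidecars, a rolling restart of the owning workloads recreates the pods without the proxy once injection is disabled; the deployment name below is a placeholder.)
kubectl rollout restart deployment <deployment-name> -n <namespace>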
Get the CRDs that are still in use and remove the associated resources:
kubectl get crds | grep 'istio.io' | cut -f1-1 -d "." | xargs -n1 -I{} bash -c " echo {} && kubectl get --all-namespaces {} -o wide && echo -e '---'"
Delete all the CRDs:
kubectl get crds | grep 'istio.io' | xargs -n1 -I{} sh -c "kubectl delete crd {}"
Set the labels back (optional):
kubectl label namespace <namespace name> istio-injection=disabled
Just delete the ns
k delete ns istio-system
Deleting CRDs without needing to find the helm charts:
kubectl delete crd -l chart=istio
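(You can preview exactly which CRDs that label selector matches before deleting them:)
kubectl get crd -l chart=istio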
If you installed via helm template, you can use these commands:
For CRD's:
$ helm template ${ISTIO_BASE_DIR}/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl delete -f -
$ kubectl delete crd $(kubectl get crd | grep istio | awk '{print $1}')
For Deployment/NS..etc other resources:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--values install/kubernetes/helm/istio/values-istio-demo.yaml \
--set global.controlPlaneSecurityEnabled=true \
--set global.mtls.enabled=true | kubectl delete -f -

Why helm upgrade --install failed when previous install is failure?

This is the helm and tiller version:
> helm version --tiller-namespace data-devops
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
The previous helm installation failed:
helm ls --tiller-namespace data-devops
NAME REVISION UPDATED STATUS CHART NAMESPACE
java-maven-app 1 Thu Aug 9 13:51:44 2018 FAILED java-maven-app-1.0.0 data-devops
When I tried to install it again using this command, it failed:
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install \
--namespace data-devops \
--values helm-chart/values/stg-stable.yaml
Error: UPGRADE FAILED: "java-maven-app" has no deployed releases
Is the helm upgrade --install command going to fail if the previous installation failed? I was expecting it to force the install. Any idea?
This is, or has been, a Helm issue for a while. It only affects the situation where the first install of a chart fails, and up to Helm 2.7 it required a manual delete of the failed release before correcting the issue and installing again. However, there is now a --force flag available to address this case - https://github.com/helm/helm/issues/4004
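A minimal sketch of what that looks like with the command from the question (assuming your Helm version already includes the --force behaviour discussed in that issue):
helm --tiller-namespace data-devops upgrade java-maven-app helm-chart --install --force \
  --namespace data-devops \
  --values helm-chart/values/stg-stable.yaml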
It happens when a deployment fails unexpectedly.
First, check the status of the helm release deployment;
❯ helm ls -n $namespace
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
Most probably you will see nothing about the problematic helm deployment. So, check the status of the deployment with the -a option;
❯ helm list -n $namespace -a
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$release_name $namespace 7 $update_date pending-upgrade $chart_name $app_version
As you can see the deployment stuck with the pending-upgrade status.
Check the helm deployment secrets;
❯ kubectl get secret -n $namespace
NAME TYPE DATA AGE
sh.helm.release.v1.$namespace.v1 helm.sh/release.v1 1 2d21h
sh.helm.release.v1.$namespace.v2 helm.sh/release.v1 1 21h
sh.helm.release.v1.$namespace.v3 helm.sh/release.v1 1 20h
sh.helm.release.v1.$namespace.v4 helm.sh/release.v1 1 19h
sh.helm.release.v1.$namespace.v5 helm.sh/release.v1 1 18h
sh.helm.release.v1.$namespace.v6 helm.sh/release.v1 1 17h
sh.helm.release.v1.$namespace.v7 helm.sh/release.v1 1 16h
and describe the last one;
❯ kubectl describe secret sh.helm.release.v1.$namespace.v7
Name: sh.helm.release.v1.$namespace.v7
Namespace: $namespace
Labels: modifiedAt=1611503377
name=$namespace
owner=helm
status=pending-upgrade
version=7
Annotations: <none>
Type: helm.sh/release.v1
Data
====
release: 792744 bytes
You will see that the secret has the same status as the failed deployment. So delete the secret;
❯ kubectl delete secret sh.helm.release.v1.$namespace.v7
Now, you should be able to upgrade the helm release. You can check the status of the helm release after the upgrade;
❯ helm list -n $namespace -a
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
$release_name $namespace 7 $update_date deployed $chart_name $app_version
Try:
helm delete --purge <deployment>
This will do the trick
and from Helm 3 onwards you need to uninstall instead, e.g.
helm uninstall <deployment> -n <namespace>
Just to add...
I have often seen the Error: UPGRADE FAILED: "my-app" has no deployed releases error in Helm 3.
Almost every time, the error was in kubectl, aws-cli or aws-iam-authenticator rather than Helm. A lot of problems seem to bubble up to this error, which is not ideal.
To diagnose the true issue, you can run simple commands in whichever of these tools you are using, and you should be able to pinpoint your problem quickly.
For example:
aws-cli - aws --version to ensure you have the cli installed.
aws-iam-authenticator - aws-iam-authenticator version to check that this is correctly installed.
kubectl - kubectl version will show if the tool is installed.
kubectl - kubectl config current-context will show if you have provided a valid config that can connect to Kubernetes.

how to delete tiller from kubernetes cluster

Tiller is not working properly in my kubernetes cluster. I want to delete everything related to Tiller. Tiller (2.5.1) has 1 Deployment, 1 ReplicaSet and 1 Pod.
I tried: kubectl delete deployment tiller-deploy -n kube-system
results in "deployment "tiller-deploy" deleted"
however, tiller-deploy is immediately recreated
kubectl get deployments -n kube-system shows tiller-deploy running again
I also tried: kubectl delete rs tiller-deploy-393110584 -n kube-system
results in "replicaset "tiller-deploy-2745651589" deleted"
however, tiller-deploy-2745651589 is immediately recreated
kubectl get rs -n kube-system shows tiller-deploy-2745651589 running again
What is the correct way to permanently delete Tiller?
To uninstall tiller from a kubernetes cluster:
helm reset
To delete failed tiller from a kubernetes cluster:
helm reset --force
If you want to remove Tiller from your cluster, the cleanest way is to remove all the components deployed during the installation.
If you already know the namespace where Tiller is deployed:
$ kubectl delete all -l app=helm -n kube-system
pod "tiller-deploy-8557598fbc-5b2g7" deleted
service "tiller-deploy" deleted
deployment.apps "tiller-deploy" deleted
replicaset.apps "tiller-deploy-75f6c87b87" deleted
replicaset.apps "tiller-deploy-8557598fbc" deleted
Be careful with this command: it will delete everything in the indicated namespace with the corresponding label, where app is the assigned label that identifies all the components (replica sets, deployments, services, etc.).
You can describe the pod to verify the labels:
$ kubectl describe pod tiller-deploy-8557598fbc-5b2g7 -n kube-system
Name: tiller-deploy-8557598fbc-5b2g7
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: srvlpi03/192.168.1.133
Start Time: Tue, 20 Aug 2019 15:51:03 -0400
Labels: app=helm
name=tiller
pod-template-hash=8557598fbc
You have to uninstall 3 things to completely get rid of tiller:
Deployment
Service
Secret
kubectl delete deployment -n some-namespace tiller-deploy
kubectl delete svc -n some-namespace tiller-deploy
kubectl delete secret -n some-namespace tiller-secret
Be sure to back up the secret first, as it stores all the certificates if TLS is enabled.
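(A quick way to take that backup, using the secret name from the commands above:)
kubectl get secret tiller-secret -n some-namespace -o yaml > tiller-secret-backup.yaml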
You can also try below command
kubectl delete deployment tiller-deploy --namespace kube-system
Turns out that it was running as a ReplicaSet:
kubectl delete replicasets -n kube-system tiller-deploy-6fdb84698b
worked for me
helm reset --force didn't remove Tiller.
kubectl get hpa --all-namespaces (or -n kube-system)
A normal Tiller deployment uses a ReplicaSet. In your setup there might be a HorizontalPodAutoscaler object targeting the ReplicaSet for Tiller.
You can delete the HPA first and then delete the associated replicasets, pods and configmaps, OR you can reset helm using the "helm reset" command.
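A sketch of that first path; the HPA name here is a guess based on the usual tiller-deploy naming, and the label selector assumes the app=helm,name=tiller labels shown in the describe output above:
kubectl -n kube-system delete hpa tiller-deploy
kubectl -n kube-system delete replicaset -l app=helm,name=tiller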
don't forget
kubectl -n kube-system delete service tiller-deploy