I have upgraded Istio from 1.6.0 to 1.6.8, and everything went without problems.
Verification with kubectl get pods -n default -l istio.io/rev={revision} shows that all pods are running 1.6.8, but istioctl version returns
client version: 1.6.8
istiod version: 1.6.8
pilot version: 1.6.0
data plane version: 1.6.8 (12 proxies)
This indicates that pilot is still running the old version. I'm not able to find any information on how to upgrade it without reinstalling the whole Istio installation.
This is a clean 1.6.0 install:
istioctl version
client version: 1.6.8
control plane version: 1.6.0
data plane version: 1.6.0 (3 proxies)
This is 1.6.8 after a canary install:
istioctl version
client version: 1.6.8
pilot version: 1.6.0
istiod version: 1.6.8
data plane version: 1.6.8 (2 proxies), 1.6.0 (7 proxies)
Why does this happen? Because there are two control planes running: the original and the canary.
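For reference, a canary control plane like this is typically installed and adopted along these lines; a minimal sketch with the 1.6.8 istioctl, where the revision name canary and the default namespace are assumptions:
# Install the 1.6.8 istiod alongside the 1.6.0 one under its own revision:
istioctl install --set revision=canary
# Point a namespace at the new revision and restart its pods so the
# sidecars reconnect to the 1.6.8 control plane:
kubectl label namespace default istio-injection- istio.io/rev=canary
kubectl rollout restart deployment -n default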
As mentioned here, the control plane version output was split into two parts: pilot version and istiod version.
By the way, it is istio-ingressgateway that still reports 1.6.0 in the data plane version; it seems the ingress gateway is intended to be updated as well.
I'm not able to find any information on how to upgrade it without reinstalling the whole Istio installation.
If I understand correctly, you're not able to upgrade it with this version. The main issue here is that there is no option to delete the old control plane; that is already addressed in version 1.7.
The same thing happened in this tutorial
A workaround for this would be to install a version of 1.7 or higher; if you check the documentation for those versions, there are steps to Uninstall old control plane and Uninstall canary control plane.
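For example, with a 1.7+ istioctl the cleanup looks roughly like this; a sketch, where the revision name canary is an assumption:
# Remove the canary control plane by its revision:
istioctl x uninstall --revision=canary
# Or remove all Istio control planes and cluster-wide resources at once:
istioctl x uninstall --purge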
Additional resources:
https://github.com/istio/istio/issues/18900
https://github.com/istio/istio/issues/23889
https://github.com/istio/istio/issues/23923
Related
The question is: we have some version of kube-prometheus-stack (https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack), for example version 20.0.0, and I want to install version 40.0.0. Based on the docs, I should install the CRDs using kubectl apply -f somecrd. Can I just install version 40.0.0 directly, since it just works and already has all of those CRDs installed?
Thanks
Updating from 20.0.0 to 24.0.0 always causes problems with the CRDs.
I just want the latest version of kube-prometheus-stack
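For reference, the chart's upgrade notes describe applying the new CRDs with kubectl before running helm upgrade; a rough sketch of that step, where the operator tag v0.60.1 and the release name my-release are assumptions to be matched against the notes for your target version:
# Apply the prometheus-operator CRDs for the target chart version first:
kubectl apply --server-side -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.60.1/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
# ...repeat for the other monitoring.coreos.com CRD files listed in the notes...
# Then upgrade the release itself:
helm upgrade my-release prometheus-community/kube-prometheus-stack --version 40.0.0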
I am using MicroK8s to run Kubernetes on my Ubuntu server, and Helm v3 as my helm command.
This is the result of the helm version command:
version.BuildInfo{Version:"v3.9.2", GitCommit:"1addefbfe665c350f4daf868a9adc5600cc064fd", GitTreeState:"clean", GoVersion:"go1.17.12"}
I am trying to run this helm chart on this K8s instance:
apiVersion: v2
name: myTest
description: The test daemon (test) helm chart
type: application
version: 1.4.0
appVersion: v1.18.0
kubeVersion: ">= 1.19.0"
...
But I am getting this error:
INSTALLATION FAILED: chart requires kubeVersion: >= 1.19.0 which is incompatible with Kubernetes v1.19.15-34+c064bb32deff78
I tried different versions of MicroK8s, such as 1.21, 1.24, and 1.19, but the result is the same.
I installed this chart on minikube without any problem :(
According to the Semantic Versioning specification you have a pre-release version of Kubernetes. (This is possibly an issue in microk8s's release process.) The Helm documentation for the kubeVersion: field states that it depends on the Go github.com/Masterminds/semver package. Its documentation notes:
SemVer comparisons using constraints without a prerelease comparator will skip prerelease versions. For example, >=1.2.3 will skip prereleases when looking at a list of releases while >=1.2.3-0 will evaluate and find prereleases.
So setting in your Chart.yaml that you're willing to tolerate pre-release versions should address this:
kubeVersion: ">= 1.19.0-0" # adding a -0 on the end
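As a quick check, relaxing the constraint and retrying should get past the error; a sketch, assuming the chart source lives in ./myTest:
# Tolerate pre-release patch versions, then retry the install:
sed -i 's/">= 1.19.0"/">= 1.19.0-0"/' myTest/Chart.yaml
helm install mytest ./myTest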
Is Helm 3 backward compatible? The official documentation says that it is compatible with the n-3 Kubernetes versions relative to the version it was compiled against, but I am not clear on this.
Can Helm 3.9 work on Kubernetes 1.21.x, for example, if it was compiled against 1.21?
First I installed Lens on my Mac. When I tried to open a shell into one of the pods, a message said that I don't have kubectl installed, so I installed kubectl and it worked properly.
Now I'm trying to change a ConfigMap, but I get an error:
kubectl/1.18.20/kubectl not found
When I check the kubectl folder, there are two kubectl versions: 1.18.20 and 1.21.
1.21 is the one that I installed earlier.
How can I change the kubectl version that Lens has defined (1.18.20) to 1.21?
Note:
Lens: 5.2.0-latest.20210908.1
Electron: 12.0.17
Chrome: 89.0.4389.128
Node: 14.16.0
© 2021 Mirantis, Inc.
Thanks in advance, and sorry for my bad English.
You can set the kubectl path at File -> Preferences -> Kubernetes -> PATH TO KUBECTL BINARY. Alternatively, you can check "Download kubectl binaries matching the Kubernetes cluster version"; this way Lens will use the same version as your target cluster.
By the way, you should use the latest version, v5.2.5.
I'm new to Kubernetes and I'm setting up my first test cluster. However, I get this error when I set up the master node, and I'm not sure how to fix it.
[ERROR KubeletVersion]: the kubelet version is higher than the control plane version.
This is not a supported version skew and may lead to a malfunctional cluster.
Kubelet version: "1.12.0-rc.1" Control plane version: "1.11.3"
The host is fully patched to the latest levels
CentOS Linux release 7.5.1804 (Core)
Many Thanks
S
I hit the same problem and used the kubeadm option: --kubernetes-version=v1.12.0-rc.1
sudo kubeadm init --pod-network-cidr=172.16.0.0/12 --kubernetes-version=v1.12.0-rc.1
I'm using a VM image that was prepared a few weeks ago and have just updated the packages. kubeadm, kubectl, and kubelet all now return version v1.12.0-rc.1 when asked, but when kubeadm init is called it kicks off with the previous version.
[init] using Kubernetes version: v1.11.3
Specifying the (control plane) version did the trick.
Install the same version of kubelet and kubeadm:
yum -y remove kubelet
yum -y install kubelet-1.11.3-0 kubeadm-1.11.3-0
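Then re-running the init with the matching version flag should pass the preflight check; a sketch that reuses the pod CIDR from the earlier answer:
sudo kubeadm init --pod-network-cidr=172.16.0.0/12 --kubernetes-version=v1.11.3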
I'm getting the same error on a clean CentOS 7 install after fully updating with yum update and then applying the instructions from https://kubernetes.io/docs/setup/independent/install-kubeadm/ for setup.
Adding the --ignore-preflight-errors=KubeletVersion option allows the installer to continue, but the resulting installation does not work afterwards.
I was able to remove everything and reinstall matching versions with the following:
yum -y remove kubelet kubeadm kubectl
yum install -y --disableexcludes=kubernetes kubeadm-1.11.3-0.x86_64 kubectl-1.11.3-0.x86_64 kubelet-1.11.3-0.x86_64
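As a follow-up, the --disableexcludes=kubernetes flag implies the repo file keeps these packages out of routine updates; leaving that exclude in place stops a later yum update from reintroducing the skew. A quick check, assuming the repo file path from the linked install guide:
grep exclude /etc/yum.repos.d/kubernetes.repo
# expected output, per the install guide's repo definition: exclude=kube*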