I want to roll back to a previous version of Kubernetes

I want to roll back to a specific version of Kubernetes. My current version is 1.21.
Are there any system specifications for Kubernetes?

If you are using a managed service, you probably won't be able to roll back, and I would strongly recommend AGAINST rolling back even if you can.
Managed services like GKE, AKS and EKS only let you pick from the latest few versions (normally 3-4 minor versions), and will not let you downgrade a minor version (e.g. you can't go from 1.21 back to 1.20; see the GKE documentation for an example).
Rolling back a version will re-introduce any bugs and security issues that were fixed by the upgrade. So essentially, you are making your cluster less secure by downgrading.
Clients such as kubectl will also flag version-skew warnings (as in this related question), and the rolled-back cluster will start rejecting deployments if you've already updated your manifests for new apiVersions.
For example, if the version you migrated from served an API as something/v1beta and the new version required you to use something/v1, then a deployment manifest updated to something/v1 (to satisfy the new cluster version) would be rejected by the rolled-back cluster.
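As a concrete illustration of that pattern (the resource kind and names here are just an example; Deployments moved from extensions/v1beta1 to apps/v1 around Kubernetes 1.16), a manifest written for the newer API group is rejected by an older control plane that only serves the beta group:

```yaml
# Illustrative only: a manifest updated for a newer cluster. An older
# control plane that only serves the beta API group would reject it with
# an error like: no matches for kind "Deployment" in version "apps/v1".
apiVersion: apps/v1        # older servers only accepted extensions/v1beta1
kind: Deployment
metadata:
  name: example-app        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: nginx:1.21  # placeholder image
```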

Related

GKE automatically restarted cluster

Auto-scaling is OFF and cluster upgrade is OFF, but I still don't know why the cluster got restarted this morning. All nodes were replaced and all pods were restarted.
The reason is that the nodes were upgraded from version 1.22 to 1.23.
My GKE cluster is on the Regular release channel.
You should double-check with Google support (if you have a support plan), but I know from painful experience that if you're running a version of GKE that falls out of support, they may force-upgrade you to keep you within support, even if you have cluster upgrade turned off -- unless you use a maintenance exclusion.
The REGULAR channel release notes are here: https://cloud.google.com/kubernetes-engine/docs/release-notes-regular
The December 5th one is probably the one that affected you.
If you disable node auto-upgrade, you are responsible for ensuring that the cluster's nodes run a version compatible with the cluster's version, and that the version adheres to the Kubernetes version and version skew support policy.
Nodes running versions that have reached their end of life date are auto-upgraded even when auto-upgrade is disabled, to ensure supportability and version skew compatibility.
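To make the skew rule above concrete, here is a minimal sketch (not an official tool; the two-minor-version limit applies to these releases, and was widened to three from v1.25) that checks whether a node's minor version is within the supported skew of the control plane:

```shell
# A minimal sketch (not an official tool): check whether a node's minor
# version is within the supported skew. For these releases the kubelet
# may be up to two minor versions older than the control plane.
skew_ok() {
  cp_minor=$(echo "$1" | cut -d. -f2)
  node_minor=$(echo "$2" | cut -d. -f2)
  diff=$((cp_minor - node_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 2 ]
}

skew_ok 1.23 1.22 && echo "1.22 node OK with 1.23 control plane"
skew_ok 1.23 1.20 || echo "1.20 node out of skew with 1.23 control plane"
```

In practice you would feed this the versions reported by `kubectl version` and `kubectl get nodes`.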
For more information you can read this link.
You can also check this link for temporary mitigation.

How to approach updating Kubernetes API versions when upgrading a kOps-based cluster?

Currently, we run a kOps-based cluster at version 1.15. We are planning to upgrade it to version 1.16 first and then further. However, the API versions for various Kubernetes resources in our YAML manifests will also need to change. How would you address this before the cluster upgrade? Is there any way to enumerate all objects in the cluster with incompatible API versions, or what would be the best approach? I suspect the objects created by kOps, e.g. the kube-system objects, will be upgraded automatically.
When you upgrade the cluster, the API server takes care of upgrading all existing resources in the cluster. The problem arises when you deploy more resources after the upgrade and they still use the old API versions; in that case your deployment (say, kubectl apply) will fail.
That is, nothing already running in the cluster will break, but future deployments will if they still use old versions.
The resources managed by kOps already use new API versions.
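As for enumerating what would break: a minimal sketch (the directory argument is an assumption about where your manifests live) is to list each distinct apiVersion your manifests use, with a count, so deprecated API groups such as extensions/v1beta1 stand out before the upgrade:

```shell
# A minimal sketch: list each distinct apiVersion used by the YAML
# manifests under a directory, with a count, so deprecated API groups
# (e.g. extensions/v1beta1) stand out before the upgrade.
list_api_versions() {
  grep -rh '^apiVersion:' "$1" | sort | uniq -c | sort -rn
}

# Usage: list_api_versions ./manifests
```

For resources already in the cluster, third-party tools such as pluto or kube-no-trouble (kubent) are built to detect deprecated API versions; check their own documentation for exact usage.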

Things to do before upgrading Kubernetes cluster

I have production stage hosted in Google Kubernetes Engine with Kubernetes version 1.12.9-gke.15.
My team is planning to upgrade it to Kubernetes version 1.13.11-gke.5.
[screenshot: list of available Kubernetes versions]
I have read some articles about upgrading Kubernetes; however, they use kubeadm, not GKE.
"How to update api versions list in Kubernetes" is an example that uses GKE.
If you have experience upgrading Kubernetes clusters on GKE, or even with kubeadm, please share: what should I do before upgrading the version?
Should I upgrade the version to 1.13.7-gke.24 and then to 1.13.9-gke.3, and so on?
You should first check whether you are using any deprecated features. For example, check the changelogs for versions 1.12 and 1.13 to make sure you won't lose any functionality after the upgrade.
Bear in mind that if you have just one master node, you will lose access to it for a few minutes while the control plane is being updated. After the master node is done, the worker nodes will follow.
There is a great post, Kubernetes best practices: upgrading your clusters with zero downtime, which talks about node locations and the (then beta) Regional option:
When creating your cluster, be sure to select the “regional” option:
And that’s it! Kubernetes Engine automatically creates your nodes and masters in three zones, with the masters behind a load-balanced IP address, so the Kubernetes API will continue to work during an upgrade.
It also explains how rolling updates work and how to perform them.
Also you might consider familiarizing yourself with documentation for Cluster upgrades, as it discusses how automatic and manual upgrades work on GKE.
As you can see from your current version 1.12.9-gke.15 you cannot upgrade to 1.14.6-gke.1. You will need to upgrade to 1.13.11-gke.5 and once this is done you will be able to upgrade to latest GKE version.
GKE control planes are generally upgraded automatically and don't require you to do much, but if you are looking for manual upgrade options, this may help:
https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster
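A manual, sequential GKE upgrade looks roughly like this sketch (the cluster name, zone, and versions are placeholders; `gcloud container get-server-config` shows the versions actually offered in your zone):

```shell
# Sketch only: upgrade the control plane one minor version at a time,
# then upgrade each node pool to match. Names and zones are placeholders.
gcloud container clusters upgrade my-cluster \
  --zone us-central1-a --master --cluster-version 1.13.11-gke.5

gcloud container clusters upgrade my-cluster \
  --zone us-central1-a --node-pool default-pool
```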
One more point worth mentioning: make sure you have persistent volumes for services that require them (e.g. databases), and back those up manually before the upgrade.

Updating StatefulSets in Kubernetes with a proprietary vendor?

I may not be understanding Kubernetes correctly, but our application relies on a proprietary, closed-source vendor product that in turn relies on Solr. I've read articles on rolling updates with StatefulSets, but they seem to depend on the application being aware of and accounting for new schema versions, which we have no ability to do without decompiling and jumping through a lot of hoops. Let me describe what we're trying to do:
WebService Version 1 needs to be upgraded to WebService Version 2, this upgrade is none of our code and just the vendor code our code relies on. Think of it like updating the OS.
However, WebService Version 1 relies on Solr Version 1. The managed schema is different, and there are breaking changes between Solr Version 1 and 2; both the Solr version and the schemas differ. If WebService Version 1 hits Solr Version 2 it won't work, or worse, will break Solr Version 2. The same is true in reverse: if we update to WebService Version 2 and it gets Solr Version 1, it will break that.
The only thing I can think of is to get Kubernetes to basically spin up a pod for each version and not bring down 1 until 2 is up for both WebService and Solr.
This seems not right, am I understanding this correctly?
This is not really a problem Kubernetes can solve. First work out how you would do it by hand, then you can start working out how to automate it. If zero-downtime is a requirement, the best thing I can imagine is launching the new Solr cluster separately rather than doing an in-place upgrade, then launch the new app separately pointing at the new Solr. But you will need to work out how to sync data between the two Solr clusters in real time during the upgrade. But again, Kubernetes neither helps nor hinders here, the problems are not in launching or managing the containers, it's a logistical issue in your architecture.
It seems that a canary release strategy with Solr amounts to simply having a new StatefulSet with the same labels as the one running the previous version.
Since labels can be assigned to many objects, and Services route requests at the network level based on those labels, requests will be routed to both StatefulSets, emulating the canary release model.
Following this logic, you can have a v1 StatefulSet with, say, 8 replicas and a v2 StatefulSet with 2, so roughly 80% of requests should hit v1 and roughly 20% v2 (not exact, just to illustrate).
From there, you can play with the number of replicas of each StatefulSet until you "roll out" 100% of replicas of v2, with no downtime.
Now, this can work in your scenario if you label each duo (application + Solr version) in the way described above.
Each duo would receive roughly N% of requests, depending on the number of replicas it has. You can slowly decrease the number of replicas of duo v1 and increase those of the next, updated version.
This approach has the downside of using more resources as you will be running two versions of your full application stack. However, there is no downtime when upgrading the whole stack and you can control the percentage of "roll out".
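The label-based canary above can be sketched as two StatefulSets that share an `app` label, fronted by one Service (all names, label values, and images here are illustrative, not from the vendor):

```yaml
# Both StatefulSets carry app: webservice, so the Service below routes to
# pods of either version in proportion to their replica counts (~80/20).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webservice-v1
spec:
  replicas: 8
  serviceName: webservice
  selector:
    matchLabels: {app: webservice, version: v1}
  template:
    metadata:
      labels: {app: webservice, version: v1}
    spec:
      containers:
      - name: webservice
        image: vendor/webservice:1   # hypothetical image
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webservice-v2
spec:
  replicas: 2
  serviceName: webservice
  selector:
    matchLabels: {app: webservice, version: v2}
  template:
    metadata:
      labels: {app: webservice, version: v2}
    spec:
      containers:
      - name: webservice
        image: vendor/webservice:2   # hypothetical image
---
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  selector:
    app: webservice   # matches both versions; split follows replica counts
  ports:
  - port: 80
```

Shifting traffic is then a matter of scaling `webservice-v1` down and `webservice-v2` up.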

How to update Kubernetes Cluster to the latest version available?

I began trying Google Container Engine recently. I would like to upgrade the Kubernetes cluster to the latest available version, if possible without downtime. Is there any way to do this?
Unfortunately, the best answer we currently have is to create a new cluster and move your resources over, then delete the old one.
We are very actively working on making cluster upgrades reliable (both nodes and the master), but upgrades are unlikely to work for the majority of currently existing clusters.
We now have a checked-in upgrade tool for master and nodes: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/gce/upgrade.sh