I know that you can start minikube with a different K8s version with --kubernetes-version.
But how can I let minikube list all versions which it supports?
I had a look at the command reference of start, but could not find a way up to now.
In my case I would like to know which one is the latest v1.17.X version which is supported.
On the GitHub release page I found that v1.17.12 is currently the latest version in the v1.17.x series. But it would be nice if minikube or kubectl could tell me this.
@Esteban Garcia is right, but I would like to expand on this topic a bit more with the help of the official documentation:
By default, minikube installs the latest stable version of Kubernetes
that was available at the time of the minikube release. You may select
a different Kubernetes release by using the --kubernetes-version
flag, for example:
minikube start --kubernetes-version=v1.11.10
minikube follows the Kubernetes Version and Version Skew Support
Policy, so we guarantee support for the latest build for the last
3 minor Kubernetes releases. When practical, minikube aims to support
older releases as well so that users can emulate legacy environments.
For up to date information on supported versions, see
OldestKubernetesVersion and NewestKubernetesVersion in
constants.go.
The following command may be helpful:
minikube config defaults kubernetes-version
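To answer the original question directly, you can filter that list for the newest supported v1.17.x patch release (the grep pattern is just an illustration; the available versions depend on your minikube release):
minikube config defaults kubernetes-version | grep v1.17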
Kubernetes (v1.10.8) was installed on my cloud by kismatic (v1.12.0). How can I update Kubernetes to the latest version with kubeadm?
With such a version difference - v1.23 is the current release at the time of writing (see the official supported releases) - I would consider creating the cluster from scratch.
If that is not possible, you should upgrade step by step, one minor version at a time. Here you can find a guide that will help you upgrade kubeadm clusters.
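As a rough sketch, one such step on a Debian-based control-plane node looks like this (the target version v1.11.10 is only a placeholder; take the exact versions from the guide and repeat the sequence for each minor version on the way up):
apt-get update && apt-get install -y kubeadm=1.11.10-00   # install the next minor version's kubeadm
kubeadm upgrade plan                                      # check which target versions are available
kubeadm upgrade apply v1.11.10                            # upgrade the control plane
apt-get install -y kubelet=1.11.10-00 kubectl=1.11.10-00  # then upgrade kubelet and kubectl
systemctl restart kubelet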
You can find a link to older versions of the documentation here, but note:
Kubernetes v1.19 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot.
However, keep in mind that upgrading through so many versions can cause other issues, so I recommend the first option.
I am using kOps to perform a manual cluster upgrade (from 1.17 to 1.18) as explained at https://kops.sigs.k8s.io/operations/updates_and_upgrades/#upgrading-kubernetes
I've noticed that kOps does not update the AMI defined at spec.image in the node instance groups, which means that after the cluster upgrade the nodes keep using the same base OS despite the Kubernetes upgrade. If you install 1.18 from scratch, however, kOps uses the latest image available for that version.
Should I update the image myself and configure it the same as the one kOps would use for an installation from scratch?
In 1.18 the default AMI has moved from Debian to Ubuntu; should I take any precautions due to the change of operating system?
If you edit the manifests directly and run kops update, etc., then you also need to update the images yourself. Another alternative is to let kOps do it for you by running kops upgrade cluster; it will update the remote state and set the correct defaults.
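A minimal sketch of that flow, assuming your cluster name is in $NAME and the state store is already configured:
kops upgrade cluster --name $NAME --yes         # updates versions/defaults in the remote state
kops update cluster --name $NAME --yes          # pushes the new configuration to the cloud
kops rolling-update cluster --name $NAME --yes  # replaces nodes so they pick up the new image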
Regarding the image change, I don't see any major issues there. What you can do is record the current AMI and do "sort of rollbacks" by putting the old image back and updating the cluster (or by applying a previous version of the manifest, assuming you have S3 versioning enabled on the state store).
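To record the current image before the upgrade, something like this should work (the instance group name nodes is an assumption; yours may differ):
kops get ig nodes --name $NAME -o yaml | grep image   # note the current spec.image value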
There was a bug up until kOps 1.18.2 where Ubuntu images were considered "custom" and therefore not upgraded by kops upgrade. See this bug
As of 1.18.2, you should see upgrades for Ubuntu as well.
There is no particular need to take any precautions when switching from Debian to Ubuntu, unless you are using kOps hooks that are Debian-specific. kOps will take care of this change for you.
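If you are unsure whether any such hooks are defined, you can inspect the cluster spec (again with the cluster name in $NAME):
kops get cluster --name $NAME -o yaml | grep -A 5 'hooks:'   # shows any custom hooks, if present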
Using Ansible to deploy Kubernetes according to the official contrib repository, I got Kubernetes 1.2 installed, although 1.3.x is current. How can I specify the version?
The default value in the roles is kube_version: 1.2.4.
You can override it by calling: ./deploy-cluster.sh -e kube_version=1.3.5
In principle, one could simply add
kube_version: 1.3.5
to the all.yml file. However, at least on RedHat, this does nothing, because other settings affect the Kubernetes version number, too. In the case of RedHat,
kube_version: 1.3.0
kube_source_type: distribution-rpm
kube_rpm_url_base: https://kojipkgs.fedoraproject.org/packages/kubernetes/1.3.0/0.2.git507d3a7.fc26/x86_64
kube_rpm_url_sufix: 1.3.0-0.2.git507d3a7.fc26.x86_64.rpm
does the trick of upgrading the current playbooks (as of August 2016) to Kubernetes 1.3.0. (The kube_version setting may even be superfluous here.) Another possibility, which should work for all flavours of Linux, is
kube_version: 1.3.5
kube_source_type: github-release
However, at least as of August 2016, this leads to a deployment error, possibly because the directory structure of the Kubernetes source tree changed between 1.2.0 and 1.3.5.
Other possible combinations of these settings can be found in the comments of Kubernetes' main.yml file. All this trouble suggests that it is best to wait for the Ansible Kubernetes files to be updated instead of forcing a newer version.
Is it recommended to deploy Kubernetes 1.2 on a bare-metal Ubuntu/RedHat production cluster? If so, what are the recommended SDN tool (flanneld or OvS), Docker version, and etcd version to use?
Here is the getting started guide for Ubuntu. It hasn't been updated since Kubernetes v1.1.8, but it should still be applicable for v1.2.4. That getting started guide uses flannel, but you can also use Calico (Guide). The list of Kubernetes getting started guides might be a good place to start.
The Docker version needs to be 1.2+.
You can find the flannel and etcd versions in the download-release.sh script.
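For example, assuming a checkout of the Kubernetes source tree from that era (the path and variable names are assumptions; adjust to your checkout):
grep -E 'FLANNEL_VERSION|ETCD_VERSION' cluster/ubuntu/download-release.sh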
I want to upgrade my cluster to use the newest version of Kubernetes. I see Google Container Engine has the following tool:
https://cloud.google.com/container-engine/docs/clusters/upgrade?hl=en
However, after I upgrade my cluster and everything finishes successfully, I still see the old version (0.19.3) for my cluster in the web console. When you create a new cluster, the version is 1.0.1, so I expect my cluster to upgrade to that version. I also tried upgrading to 0.21.4, with the same results.
Is there something I'm doing wrong?
The web console may be reporting your initial cluster version rather than the current version of your master and nodes. If you want to see all of the versions for your cluster, try running
gcloud beta container clusters --zone=<zone> describe <cluster-name> | grep -i version
and it should print out something like
currentMasterVersion: 0.21.4
currentNodeVersion: 0.19.3
initialClusterVersion: 0.19.3
If your initial cluster version was 0.19.3 then your master won't have been upgraded to 1.0.x yet (but you should have received a notice that you will be upgraded soon).
Once your master has been upgraded, you can follow the instructions at the link you found to upgrade your nodes to the same version as your master.
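For reference, the node upgrade uses the same command family as the describe example above (cluster name and zone are placeholders, and exact flags may differ between SDK releases):
gcloud beta container clusters upgrade <cluster-name> --zone=<zone> --cluster-version=1.0.1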