Upgrading the AKS dashboard?

I am experimenting with the managed Azure Kubernetes service. I was able to start with the standard 1.8.3 version and then upgraded to 1.9.6. After the upgrade, I noticed that the Kubernetes dashboard still shows 1.8.3 as the version. Is the dashboard supposed to be automatically upgraded or do I have to upgrade it manually?

Never mind. I see that, as of today (4/20/2018), the most recent version of the Kubernetes dashboard is still 1.8.3 so it didn't need to be upgraded.
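For reference, one way to see which dashboard version is actually deployed is to inspect the image tag on the dashboard deployment (the deployment name and namespace below are assumptions; on AKS the add-on typically lives in kube-system):
kubectl -n kube-system get deployment kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[0].image}'
The image tag at the end of the output is the dashboard version the cluster is running.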

Related

How to update Kubernetes v1.10.8 and Kismatic v1.12.0 to the latest with kubeadm

Kubernetes (v1.10.8) was installed on my cloud by Kismatic (v1.12.0). How can I update Kubernetes to the latest version with kubeadm?
With such a version difference (we currently have v1.23; see the official supported releases), I would consider creating the cluster from scratch.
If that is not possible, you should upgrade step by step, from version to version. Here you can find a guide that will help you upgrade kubeadm clusters.
You can find a link to older versions here, but
NOTE:
Kubernetes v1.19 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot.
However, keep in mind that upgrading through so many versions can cause other issues, so I recommend the first option.
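As a rough sketch of what one hop of the step-by-step path looks like (the package manager commands and the target version below are placeholders; follow the linked guide for your exact versions and OS):
# on the first control-plane node
apt-get update && apt-get install -y kubeadm=1.11.3-00    # pin kubeadm to the next minor release
kubeadm upgrade plan                                      # check which versions the cluster can move to
kubeadm upgrade apply v1.11.3                             # upgrade the control plane components
apt-get install -y kubelet=1.11.3-00 kubectl=1.11.3-00    # then move kubelet and kubectl
systemctl restart kubelet
You would repeat the same hop, one minor version at a time, until you reach the release you are targeting, draining and upgrading the worker nodes at each step as the guide describes.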

How to upgrade istio 1.4.3 to latest with zero downtime

I am a newly hired engineer who started working with Istio recently. My application is currently running on Istio 1.4.3, and I ran into issues when I tried to upgrade to the latest version using istioctl upgrade.
Below are the steps I tried:
1) Verified the versions using istioctl version and saw that the control plane and data plane are running 1.4.3, whereas the client version is 1.5.1 (the version I planned to upgrade to).
2) Tried istioctl upgrade and got the message "cannot upgrade because of mismatch of versions in istio components".
3) As it was my dev environment, I decided to reinstall using istioctl manifest apply --profile default.
4) The above step cost me a lot of time, because I lost all the settings for the ingress gateway connected to the AWS ALB; instead, the ingress controller created a Classic Load Balancer, which was not part of our previous setup.
5) I also lost settings related to Prometheus, Grafana, and Kiali.
6) Now I am planning to upgrade my prod without messing up the current settings. Please suggest a correct way to upgrade Istio to the latest version with zero downtime.
What is the best way to do this upgrade, and can you point out any links to documentation apart from what is mentioned on the Istio website? Help is much appreciated.
can you point out any links to documentation apart from what is mentioned on the Istio website?
https://istio.io has the most comprehensive information on the topic.
There are some prerequisites for the Istio upgrade as well.
- Istio version 1.4.4 or higher is installed.
- Your Istio installation was installed using istioctl.
It looks like your Istio version is a tiny step below the minimum supported one :)
What is the best way to do this upgrade?
Usually it is recommended to go 1.4 --> 1.5 first, and only then 1.5 --> 1.6.
I have found the following document that describes an "experimental feature, which is intended for evaluation purposes only".
But the minimum version for it is 1.3.3 or higher, so it might do the trick for you.
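To make that concrete, here is a minimal sketch of one hop of the in-place path, assuming you first patch to 1.4.4 or later and download the istioctl binary matching each target release (the namespace below is a placeholder):
istioctl version                 # confirm control plane and data plane agree before each hop
istioctl upgrade                 # run with the 1.5.x istioctl against the 1.4.x control plane
kubectl rollout restart deployment -n <your-app-namespace>   # restart workloads so sidecars pick up the new proxy
Repeating the same sequence with the 1.6.x istioctl gives you the next hop; rolling the workloads one deployment at a time is what keeps the data plane serving traffic during the upgrade.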
I hope that helps.

How to update kubernetes-dashboard with kubespray

I have no idea how to update my kubernetes-dashboard; it's currently version 1.10.2, but I need to update it to v2.0.0-beta8. I'm fairly new to Kubernetes, does anyone know how to do the update? I used Kubespray to set up the clusters.
You can find all the information in the repository.
Pay attention to compatibility: version v2.0.0-beta8 only works with Kubernetes > 1.16.
I had issues just moving to a 2.x.x release, as Kubespray creates resources templated from the kubernetes-apps role which no longer fully match. We disabled the dashboard install in Kubespray and just installed it in our Ansible rollout following the official docs, which works fine. Same for Helm 3, by the way.
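In practice that approach boils down to setting dashboard_enabled: false in your Kubespray inventory group_vars (the variable name is an assumption to verify against your Kubespray release) and then applying the upstream manifest for the release you want, along these lines (the manifest URL is also an assumption to double-check against the dashboard release notes):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
After that, the dashboard is managed from your own Ansible rollout (or plain kubectl) rather than by the Kubespray templates.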

What's upgraded exactly when you upgrade a Service Fabric cluster?

As explained in the article Controlling the fabric version that runs on your Cluster, you can choose which version of Service Fabric you want Azure to create for you.
The Service Fabric NuGet packages seem to have the same version numbers as the clusters, but older versions of the packages work just fine with newer versions of the cluster.
Now, the release notes for version 5.4.145 list a number of improvements and mention that some older versions won't be supported anymore.
What I'm failing to understand is:
Will I get the list of improvements just by upgrading my cluster, or do I also have to upgrade my nuget packages?
Similarly, does it mean I have to upgrade my nuget packages soon, otherwise I'm at risk of running deprecated code?
It would also be nice to get some clarification on what exactly is upgraded when I upgrade a cluster, what's upgraded when I upgrade my packages, and how the two upgrades relate to each other.
There's a difference between the Runtime and the SDK. When the cluster is upgraded, it gets a new runtime. Any improvements in that runtime will be available to existing services running in the cluster.
Upgrading the SDK (or the NuGet packages) makes new functionality available to the applications (services/actors) built on top of the cluster runtime.
I'd recommend updating the NuGet packages soon after upgrading the cluster to keep them in sync.
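If your services are in SDK-style projects, one quick way to see and close that gap is with the dotnet CLI (the package name below is just an example from the Service Fabric SDK; pick the version that matches your upgraded runtime's SDK):
dotnet list package --outdated
dotnet add package Microsoft.ServiceFabric.Services --version <matching-sdk-version>
For classic projects the same thing is done through the NuGet Package Manager in Visual Studio.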

Upgrade cluster on Google Container Engine

I want to upgrade my cluster to use the newest version of Kubernetes. I see Google Container Engine has the following tool:
https://cloud.google.com/container-engine/docs/clusters/upgrade?hl=en
However, after I upgrade my cluster and everything finishes successfully, the web console still shows the old version (0.19.3). When you create a new cluster the version is 1.0.1, so I expected my cluster to upgrade to that version. I also tried upgrading to 0.21.4 with the same results.
Is there something I'm doing wrong?
The web console may be reporting your initial cluster version rather than the current version of your master and nodes. If you want to see all of the versions for your cluster, try running
gcloud beta container clusters --zone=<zone> describe <cluster-name> | grep -i version
and it should print out something like
currentMasterVersion: 0.21.4
currentNodeVersion: 0.19.3
initialClusterVersion: 0.19.3
If your initial cluster version was 0.19.3 then your master won't have been upgraded to 1.0.x yet (but you should have received a notice that you will be upgraded soon).
Once your master has been upgraded, you can follow the instructions at the link you found to upgrade your nodes to the same version as your master.
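For the node upgrade itself, the command is roughly the following (the cluster name and zone are placeholders, and the exact command group may differ between beta and GA releases of gcloud):
gcloud container clusters upgrade <cluster-name> --zone=<zone>
By default this brings the nodes up to the version your master is currently running.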