As in-tree plugins are going to be deprecated, a third-party storage provider is installed by default in the Kubernetes cluster. My doubt is: if I don't enable automatic CSI migration, what will happen if I create new workloads whose PVCs use the CSI driver? And what will happen to my existing workloads that use in-tree plugins? I'm quite confused about the exact purpose of automatic CSI migration.
If you don't enable automatic CSI migration, existing workloads whose PVCs are backed by in-tree drivers will simply keep using the in-tree driver. When migration is enabled, operations on those in-tree volumes are transparently redirected to the corresponding CSI driver instead.
New workloads whose PVCs explicitly reference a CSI driver (for example via a StorageClass whose provisioner is the CSI driver) will use that CSI driver whether or not migration is enabled; migration does not affect them. Without migration, however, anything still provisioned through an in-tree plugin will not be able to take advantage of features that only exist in the CSI driver, which is why enabling automatic CSI migration is recommended.
In short, the purpose of automatic CSI migration is to move existing workloads from in-tree storage plugins to CSI-based storage plugins without recreating them, so they can benefit from the improved features and performance that come with the new plugins.
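To make the distinction concrete, here is a minimal sketch using the common GCE persistent disk provisioner names (adjust for whatever storage provider you actually run); the StorageClass names are made up:

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-standard
provisioner: pd.csi.storage.gke.io   # CSI driver: PVCs using this class always go through CSI
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: intree-standard
provisioner: kubernetes.io/gce-pd    # in-tree plugin: redirected to the CSI driver only when migration is enabled
EOF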
For reference, follow the official doc.
Currently, we run a kops-based cluster on version 15. We are planning to upgrade it to version 16 first and then further. However, the API versions for various Kubernetes resources in our YAMLs will also need to change. How would you address this before the cluster upgrade? Is there any way to enumerate all objects in the cluster with incompatible API versions, or what would be the best approach? I suspect the objects created by kops, e.g. the kube-system objects, will be upgraded automatically.
When you upgrade the cluster, the API server will take care of upgrading all existing resources in the cluster. The problem arises when you want to deploy more resources after the upgrade and these still use the old API versions: in that case your deployment (say, kubectl apply) will fail.
I.e. nothing already running in the cluster will break, but future deployments will if they still use old versions.
The resources managed by kOps already use new API versions.
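If you want to enumerate objects or manifests that still use deprecated API versions before the upgrade, one option (my suggestion, not something built into kOps) is a third-party scanner such as kube-no-trouble or Fairwinds pluto:

# scan the cluster pointed to by the current kubeconfig context
kubent
# or scan local manifests before applying them
pluto detect-files -d ./manifests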
As I dive into the world of Cloud Composer, Airflow, Google Kubernetes Engine, and Kubernetes, I've not yet found a good answer to what exactly makes Cloud Composer better than Helm and GKE.
Here are some things I've found that could be unique to Composer but mostly seem like they could be handled by GKE.
On their homepage:
End-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipeline.
On the features page:
Identity-Aware Proxy protects the interface
Cloud Composer associates a Cloud Storage bucket with the environment. The associated bucket stores the DAGs, logs, custom plugins, and data for the environment.
The downsides of Composer I've seen include:
It takes many hours to spin up a new instance
It doesn't support Kubernetes Executor
It is risky to change the underlying GKE config because it could be changed back by a composer update
There are errors that often happen during auto-scaling, but they are documented as known issues
Upgrading environments is still beta
To be clear, I'm not saying Cloud Composer is bad. I'm just having trouble seeing why people like it. When I've asked folks why it is better than Helm + GKE, they haven't had any compelling answers, even though they can tell many stories of Composer being unpredictable and having lots of issues.
Are you comparing the same things?
On one side, GKE, you have a container orchestrator. Declare what you want, and it will deploy and maintain the stability of the cluster according to the declared configuration. This configuration can be packaged with Helm to make it easier to write. Because you deploy containers, you can use whatever language you want in your services.
On the other side, you have a workflow manager, with a scheduler, retry policies, parallel tasks, and context forwarding. You write DAGs in Python (only!) and you have operators to interact with external products/services. It's mainly designed for data processing and is used a lot by data science and data engineering teams.
Note: Cloud Composer is deployed on top of GKE (scheduler and workers), Redis, App Engine, and Cloud SQL.
You are comparing 2 different worlds: the Ops world (GKE/Helm) and the App/Data world (Composer/Airflow). Have a look at this new video.
Update 1:
My bad, I didn't understand!!! Anyway, personally I don't want to manage things by myself: a cluster, K8s updates, VM patching, replicas, snapshots, backup/restore, ...
If someone can do this for me, I prefer that, and managed services are perfect for me!!
Do you ask yourself this question about Cloud SQL versus a database you manage yourself on a Compute Engine instance? If not (because Cloud SQL solves a lot of boring issues), my opinion is the same for Composer.
But it's an opinion; I didn't test both and compare the performance, cost, and ease of use.
I have a production stage hosted in Google Kubernetes Engine with Kubernetes version 1.12.9-gke.15.
My team is planning to upgrade it to Kubernetes version 1.13.11-gke.5.
(Screenshot: the list of available Kubernetes versions.)
I have read some articles about upgrading Kubernetes. However, they use kubeadm, not GKE.
How to update api versions list in Kubernetes: here's an example that uses GKE.
If you have experience upgrading a Kubernetes cluster in GKE, or even with kubeadm, please share: what should I do before upgrading the version?
Should I upgrade to 1.13.7-gke.24 and then to 1.13.9-gke.3, and so on?
You should first check whether you are using any deprecated features. For example, check the changelogs for versions 1.12 and 1.13 to make sure you won't lose any functionality after the upgrade.
Remember that if you have just one master node, you will lose access to it for a few minutes while the control plane is being updated. Once the master node is done, the worker nodes will follow.
There is a great post, Kubernetes best practices: upgrading your clusters with zero downtime, which talks about node location and the (then beta) Regional option:
When creating your cluster, be sure to select the “regional” option:
And that’s it! Kubernetes Engine automatically creates your nodes and masters in three zones, with the masters behind a load-balanced IP address, so the Kubernetes API will continue to work during an upgrade.
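For illustration (the cluster name and region are my placeholders, and --num-nodes counts per zone, so this gives three nodes in total), a regional cluster is created with gcloud roughly like this:

gcloud container clusters create my-cluster --region us-central1 --num-nodes 1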
They also explain how rolling updates work and how to perform them.
Also, you might consider familiarizing yourself with the documentation for Cluster upgrades, as it discusses how automatic and manual upgrades work on GKE.
As you can see, from your current version 1.12.9-gke.15 you cannot upgrade straight to 1.14.6-gke.1. You will need to upgrade to 1.13.11-gke.5 first, and once this is done you will be able to upgrade to the latest GKE version.
GKE upgrades are largely handled for you and generally do not require you to do much. But if you are looking for manual upgrade options, maybe this will help:
https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster
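As a rough sketch (the cluster, node pool, and zone names are placeholders), a manual upgrade with gcloud looks like this:

# upgrade the control plane first
gcloud container clusters upgrade my-cluster --master --cluster-version 1.13.11-gke.5 --zone us-central1-a
# then upgrade each node pool (defaults to the master's version)
gcloud container clusters upgrade my-cluster --node-pool default-pool --zone us-central1-a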
A point worth mentioning too: make sure you have persistent volumes for the services that require them (e.g. databases), and for these you will have to back the data up manually.
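For example, if your persistent volumes are backed by Compute Engine persistent disks, one way (among others) to back one up is to snapshot the underlying disk; the disk name, zone, and snapshot name below are placeholders:

gcloud compute disks snapshot my-pv-disk --zone us-central1-a --snapshot-names my-pv-backup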
I have a problem migrating a Kubernetes cluster to another Google project; I'm not so familiar with GKE. Assume my cluster is k8s-prod-xyz in the xyz-proj project.
Now I have a new project called xyz-new-proj, and its Kubernetes cluster is still empty. I want to move or migrate k8s-prod-xyz from xyz-proj to xyz-new-proj.
Nodes, PVCs, Services, etc. should be transferred or migrated. Have you experienced this case? Or should I create a new Kubernetes cluster in the new project and then run the deployments from zero?
You can use the GKE feature Clone an existing cluster (however, this works only within the same project) along with the Heptio Velero tool. I guess the solution described in this article is currently the fastest and most convenient way of performing such a migration.
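Assuming Velero is installed in both clusters and configured against the same object storage location (the backup name and namespace below are placeholders), the migration is roughly:

# in the source cluster: back up the workloads you want to move
velero backup create prod-backup --include-namespaces my-app
# in the destination cluster: restore from the same backup location
velero restore create --from-backup prod-backup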
How can I enable the batch/v2alpha1 API for a Google Container Engine cluster?
This is done by passing
--runtime-config=batch/v2alpha1=true
to the API server.
I'm using Kubernetes version 1.7.6.
Where should I go to enable that?!
You cannot change the runtime configuration of the Kubernetes apiserver in Google Container Engine, and by policy alpha APIs are not enabled because they have no official support policy. From https://kubernetes.io/docs/concepts/overview/kubernetes-api/:
Alpha level:
The version names contain alpha (e.g. v1alpha1).
May be buggy.
Enabling the feature may expose bugs.
Disabled by default.
Support for feature may be dropped at any time without notice.
The API may change in incompatible ways in a later software release without notice.
Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
If you want that particular alpha API enabled in Google Container Engine, you can create an alpha cluster.
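For illustration (the cluster name and zone are placeholders, and keep in mind alpha clusters are meant for short-lived testing and cannot be upgraded), an alpha cluster is created with gcloud roughly like this:

gcloud container clusters create my-alpha-cluster --enable-kubernetes-alpha --zone us-central1-a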