Transferring a Kubernetes cluster to another project, including PVCs - kubernetes

I have a problem migrating a Kubernetes cluster to another Google project, and I'm not very familiar with GKE. Assume my cluster is k8s-prod-xyz in the xyz-proj project.
Now I have a new project called xyz-new-proj, and the Kubernetes cluster there is still empty. I want to move or migrate k8s-prod-xyz from xyz-proj to xyz-new-proj.
Nodes, PVCs, Services, etc. should be transferred or migrated. Have you experienced this case? Or should I create a new Kubernetes cluster in the new project and run the deployments from scratch?

You can use the GKE feature Clone an existing cluster (however, this works only within the same project) along with the Heptio Velero tool. I believe the solution described in this article is currently the fastest and most convenient way of performing such a migration.
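As a rough illustration only, a Velero-based migration boils down to backing up in the old cluster and restoring in the new one. The bucket name, plugin version, backup name and credentials path below are placeholders, not values from the article:

    # In the source cluster (xyz-proj): install Velero with a GCS bucket for backups
    velero install \
      --provider gcp \
      --plugins velero/velero-plugin-for-gcp:v1.5.0 \
      --bucket my-velero-backups \
      --secret-file ./credentials-velero

    # Back up the cluster, including PersistentVolume snapshots
    velero backup create prod-migration --wait

    # Point kubectl at the empty cluster in xyz-new-proj, install Velero against
    # the same bucket, then restore
    velero restore create --from-backup prod-migration

Whether the volume snapshots can cross the project boundary depends on how the bucket and snapshot permissions are shared between the two projects, so treat this as a sketch to adapt rather than a recipe.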

Related

Migrating a Kubernetes cluster created by Rancher to a different Rancher instance

I currently have an air-gapped, Docker-installed Rancher v2.4.8 instance hosting a few clusters created in the Rancher UI.
I created a new air-gapped Rancher v2.6.6 instance on an RKE Kubernetes cluster and I need to migrate those clusters.
The question is: after I upgrade the Docker instance, is there any solution to migrate them? I saw an article from 2021 stating it's not possible, but maybe there's a way now or a solid workaround?
Thanks everyone.
I searched online for solutions, but I am planning this migration ahead of time to avoid problems.

Can the new Rancher version be used for the local cluster only?

I have been working with Kubernetes in a staging environment for a couple of months and want to switch to production. I came across a tool called Rancher almost 2 weeks ago and since then have been going through their documentation.
It was recommended by the developers, and also in the community, not to use Rancher on the production Kubernetes cluster itself, and preferably to create a separate cluster for it and add an agent to your main production cluster from that one.
However, in the latest stable version there is actually an option you can tick to use Rancher only for the local cluster, so this question came to my mind:
Is the latest stable version of Rancher suitable to be deployed on the production cluster itself rather than on a dedicated cluster? And are there any security or restart issues that could delete the configuration of other components on the cluster?
Note: on another staging environment I installed WordPress and Ghost on the local cluster, and both were working fine.
I still think the best option for you would be to have your own fully accessible cluster, so you won't be dependent on Rancher's cloud solutions. I am not saying Rancher is bad - no. Just, if we are talking about a PRODUCTION environment, my personal opinion is that the cluster should be your own. Sure, it's an arguable topic.
What I can also mention here: you can use any of the Useful Interactive Terminal and Graphical UI Tools for Kubernetes, for example Octant.
Octant is a browser-based UI aimed at application developers, giving them visibility into how their application is running. I also think this tool can really benefit anyone using K8s, especially if you forget the various options to kubectl to inspect your K8s Cluster and/or workloads. Octant is also a VMware Open Source project and it is supported on Windows, Mac and Linux (including ARM) and runs locally on a system that has access to a K8s Cluster. After installing Octant, just type octant and it will start listening on localhost:7777 and you just launch your web browser to access the UI.
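For completeness, the flow described above looks roughly like this on a workstation; the Homebrew install is just one of the supported methods and the kubeconfig path is an assumption:

    # Install Octant (one of several install options)
    brew install octant

    # Run it against whatever cluster your kubeconfig points to;
    # by default the UI is served on http://localhost:7777
    KUBECONFIG=$HOME/.kube/config octant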

How to approach updating kops-based Kubernetes API versions when upgrading the cluster?

Currently we run a kops-based cluster at version 1.15. We are planning to upgrade it to version 1.16 first and then further. However, the API versions used by various Kubernetes resources in the YAMLs will also need to change. How would you address this issue before the cluster upgrade? Is there any way to enumerate all objects in the cluster with incompatible API versions, or what would be the best approach? I suspect the objects created by kops, e.g. the kube-system objects, will be upgraded automatically.
When you upgrade the cluster, the API server will take care of upgrading all existing resources in the cluster. The problem arises when you want to deploy more resources after the upgrade and they still use the old API versions. In that case your deployment (say kubectl apply) will fail.
I.e. nothing already running in the cluster will break, but future deployments will if they still use old API versions.
The resources managed by kOps already use the new API versions.
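A couple of hedged ways to hunt for manifests and served API versions that go away in 1.16 (the directory path is a placeholder; dedicated tools such as Pluto or kube-no-trouble do a more thorough job):

    # Grep your own manifests for the apps/extensions beta groups removed in 1.16
    grep -rnE 'apiVersion: *(extensions/v1beta1|apps/v1beta1|apps/v1beta2)' ./manifests/

    # Ask the live cluster for an old group/version explicitly; if the call succeeds,
    # that version is still being served, and after the upgrade it will start failing
    kubectl get deployments.v1beta1.extensions --all-namespaces
    kubectl get daemonsets.v1beta1.extensions --all-namespaces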

Creating a proper kubeconfig file for a 2-node Gentoo Linux Kubernetes cluster

I have two servers at my home running Gentoo Linux ~amd64. I would like to install Kubernetes on them to play with it a bit.
Gentoo now packages all the Kubernetes-related dependencies under one package called sys-cluster/kubernetes, and the latest version available at the moment is 1.18.3.
The last time I played with Kubernetes was several years ago and I think I have completely forgotten everything.
So I installed Kubernetes on both servers. Since I use systemd and the package only ships a kubelet systemd service, I also created systemd units for kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler.
Now, this package also comes with kubeadm, but I would like to know how to install and configure Kubernetes manually.
Now I want to create a kubeconfig file for my cluster configuration.
I googled and found the following URL: http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/
The first step there is "Make sure you can access the cluster", but I thought I wanted to create the kubeconfig precisely so that the services know how to access my cluster!
That web site also talks about secrets that were already configured, which in my case aren't; I'm starting from scratch, so this is probably not the way to go.
In general I want to know how to properly create a kubeconfig file for my setup; then I'll configure the services to use this kubeconfig file and go on from there.
So any information regarding this issue would be greatly appreciated.
So I also asked this in the Kubernetes Slack channel, and they pointed me to this project: https://github.com/kelseyhightower/kubernetes-the-hard-way
It's a documentation project on how to configure Kubernetes the hard way. In the documentation they set it up on Google Cloud, but it's easy to understand what they did in the cloud and how to configure the same on your own network.
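For reference, that guide builds each kubeconfig with plain kubectl config commands, roughly as sketched below; the cluster name, certificate paths and API server address are placeholders for a home setup, not values from the guide:

    # Describe the cluster: CA certificate plus the API server endpoint
    kubectl config set-cluster home-cluster \
      --certificate-authority=/etc/kubernetes/pki/ca.crt \
      --embed-certs=true \
      --server=https://192.168.1.10:6443 \
      --kubeconfig=admin.kubeconfig

    # Describe the user: a client certificate/key pair signed by that CA
    kubectl config set-credentials admin \
      --client-certificate=/etc/kubernetes/pki/admin.crt \
      --client-key=/etc/kubernetes/pki/admin.key \
      --embed-certs=true \
      --kubeconfig=admin.kubeconfig

    # Tie cluster and user together in a context and make it the default
    kubectl config set-context default --cluster=home-cluster --user=admin \
      --kubeconfig=admin.kubeconfig
    kubectl config use-context default --kubeconfig=admin.kubeconfig

The same pattern, with a different user and certificate per component, produces the kubeconfig files for the kubelet, kube-proxy, scheduler and controller manager.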

Things to do before upgrading a Kubernetes cluster

I have a production stage hosted in Google Kubernetes Engine with Kubernetes version 1.12.9-gke.15.
My team is planning to upgrade it to Kubernetes version 1.13.11-gke.5.
(A screenshot of the list of available Kubernetes versions)
I have read some articles about upgrading Kubernetes. However, they use kubeadm, not GKE.
How to update api versions list in Kubernetes - here's an example that uses GKE.
If you have experience upgrading a Kubernetes cluster on GKE, or even with kubeadm, please share: what should I do before upgrading the version?
Should I upgrade the version to 1.13.7-gke.24 and then to 1.13.9-gke.3 and so on?
You should first check whether you are using any deprecated features. For example, check the changelogs for versions 1.12 and 1.13 to make sure you won't lose any functionality after the upgrade.
You also have to remember that if you have just one master node, you will lose access to it for a few minutes while the control plane is being updated. After the master node is done, the worker nodes will follow.
There is a great post, Kubernetes best practices: upgrading your clusters with zero downtime, which talks about where nodes are located and about a beta option called regional clusters.
When creating your cluster, be sure to select the “regional” option:
And that’s it! Kubernetes Engine automatically creates your nodes and masters in three zones, with the masters behind a load-balanced IP address, so the Kubernetes API will continue to work during an upgrade.
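As a small illustration of that quote, creating a regional cluster from the CLI might look like the sketch below; the cluster name, region and node count are placeholders, not values from the post:

    # Regional clusters spread the control plane and nodes across the zones of the
    # region, so the API stays reachable while one zone's master is being upgraded
    gcloud container clusters create my-regional-cluster \
      --region europe-west1 \
      --num-nodes 1   # per zone, i.e. three nodes in total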
They also explain how rolling updates work and how to perform them.
Also you might consider familiarizing yourself with documentation for Cluster upgrades, as it discusses how automatic and manual upgrades work on GKE.
As you can see, from your current version 1.12.9-gke.15 you cannot jump straight to 1.14.6-gke.1. You will need to upgrade to 1.13.11-gke.5 first, and once that is done you will be able to upgrade to the latest GKE version.
GKE upgrades are largely handled for you and generally do not require you to do much. But if you are looking for manual upgrade options, maybe this will help.
https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster
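Putting the manual route together, the gcloud flow is roughly the following; the cluster name, zone and target version below are placeholders based on this question, not a prescription:

    # See which versions are currently available in your zone
    gcloud container get-server-config --zone europe-west1-b

    # Upgrade the control plane (master) first
    gcloud container clusters upgrade my-cluster \
      --zone europe-west1-b \
      --master \
      --cluster-version 1.13.11-gke.5

    # Then upgrade the node pools to the same version
    gcloud container clusters upgrade my-cluster \
      --zone europe-west1-b \
      --cluster-version 1.13.11-gke.5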
A point worth mentioning too: make sure you have persistent volumes for the services that require them, e.g. databases, and back those volumes up manually before the upgrade.
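One hedged way to take that manual backup on GKE is to snapshot the Compute Engine disks backing the PersistentVolumes; the disk name and zone below are placeholders:

    # Find the GCE disk behind each PersistentVolume
    kubectl get pv -o custom-columns=NAME:.metadata.name,DISK:.spec.gcePersistentDisk.pdName

    # Snapshot the disk before the upgrade
    gcloud compute disks snapshot my-db-disk \
      --zone europe-west1-b \
      --snapshot-names my-db-disk-pre-upgrade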