How can I migrate clusters from one Rancher to another? - kubernetes

rancher
I have two Ranchers (rancher1, rancher2). I have some clusters in rancher1, and my objective is to migrate all clusters from rancher1 to rancher2.
(rancher2 is deployed using HA RKE, whereas rancher1 is deployed using Docker.)
Does anyone have an idea?

Simple: unjoin the downstream cluster from rancher1 and then join it to rancher2.
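For an imported (downstream) cluster that usually means removing the old Rancher agents and then applying the registration manifest that rancher2 generates. A rough sketch only, assuming an imported cluster; the URL/token below are placeholders, so copy the real command from rancher2's "Import Existing Cluster" screen:

# Remove the old Rancher agents from the downstream cluster (a full cleanup may involve more resources)
kubectl delete namespace cattle-system

# Register the cluster with rancher2; the import URL/token is generated for you in the rancher2 UI
curl --insecure -sfL https://rancher2.example.com/v3/import/<token>.yaml | kubectl apply -f -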

This doc shows how to migrate from one Rancher to another:
https://rancher.com/docs/rancher/v2.5/en/backups/migrating-rancher/
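In short, that doc uses the rancher-backup operator: back up the old Rancher, copy the backup file somewhere both installs can reach, install the operator on the new HA cluster, and restore before installing Rancher there. A minimal sketch of the two custom resources it describes, assuming the rancher-backup chart is already installed and that your rancher1 setup is supported by the operator (check the doc's notes for Docker installs); the resource names and backup file name are placeholders:

# On the cluster running rancher1: request a one-time backup
kubectl apply -f - <<EOF
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: migration-backup
spec:
  resourceSetName: rancher-resource-set
EOF

# On the new HA cluster, after copying the backup file: restore it
kubectl apply -f - <<EOF
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: migration-restore
spec:
  backupFilename: migration-backup-xxxx.tar.gz   # placeholder, use the real file name
  prune: false
EOF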

Related

Kubernetes Deploy Mode

We want to use TDengine on Kubernetes, but I don't see any docs. Is there any problem with running it in k8s or something?
As Helm charts are the popular way to deploy services on Kubernetes, it would be wonderful if we could run helm install tdengine to install a cluster in Kubernetes.
If possible, I can contribute a Helm chart and test it in my cluster.
https://github.com/taosdata/TDengine-Operator/tree/3.0/helm/tdengine
How about these two?
https://docs.tdengine.com/deployment/helm/
They could help you run a TDengine database cluster on K8s with Helm.
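Roughly, the workflow those docs describe is to grab the chart package and install it with Helm. A sketch only; the package URL/version and the parameter name below are assumptions, so check the linked docs and the chart's values.yaml for the real ones:

# Download the TDengine chart package (version/file name is a placeholder, see the linked docs)
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.2.tgz

# Install it, pointing storage at a storage class that exists in your cluster
helm install tdengine tdengine-3.0.2.tgz --set storage.className=<your-storage-class>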

How to run multiple Hazelcast clusters in one deployment?

I'm deploying Hazelcast on k8s using the Helm chart on GitHub, currently at revision 5.3.2.
How would one go about running two clusters, say dev_cache and qa_cache, in one Helm deployment, each with different members? Is that possible?
I see the fields
hazelcast:
  javaOptions:
  existingConfigMap: xxx
and
  configurationFiles: # any additional Hazelcast configuration files
in the values.yaml but am unable to find any documentation on how to use them.
In one Helm deployment (release) you always run one Hazelcast cluster. You need to run the Helm command twice to create two separate Hazelcast clusters.
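So, roughly (a sketch with made-up release names, assuming the official Hazelcast chart repo; cluster.memberCount is the chart value for the number of members):

helm repo add hazelcast https://hazelcast-charts.s3.amazonaws.com/
helm repo update

# First cluster, its own release
helm install dev-cache hazelcast/hazelcast --set cluster.memberCount=3

# Second cluster, a separate release with its own members
helm install qa-cache hazelcast/hazelcast --set cluster.memberCount=2

Each release gets its own StatefulSet and discovery service, so the dev-cache and qa-cache members should not join each other.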

create a cluster in EKS in an unsupported version

I want to create a cluster under EKS with a version that was recently deprecated (1.15) to test something version-specific.
The command below is failing:
eksctl create cluster --name playgroundkubernetes --region us-east-1 --version 1.15 --nodegroup-name standard-workers --node-type t2.medium --managed
Is there a workaround where I can create a cluster on version 1.15?
No, it's not possible to create a brand-new EKS cluster with a deprecated version. The only option would be to deploy your own cluster (DIY) with something like kOps or the like.
In addition to mreferre's comment, if you're trying to just create a Kubernetes cluster and don't need it to be in AWS, you could use Kind (https://kind.sigs.k8s.io/docs/user/quick-start/) or similar to create something much more quickly and probably more cheaply.
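For example, kind can pin the node image to an old Kubernetes release. A quick sketch, assuming a published kindest/node image for 1.15 (check the release notes of your kind version for the exact tag/digest it supports):

# Create a local 1.15 cluster by pinning the node image
kind create cluster --name playground115 --image kindest/node:v1.15.12
kubectl cluster-info --context kind-playground115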

How to install Kubernetes v1.10.11 on a GCP cluster?

There was recently a Kubernetes security hole that was patched in v1.10.11 (among other versions), so I would like to upgrade to that version. I am currently on v1.10.9. However, when running the command gcloud container get-server-config to get the list of valid node versions, v1.10.11 doesn't show up. Instead, it jumps straight from v1.10.9 to v1.11.2.
Does anyone have any idea why I cannot seem to use the usual gcloud container clusters upgrade [CLUSTER_NAME] --cluster-version [CLUSTER_VERSION] to upgrade to this version?
Thanks in advance!
Based on:
https://cloud.google.com/kubernetes-engine/docs/security-bulletins#december-3-2018
If you have Kubernetes v1.10.9, then to patch this security hole you should update your GKE cluster to 1.10.9-gke.5.
The following Kubernetes versions are now available for new clusters and for opt-in master upgrades for existing clusters:
1.9.7-gke.11,
1.10.6-gke.11,
1.10.7-gke.11,
1.10.9-gke.5,
1.11.2-gke.18
Please validate your Scheduled master auto-upgrades option in GKE.
If it's enabled, your cluster masters were already auto-upgraded by Google, and the next version available to upgrade to is a later one, v1.11.2, which is what GKE is showing you.
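A quick sketch of checking what's available and requesting the master upgrade (the cluster name and zone are placeholders):

# List the versions GKE currently offers in your zone
gcloud container get-server-config --zone us-central1-a

# Upgrade the master to the patched release, then upgrade node pools the same way without --master
gcloud container clusters upgrade my-cluster --master --cluster-version 1.10.9-gke.5 --zone us-central1-a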

How to update a Kubernetes cluster

I am working with kube-aws by CoreOS to generate a CloudFormation script and deploy it as part of my stack.
I would like to upgrade my Kubernetes cluster to a newer version.
I don't mind creating a new cluster, but what I do mind is recreating all the deployments/services etc...
Is there any way to take the configurations and transfer them to the new cluster? Maybe copy the entire etcd data? Will that help?
Use kubectl get --export=true on all the resources that you want to move into a new cluster and then restore them that way.
kubectl get pods,services,deployments --export=true --all-namespaces=true -o yaml > resources.yaml
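Note that --export was deprecated in kubectl 1.14 and removed in 1.18, so on newer clients you would drop that flag and strip the status/cluster-specific fields yourself. A sketch of the restore side, assuming your kubeconfig has a context for the new cluster (the context name is a placeholder):

# Point kubectl at the new cluster and recreate the exported resources
kubectl --context new-cluster apply -f resources.yaml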