Edit extraPortMappings in kind cluster

I've scanned through all the resources I could find, but I still cannot see a way to change extraPortMappings in a kind cluster without deleting the cluster and creating it again.
Is it possible and how?

It's not stated explicitly in the official docs, but I found some references that confirm it: you are correct, and changing extraPortMappings (as well as other cluster settings) is only possible by recreating the kind cluster.
if you use extraPortMappings in your config, they are “fixed” and
cannot be modified, unless you recreate the cluster.
Source - Issues I Came Across
Note that the cluster configuration cannot be changed. The only
workaround is to delete the cluster (see below) and create another one
with the new configuration.
Source - Kind Installation
However, there are obvious disadvantages in the configuration and
update of the cluster, and the cluster can only be configured and
updated by recreating the cluster. So you need to consider
configuration options when you initialize the cluster.
Source - Restrictions
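For reference, a minimal sketch of a kind config carrying extraPortMappings, which you would then recreate the cluster with (the file name and port numbers are just examples):

    # kind-config.yaml -- example values; adjust the ports to your setup
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        extraPortMappings:
          - containerPort: 30080   # e.g. a NodePort inside the cluster
            hostPort: 8080         # port exposed on the host machine
            protocol: TCP

Changing the mappings then amounts to kind delete cluster followed by kind create cluster --config kind-config.yaml.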

Off-Loading of k8s deployments to different cluster in case of high loads

Since I am unable to find anything on Google or in the official docs, I have a question.
I have a local minikube cluster with a deployment, service and ingress, which is working fine. Now, when the load on my local cluster becomes too high, I want to automatically switch to a remote cluster.
Is this possible?
How would I achieve this?
Thank you in advance
EDIT:
A remote cluster in my case would be a Rancher Kubernetes cluster, but as long as the resources on my local one are sufficient, I want to stay there.
So let's say my local cluster has enough resources to run two replicas of my application, but when a third one is needed to distribute the load, it should be deployed to the remote Rancher cluster. (I hope that is clearer now.)
I imagine it would be doable with kubefed (https://github.com/kubernetes-sigs/kubefed) using ReplicaSchedulingPreferences (https://github.com/kubernetes-sigs/kubefed/blob/master/docs/userguide.md#replicaschedulingpreference): weight the local cluster very high and the remote one very low, then set spec.rebalance to true to redistribute the replicas in case of high load. That approach seems a bit like a workaround, though; I mean something like the sketch below.
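(The cluster names, namespace and weights below are placeholders; the shape follows the kubefed user guide:)

    apiVersion: scheduling.kubefed.io/v1alpha1
    kind: ReplicaSchedulingPreference
    metadata:
      name: my-app                 # must match the federated workload's name
      namespace: my-namespace
    spec:
      targetKind: FederatedDeployment
      totalReplicas: 3
      rebalance: true              # redistribute replicas when placement becomes unbalanced
      clusters:
        local:                     # hypothetical member cluster name
          weight: 100
        rancher-remote:            # hypothetical member cluster name
          weight: 1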
Your idea of using Kubefed sounds good, but there is another option: Multicluster-Scheduler.
Multicluster-scheduler is a system of Kubernetes controllers that
intelligently schedules workloads across clusters. It is simple to use
and simple to integrate with other tools.
To be able to make a better choice for your use case you can read through the Comparison with Kubefed (Federation v2).
All the necessary info can be found in the provided GitHub thread.
Please let me know if that helped.
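For a rough idea of the integration style (treat this as an assumption and check the project's README for the current API): at the time, Multicluster-Scheduler worked by annotating a workload's pod template to "elect" its pods for cross-cluster scheduling, roughly like so:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                 # placeholder name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            multicluster.admiralty.io/elect: ""   # assumed annotation; verify against the README
        spec:
          containers:
            - name: my-app
              image: nginx:1.25    # example image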

Migration K8S cluster

We have several clusters. Right now, we want to upgrade a K8S cluster by replacing it with a new one.
We handle the deployments with CI/CD, so when the new cluster is ready, we will start moving apps to the new cluster by running the pipelines.
We're facing a problem with DNS.
All the apps in the Kubernetes cluster are resolved by a wildcard DNS record.
Besides, we need to do the migration in multiple steps, so we can't simply point the wildcard at the new cluster, because the old cluster is going to host some apps for a while and they need to interact with each other.
Any good solution or alternative to get the migration done smoothly?
And what would be a best practice about DNS to avoid this situation in the future?
Thank you in advance.
You can put in specific DNS records for each hostname as they need to migrate.
Say your wildcard is for *.mycompany.com...
app1.mycompany.com is getting migrated
app2.mycompany.com is staying put until the next batch
Add a record for app2.mycompany.com pointing to the old cluster, and switch the wildcard record to point to the new cluster.
Now app1.mycompany.com will resolve to the new cluster, but the more specific record for app2.mycompany.com will trump the wildcard and keep pointing to the old cluster.
When it's time for app2's DNS cutover, delete the record and the wildcard will take over.
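For illustration, mid-migration the zone could look like this (the IPs are documentation addresses, not real ones):

    ; 198.51.100.10 = new cluster ingress, 203.0.113.10 = old cluster ingress
    *.mycompany.com.     300  IN  A  198.51.100.10   ; wildcard now points at the new cluster
    app2.mycompany.com.  300  IN  A  203.0.113.10    ; specific record pins app2 to the old cluster

As for avoiding this in the future: keeping modest TTLs on these records (as above) makes such cutovers quick, and per-app records layered over a wildcard give you exactly this kind of staged control.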

k8s - what happens to persistent storage when cluster is deleted?

Do persistent volumes get permanently deleted as well?
I imagine they do, since they are part of the cluster, but I'm new to k8s and I can't find this info online.
If they do get deleted, what would be the preferred solution to keep the data for a cluster that sometimes gets completely deleted and re-deployed?
Thanks
According to the documentation, you can avoid complete PersistentVolume deletion by using the Retain reclaim policy.
In this case, even after the PersistentVolume is deleted, the underlying disk still exists in the external infrastructure, e.g. as an AWS EBS volume. So it is possible to recover or reuse the existing data.
You can find more details here and here.
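As a sketch, the policy sits on the PersistentVolume itself (the name, size and volume ID below are made up, and the backend stanza would differ on GKE or other clouds):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-pv                             # example name
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain     # keep the backing disk when the PV is released
      awsElasticBlockStore:                     # example backend that outlives the cluster
        volumeID: vol-0123456789abcdef0         # hypothetical EBS volume ID
        fsType: ext4

For an already existing volume, the documentation describes switching the policy in place with kubectl patch pv data-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'.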

Recreating GCP kubernetes cluster

I'm looking to understand how to recreate my cluster. There's a cluster-level setting to specify the IP range for nodes created within it, which I want to use so I can set a decent firewall rule. However, it looks like that can't be changed once the cluster is created.
I have a number of namespaces, deployments, services, secrets, persistent volumes and claims. If I wanted to transfer them all to a new cluster, should I just kubectl get all --namespace=whatever -o yaml, kubectl delete -f, and then kubectl apply -f on the new cluster?
Would something so crude work for mapping to the same load balancers / public IPs, persistent volumes, secrets, etc.? I mean something like the sketch below.
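(Concretely, where the contexts old and new and the namespace names are placeholders:)

    # Export each namespace from the old cluster, then apply it to the new one
    for ns in whatever another-ns; do
      kubectl --context=old get all --namespace="$ns" -o yaml > "$ns.yaml"
    done
    for ns in whatever another-ns; do
      kubectl --context=new apply -f "$ns.yaml"
    done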
As you can see, the backup and migration of whole clusters is quite a discussed matter and still an open issue on the Kubernetes GitHub as well:
https://github.com/kubernetes/kubernetes/issues/24229
Therefore I do not believe that the commands you posted can be considered a solution or will work. I think they will fail due to resources that are cluster-dependent, and due to IPs. Moreover, since this kind of use is not supported, it will lead to multiple issues.
Let's say that you change the zone of the cluster: how could it be possible to move the PVs if the disks cannot be attached to an instance in a different zone (or, possibly, if you migrate to a different cloud provider)?
More importantly, I would not risk deleting my production cluster to run a command that is not documented or indicated as best practice. You could try it on a test namespace, but I would not suggest going further.
You can check reshifter and ark, since they might cover your needs. I have never tested them, but they are mentioned in the thread, so they might be of interest.
I tried this approach in one of my test clusters, obtaining:
Error from server (Conflict): Operation cannot be fulfilled
Error from server (Conflict): Operation cannot be fulfilled
Error from server (Forbidden): [...]
Honestly, I believe that for a limited subset of resources it might be possible (note that some resources were created correctly), but it cannot be considered a way to migrate at all.

Correct way to define k8s-user-startup-script

This is like a follow-up question of: Recommended way to persistently change kube-env variables
I was playing around with the possibility of defining a k8s-user-startup-script for GKE instances (I want to install additional software on each node).
Adding k8s-user-startup-script to an Instance Group Template's "Custom Metadata" works, but it is overwritten by gcloud container clusters upgrade, which creates a new Instance Template without "inheriting" the additional k8s-user-startup-script metadata from the current template.
I've also tried adding a k8s-user-startup-script to the project metadata (I thought it would be inherited by all instances in my project, as described here), but it is not taken into account.
What is the correct way to define a k8s-user-startup-script that persists cluster upgrades?
Or, more general, what is the desired way to customize the GKE nodes?
Google Container Engine doesn't support custom startup scripts for nodes.
As I mentioned in Recommended way to persistently change kube-env variables you can use a DaemonSet to customize your nodes. A DaemonSet running in privileged mode can do pretty much anything that you could do with a startup script, with the caveat that it is done slightly later in the node bring-up lifecycle. Since a DaemonSet will run on all nodes in your cluster, it will be automatically applied to any new nodes that join (via cluster resize) and because it is a Kubernetes API object, it will be persisted across OS upgrades.
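As a sketch of that pattern (the name and image are placeholders, and the real installation work would go where the echo is):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-setup                      # hypothetical name
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: node-setup
      template:
        metadata:
          labels:
            app: node-setup
        spec:
          containers:
            - name: node-setup
              image: debian:stable-slim     # example image
              securityContext:
                privileged: true            # required to modify the host
              command: ["/bin/sh", "-c"]
              args:
                # chroot into the node's root filesystem, run the setup there,
                # then sleep so the pod stays Running instead of restart-looping
                - chroot /host sh -c 'echo install-your-software-here' && sleep infinity
              volumeMounts:
                - name: host-root
                  mountPath: /host
          volumes:
            - name: host-root
              hostPath:
                path: /                     # the node's root filesystem

Because it is a DaemonSet, the same pod lands on every node, including nodes that join later through a resize or are recreated by an upgrade.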