We have several clusters. Right now we want to upgrade a K8s cluster by replacing it with a new one.
We handle deployments with CI/CD, so when the new cluster is ready we will start moving apps to it by running the pipelines.
We're facing a problem with DNS.
All the apps in the Kubernetes cluster are resolved by a wildcard DNS record.
We also need to do the migration in multiple steps, so we can't just point the wildcard at the new cluster: the old cluster is going to host some apps for a while, and the apps need to keep interacting with each other.
Any good solution or alternative to get the migration done smoothly?
And what would be a DNS best practice to avoid this situation in the future?
Thank you in advance.
You can put in specific DNS records for each hostname as they need to migrate.
Say your wildcard is for *.mycompany.com...
app1.mycompany.com is getting migrated
app2.mycompany.com is staying put until the next batch
Add a record for app2.mycompany.com pointing to the old cluster, and switch the wildcard record to point to the new cluster.
Now app1.mycompany.com will resolve to the new cluster, but the more specific record for app2.mycompany.com will trump the wildcard and keep pointing to the old cluster.
When it's time for app2's DNS cutover, delete the record and the wildcard will take over.
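A sketch of what the records might look like mid-migration (the hostnames follow the example above, the IPs are placeholders), plus a quick way to verify the resolution order:

# Hypothetical record layout during the migration:
#   *.mycompany.com     A  203.0.113.20   <- new cluster's ingress / load balancer IP
#   app2.mycompany.com  A  198.51.100.10  <- old cluster's IP (more specific, so it wins)

# Check which cluster each hostname currently resolves to:
dig +short app1.mycompany.com   # expect the new cluster IP (falls through to the wildcard)
dig +short app2.mycompany.com   # expect the old cluster IP (the explicit record overrides the wildcard)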
I would like to migrate an application from one GKE cluster to another, and I'm wondering how to accomplish this while avoiding any downtime for this process.
The application is an HTTP web backend.
Usually how I'd handle this in a non-GCP/K8s context is to have a load balancer in front of the application, set up a new web backend, and then just update the appropriate IP address in the load balancer to point from the old IP to the new IP. This would essentially have zero downtime while also allowing for a seamless rollback if anything goes wrong.
I don't see why this shouldn't work in this context as well, but I'm not 100% sure. And if there is a more robust or alternative way to do this (a GCP/GKE-friendly way), I'd like to investigate that.
So to summarize my question: does GCP/GKE support this type of migration functionality? If not, are there any implications I need to be aware of with my usual load balancer approach mentioned above?
The reason for migrating is that the current k8s cluster is running quite an old version (1.18), and if I did a GKE version upgrade to something more recent like 1.22, I suspect a lot of incompatibilities as well as risk.
I see 2 approaches:
In the new cluster, get a new IP address and update the DNS record to point to the new load balancer (a sketch of the DNS flip follows this list).
See if you can switch to multi-cluster Gateways; however, that would probably require you to use approach 1 for the switch to multi-cluster Gateways as well: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways
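If the zone lives in Cloud DNS, the record flip for approach 1 might look roughly like this (the zone name, hostname, TTL, and IPs are all placeholders for your environment):

OLD_IP=203.0.113.10   # current load balancer IP in the old cluster (placeholder)
NEW_IP=203.0.113.20   # load balancer IP in the new cluster (placeholder)

gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction remove --zone=my-zone \
  --name=app.example.com. --type=A --ttl=300 "$OLD_IP"
gcloud dns record-sets transaction add --zone=my-zone \
  --name=app.example.com. --type=A --ttl=300 "$NEW_IP"
gcloud dns record-sets transaction execute --zone=my-zone

Keeping the TTL low ahead of the cutover makes a rollback (re-pointing at OLD_IP) nearly instant.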
A few pain points you'll run into:
As someone used to DIY Kubernetes, I hate GKE's managed Ingress certs, because they make it very hard to pre-provision HTTPS certs on the new cluster ahead of time. (GKE's de facto method of provisioning HTTPS certs is to update DNS to point to the LB and then wait 10-60 minutes. That means if you cut over to a new cluster, the new cluster's HTTPS cert, supplied by a ManagedCertificate Custom Resource, won't be ready in advance; see the sketch after this list.)
It is possible to pre-provision HTTPS certs using an ACME DNS challenge on GCP, but it's poorly documented and a god-awful UX (user experience): there's no GUI, and the CLI API is terrible.
You can do it using gcloud services enable certificatemanager.googleapis.com, but I'd highly recommend against that Certificate Manager service, which went GA in June 2022. The UX is painful.
GKE's official docs are pretty bad when it comes to this scenario
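For reference, a minimal sketch of the ManagedCertificate resource mentioned above (the name and domain are placeholders); GKE only finishes provisioning it once DNS for the domain already points at the load balancer, which is exactly the chicken-and-egg problem during a cluster cutover:

cat <<EOF | kubectl apply -f -
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: app1-cert                # placeholder name
spec:
  domains:
    - app1.mycompany.com         # placeholder; must already resolve to the new LB before the cert goes Active
EOF

It gets attached to the Ingress through the networking.gke.io/managed-certificates annotation.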
You basically want to do 2 things:
Follow this how-to guide for a zero-downtime HTTPS cutover from cluster1 to cluster2 by leveraging a free Let's Encrypt cert:
https://gist.github.com/neoakris/4aafeac7628995da8dd423f1702c975b
(I know link-only answers are bad, but it's GitHub (great uptime) and it's way too long and nuanced to post here.)
Use Velero to migrate workloads from cluster1 to cluster2 (it can migrate CRDs, CRs, generic YAML objects, and PVs/PVCs). One thing of note is that Velero works best when you're migrating between clusters of the same version; if you go from a really old version to a really new version, you could hit issues where Kubernetes YAML APIs were removed in the new version. Going from an old version to a new version can be done, but it's best left to an experienced hand. For happy-path results, migrating between clusters of the same version is best.
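A minimal sketch of the Velero flow, assuming Velero is already installed in both clusters and pointed at the same object-storage bucket (the backup name and namespaces are placeholders):

# On cluster1: back up the namespaces you want to move
velero backup create app-migration --include-namespaces app1,app2

# On cluster2 (same backup storage location): restore from that backup
velero restore create --from-backup app-migration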
I've scanned through all the resources I could find and still cannot find a way to change extraPortMappings in a kind cluster without deleting and recreating it.
Is it possible and how?
It's not stated explicitly in the official docs, but I found some references that confirm it: your thoughts are correct, and changing extraPortMappings (as well as other cluster settings) is only possible by recreating the kind cluster.
if you use extraPortMappings in your config, they are “fixed” and
cannot be modified, unless you recreate the cluster.
Source - Issues I Came Across
Note that the cluster configuration cannot be changed. The only
workaround is to delete the cluster (see below) and create another one
with the new configuration.
Source - Kind Installation
However, there are obvious disadvantages in the configuration and
update of the cluster, and the cluster can only be configured and
updated by recreating the cluster. So you need to consider
configuration options when you initialize the cluster.
Source - Restrictions
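A minimal sketch of the recreate workflow, assuming a single-node cluster (the cluster name and port numbers are placeholders):

# Write a kind config containing the desired extraPortMappings
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080
        hostPort: 8080
        protocol: TCP
EOF

# The only way to change the mappings is to recreate the cluster with the new config
kind delete cluster --name my-cluster
kind create cluster --name my-cluster --config kind-config.yaml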
There are two parts to this.
I am using kops v1.17.0 to stand up a Kubernetes cluster on EC2 instances. I am following these docs for doing so: https://kubernetes.io/docs/setup/production-environment/tools/kops/
One of the points goes as follows.
kops has a strong opinion on the cluster name: it should be a valid
DNS name.
This got me confused. Can my cluster serve requests for only one DNS domain and its subdomains?
I tried this with the domain example.com: I created a hosted zone for it and created a cluster named example.com.k8s.local.
I pointed this domain to my cluster's load balancer, and I can access example.com. All good so far.
Now I want one of the services in my cluster to be served on abc.com. I created another hosted zone and a new record set within it that points to the same load balancer. I expected to visit abc.com and see this service, but all I see is an nginx 404 Not Found.
Is this happening because of the first point I mentioned, or is it a totally separate issue? If it is because of the first point, is there a way around it, or is one cluster always tied to one domain in the kops world?
As far as the first part is concerned: yes, I can serve multiple domains from the same Kubernetes cluster with this setup. Up to a certain version there was a hard requirement that the domain name match the cluster name, but that's not the case anymore.
A couple of things you need to consider. While issuing a certificate from ACM, make sure all your domains are listed, for example:
example.com
*.example.com
bar.com
*.bar.com
Make sure that all of the domains are validated and are not in a pending or any other state.
I think the reason for the second issue was that one of the domains in my ACM-generated certificate was in an invalid state, and thus the certificate was stuck pending.
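For completeness, the cluster side of serving a second domain is just another host rule on the ingress; a rough sketch, assuming an nginx ingress controller (the ingress and service names are placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-domain              # placeholder name
spec:
  ingressClassName: nginx         # assumes an nginx ingress controller
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc # placeholder service
                port:
                  number: 80
    - host: abc.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: abc-svc     # placeholder service
                port:
                  number: 80
EOF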
I'm looking to understand how to recreate my cluster. There's a cluster-level setting to specify the IP range for nodes created within it, which I want to use so I can set a decent firewall rule. However, it looks like that can't be changed once the cluster is created.
I have a number of namespaces, deployments, services, secrets, persistent volumes and claims. If I wanted to transfer them all to a new cluster, should I just kubectl get all --namespace=whatever -o yaml, kubectl delete -f, and then kubectl apply -f on the new cluster?
Would something so crude work for mapping to the same load balancers / public IPs, persistent volumes, secrets, etc.?
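Roughly, I mean something like this (the namespace and file name are just examples):

# Export the namespaced resources from the old cluster
kubectl get all --namespace=whatever -o yaml > whatever.yaml

# After switching the kubectl context to the new cluster:
kubectl apply -f whatever.yaml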
As you can see, backing up and migrating whole clusters is a much-discussed matter and still an open issue on the Kubernetes GitHub as well:
https://github.com/kubernetes/kubernetes/issues/24229
Therefore I do not believe that the command you posted can be considered a solution, or that it will work. I think it will fail because of the various resources that are cluster-dependent, and because of IPs. Moreover, since this kind of use is not supported, it will lead to multiple issues.
Let's say that you change the zone of the cluster: how would it be possible to move the PVs if the disks cannot be attached to an instance in a different zone (or, possibly, if you migrate to a different cloud provider)?
More importantly, I would not risk deleting my production resources to run a command that is not documented or indicated as a best practice. You could try it on a test namespace, but I would not suggest going further.
You can check ReShifter and Ark since they might cover your needs. I have never tested them, but they are mentioned in the thread, so they might be of interest to you.
I tried this approach in one of my test clusters, obtaining:
Error from server (Conflict): Operation cannot be fulfilled
Error from server (Conflict): Operation cannot be fulfilled
Error from server (Forbidden): [...]
Honestly, I believe that for a limited subset of resources it might be possible (note that some resources were created correctly), but it cannot be considered a way to migrate at all.
I am trying to deploy my Docker images using Kubernetes orchestration tools. When reading about Kubernetes, I see documentation and many YouTube video tutorials on working with Kubernetes. In them I only found the creation of pods and services, and the creation of the corresponding .yml files. I have some doubts, which I am adding below:
When I am using Kubernetes, how can I create clusters and nodes?
Can I deploy my current docker-compose-built image directly using only pods? Why do I need to create a Service .yml file?
I am new to the containerization, Docker and Kubernetes world.
My favorite way to create clusters is kubespray, because I find Ansible very easy to read and troubleshoot, unlike more monolithic "run this binary" mechanisms for creating clusters. The kubespray repo has a Vagrant configuration file, so you can even try out a full cluster on your local machine to see what it will do "for real".
But with the popularity of Kubernetes, I'd bet if you ask 5 people you'll get 10 answers to that question, so ultimately pick the one you find easiest to reason about, because almost without fail you will need to debug those mechanisms when something inevitably goes wrong.
The short version, as Hitesh said, is "yes," but the long version is that one will need to be careful because local docker containers and kubernetes clusters are trying to solve different problems, and (as a general rule) one could not easily swap one in place of the other.
As for the second part of your question, a Service in kubernetes is designed to decouple the current provider of some networked functionality from the long-lived "promise" that such functionality will exist and work. That's because in kubernetes, the Pods (and Nodes, for that matter) are disposable and subject to termination at almost any time. It would be severely problematic if the consumer of a networked service needed to constantly update its IP address/ports/etc to account for the coming-and-going of Pods. This is actually the exact same problem that AWS's Elastic Load Balancers are trying to solve, and kubernetes will cheerfully provision an ELB to represent a Service if you indicate that is what you would like (and similar behavior for other cloud providers)
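As an illustration only, a minimal Service sketch (the name, labels and ports are placeholders); with type LoadBalancer on AWS, Kubernetes provisions an ELB that keeps fronting whatever Pods currently match the selector:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-backend              # placeholder name
spec:
  type: LoadBalancer             # asks the cloud provider for an external LB (an ELB on AWS)
  selector:
    app: web-backend             # must match the labels on your Pods
  ports:
    - port: 80                   # port the Service (and LB) exposes
      targetPort: 8080           # port your container listens on
EOF

Consumers keep using the Service's stable name (or the ELB's DNS name) no matter how often the Pods behind it are replaced.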
If you are not yet comfortable with containers and docker as concepts, then I would strongly recommend starting with those topics, and moving on to understanding how kubernetes interacts with those two things after you have a solid foundation. Else, a lot of the terminology -- and even the problems kubernetes is trying to solve -- may continue to seem opaque