Redis Cluster data migration between two different Kubernetes clusters

Is there any way to migrate Redis Cluster data between two different Kubernetes clusters? How can we communicate between Redis StatefulSet pods that are running on two different Kubernetes clusters?
We have two Redis Clusters running on two different Kubernetes clusters, X and Y. I want to transfer data from the redis-X cluster to the redis-Y cluster. How can we establish a connection between the redis-X and redis-Y clusters so that we can migrate the data?
Any help or hint is appreciated.

There are two possible approaches to establishing a connection between the clusters:
Built-in solutions
3rd-party solutions
Built-in solutions
NodePort - Exposes the Service on each node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort> (a minimal manifest follows this list).
LoadBalancer - Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Ingress (both 1st- and 3rd-party implementations) - more flexible than the previous two, but only works with HTTP/HTTPS.
Read more: Kubernetes Services, NGINX Ingress
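For example, a minimal NodePort Service for one of the Redis clusters might look like this (the name, label, and nodePort are illustrative assumptions, not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: redis-x-external   # hypothetical name
spec:
  type: NodePort
  selector:
    app: redis-x           # assumes the Redis pods carry this label
  ports:
  - port: 6379             # Redis default port
    targetPort: 6379
    nodePort: 31379        # static port opened on every node (30000-32767 range)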
3rd-party solutions
Istio supports a multi-cluster deployment model. However, if you don't already have a service mesh deployed, setting one up may be too much effort for a single task.
Once you have a connection established between the clusters, you can migrate the Redis data using the MIGRATE command (sketched below) or the redis-migrate-tool proposed in the comments.
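As a rough sketch, once redis-Y is reachable from the redis-X pods (say via the NodePort manifest above), individual keys can be pushed with MIGRATE; the node IP, port, and key below are made-up examples:
# Run from inside a redis-X pod; copies key "user:42" to database 0 on the
# target node, with a 5000 ms timeout. COPY keeps the local key and REPLACE
# overwrites any existing key on the target.
redis-cli MIGRATE 203.0.113.10 31379 "user:42" 0 5000 COPY REPLACE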

Here is a new approach using Skupper.
https://github.com/bryonbaker/rhai-redis-demo
It is a full step-by-step demo I wrote that shows a globally distributed, real-time replicated cache spanning on-premises infrastructure and four Kubernetes clusters. The Redis cache is replicated from on-premises to Sydney, London, and New York.
No NodePorts, Submariner, or Istio federation required. It all uses standard routes.
The demo is based on OpenShift, but it will work with any flavour of xKS.
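For orientation, the connectivity part of the demo boils down to the standard Skupper flow; this is only a hedged outline (the contexts and resource names are illustrative, and the exact commands are in the repo's step-by-step guide):
# In the cluster hosting the Redis primary:
skupper init
skupper token create ~/redis-token.yaml
# In each remote cluster, link back using that token:
skupper init
skupper link create ~/redis-token.yaml
# Expose the Redis service across the Skupper network:
skupper expose statefulset/redis --port 6379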

Related

Kubernetes service discovery, CNIs and Istio differences

I was doing some research on how K8s resolves services using ClusterIP Services, and on how CNIs like Weave Net or service meshes like Istio provide additional features on top of this functionality. However, I'm new to the topic and I'd like to share what I've found to see if somebody can expand on and correct my points:
Istiod has a service registry. This registry is filled with entries coming from the K8s Services' ClusterIPs (which in turn form the service registry of K8s) and from any external services defined with kind: ServiceEntry
(see section 5.5 of the book Istio in Action).
This service registry is then combined with additional information from VirtualServices and DestinationRules. These added kinds are Istio CRDs. They are what provide the L7 load-balancing features that allow traffic to be distributed by HTTP headers or URI path, as sketched below.
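For example, a VirtualService that routes by URI path might look like this (my own minimal sketch, not from the book; the service and subset names are hypothetical):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews                 # the K8s Service name
  http:
  - match:
    - uri:
        prefix: /v2         # send /v2/... traffic to subset v2
    route:
    - destination:
        host: reviews
        subset: v2          # subsets are defined in a DestinationRule
  - route:                  # default route for everything else
    - destination:
        host: reviews
        subset: v1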
Without Istio, K8s has three different ways to implement the ClusterIP Service concept. These Services provide load balancing at L4.
https://kubernetes.io/docs/concepts/services-networking/service/
The most widely used nowadays is the iptables proxy mode. The iptables rules of the Linux machine are populated based on what kube-proxy provides. Kube-proxy gets that data from the kube-apiserver (and probably CoreDNS). The kube-apiserver in turn consults the etcd database to learn about the K8s ClusterIP Services. Each iptables entry maps the ClusterIP to a single pod IP, chosen from the many pods that a Deployment behind the ClusterIP may have; see the illustrative rules below.
Any piece of code/application inside a container could call the kube-apiserver directly, with the correct authentication, and get the pod address itself, but that would not be practical.
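As an illustration of that mapping (simplified; real chain names include hashes, and the IPs here are made up), the rules kube-proxy installs look roughly like this:
# A ClusterIP 10.96.0.10:6379 is DNATed to one backend pod chosen at random;
# with two backends, the first rule matches with probability 0.5, otherwise
# traffic falls through to the second.
iptables -t nat -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp --dport 6379 -j KUBE-SVC-REDIS
iptables -t nat -A KUBE-SVC-REDIS -m statistic --mode random --probability 0.5 -j KUBE-SEP-POD1
iptables -t nat -A KUBE-SVC-REDIS -j KUBE-SEP-POD2
iptables -t nat -A KUBE-SEP-POD1 -p tcp -j DNAT --to-destination 10.244.1.5:6379
iptables -t nat -A KUBE-SEP-POD2 -p tcp -j DNAT --to-destination 10.244.2.7:6379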
K8s can use CNIs (Container Network Interfaces). One example of this is Weave Net.
https://www.weave.works/docs/net/latest/overview/
Weave Net creates a new layer-2 network using Linux kernel features. One daemon sets up this L2 network and manages the routing between machines, and there are various ways to attach machines to the network.
In this network, containers can be exposed to the outside world.
Weave Net implements a micro DNS server at each node. You simply name containers, and the routing just works without the use of Services, including load balancing across multiple containers with the same name.

Kubernetes Cluster Entrypoint On-Prem

I want to set up a Kubernetes cluster on-prem, directly on VMs. Since we are not using a cloud provider, exposing my service as type LoadBalancer does not directly make sense. I understand MetalLB could be an option, but I don't have a pool of IP addresses to assign to it.
I want an entry point into my cluster that I can point my DNS A record to. I have a couple of solutions in mind, but I'm not sure whether one is better than the other, or whether there are other, better solutions.
Exposing my service on a NodePort and using an external load balancer. I can make the external LB my entry point.
Running an Ingress controller on a NodePort, which routes traffic to my ClusterIP services internally. I could load-balance across the Ingress NodePorts using an external LB and make it my entry point (sketched below).
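For illustration, the external LB in option 2 could be configured like this (hypothetical HAProxy snippet; the worker IPs and the NodePort 30080 are made up):
frontend http_in
    bind *:80
    default_backend ingress_nodeports
backend ingress_nodeports
    balance roundrobin
    # Each worker exposes the Ingress controller on NodePort 30080
    server worker1 192.168.1.11:30080 check
    server worker2 192.168.1.12:30080 check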
I only want to expose one service from my cluster to the outside world. In that case I am not sure how using Ingress adds any advantage.
Please help me out by sharing your thoughts and suggestions!

Exposing service to the internet from a bare metal kubernetes cluster

I'm running Kubernetes with 1 master and 2 workers. I have a Deployment and a Service of type NodePort pointing to it. I'm able to access the service from the workers themselves, but I want to expose the service in a way that load-balances between the workers, without my having to specify a port. I'm running on bare metal, so I can't expose the service as a LoadBalancer and use Google/Amazon load balancing.
How can I do that?
You can use MetalLB, which hooks into your Kubernetes cluster and provides a network load-balancer implementation. In short, it allows you to create Kubernetes Services of type LoadBalancer in clusters that don't run on a cloud provider, and thus cannot simply hook into paid products to provide load balancers.
It has two features that work together to provide this service: address allocation and external announcement.
MetalLB requires the following to function (a minimal configuration sketch follows this list):
A Kubernetes cluster, running Kubernetes 1.13.0 or later, that does not already have network load-balancing functionality.
A cluster network configuration that can coexist with MetalLB.
Some IPv4 addresses for MetalLB to hand out.
Depending on the operating mode, you may need one or more routers capable of speaking BGP.
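For reference, a minimal layer-2 configuration looks like this in the ConfigMap format of older MetalLB releases (newer releases use an IPAddressPool CRD instead); the address range is an assumption about your network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # free IPs on the node network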

Kubernetes services within cluster

I am trying to set up a conventional web app with a database in Kubernetes. I have accomplished it by configuring two Services and two Deployments: one for the app and one for the database. Now I would like to make my database accessible only from the app pods, i.e. not exposed to the outside world. Is this possible using only Kubernetes configuration?
There are the following ways to expose pods:
Internal exposure (the purpose is inter-service communication):
Service type=ClusterIP
A headless Service (clusterIP: None), commonly used for database pods. Sometimes you don't need or want load balancing and a single Service IP: headless services.
External exposure (exposing the service to customers):
Service type=NodePort or type=LoadBalancer
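So for the database, a plain ClusterIP Service keeps it unreachable from outside the cluster. A minimal sketch (the name, label, and PostgreSQL port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ClusterIP          # no external exposure; resolvable only in-cluster
  selector:
    app: db                # assumes the database pods carry this label
  ports:
  - port: 5432
    targetPort: 5432
Note that any pod in the cluster can still reach a ClusterIP Service; restricting access strictly to the app pods would additionally require a NetworkPolicy.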

Kubernetes External Load Balancer Service on DigitalOcean

I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service with Type: LoadBalancer. I think this is the optimal solution, so that you don't need to add an external load balancer outside Kubernetes like nginx or HAProxy. I was wondering if it is possible to create this using DO's Floating IP.
Things have changed: DigitalOcean created their own cloud provider implementation, as answered here, and they maintain a Kubernetes "Cloud Controller Manager" implementation:
Kubernetes Cloud Controller Manager for DigitalOcean
Currently digitalocean-cloud-controller-manager implements:
nodecontroller - updates nodes with cloud-provider-specific labels and addresses, and deletes Kubernetes nodes when they are deleted on the cloud provider.
servicecontroller - responsible for creating LoadBalancers when a Service of Type: LoadBalancer is created in Kubernetes.
To try it out, clone the project on your master node.
Next, get a token from https://cloud.digitalocean.com/settings/api/tokens and run:
export DIGITALOCEAN_ACCESS_TOKEN=abc123abc123abc123
scripts/generate-secret.sh
kubectl apply -f do-cloud-controller-manager/releases/v0.1.6.yml
There are more examples here.
What will happen once you do the above? DO's cloud controller manager will create a load balancer (which has a failover mechanism out of the box; more on that in the load balancer's documentation).
Things will change again soon, as DigitalOcean is jumping on the Kubernetes bandwagon (check here): you will have the choice to let them manage your Kubernetes cluster instead of worrying about much of the infrastructure yourself (this is my understanding of the service; let's see how it works when it becomes available...).
The LoadBalancer type of Service is implemented by adding code to the Kubernetes master that is specific to each cloud provider. There isn't a cloud provider integration for DigitalOcean (see supported cloud providers), so the LoadBalancer type will not be able to take advantage of DigitalOcean's Floating IPs.
Instead, you should consider using a NodePort Service, or attaching an external IP to your Service and mapping the exposed IP to a DO Floating IP, as sketched below.
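As a sketch of that second option (the Service name, label, and external IP below are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: NodePort
  selector:
    app: my-nginx
  externalIPs:
  - 10.0.0.5               # placeholder; the droplet IP that the Floating IP maps to
  ports:
  - port: 80
    targetPort: 80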
It is actually possible to expose a service through a Floating IP. The only catch is that the external IP you need to use is a little unintuitive.
From what I can tell, DO has some sort of overlay network for their Floating IP service. To get the actual IP you need to expose, ssh into your gateway droplet and find its anchor IP by querying the metadata service:
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
and you will get something like
10.x.x.x
This is the address that you can use as the external IP in a LoadBalancer-type Service in Kubernetes.
Example:
kubectl expose rc my-nginx --port=80 --external-ip=10.x.x.x --type=LoadBalancer