Cross-cluster communication in GKE Multi-cluster Services - kubernetes

I'm using GKE Multi-cluster Services and have configured two clusters.
On one cluster I have an endpoint I want to consume, and it's hard-coded to the address:
redpanda-0.redpanda.processing.svc.cluster.local.
Does anyone know how I can reach this from the other cluster?
EDIT:
I have exported the service, which is then automatically imported into the other cluster. Previously, I was able to connect to the other cluster using SERVICE_EXPORT_NAME.NAMESPACE.svc.clusterset.local, but that required changing the endpoint address manually to that exact name. In my new case, the endpoint address is not configurable.
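To verify the import on the consuming side, I resolve the clusterset name from a pod in that cluster; a minimal Java sketch, assuming the ServiceExport is named redpanda in the processing namespace (matching my setup above):

import java.net.InetAddress;

public class ClustersetDnsCheck {
    public static void main(String[] args) throws Exception {
        // The MCS import is addressed by its clusterset name; the export
        // and namespace names here are assumptions based on my setup.
        String fqdn = "redpanda.processing.svc.clusterset.local";
        for (InetAddress addr : InetAddress.getAllByName(fqdn)) {
            System.out.println(fqdn + " -> " + addr.getHostAddress());
        }
    }
}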

Related

Connect to Cassandra on Kubernetes using java-driver

We are bringing up a Cassandra cluster using the k8ssandra Helm chart, which exposes several services. Our client applications use the DataStax Java driver and run in the same Kubernetes cluster as the Cassandra cluster (this is the testing phase).
CqlSessionBuilder builder = CqlSession.builder();
What is the recommended way to connect the application (via the Driver) to Cassandra?
Adding all nodes?
for (String node : nodes) {
    builder.addContactPoint(new InetSocketAddress(node, 9042));
}
Adding just the service address?
builder.addContactPoint(new InetSocketAddress(serviceDnsName, 9042));
Adding the service address as unresolved? (would that even work?)
builder.addContactPoint(InetSocketAddress.createUnresolved(serviceDnsName, 9042));
The k8ssandra Helm chart deploys a CassandraDatacenter object and cass-operator, in addition to a number of other resources. cass-operator is responsible for managing the CassandraDatacenter. It creates the StatefulSet(s) and several headless services, including:
datacenter service
seeds service
all pods service
The seeds service only resolves to pods that are seeds. Its name is of the form <cluster-name>-seed-service. Because of the ephemeral nature of pods, cass-operator may designate different C* nodes as seeds over time. Do not use the seeds service for connecting client applications.
The all pods service resolves to all Cassandra pods, regardless of whether they are ready. Its name is of the form <cluster-name>-<dc-name>-all-pods-service. This service is intended to facilitate monitoring. Do not use the all pods service for connecting client applications.
The datacenter service resolves to ready pods. Its name is of the form <cluster-name>-<dc-name>-service. This is the service that you should use for connecting client applications. Do not use pod IPs directly, as they will change over time.
Adding all nodes?
You definitely do not need to add all of the nodes as contact points. Even with vanilla Cassandra, adding only a few is fine: the driver uses them to discover the rest of the cluster.
Adding just the service address?
Your second option, pointing the driver at the service address, is all you should need to do. The nice thing about the service address is that it accounts for IPs in the cluster changing or being removed.
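Put together, a minimal sketch with the 4.x Java driver, using a hypothetical cluster named mycluster with datacenter dc1 deployed in a cassandra namespace (substitute your own names):

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

public class CassandraServiceClient {
    public static void main(String[] args) {
        // Contact only the datacenter service; the driver discovers the
        // individual nodes from there. All names below are placeholders.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress(
                        "mycluster-dc1-service.cassandra.svc.cluster.local", 9042))
                // Required whenever contact points are given explicitly.
                .withLocalDatacenter("dc1")
                .build()) {
            System.out.println(session.execute(
                    "SELECT release_version FROM system.local").one().getString(0));
        }
    }
}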

How to allow nodes of one GKE cluster to connect to another GKE

I have two GKE clusters set up, say dev and stg, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands on that GKE cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but those IPs keep changing,
so my question is: how can I add the ever-changing default-pool node IPs from the other cluster to the master authorized networks?
EDIT: I think I found the solution: it's not the node IPs but the NAT IP that I have to add to the authorized networks. So, assuming I don't change the NAT, I just need to add that address, unless someone knows a better solution?
I'm not sure that you are approaching this correctly. In Kubernetes, communication happens between services, which represent pods deployed on one or several nodes.
When you communicate with the outside, you reach an endpoint (an API or a specific port). The endpoint is materialized by a load balancer that routes the traffic.
Only the Kubernetes master cares about nodes, as providers of resources (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly, outside the standard mechanisms.
At most, you can reach a NodePort service exposed on NodeIP:nodePort.
What you really need to do is configure kubectl in your Jenkins pipeline to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The Master IP is available in the Kubernetes Engine console along with the Certificate Authority certificate. A good approach is to use a service account token to authenticate with the master. See how to Login to GKE via service account with token.
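To illustrate the token approach, here is a minimal sketch that calls the master's REST API directly with a bearer token, using only the JDK. MASTER_IP and TOKEN are placeholders, and a real setup would also import the cluster's Certificate Authority certificate into the client truststore rather than rely on the default TLS settings:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MasterTokenCheck {
    public static void main(String[] args) throws Exception {
        // MASTER_IP comes from the Kubernetes Engine console; TOKEN is a
        // service account token, as described above.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://" + System.getenv("MASTER_IP") + "/api/v1/namespaces"))
                .header("Authorization", "Bearer " + System.getenv("TOKEN"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // 200 means the token authenticated and is authorized for this call.
        System.out.println(response.statusCode());
    }
}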

How DNS service works in the Kubernetes?

I am new to Kubernetes, and I'm trying to understand how I can apply it to my use case.
I managed to install a 3-node cluster on VMs within the same network. After searching K8s concepts and reading related articles, I still couldn't find an answer to my question below. Please let me know if you have knowledge on this:
I've noticed that the internal DNS service of K8s applies to pods, and this way services can find each other by hostname instead of IP.
Does this apply to communication between pods on different nodes, or only between services within a single node? (In other words, is there a DNS service at the node level in K8s, or is it only about pods?)
The reason for this question is the scenario that I have in mind:
I need to deploy a micro-service application (written in Java) with K8s. I made Docker images from each service in my application, and it works locally. Currently, these services are connected via pre-defined IP addresses.
Is there a way to run each of these services on a separate K8s node and use the DNS service to connect them without pre-defining IPs?
A service serves as an internal endpoint and (depending on the configuration) load balancer for one or several pods behind it. All communication typically happens between services, not between pods. Pods run on nodes; services don't really run anything, they just route traffic to the appropriate pods.
A service is a cluster-wide configuration that does not depend on a node, thus you can use a service name in the whole cluster, completely independent from where a pod is located.
So yes, your use case of running pods on different nodes and communicating via service names is a typical setup.
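Since your services are written in Java, switching from pre-defined IPs can be as simple as using the service's DNS name wherever an address is expected. A minimal sketch with hypothetical names (an orders service in a shop namespace):

import java.net.InetAddress;

public class ServiceDnsLookup {
    public static void main(String[] args) throws Exception {
        // Resolvable from any pod in the cluster, regardless of which
        // node it runs on; service and namespace names are placeholders.
        String fqdn = "orders.shop.svc.cluster.local";
        System.out.println(fqdn + " -> " + InetAddress.getByName(fqdn).getHostAddress());
    }
}

Within the same namespace, the short name (orders) resolves as well, thanks to the pod's DNS search domains.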

How to install Kubernetes dashboard on external IP address?

How to install Kubernetes dashboard on external IP address?
Is there any tutorial for this?
You can expose services and pods in several ways:
expose the internal ClusterIP service through an Ingress, if you have that set up.
change the service type to 'type: LoadBalancer', which will try to create an external load balancer.
change the service type to 'type: NodePort', which will allocate a port in the 30000+ range on all cluster machines.
expose the pod directly using a 'hostPort' on the container in the pod spec.
The last two options expose ports directly on the node hosts, which works if your Kubernetes nodes have external IP addresses; however, I would avoid them unless it's a small test cluster.
Depending on your cluster type (kops-created, GKE, EKS, AKS and so on), different variants may not be set up. Hosted clusters typically support and recommend LoadBalancers, which they charge for, but may or may not have support for NodePort/HostPort.
Another, more important note: you must ensure you protect the dashboard. Running an unprotected dashboard is a sure way of getting your cluster compromised; this recently happened to Tesla. A decent writeup on various ways to protect yourself was written by Joe Beda of Heptio.

Deterministic connection to cloud-internal IP of K8S service or its underlying endpoint?

I have a Kubernetes cluster (1.3.2) on GKE, and I'd like to connect VMs and services from my Google project, which shares the same network as the cluster.
Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?
I know there are a ton of things you can do to unambiguously determine the IP and port of services, such as the environment variables and DNS... but the ClusterIP is not reachable outside of the cluster (obviously).
Is there something I'm missing? An important component to this is that the service is meant to be "public" to the project, such that I don't know which VMs on the project will want to connect to it (this could rule out loadBalancerSourceRanges). I understand that the endpoint which the service actually wraps is the internal IP I can hit, but the only good way to get to that IP is through the Kube API or kubectl, neither of which is a prod-ideal way of hitting my service.
Check out my more thorough answer here, but the most common solution to this is to create bastion routes in your GCP project.
In the simplest form, you can create a single GCE Route to direct all traffic with a destination IP in your cluster's service IP range to land on one of your GKE nodes. If that single point of failure scares you, you can create several routes pointing to different nodes, and traffic will round-robin between them.
If that management overhead isn't something you want to take on going forward, you could write a simple controller in your GKE cluster to watch the Nodes API endpoint and make sure that you have a live bastion route to at least N nodes at any given time.
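A sketch of what that watch loop could look like, again using only the JDK; APISERVER and TOKEN are placeholders, and a real controller would parse each JSON event and reconcile the routes rather than just print it:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NodeWatchSketch {
    public static void main(String[] args) throws Exception {
        // Stream watch events for Nodes; each line is one JSON event
        // (ADDED/MODIFIED/DELETED) to reconcile bastion routes against.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(System.getenv("APISERVER") + "/api/v1/nodes?watch=true"))
                .header("Authorization", "Bearer " + System.getenv("TOKEN"))
                .build();
        HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofLines())
                .body()
                .forEach(System.out::println);
    }
}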
GCP internal load balancing was just released as alpha, so in the future, kube-proxy on GCP could be implemented using that, which would eliminate the need for bastion routes to handle internal services.