How to connect to a k8s cluster with bitnami postgresql-ha deployed?

My setup (running locally in two minikubes) is two k8s clusters:
the frontend cluster is running a golang api-server,
the backend cluster is running an HA bitnami postgres cluster (I used the bitnami postgresql-ha chart for this).
If I set the pgpool service to use NodePort and take the IP + port of the node the pgpool pod is running on, I can hardwire that host + port into the database connector in my api-server (in the other cluster), and it works.
However, what I haven't been able to figure out is how to connect to the other cluster (e.g. to pgpool) generically, without using the IP address.
I also tried Skupper, which has an example of connecting to a backend cluster with postgres running on it, but their example uses a simple postgres install rather than the bitnami postgresql-ha helm chart, so it is not at all the same.
Any ideas?

For those times when you have to, or purposely want to, connect pods/deployments across multiple clusters, Nethopper (https://www.nethopper.io/) is a simple and secure solution. The postgresql-ha scenario above is covered under their free tier. There is a two-cluster minikube how-to tutorial at https://www.nethopper.io/connect2clusters which is very similar to your frontend/backend use case. Nethopper is based on skupper.io, but the configuration is much easier and more user-friendly, and it is centralized, so it scales to many clusters if you need it to.
To solve your specific use case, you would:
1. Install your api-server in the frontend cluster and your bitnami postgresql-ha chart in the backend cluster, as you normally would.
2. Go to https://mynethopper.com/ and register.
3. Clouds -> define both clusters (clouds), frontend and backend.
4. Application Network -> create an application network.
5. Application Network -> attach both clusters to the network.
6. Application Network -> install the nethopper-agent in each cluster using the copy-paste instructions.
7. Objects -> import and expose pgpool (call the service 'pgpool') in your backend.
8. Objects -> distribute the service 'pgpool' to the frontend, using a distribution rule.
Now you should see the 'pgpool' service in the frontend cluster:
kubectl get service
When the API server pods in the frontend request service from pgpool, they will connect to pgpool in the backend, magically. It's as if the 'pgpool' pod were now running in the frontend.
The nethopper part should only take 5-10 minutes, and you do NOT need IP addresses, TLS certs, K8s ingresses or load balancers, a VPN, or an Istio service mesh with sidecars.

After moving to the one-cluster architecture, it became easier to see how to connect to the bitnami postgres-ha cluster. After trying a few different things, this finally worked:
-postgresql-ha-postgresql-headless:5432
(that's the host and port I'm using to call from my golang server)
Now I believe it should be fairly straightforward to also run the two-cluster case, using Skupper to bind to the headless service.
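For reference, a sketch of how the api-server Deployment can carry that host/port as configuration. The release name "mydb", image, and env var names are assumptions; the bitnami chart names the headless service <release>-postgresql-ha-postgresql-headless:

  # Hypothetical fragment of the api-server Deployment spec.
  # Assumes the chart was installed with release name "mydb", so the
  # headless service is mydb-postgresql-ha-postgresql-headless.
  containers:
  - name: api-server
    image: my-registry/api-server:latest  # placeholder image
    env:
    - name: DB_HOST
      value: mydb-postgresql-ha-postgresql-headless
    - name: DB_PORT
      value: "5432"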

Related

How can I directly access stateful pod ports on localhost Kubernetes (for example Cassandra) - what routing is needed?

I want to build a testing environment using Kubernetes on localhost (Docker Desktop, minikube, ...). I want to connect my client to 3 instances of Cassandra inside the localhost K8s cluster. Cassandra is just an example; it could equally be etcd, redis, ... or any StatefulSet.
I created a StatefulSet with 3 replicas on the same ports on localhost Kubernetes.
I created Services to expose each pod.
What should I do next to route traffic to the three different names cassandra-0, cassandra-1, cassandra-2 on the same port? This is required by the driver - I cannot forward individual ports, since the driver requires all instances to run on the same port.
So it should be cassandra-0:9042, cassandra-1:9042, cassandra-2:9042.
To show this I made a drawing to explain it graphically.
I want to achieve the red-line connection using something - I do not know what to use in K8s - maybe Services.
I would say you should define a NodePort Service and send your requests to localhost:NodePort:
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080
    nodePort: 32000
Just change the ports so they fit your needs; see the sketch below.
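For the Cassandra case above, a per-pod NodePort Service might look like the following. This is only a sketch: it assumes the StatefulSet is named cassandra, and it relies on the statefulset.kubernetes.io/pod-name label, which the StatefulSet controller adds to each pod automatically:

  apiVersion: v1
  kind: Service
  metadata:
    name: cassandra-0
  spec:
    type: NodePort
    selector:
      statefulset.kubernetes.io/pod-name: cassandra-0  # targets exactly one pod
    ports:
    - protocol: TCP
      port: 9042        # Cassandra CQL port
      targetPort: 9042
      nodePort: 32000   # repeat with 32001/32002 for cassandra-1/cassandra-2

Note that nodePort values must be unique within the cluster, so from localhost the three pods will still be reachable on three different ports.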
If you have already created a Service with the ports exposed, get all the endpoints and try directing traffic towards them:
kubectl get endpoints -A

How can I deploy one entry point for applications running across Kubernetes clusters?

I have two K8S clusters set up, one on AWS EKS, the other on GCP. I set up a Rancher server which is used to manage these two clusters. I have an application (appA) which is packaged in a helm chart. The application is just a REST API server created with nodejs + express.
It is deployed to both clusters via Rancher. After deployment, appA is running in the two clusters separately.
I have another application (appB), running outside of K8S, which listens on a database stream and needs to call appA (in the K8S clusters) to process the change events.
My question is how I can deploy an entry point, like nginx, which will forward appB's requests to appA; one of the pods from either cluster should serve each request.
You can expose the appA Kubernetes Service as type LoadBalancer.
You can then run nginx outside of k8s, create an upstream, and add both the EKS and GKE load balancer URLs as backend servers.
Follow the link below:
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/
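As a sketch, exposing appA as a LoadBalancer in each cluster could look like the following (the selector label and container port are assumptions; adjust them to your helm chart). The two external hostnames the clouds hand back are what you would then list in the nginx upstream:

  apiVersion: v1
  kind: Service
  metadata:
    name: appa
  spec:
    type: LoadBalancer   # EKS/GKE provision an external load balancer for this
    selector:
      app: appa          # assumed pod label from the helm chart
    ports:
    - port: 80           # port exposed by the cloud load balancer
      targetPort: 3000   # assumed express listen port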

Connect to Cassandra on Kubernetes using java-driver

We are bringing up a Cassandra cluster using the k8ssandra helm chart. It exposes several services. Our client applications use the datastax java-driver and run in the same k8s cluster as the Cassandra cluster (this is a testing phase).
CqlSessionBuilder builder = CqlSession.builder();
What is the recommended way to connect the application (via the Driver) to Cassandra?
Adding all nodes?
for (String node : nodes) {
    builder.addContactPoint(new InetSocketAddress(node, 9042));
}
Adding just the service address?
builder.addContactPoint(new InetSocketAddress(serviceDnsName, 9042));
Adding the service address as unresolved? (would that even work?)
builder.addContactPoint(InetSocketAddress.createUnresolved(serviceDnsName, 9042));
The k8ssandra helm chart deploys a CassandraDatacenter object and cass-operator, in addition to a number of other resources. cass-operator is responsible for managing the CassandraDatacenter. It creates the StatefulSet(s) and several headless services, including:
datacenter service
seeds service
all-pods service
The seeds service only resolves to pods that are seeds. Its name is of the form <cluster-name>-seed-service. Because of the ephemeral nature of pods, cass-operator may designate different C* nodes as seed nodes over time. Do not use the seeds service for connecting client applications.
The all-pods service resolves to all Cassandra pods, regardless of whether they are ready. Its name is of the form <cluster-name>-<dc-name>-all-pods-service. This service is intended to facilitate monitoring. Do not use the all-pods service for connecting client applications.
The datacenter service resolves only to ready pods. Its name is of the form <cluster-name>-<dc-name>-service. This is the service you should use for connecting client applications. Do not use pod IPs directly, as they will change over time.
Adding all nodes?
You definitely do not need to add all of the nodes as contact points. Even in vanilla Cassandra, adding only a few is fine; the driver will discover the rest of the cluster on its own.
Adding just the service address?
Your second option, binding to the service address, is all you should need to do. The nice thing about the service address is that it accounts for IPs changing or being removed in the cluster.
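As a sketch of that second option, a client Deployment could pass the datacenter service name to the driver via env vars. The cluster name "mycluster" and datacenter name "dc1" are assumptions; substitute your own values following the <cluster-name>-<dc-name>-service pattern described above:

  # Hypothetical fragment of the client application's Deployment spec.
  env:
  - name: CASSANDRA_CONTACT_POINT
    value: mycluster-dc1-service   # resolves only to ready Cassandra pods
  - name: CASSANDRA_PORT
    value: "9042"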

Unable to connect to mysql in Istio environment

We have configured a Kubernetes cluster on a bare-metal server with v1.15.1 and Istio 1.4.0 (demo profile) with mTLS enabled.
Our mysql server is outside the K8s cluster, on Azure VMs.
When we inject the istio-proxy while deploying the application, we are unable to connect to the mysql server via JDBC; we also tried the mysql client. But when we remove the istio-proxy by re-deploying, we are able to connect instantly without any issue.
We went through many blogs regarding Istio and mysql, and tried removing the default mesh policy, but that didn't work. The case covered in the Istio FAQs is when mysql is inside the k8s cluster with Istio injected.
You can configure auto-mTLS for Istio by setting values.global.mtls.auto=true (i.e. it uses mTLS when possible and falls back to plain text for other connections):
https://istio.io/docs/tasks/security/authentication/auto-mtls/
A ServiceEntry and DestinationRule did the job in my case.
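For reference, a minimal sketch of such a pair for an external mysql. The hostname mysql.example.com is an assumption (use the DNS name of your Azure VM); with Istio 1.4 the networking.istio.io/v1alpha3 API applies:

  apiVersion: networking.istio.io/v1alpha3
  kind: ServiceEntry
  metadata:
    name: external-mysql
  spec:
    hosts:
    - mysql.example.com        # assumed DNS name of the external mysql server
    location: MESH_EXTERNAL    # the server lives outside the mesh
    ports:
    - number: 3306
      name: tcp-mysql
      protocol: TCP
    resolution: DNS
  ---
  apiVersion: networking.istio.io/v1alpha3
  kind: DestinationRule
  metadata:
    name: external-mysql
  spec:
    host: mysql.example.com
    trafficPolicy:
      tls:
        mode: DISABLE          # do not originate mTLS to the external database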

Connect to external database cluster from kubernetes

Is there an option to connect to an external database cluster from a POD? I need to connect to elasticsearch, zookeeper, Kafka, and couchbase, each of which has its own cluster. Per my understanding of the documentation, I can define multiple external IPs, but I cannot find how k8s will behave if one of them is down. I am working with pure k8s 1.6 now, and we will migrate to 1.7 soon. Information about OpenShift 3.7 is also welcome, because I cannot find anything specific in its documentation.
The k8s doc you link to has more info on exposing services running on k8s, not external ones.
You generally want to expose your service using a DNS entry and manage the HA for that service separately.
For example, you can create a single DNS entry, mykafka.mydomain.com, and then assign multiple IP addresses to that entry:
kafka1 ip
kafka2 ip
kafka3 ip
You can see that approach in the OpenShift docs, in the USING AN EXTERNAL DOMAIN NAME section. It is not clear from the docs whether k8s/OpenShift does a round-robin over the multiple IPs of an external service, or whether it fails over automatically.
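One way to surface such a DNS entry inside the cluster is an ExternalName Service (available from k8s 1.7), so pods can use a stable in-cluster name; a sketch, with the entry name assumed from the example above:

  apiVersion: v1
  kind: Service
  metadata:
    name: kafka
  spec:
    type: ExternalName
    externalName: mykafka.mydomain.com   # the externally managed DNS entry

This just returns a CNAME to the external name, so round-robin and failover remain the job of whatever manages mykafka.mydomain.com.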
Hope it helps.