How to connect my Rails application on localhost to a replica-set Postgres on GCP - Kubernetes

Is it possible to establish a connection from my localhost app to a replica-set Postgres running on Kubernetes? Or what would I need to do to have a mirror of my production database?
Thanks in advance

What you need is a so-called PostgreSQL Kubernetes operator that will be responsible for building Kubernetes objects based on your requests.
You can have a look at OperatorHub.io, which lists several PostgreSQL operators.
An easier solution may be KubeDB and its PostgreSQL implementation.
The operator will also create a Kubernetes Service that gives you a resolvable name pointing at the Kubernetes Pods of your PostgreSQL cluster. The KubeDB documentation explains how to connect to the database.
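For instance, once the operator has created the Service, a quick connectivity check from inside the cluster could look like this (the demo namespace, service name, and credentials are placeholders, not something the operator is guaranteed to create):

# List the services the operator created:
kubectl get services -n demo
# Start a throwaway client pod and connect via the service's DNS name:
kubectl run pg-client --rm -it --image=postgres:13 -- \
  psql "postgresql://postgres:<password>@<service-name>.demo.svc.cluster.local:5432/postgres"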
Now coming to your question:
Is it possible to establish connection from my localhost app [...]
You can access the Kubernetes Service from outside the cluster, but you will have to create a Kubernetes LoadBalancer. See this blog article, which explains it in detail.
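As a rough sketch, an externally reachable Service for the database could look like the following; the name, namespace, and app: postgres selector are assumptions, so match the selector to whatever labels your operator puts on the pods:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: postgres-external   # hypothetical name
  namespace: demo           # placeholder namespace
spec:
  type: LoadBalancer        # asks GCP for an external IP
  selector:
    app: postgres           # must match your postgres pod labels
  ports:
    - port: 5432            # port exposed on the external IP
      targetPort: 5432      # port the postgres container listens on
EOF

For ad-hoc development against a mirror of production, kubectl port-forward svc/<service-name> 5432:5432 is often simpler and avoids exposing the database to the internet; your Rails app can then point at localhost:5432.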

Related

Can AWS RDS Proxy be paired with read replication instance directly?

I created an RDS Proxy with an existing Aurora PostgreSQL cluster.
But I want to pair the proxy with a specific read replica instance of the cluster. Is that possible?
From what AWS claims about RDS proxy:
The same consideration applies for RDS DB instances in replication configurations. You can associate a proxy only with the writer DB instance, not a read replica.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
This should be possible now, as per https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-rds-proxy-adds-read-only-endpoints-for-amazon-aurora-replicas/
Try an RDS Proxy endpoint, which lets you make use of read replicas:
You can create and connect to read-only endpoints called reader endpoints when you use RDS Proxy with Aurora clusters. These reader endpoints help to improve the read scalability of your query-intensive applications. Reader endpoints also help to improve the availability of your connections if a reader DB instance in your cluster becomes unavailable.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html#rds-proxy-endpoints
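If you prefer the CLI, a reader endpoint can be created roughly like this (the proxy name, endpoint name, and subnet IDs below are placeholders):

aws rds create-db-proxy-endpoint \
  --db-proxy-name my-proxy \
  --db-proxy-endpoint-name my-proxy-reader \
  --vpc-subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --target-role READ_ONLY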

Talking to container on Kubernetes (GKE) from other container

I have an API written in Express, that connects to a Mongo container. When I spin these up locally, I can connect to the mongo instance using something like mongodb://${DB_HOST}:${DB_PORT}/${DB_NAME} and setting .env variables.
What I am struggling to understand is: once this is deployed to GKE, how will my API connect to the mongo container/pod?
It won't be running on localhost, I assume, so perhaps I will need to use the internal IP created?
Should it actually be connecting via a service? What would that service look like?
I am struggling to find docs on exactly where I am stuck so I am thinking I am missing something really obvious.
I'm fairly new to GKE so any examples would help massively.
Create a mongodb deployment and a mongodb service of type ClusterIP, which basically means that your API will be able to connect to the db internally. If you want to connect to your db from outside, create a service of type LoadBalancer or one of the other service types (see here).
With a service of type ClusterIP, let's say you give it the name mongodbservice under the metadata key. Then your API can connect to it at mongodb://mongodbservice:${DB_PORT}/${DB_NAME}
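A minimal sketch of such a Service, assuming the mongo pods carry the label app: mongo (adjust the selector to your deployment's actual labels):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: mongodbservice
spec:
  type: ClusterIP          # the default type; spelled out for clarity
  selector:
    app: mongo             # assumed pod label on the mongodb deployment
  ports:
    - port: 27017          # port the service exposes
      targetPort: 27017    # port mongod listens on in the container
EOF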
You'll want to deploy mongodb, probably as a StatefulSet so it can use stable persistent storage. You'll need to configure a StorageClass for the persistent storage. Then you'll want to expose it as a Service. Here's an example on kubernetes.io
If you use Helm (hint: do it) it's a lot easier. You can be up and running with a single command:
helm install stable/mongodb
The output of the helm install contains some helpful instructions for connecting to your new mongodb cluster.
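For example, assuming the release is named my-mongodb (helm install --name my-mongodb stable/mongodb with Helm 2), those notes include a command along these lines to recover the generated root password; treat the exact secret name and key as assumptions and check the chart's actual output:

kubectl get secret my-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode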

Two services for the same Pod on GKE

Question
Is it problematic to create two Services for the same pod, one for internal access and the other for external access?
Context
I have a simple app running on GKE.
There are two pods, each with one container:
flask-pod, which runs a containerized flask app
postgres-pod, which runs a containerized postgres DB
The flask app accesses the postgres DB through a ClusterIP Service around the postgres DB.
Concern
I also have connected a client app, TablePlus (running on my machine), to the postgres DB through a LoadBalancer Service. Now I have 2 separate services to access my postgres DB. Is this redundant, or can this cause problems?
Thanks for your help.
It is perfectly fine. If you look at StatefulSets, you typically define one headless service that is used for internal purposes and another service to allow access from clients.
This approach is absolutely valid; there is nothing wrong with it. You can create as many Services per Pod as you like.
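A minimal sketch of the pattern, assuming the postgres pod is labeled app: postgres; both Services select the same pod and only the exposure differs:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: postgres-internal    # for the flask app, cluster-internal only
spec:
  selector:
    app: postgres            # assumed pod label
  ports:
    - port: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-external    # for TablePlus on your machine
spec:
  type: LoadBalancer         # provisions an external IP
  selector:
    app: postgres            # same selector, same pod
  ports:
    - port: 5432
EOF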

Connect Flask pod with mongodb pod in kubernetes

I want to connect a Flask pod with MongoDB in Kubernetes. I have deployed both, but I have no clue how to connect them and do CRUD against the database. Any example helps.
Maybe you could approach this in steps. You could start by running a demo Flask app in Kubernetes, like https://github.com/honestbee/flask_app_k8s, and then look at adding in the database. First do this locally, as in How can I use MongoDB with Flask? Then, to make it work in Kubernetes, I'd suggest installing the MongoDB Helm chart (using its instructions at https://github.com/helm/charts/tree/master/stable/mongodb) and then running kubectl get service to find out what service name and port the deployed Mongo is using. Put that service name and port into your app's configuration, and the connection should work as it does locally thanks to Kubernetes' DNS-based discovery (which I see you also have a question about, but you don't necessarily need to know all the theory to try it out).
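Concretely, the last two steps might look like this; the deployment name flask-app and the database name are placeholders:

# Find the service name and port the chart created:
kubectl get service
# Suppose it shows a service "my-mongodb" on port 27017; point the app at it:
kubectl set env deployment/flask-app MONGO_URI="mongodb://my-mongodb:27017/mydb"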

How do you create a mongodb replica set or shard that is externally available in kubernetes?

I have followed tutorials and set up working MongoDB replica sets, but when it comes to exposing them as a service I am stuck with using a LoadBalancer, which directs traffic to any pod. In most cases this ends up being a secondary database, which is not terribly helpful. I have also managed to set up separate MongoDB replicas and then tried to connect to those externally, but connections fail because the replica set's internal IPs all resolve through Google Cloud's internal DNS.
What I am hoping for is something like this.
Then (potentially) there would be a single connection URI that could connect you to your MongoDB replica set without needing individual connection details for each member.
I'm not sure if this is possible but any help is greatly appreciated!
The LoadBalancer-type service will route traffic to any one pod matching its selector, which is not how a MongoDB replica set works. The connection string should contain all instances in the set, so you probably need to expose each replica instance with its own type=LoadBalancer service. Then you can connect via something like mongodb://mongo-0-IP:27017,mongo-1-IP:27017,mongo-2-IP:27017/dbname
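One way to expose each instance individually is one LoadBalancer Service per pod, selecting on the pod-name label that the StatefulSet controller adds to each pod. This sketch assumes a StatefulSet named mongo; repeat it for mongo-1 and mongo-2:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: mongo-0-external
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: mongo-0   # targets exactly this pod
  ports:
    - port: 27017
EOF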
If you configure a MongoDB replica set with StatefulSets, you should also create a headless service. Then you can connect to the replica set with a URL like:
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname
Here mongo-0, mongo-1, and mongo-2 are the pod names and "mongo" is the headless service name.
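A minimal headless Service for that setup might look like this; the app: mongo selector is an assumption and must match your pod labels, and the StatefulSet's serviceName field must be set to mongo for the per-pod DNS names above to exist:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None          # headless: DNS resolves straight to the pod IPs
  selector:
    app: mongo             # assumed pod label
  ports:
    - port: 27017
EOF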
If you still want to be able to connect to a specific mongo instance, you can create a separate service (of type=NodePort) for each deployment/replica, and then you should be able to connect to a specific mongo instance using <any-node-ip>:<nodeport>.
But you will not be able to leverage the advantages of having a mongo replica set in this case.