I have an API written in Express that connects to a Mongo container. When I spin these up locally, I can connect to the Mongo instance using something like mongodb://${DB_HOST}:${DB_PORT}/${DB_NAME}, with the values set in .env variables.
What I am struggling to understand is: once this is deployed to GKE, how will my API connect to the Mongo container / pod?
It won't be running on localhost I assume, so perhaps I will need to use the internal IP created?
Should it actually be connecting via a service? What would that service look like?
I am struggling to find docs on exactly where I am stuck, so I am thinking I am missing something really obvious.
I'm fairly new to GKE so any examples would help massively.
Create a mongodb Deployment and a mongodb Service of type ClusterIP, which basically means that your API will be able to connect to the db internally. If you want to connect to your db from outside, create a Service of type LoadBalancer or another service type (see here).
With a Service of type ClusterIP, let's say you give it the name mongodbservice under the metadata key. Then your API can connect to it at mongodb://mongodbservice:${DB_PORT}/${DB_NAME}
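As a rough sketch of what that Service could look like (the mongodbservice name matches the example above; the app: mongodb selector is an assumption and must match whatever labels your MongoDB Deployment puts on its pods):

apiVersion: v1
kind: Service
metadata:
  name: mongodbservice        # this becomes the DNS name your API connects to
spec:
  type: ClusterIP             # default type; only reachable from inside the cluster
  selector:
    app: mongodb              # assumption: must match the labels on your mongodb pods
  ports:
    - port: 27017             # port the Service exposes
      targetPort: 27017       # port the mongodb container listens on

With that in place, setting DB_HOST=mongodbservice and DB_PORT=27017 in your API's environment gives you the same connection string format you already use locally.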
You'll want to deploy mongodb, probably as a StatefulSet so it can use stable persistent storage. You'll need to configure a StorageClass for the persistent storage. Then you'll want to expose it as a Service. Here's an example on kubernetes.io
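For reference, a stripped-down sketch of that setup might look roughly like the following; the names, image tag, and storage class are placeholders, and the linked kubernetes.io example is more complete:

apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None                 # headless Service: gives each pod a stable DNS name (mongo-0.mongo, ...)
  selector:
    app: mongo
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo              # ties the pods' DNS names to the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4        # placeholder image tag
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard   # assumption: a StorageClass named "standard" exists in the cluster
        resources:
          requests:
            storage: 10Gi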
If you use Helm (hint: do it) it's a lot easier. You can be up and running with a single command:
helm install stable/mongodb
The output of the helm install contains some helpful instructions for connecting to your new mongodb cluster.
I’m a little bit unsure what the correct connection URI would be for my applications to the MongoDB StatefulSet. I have three replicas running in my cluster, each one on a separate node.
Should I configure the pods OR the headless service (load balancer for the pods)?
The documentation directs you to use the pods, like this (Running MongoDB on Kubernetes with StatefulSets | Kubernetes):
mongodb://user:pwd@mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?
But I have got it working with the service as well:
mongodb://user:pwd@mongodb-headless.svc.cluster.local:27017/dbname_?authSource=admin&replicaSet=rs0
But I don’t know which is the correct URI. The problem I’m having is that when one of the replicas goes down, for some reason, the application crashes as the database connection is lost. I think this is where the headless service comes into the picture, but no, the documentation says to configure the pods. And if I scale the replicas I need to reconfigure the URI. This does not sound very dynamic.
I’m also facing some issues with the headless service: if it is in a different namespace I cannot get the connection to work with the namespace defined, like:
mongodb-headless.namespace.svc.cluster.local:27017
Have I missed something?
Thank you in advance!
EDIT: added replicaSet to the service/LB URI example (I had this configured...)
I think your way of referencing the headless service will result in MongoDB only using the first host in the set.
Another way is to use MongoDB's DNS seed list connection format together with Kubernetes' support for DNS SRV records. If you name your Service's port mongodb, then the following connection string ought to work:
mongodb+srv://user:pwd@mongodb-headless.namespace.svc.cluster.local/dbname_?
MongoDB clients will use DNS to get a seed list on connection, which stays up to date with the actual Pods running.
Note that this enables TLS by default, which you probably do not want; you can switch it off by appending tls=false to the query string.
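For the SRV lookup to return anything, the headless Service's port does need to carry the name mongodb. A minimal sketch, reusing the service and namespace names from the URI above (the app: mongodb selector is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-headless
  namespace: namespace          # matches the hostname in the URI above; use your real namespace
spec:
  clusterIP: None               # headless: DNS returns per-pod records instead of a single virtual IP
  selector:
    app: mongodb                # assumption: must match your MongoDB pod labels
  ports:
    - name: mongodb             # the +srv scheme looks up _mongodb._tcp.<service>.<namespace>.svc.cluster.local
      port: 27017
      targetPort: 27017

Also note that with the +srv scheme the port comes from the SRV record itself, which is why it is left out of the connection string.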
I am trying to deploy Mongo Express and MongoDB on an EKS cluster. I have running pods of MongoDB and Mongo Express on the EKS cluster. I have created an internal service for MongoDB and an external service for Mongo Express. Following is the description of my mongo-express-service:
How do I access the given URL? I tried using the URL in the browser (http://url:8081) but got no output. It says the site can't be reached. I have tried a similar experiment on minikube. In minikube it generates an IP with a port number which is accessible through the browser, and I get the mongo-express UI on the screen. In minikube I use the following command to run it:
minikube service mongo-express-service
What command do I have to run to get it working in the case of the AWS URL? This is my first week of learning Kubernetes. I am probably missing something major here. Please let me know what I am doing wrong.
Is it possible to establish a connection from my localhost app to a replica-set PostgreSQL instance running in Kubernetes? Or what solution do I need in order to have a mirror of my production database?
Thanks in advance
What you need is a so-called PostgreSQL Kubernetes operator that will be responsible for building Kubernetes objects based on your requests.
You can have a look at OperatorHub.io; they have some PostgreSQL operators.
Maybe an easier solution is KubeDB and the KubeDB PostgreSQL implementation.
The operator will also create a Kubernetes Service that provides a resolvable name pointing to the Kubernetes Pods of your PostgreSQL cluster. The KubeDB documentation explains how to connect to the database.
Now, coming to your question:
Is it possible to establish connection from my localhost app [...]
You can access the Kubernetes Service from outside, but you will have to create a Kubernetes LoadBalancer. See this blog article, which explains it in detail.
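As a sketch of what that external Service could look like, assuming the operator labeled the PostgreSQL pods with something like app: my-postgres (check the real labels with kubectl get pods --show-labels):

apiVersion: v1
kind: Service
metadata:
  name: my-postgres-external
spec:
  type: LoadBalancer            # asks the cloud provider to provision an external IP
  selector:
    app: my-postgres            # placeholder: use the labels the operator actually set on the pods
  ports:
    - port: 5432
      targetPort: 5432

Once kubectl get service my-postgres-external shows an external IP, your localhost app can connect to <external-ip>:5432. Bear in mind this exposes the database outside the cluster, so restrict access (for example with loadBalancerSourceRanges) if it holds real data.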
Question
Is it problematic to create two Services for the same pod, one for internal access and the other for external access?
Context
I have a simple app running on GKE.
There are two pods, each with one container:
flask-pod, which runs a containerized flask app
postgres-pod, which runs a containerized postgres DB
The flask app accesses the postgres DB through a ClusterIP Service around the postgres DB.
Concern
I also have connected a client app, TablePlus (running on my machine), to the postgres DB through a LoadBalancer Service. Now I have 2 separate services to access my postgres DB. Is this redundant, or can this cause problems?
Thanks for your help.
It is perfectly fine. If you look at StatefulSets, you define one headless service that is used for internal purposes and another service to allow access from clients.
This approach is absolutely valid, there is nothing wrong with it. You can create as many Services per Pod as you like.
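To make it concrete, the two Services would simply select the same postgres pod and differ only in type. A sketch with placeholder names and labels:

apiVersion: v1
kind: Service
metadata:
  name: postgres-internal       # used by the flask pod inside the cluster
spec:
  type: ClusterIP
  selector:
    app: postgres               # placeholder: must match the postgres pod's labels
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-external       # used by TablePlus from your machine
spec:
  type: LoadBalancer
  selector:
    app: postgres               # same selector, so both route to the same pod
  ports:
    - port: 5432
      targetPort: 5432

The only practical caveat is that the LoadBalancer exposes the database outside the cluster, so it is worth locking it down (firewall rules or loadBalancerSourceRanges) if it holds anything sensitive.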
I want to connect a Flask pod with MongoDB in Kubernetes. I have deployed both but have no clue how to connect them and do CRUD against the database. Any example would help.
Maybe you could approach this in steps. For example, you could start with running a demo Flask app in Kubernetes, like https://github.com/honestbee/flask_app_k8s. Then you could look at adding in the database. First you could do this locally, as in How can I use MongoDB with Flask? Then, to make it work in Kubernetes, I'd suggest installing the mongodb Helm chart (using its instructions at https://github.com/helm/charts/tree/master/stable/mongodb) and then running kubectl get service to find out what service name and port the deployed Mongo is using. Then you can put that service name and port into your app's configuration (see the sketch below), and the connection should work as it would locally because of Kubernetes' DNS-based discovery (which I see you also have a question about, but you don't necessarily need to know all the theory to try it out).
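For that last step, one way is to pass the service name and port to the Flask Deployment as environment variables. A sketch, where my-release-mongodb stands in for whatever service name kubectl get service actually shows and the image name is a placeholder:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
    spec:
      containers:
        - name: flask-app
          image: my-flask-app:latest     # placeholder: your Flask app image
          ports:
            - containerPort: 5000
          env:
            - name: MONGO_HOST
              value: my-release-mongodb  # placeholder: the mongodb service name from kubectl get service
            - name: MONGO_PORT
              value: "27017"

Inside the container, the Flask code then builds its connection string from MONGO_HOST and MONGO_PORT, exactly as it would against a local MongoDB.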