Connect Flask pod with MongoDB pod in Kubernetes

I want to connect a Flask pod with MongoDB in Kubernetes. I have deployed both, but I have no clue how to connect them and do CRUD on it. Any example helps.

Maybe you could approach this in steps. For example, you could start by running a demo Flask app in Kubernetes, like https://github.com/honestbee/flask_app_k8s Then you could look at adding in the database: first do it locally, as in How can I use MongoDB with Flask? To make it work in Kubernetes, I'd suggest installing the mongodb helm chart (using its instructions at https://github.com/helm/charts/tree/master/stable/mongodb) and then running kubectl get service to find out what service name and port the deployed mongo is using. Put that service name and port into your app's configuration, and the connection should work just as it does locally, thanks to Kubernetes' DNS-based service discovery (which I see you also have a question about, but you don't necessarily need to know all the theory to try it out).
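As a rough sketch, here is what the Flask side might look like with PyMongo, assuming the chart created a service called my-mongo-mongodb on port 27017 and that you pass credentials in via environment variables (all of those names and the credentials are placeholders; use whatever kubectl get service and your chart's generated secret actually show):

# app.py - minimal sketch; the service name, port and credentials are assumptions
import os
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)

# Taken from the Deployment's env vars (or a ConfigMap/Secret)
MONGO_HOST = os.environ.get("MONGO_HOST", "my-mongo-mongodb")  # assumed service name
MONGO_PORT = int(os.environ.get("MONGO_PORT", "27017"))
MONGO_USER = os.environ.get("MONGO_USER", "root")              # assumed credentials
MONGO_PASS = os.environ.get("MONGO_PASS", "")

client = MongoClient(
    host=MONGO_HOST,
    port=MONGO_PORT,
    username=MONGO_USER or None,
    password=MONGO_PASS or None,
)
db = client["demo"]

@app.route("/items", methods=["POST"])
def create_item():
    # Insert whatever JSON body the caller sends
    result = db.items.insert_one(request.get_json())
    return jsonify(id=str(result.inserted_id)), 201

@app.route("/items", methods=["GET"])
def list_items():
    # ObjectId isn't JSON-serializable, so stringify it
    return jsonify([{**doc, "_id": str(doc["_id"])} for doc in db.items.find()])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)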

Related

Mongo connection URI for a StatefulSet in K8s: each replica (pod) or the (headless) service?

I'm a little unsure what the correct connection URI would be for my applications to reach the MongoDB StatefulSet. I have three replicas running in my cluster, each one on a separate node.
Should I configure the pods OR the headless service (load balancer for the pods)?
The documentation directs me to use the pods, like this (Running MongoDB on Kubernetes with StatefulSets | Kubernetes):
mongodb://user:pwd@mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?
But I have got it working with the service as well:
mongodb://user:pwd@mongodb-headless.svc.cluster.local:27017/dbname_?authSource=admin&replicaSet=rs0
But I don't know which one is the correct URI. The problem I'm having is that when one of the replicas goes down, for some reason, the application crashes as the database connection is lost. I thought this is where the headless service comes into the picture, but no, the documentation says to configure the pods. And if I scale the replicas I need to reconfigure the URI, which does not sound very dynamic.
I'm also facing some issues with the headless service: if it is in a different namespace I cannot get the connection to work with the namespace included, like:
mongodb-headless.namespace.svc.cluster.local:27017
Have I missed something?
Thank you in advance!
EDIT: added replicaSet to the service/LB URI example (I had this configured...)
I think your way of referencing the headless service will result in MongoDB only using the first host in the set.
Another way is to use MongoDB's DNS seed list connection format together with Kubernetes' support for DNS SRV records. If you named your Service's port mongodb, then the following connection string ought to work (note that the +srv scheme does not allow an explicit port):
mongodb+srv://user:pwd@mongodb-headless.namespace.svc.cluster.local/dbname_?
MongoDB clients will use DNS to get a seed list on connection, which stays up to date with the actual Pods running.
Note that the +srv scheme enables TLS by default, which you probably do not want here.
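As a quick illustration, a PyMongo client using that seed-list format might look like the sketch below. The service name, namespace and credentials are the placeholders from the URI above, the authSource=admin option is taken from your working service URI, and PyMongo needs the dnspython package for +srv lookups:

# Sketch: seed-list (SRV) connection, assuming a headless Service named
# "mongodb-headless" in namespace "namespace" with a port named "mongodb".
# Requires the dnspython package for mongodb+srv lookups.
from pymongo import MongoClient

uri = ("mongodb+srv://user:pwd@mongodb-headless.namespace.svc.cluster.local"
       "/dbname_?authSource=admin&replicaSet=rs0")

# +srv turns TLS on by default; switch it off if the replica set isn't using TLS.
client = MongoClient(uri, tls=False)
print(client.admin.command("ping"))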

Not able to access EKS LoadBalancer external IP of Mongo Express service

I am trying to deploy Mongo Express and MongoDB on an EKS cluster. I have running pods of MongoDB and Mongo Express on the EKS cluster. I have created an internal service for MongoDB and an external service for Mongo Express. Following is the description of my mongo-express-service:
How do I access the given URL? I tried using the URL in a browser (http://url:8081) but got no output; it says the site can't be reached. I have tried a similar experiment on minikube. There it generates an IP with a port number which is accessible through the browser, and I get the mongo-express UI on the screen. In minikube I use the following command:
minikube service mongo-express-service
What command do I have to run to make it work with the AWS URL? This is my first week of learning Kubernetes, so I am probably missing something major here. Please let me know what I am doing wrong.

How to set up basic auth for Prometheus deployed on a K8s cluster using YAMLs?

I was able to achieve this easily when Prometheus was deployed locally on a host from the tar file. But when it is deployed as a pod in a K8s cluster, I have tried almost everything on the internet with no luck.
Any kind of help would be really appreciated!
Thanks!
I'm not sure why the official documentation would work only in a VM and not in a container, but if it truly does not work then you can put a web server in front of the Prometheus web interface and set up authentication on that.
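For reference, whichever component ends up checking the credentials (an htpasswd-style reverse proxy in front of Prometheus, or Prometheus' own web config with basic_auth_users in newer versions), it will want a bcrypt hash of the password rather than the plain text. A small sketch for generating one (requires the bcrypt Python package; the prompt is just illustrative):

# Sketch: produce a bcrypt hash suitable for an htpasswd file or a
# basic_auth_users entry. Requires the "bcrypt" Python package.
import getpass
import bcrypt

password = getpass.getpass("password: ")
print(bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode())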

How to connect to a k8s cluster with bitnami postgresql-ha deployed?

My setup (running locally in two minikubes) has two k8s clusters:
frontend cluster is running a golang api-server,
backend cluster is running an ha bitnami postgres cluster (used bitnami postgresql-ha chart for this)
If I set the pgpool service to use NodePort and get the IP + port of the node that the pgpool pod is running on, I can hardwire this (host + port) into my database connector in the api-server (in the other cluster) and it works.
However, what I haven't been able to figure out is how to connect to the other cluster (e.g. to pgpool) generically, without using the IP address.
I also tried Skupper, which has an example of connecting to a backend cluster with postgres running on it, but their example doesn't use the bitnami postgresql-ha helm chart, just a simple postgres install, so it is not at all the same.
Any ideas?
For those times when you have to, or purposely want to, connect pods/deployments across multiple clusters, Nethopper (https://www.nethopper.io/) is a simple and secure solution. The postgresql-ha scenario above is covered under their free tier. There is a two cluster minikube 'how to' tutorial at https://www.nethopper.io/connect2clusters which is very similar to your frontend/backend use case. Nethopper is based on skupper.io, but the configuration is much easier and user friendly, and is centralized so it scales to many clusters if you need to.
To solve your specific use case, you would:
First install your api server in the frontend and your bitnami postgresql-ha chart in the backend, as you normally would.
Go to https://mynethopper.com/ and
Register
Clouds -> define both clusters (clouds), frontend and backend
Application Network -> create an application network
Application Network -> attach both clusters to the network
Application Network -> install nethopper-agent in each cluster with copy paste instructions.
Objects -> import and expose pgpool (call the service 'pgpool') in your backend.
Objects -> distribute the service 'pgpool' to frontend, using a distribution rule.
Now you should see the 'pgpool' service in the frontend cluster:
kubectl get service
When the API server pods in the frontend request service from pgpool, they will connect to pgpool in the backend, magically. It's like the 'pgpool' pod is now running in the frontend.
The nethopper part should only take 5-10 minutes, and you do NOT need IP addresses, TLS certs, K8s ingresses or loadbalancers, a VPN, or an istio service mesh or sidecars.
After moving to a one-cluster architecture, it became easier to see how to connect to the bitnami postgresql-ha cluster; after trying a few different things, this finally worked:
-postgresql-ha-postgresql-headless:5432
(that's the host and port I'm using to call from my golang server)
Now I believe it should be fairly straightforward to also run the two-cluster case, using skupper to bind to the headless service.
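For completeness, application code then connects to that host like any other Postgres endpoint; here is a sketch in Python/psycopg2 (the original server is Go, and the release prefix, database name and credentials below are placeholders):

# Sketch: connect to the postgresql-ha headless service by its in-cluster DNS name.
# "mydb" is a placeholder release name; use whatever `kubectl get service` shows.
import psycopg2

conn = psycopg2.connect(
    host="mydb-postgresql-ha-postgresql-headless",  # placeholder service name
    port=5432,
    dbname="postgres",
    user="postgres",
    password="secretpassword",  # placeholder
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])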

Talking to container on Kubernetes (GKE) from other container

I have an API written in Express that connects to a Mongo container. When I spin these up locally, I can connect to the mongo instance using something like mongodb://${DB_HOST}:${DB_PORT}/${DB_NAME}, setting the values via .env variables.
What I am struggling to understand is: once this is deployed to GKE, how will my API connect to the mongo container/pod?
It won't be running on localhost, I assume, so perhaps I will need to use the internal IP that gets created?
Should it actually be connecting via a service? What would that service look like?
I am struggling to find docs on exactly where I am stuck, so I am thinking I am missing something really obvious.
I'm fairly new to GKE so any examples would help massively.
Create a mongodb Deployment and a mongodb Service of type ClusterIP, which basically means that your api will be able to connect to the db internally. If you want to connect to your db from outside, create a Service of type LoadBalancer or one of the other service types (see here).
With a Service of type ClusterIP, let's say you give it the name mongodbservice under the metadata key. Then your api can connect to it at mongodb://mongodbservice:${DB_PORT}/${DB_NAME}
You'll want to deploy mongodb, probably as a StatefulSet so it can use stable persistent storage. You'll need to configure a StorageClass for the persistent storage. Then you'll want to expose it as a Service. Here's an example on kubernetes.io
If you use Helm (hint, do it) it's a lot easier. You can be up and running with a single command
helm install stable/mongodb
The output of the helm install contains some helpful instructions for connecting to your new mongodb cluster.