I have an instance of MongoDB running on OpenShift. Without port forwarding, is it possible to connect from a local machine to the database using, say, an OpenShift route or the IP address of the service? If so, how can it be achieved?
You could try creating a headless service for connecting to MongoDB. Additionally, refer to the MongoDB documentation.
I hope this helps.
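For reference, a minimal sketch of creating such a headless service with kubectl (oc accepts the same syntax); the service name and port are illustrative, and the default selector app=mongodb has to match your pod's labels:
# Create a headless service (clusterIP: None) for MongoDB on its default port
kubectl create service clusterip mongodb --clusterip="None" --tcp=27017:27017
# Check which pod IPs the service resolves to inside the cluster
kubectl get endpoints mongodb
Note that a headless service by itself is only resolvable inside the cluster; reaching it from a local machine would still need something like a NodePort service or a TLS-passthrough route.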
We have an Airflow task that adds data to a MongoDB server.
We can connect to the MongoDB server only through the IP access list or VPC peering.
We are having issues with VPC peering, so we thought we could just enable direct IP access between the Airflow workers and the MongoDB server.
Has anyone done that?
If not, do you have another suggestion?
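If it helps, this is roughly what we had in mind, as a sketch assuming MongoDB Atlas and its CLI; the worker IP is a placeholder, and this only works if the workers have stable egress IPs:
# From an airflow worker: find its public egress IP (assumes outbound internet access)
curl -s https://ifconfig.me
# Add that IP to the Atlas project's IP access list (placeholder IP and comment)
atlas accessLists create 203.0.113.10 --type ipAddress --comment "airflow worker"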
I am new to Google Cloud Platform, and here is my context:
I have one Compute Engine VM running as a MongoDB server and another Compute Engine VM running a NodeJS server, both already using Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS Docker image to the cluster.
All services, GCE and GKE alike, are in the same region (us-east1).
As a sanity check, I accessed a Kubernetes cluster node via SSH, deployed a simple MongoDB Docker image, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and neither is blocking the connection.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster's pod network, listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
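As a sketch, the same rule can also be created from the command line; the rule name is illustrative, port 27017 is an assumption for MongoDB, and the source range is the pod range above:
# Allow the GKE pod range to reach VMs on the default network over the MongoDB port
gcloud compute firewall-rules create allow-gke-pods-to-mongo \
    --network=default \
    --allow=tcp:27017 \
    --source-ranges=10.8.0.0/14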
By default, containers in a GKE cluster should be able to access GCE VMs in the same VPC through internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers: GKE and the VPC know how to route the traffic. The problem must be with some other configuration (the firewall or your application).
You can do a test: start a simple HTTP server in the GCE VM, say with internal IP 10.138.0.5 (on Python 3, use python3 -m http.server 8080 instead):
python -m SimpleHTTPServer 8080
then create a GKE container and try to access the service (the deprecated --generator flag has been dropped here; --rm cleans up the pod afterwards):
kubectl run my-client -it --rm --image=tutum/curl -- curl http://10.138.0.5:8080
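If the HTTP test passes, you can try the same thing against the MongoDB port itself; a sketch assuming mongod listens on 27017 at that internal IP (the mongo:4.4 image still ships the legacy mongo shell):
# Ping mongod directly from a throwaway pod; the IP and port are the assumptions above
kubectl run mongo-client -it --rm --image=mongo:4.4 -- mongo --host 10.138.0.5 --eval 'db.runCommand({ ping: 1 })'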
I have MongoDB running on an EC2 instance. After setting mongod.conf to accept traffic from 0.0.0.0, I am able to connect and send queries from my local machine. This instance is set to accept all traffic on port 27017.
I have an Express app using Mongoose, also deployed to EC2 on a different instance. However, I cannot connect to the Mongo instance from the Express instance. I checked the outbound traffic rules; port 27017 is enabled explicitly, and all outbound traffic is enabled as well.
I can't figure out why I can connect from my local machine but not from my EC2 instance. The only thing I can think of is some setting in the VPC these instances are in. Both instances share the same VPC, and both run Ubuntu. The only other difference between my local environment and the deployment environment is that I'm running Node 11 locally (macOS) and Node 8 in deployment. Any ideas?
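One quick check from the Express instance is raw TCP reachability to the Mongo instance's private IP (the address below is a placeholder); if this times out, a common culprit is the inbound source range on the MongoDB instance's security group, since security groups apply per instance even within one VPC:
# From the express instance: test whether port 27017 on the mongo instance is reachable
nc -zv 172.31.0.10 27017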
I want to connect a Flask pod to MongoDB in Kubernetes. I have deployed both but have no clue how to connect them and do CRUD operations. Any example helps.
Maybe you could approach this in steps. For example, you could start with running a demo Flask app in Kubernetes, like https://github.com/honestbee/flask_app_k8s
Then you could look at adding in the database. First you could do this locally, as in How can I use MongoDB with Flask?
To make it work in Kubernetes, I'd suggest installing the MongoDB Helm chart (using its instructions at https://github.com/helm/charts/tree/master/stable/mongodb) and then doing kubectl get service to find out what service name and port the deployed Mongo is using.
Then you can put that service name and port into your app's configuration, and the connection should work as it would locally because of Kubernetes DNS-based discovery (which I see you also have a question about, but you don't necessarily need to know all the theory to try it out). A sketch of those last two steps follows.
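A sketch assuming Helm 2 syntax from the chart's era, and that the chart names its service my-mongo-mongodb in the default namespace:
# Install the MongoDB chart (release name my-mongo is illustrative)
helm install --name my-mongo stable/mongodb
# Find the service name and port the chart created
kubectl get service
# The in-cluster URI your Flask config would then use (service name assumed from above;
# the chart may also generate credentials that belong in this URI):
# mongodb://my-mongo-mongodb.default.svc.cluster.local:27017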
tl;dr: What will I need to do in order to use an Elastic IP in my MongoDB replica set configuration?
We have a three-node MongoDB replica set running on EC2. One of the instances in the set was retired by AWS yesterday, so we were forced to stop and restart the EC2 instance.
Unfortunately, when we first configured the replica set, we were fairly new to AWS and not aware that the public DNS address of the instances was subject to change. We used the public DNS of each instance in the replica set configuration, and in all of the application connection strings in our code. After reading up on the subject yesterday, I tried to get the node back online by assigning an Elastic IP to the instance and changing the replica set configuration to use that IP. After some pain, I was able to get the other two nodes back up and running with that configuration, but the instance with the Elastic IP refused to re-join the replica set, and the error in mongod.log says:
[rsStart] replSet info self not present in the repl set configuration
After yet more reading, I found that I should not have used the actual Elastic IP in the config, but rather the public DNS name of the Elastic IP. My question is: before I take everything offline again to try this change, what exactly will I need to do in order to use the Elastic IP in the replica set configuration? I found some information on this 10gen page: http://docs.mongodb.org/ecosystem/platforms/amazon-ec2/#communication-across-regions that made me think I might need to adjust the hostname of the instance and/or the hosts file, but I haven't been able to find anyone describing my exact scenario.
Any thoughts?
It turned out to be a pretty simple fix; once I changed the replica set configuration to use the public DNS of the Elastic IP, the Mongo node came back online. I didn't have to touch the hostname or the hosts file.
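For completeness, a sketch of that reconfiguration, run through the mongo shell on the current primary; the member index and DNS name below are placeholders:
# Point the failed member at the Elastic IP's public DNS name, then apply the new config
mongo --eval '
    cfg = rs.conf();
    cfg.members[2].host = "ec2-203-0-113-10.compute-1.amazonaws.com:27017";
    rs.reconfig(cfg);
'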
You should never use an Elastic IP for internal traffic like replication. You will be charged $0.01/GB for this traffic, whereas using the internal IP would be free.
If you're using something like replica sets, you really should be running in a VPC. Unlike normal EC2 instances, instances in a VPC keep the same private IP addresses and Elastic IP addresses even when stopped.