I have deployed, via Terraform, a setup with Lambdas, a MongoDB cluster, an AWS VPC, and a PrivateLink endpoint, and my connection string isn't working. I am getting a timeout error from Mongo.
My connection string is:
"mongodb+srv://:#XXXXXXXXXXXX-pl-0.XXXXX.mongodb.net/?retryWrites=true&w=majority"
I was wondering if it could be to do with security; maybe the ports I'm allowing are wrong. Can someone advise what they should be?
Any ideas on why this isn't connecting?
There's a Node.js application deployed on GKE.
The MongoDB Atlas peering connection with the GCP VPC is successful.
However, the Node.js application is throwing an error when connecting to MongoDB.
What can I do to test the connection from the GKE cluster to MongoDB?
The easiest way would be to deploy your Node.js application and look at the application logs. If for some reason that's not working, you could launch a mongo CLI pod, start a shell session, and try to initiate the connection that way.
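If you'd rather script that check than type it into a shell, a minimal Node.js probe along these lines should do it (this is a sketch: it assumes the official mongodb driver package, and the URI is a placeholder for your own):

// connectivity-check.ts - a minimal MongoDB reachability probe.
// Assumes the official "mongodb" driver; the URI is a placeholder.
import { MongoClient } from "mongodb";

const uri = process.env.MONGODB_URI ?? "mongodb+srv://<user>:<password>@<cluster>.mongodb.net";

async function main(): Promise<void> {
  // Fail fast instead of hanging on an unreachable peered network.
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  try {
    await client.connect();
    // "ping" round-trips to the server, proving the network path and auth work.
    await client.db("admin").command({ ping: 1 });
    console.log("Connected and pinged MongoDB successfully");
  } finally {
    await client.close();
  }
}

main().catch((err) => {
  console.error("Connection failed:", err);
  process.exit(1);
});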
You mentioned the Node.js application is throwing an error. You might want to copy-paste that error here so people on Stack Overflow can be of more help. It's important that you provide as much context as possible in your question.
I have been provided with a Cassandra database installation on a server.
On the other hand, my customer has a Kubernetes cluster with a deployed application that needs to connect to the database, and we experience the following error when the container tries to start up:
WARN [com.dat.oss.dri.int.cor.con.ControlConnection] (vert.x-eventloop-thread-1) [s0] Error connecting to Node(endPoint=cassandra:9042, hostId=null, hashCode=379c44fa), trying next node (UnknownHostException: cassandra: Temporary failure in name resolution)
Any suggestions on what I am missing or what I need to do in my cluster?
Do you have DNS set up so that the Cassandra service is available to the k8s cluster through the DNS name cassandra? Since this is an outside component, k8s relies on your external DNS resolution to discover this service.
Notice that it is attempting to connect to the URL cassandra:9042. This means k8s should be able to resolve the hostname cassandra somehow, internally or externally.
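One quick way to confirm what the driver is seeing is to run a name lookup from inside a pod. A hypothetical one-file Node.js check (the hostname cassandra is taken from the error above):

// dns-check.ts - verify that the hostname "cassandra" resolves from
// inside the cluster; this mirrors the driver's UnknownHostException.
import { promises as dns } from "node:dns";

async function main(): Promise<void> {
  try {
    const { address } = await dns.lookup("cassandra");
    console.log("cassandra resolves to", address);
  } catch (err) {
    // Same failure mode the Java driver reported: name resolution.
    console.error("Name resolution failed:", err);
    process.exit(1);
  }
}

main();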
If not, you have to determine your service URL, like <some-IP>:<some-Port>/some_endpoint, and provide this to your k8s application, which will connect to it directly.
The issue is that you haven't configured the correct contact points in your application. In the error you posted, your application is connecting to an unknown host cassandra:
... Error connecting to Node(endPoint=cassandra:9042, ...
but your app doesn't know how to resolve the hostname cassandra, leading to:
UnknownHostException: cassandra: Temporary failure in name resolution
We recommend that you specify at least two IP addresses of nodes in the "local DC" as contact points. For example, if you're using the Java driver to connect to your Cassandra cluster, configure the contact points with:
datastax-java-driver {
  basic {
    contact-points = [ "node_IP1:9042", "node_IP2:9042" ]
  }
}
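If the application were using the Node.js driver instead, a roughly equivalent sketch would be (assuming the cassandra-driver package; the node IPs and data center name are placeholders):

// cassandra-check.ts - same idea as the Java config above, using the
// DataStax Node.js driver. The IPs and DC name below are placeholders.
import { Client } from "cassandra-driver";

const client = new Client({
  contactPoints: ["10.0.0.1", "10.0.0.2"], // at least two nodes in the local DC
  protocolOptions: { port: 9042 },
  localDataCenter: "datacenter1", // must match your cluster's DC name
});

async function main(): Promise<void> {
  await client.connect();
  const rs = await client.execute("SELECT release_version FROM system.local");
  console.log("Connected; Cassandra version:", rs.first().get("release_version"));
  await client.shutdown();
}

main().catch((err) => {
  console.error("Connection failed:", err);
  process.exit(1);
});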
Since your application is running in Kubernetes, you'll need to make sure that it has network connectivity to your Cassandra cluster. Cheers!
We have an Airflow task that adds data to the MongoDB server.
We can connect to the MongoDB server only via the IP access list or VPC peering.
We are having issues with VPC peering, so we thought we could just enable direct IP access between the Airflow workers and the MongoDB server.
Has anyone done that?
If not, do you have another suggestion?
So, I created a PostgreSQL instance in Google Cloud, and I have a Kubernetes cluster whose containers I would like to connect to it. I know that the Cloud SQL proxy sidecar is one method, but the documentation says that I should be able to connect to the private IP as well.
I notice that a VPC peering connection was automatically created for me. It's set for a destination network of 10.108.224.0/24, which is where the instance is, with a "Next hop region" of us-central1, where my K8s cluster is.
And yet when I try the private IP via TCP on port 5432, I time out. I see nothing in the documentation about having to modify firewall rules to make this work, but I tried that anyway; I found the firewall interface in GCP rather clumsy and confusing compared with writing my own rules using iptables, and my attempts failed.
Beyond going to the Cloud SQL proxy sidecar, does anyone have an idea why this would not work?
Thanks.
Does your GKE cluster meet the environment requirements for private IP? It needs to be a VPC-native cluster on the same VPC and in the same region as your Cloud SQL instance.
In the end, the simplest thing to do was to just use the Google Cloud SQL proxy. Rather than running it as a sidecar (I have multiple containers needing DB access), I put the proxy into my cluster as its own container with a service, and it seems to just work.
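With that setup, application code just points at the proxy's in-cluster DNS name. A minimal sketch, assuming the pg package and a Service named cloud-sql-proxy (the Service name is hypothetical; use whatever you deployed):

// db-check.ts - connect to PostgreSQL through an in-cluster Cloud SQL proxy.
// Assumes the "pg" package; "cloud-sql-proxy" is a hypothetical Service name.
import { Pool } from "pg";

const pool = new Pool({
  host: "cloud-sql-proxy", // resolves via cluster DNS to the proxy Service
  port: 5432,              // the proxy speaks plain Postgres on this port
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
});

async function main(): Promise<void> {
  const { rows } = await pool.query("SELECT now() AS server_time");
  console.log("Connected; server time:", rows[0].server_time);
  await pool.end();
}

main().catch((err) => {
  console.error("Connection failed:", err);
  process.exit(1);
});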
You can create VPC peering over private IP only if your Cloud SQL instance and your compute are both in the same VPC.
When creating the Cloud SQL instance or Compute Engine VM you can choose the VPC and subnet; set up the same for GKE, and you can make the connection from a pod to Cloud SQL.
I have an instance of MongoDB running on OpenShift. Without port forwarding, is it possible to connect from the local machine to the database using, say, an OpenShift route or the IP address of the service? If so, how can it be achieved?
Would you try creating a headless Service for connecting to MongoDB? Additionally, refer to the MongoDB documentation.
I hope it helps.
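For in-cluster clients, a headless Service gives each MongoDB pod a stable DNS name to connect to. A minimal sketch, assuming a StatefulSet mongodb behind a headless Service mongodb in namespace demo (all names are hypothetical, and these hostnames resolve only inside the cluster, so an external client would still need a route or port forwarding):

// mongo-headless.ts - connect via the per-pod DNS names a headless
// Service provides. Hostnames and replica set name are placeholders.
import { MongoClient } from "mongodb";

const uri =
  "mongodb://mongodb-0.mongodb.demo.svc.cluster.local:27017," +
  "mongodb-1.mongodb.demo.svc.cluster.local:27017/?replicaSet=rs0";

async function main(): Promise<void> {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  await client.connect();
  await client.db("admin").command({ ping: 1 });
  console.log("Reached MongoDB through the headless Service DNS names");
  await client.close();
}

main().catch((err) => {
  console.error("Connection failed:", err);
  process.exit(1);
});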