So, I'm trying to set up a MongoDB PrivateLink connection to my AWS EKS cluster, but my pod cannot connect.
I followed this guide https://aws.amazon.com/blogs/apn/connecting-applications-securely-to-a-mongodb-atlas-data-plane-with-aws-privatelink/
and set up the following materials:
Created a new VPC with 3 availability zones
Created EKS and attached it to the VPC (the EKS has Private + Public networking enabled)
For the MongoDB PrivateLink configuration I used the private subnet IDs and the name of my newly created VPC.
Afterwards I ran the command that MongoDB shows and waited for the endpoint to be created.
It shows "Available" under "Endpoint Status" and "Ready for connection requests" under "Endpoint Service Status".
I use the correct username/password and the Mongo URI is correct (I whitelisted my computer, tested the connection, and it worked).
I'm rather new to the AWS PrivateLink setup and can't get it working. Should I perhaps use the public subnet IDs? What could be the issue?
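For reference, a quick sanity check from inside a pod can narrow this down (the hostname below is a placeholder for the host in your Atlas private-endpoint connection string; Atlas private endpoints typically listen on ports from 1024 upward, one per replica-set member):

```shell
# Placeholder: replace with the *-pl-* host from your Atlas
# "Private Endpoint" connection string.
ATLAS_HOST="cluster0-pl-0.rthhs.mongodb.net"

# Does the hostname resolve to a private IP inside the VPC?
nslookup "$ATLAS_HOST"

# Can we open a TCP connection to the first endpoint port?
nc -zv -w 5 "$ATLAS_HOST" 1024
```

If the name resolves to a public IP or the TCP connection times out, the problem is on the endpoint/routing side rather than with credentials.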
I have a GCP Project "A" where I had previously added VPC peering with MongoDB Atlas:
This way my development GKE cluster (whose VPC range is 10.7.0.0/16) can reach MongoDB over the peering. So far everything works as expected.
Now I've provisioned another GKE cluster for preproduction, with its own VPC range on 10.221.0.0/16. I've also created another database in the same Mongo cluster "app-pre" and wanted to add VPC peering for the new cluster.
I followed the same steps:
Mongo Atlas: add a peering connection for GCP project "A", the VPC name, and CIDR 192.168.0.0/16
GCP Create Peering Connection
The problem is I get the following error:
An ip range in the local network (10.221.0.0/16) overlaps with an ip range (10.221.0.0/16) in an active peer of the peer network
Posting this as an answer in order to help other people.
What @john-hanley mentions is correct: you can't have two or more VPC peerings that use overlapping IP ranges, because GCP would create routes with the same priority and could not decide where to send a packet matching those routes.
The message you are getting basically says that you are already using one range this way and intend to use the very same range, 10.221.0.0/16, again.
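One way to spot the conflict is to list the active peerings and subnet ranges on the VPC with gcloud (the network name "default" below is an example):

```shell
# List active peerings on the VPC
gcloud compute networks peerings list --network=default

# List the subnet CIDR ranges of the VPC to spot overlaps
gcloud compute networks subnets list --network=default \
  --format="table(name,region,ipCidrRange)"
```

Any range that appears both in an existing peering and in the new peering request will trigger the overlap error.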
I've set up the peering connection between MongoDB Atlas and Google's "default" VPC, and the connection is labeled "active" on both ends.
The IP range of the VPC is whitelisted in MongoDB Atlas.
But my Node app hosted on Google App Engine still times out when accessing MongoDB.
I use MongoDB Atlas's connection URL for peered connections, in the form (notice the "-pri"):
mongodb+srv://<username>:<password>@<my-cluster>-pri.rthhs.mongodb.net/<dbname>?retryWrites=true&w=majority
Which part am I missing to establish the connection? Do I need a Google VPC connector?
Thanks for any help!
First of all, make sure you are running an M10 cluster or above! VPC peering is not available for M0/M2/M5 tiers.
And YES, you do need that connector! All "serverless" services from Google Cloud (like GAE in the standard environment) need it.
Create a connector in the same region as your GAE app, following these instructions. You can find the current region of your GAE app with gcloud app describe.
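This step might look like the following (the connector name, region, and /28 IP range are examples; the range must not overlap with anything already in your VPC):

```shell
# Find the region your GAE app runs in
gcloud app describe --format="value(locationId)"

# Create a Serverless VPC Access connector in that region
# (name "my-connector" and range 10.8.0.0/28 are examples)
gcloud compute networks vpc-access connectors create my-connector \
  --region=europe-west1 \
  --network=default \
  --range=10.8.0.0/28
```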
Your app.yaml has to point to that connector, like this:
app.yaml
runtime: nodejs10
vpc_access_connector:
  name: projects/GCLOUD_PROJECT_ID/locations/REGION_WHERE_GAE_RUNS/connectors/NAME_YOU_ENTERED_IN_STEP_1
Go to your Atlas project, navigate to Network Access, and whitelist the IP range you set for the connector in step 1.
You may also need to whitelist the IP range from step 1 for the VPC network. You can do that in GCP by navigating to VPC Network -> Firewall.
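The same firewall rule can be created from the CLI (the rule name is an example; 10.8.0.0/28 is the example connector range from step 1, and you may want to restrict the allowed protocols further):

```shell
# Allow traffic originating from the connector's IP range into the VPC
gcloud compute firewall-rules create allow-vpc-connector \
  --network=default \
  --direction=INGRESS \
  --source-ranges=10.8.0.0/28 \
  --allow=tcp,udp
```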
If you have questions about how to set up the VPC peering between Atlas and Google Cloud, try this tutorial. They do it for Kubernetes Engine (no connector needed), but adding my steps from above will hopefully do the trick.
Try Cannot connect to Mongo Atlas using VPC peering from GCP cluster and MongoDB and Google Cloud Functions VPC Peering?.
As a first step, I suggest identifying whether you have network connectivity (and so need to fix the IP whitelist) or no connectivity at all (and need to fix the peering configuration).
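A rough way to tell the two cases apart from inside the peered network (the hostname below is a placeholder for one of your cluster's "-pri" hosts):

```shell
# TCP connect succeeds but the driver is still rejected -> fix the IP whitelist.
# TCP connect times out                                 -> fix peering/routing.
nc -zv -w 5 cluster0-shard-00-00-pri.rthhs.mongodb.net 27017
```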
I'm trying to set up VPC peering from my MongoDB Atlas cluster to my Kubernetes EKS cluster on AWS. The peering is established successfully, but I get no connection to the cluster from my pods.
The peering is set up.
The default entry for the whitelist is added as well. Once the connection works, I will replace it with a security group.
The peering on AWS is accepted and "DNS resolution from requester VPC to private IP" is enabled.
The route has been added to the public route table of the K8s cluster.
When I connect to a pod and try to establish a connection with the following command:
# mongo "mongodb://x.mongodb.net:27017,y.mongodb.net:27017,z.mongodb.net:27017/test?replicaSet=Cluster0-shard-0" --ssl --authenticationDatabase admin --username JackBauer
I get "CONNECT_ERROR" for every endpoint.
What am I missing?
NOTE:
I've just created a new paid cluster and the VPC is working perfectly. Might this feature be limited to paid clusters only?
Well... as the documentation states:
You cannot configure Set up a Network Peering Connection on M0 Free
Tier or M2/M5 shared clusters.
Peering does not work on shared clusters, which, now that I think about it, makes total sense.
I am fairly new to AWS, so I am sure that I am just missing something, but here is my problem:
I have created a VPC with 3 subnets and one security group linked to all of them. The security group accepts inbound from my machine. Next I have created two RDS instances (both PostgreSQL), put them into that VPC and linked them to the VPC security group. Weirdly, I can only connect to one of them, for the other one I get a generic time-out error.
Any idea on what I am missing? I can share any more details if needed.
EDIT: Both RDS instances are deployed on the same subnets and I am trying to connect from my machine on the internet.
To fix your issue, please verify the following:
Both RDS instances have been deployed into the same subnet.
If not, check that both subnets are public and have a route to your internet gateway.
If one RDS instance (the one not working) is in a private subnet, you should consider using a bastion to access it, because by default a private subnet has no route to your internet gateway.
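These checks can be scripted with the AWS CLI (both instance identifiers are placeholders):

```shell
# For each instance, compare public accessibility and subnet placement
for db in my-rds-working my-rds-timing-out; do
  aws rds describe-db-instances \
    --db-instance-identifier "$db" \
    --query 'DBInstances[0].[PubliclyAccessible,DBSubnetGroup.Subnets[].SubnetIdentifier]'
done
```

If `PubliclyAccessible` differs, or the subnets' route tables differ, that usually explains the asymmetric time-out.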
Still, you will find below a simple subnet design if you want to build something secure:
Create 2 public subnets if you want to deploy something directly accessible through the internet (one good practice is to deploy only managed instances there, like load balancers).
Create 2 private subnets with a NAT gateway and the correct route configuration to it.
Create a bastion in your public subnets to be able to access your instances in the private ones.
Deploy your RDS instances into the private subnets and create one security group for each (or one for both if they are really linked).
You will find an AWS Quick Start which deploys the whole network stack for you: VPC Architecture - AWS Quick Start.
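A minimal sketch of that design with the AWS CLI (all CIDRs and names are examples; a real setup would repeat the subnet steps once per availability zone):

```shell
# VPC
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# One public and one private subnet (repeat per AZ in practice)
PUB=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
  --query 'Subnet.SubnetId' --output text)
PRIV=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.101.0/24 \
  --query 'Subnet.SubnetId' --output text)

# Internet gateway for the public subnet
IGW=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW" --vpc-id "$VPC_ID"

# NAT gateway (lives in the public subnet) for the private subnet's egress
EIP=$(aws ec2 allocate-address --domain vpc \
  --query 'AllocationId' --output text)
NAT=$(aws ec2 create-nat-gateway --subnet-id "$PUB" --allocation-id "$EIP" \
  --query 'NatGateway.NatGatewayId' --output text)

# Route tables: public -> internet gateway, private -> NAT gateway
PUB_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$PUB_RT" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW"
aws ec2 associate-route-table --route-table-id "$PUB_RT" --subnet-id "$PUB"

PRIV_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$PRIV_RT" \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT"
aws ec2 associate-route-table --route-table-id "$PRIV_RT" --subnet-id "$PRIV"
```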
I have built a Fargate cluster which is running my website. The service starts the task for the website properly but stops when it gets to trying to connect to my database instance.
MongoError: failed to connect to server [123.456.789.0:27017] on first connect [MongoError: connection 0 to 123.456.789.0:27017 timed out]
How do I add the Fargate cluster to the security group on my database instance? I can't find a public IP address or range for the Fargate cluster, and I can't find any proper guides in the AWS documentation that cover this.
If Mongo is running outside your VPC:
If you are running Fargate inside a private subnet of the VPC, the IP address will be the NAT IP address, found under NAT Gateway.
If it's running inside a public subnet, you can assign a public IP address to your Fargate task using network interfaces.
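For the private-subnet case, the NAT gateway's public IP can also be looked up with the AWS CLI:

```shell
# Public IPs of the NAT gateways in the account/region
aws ec2 describe-nat-gateways \
  --query 'NatGateways[].NatGatewayAddresses[].PublicIp' --output text
```

That IP (or the NAT gateway's /32) is what you would whitelist on the database side.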
Late to the party (2022), but the way to do this is not by using IP addresses but by adding the ECS security group to the inbound rules of whatever you're trying to access (e.g. RDS). So, in the RDS security group's inbound rules, instead of an IP address, you'd enter the ECS security group identifier (example: sg-asdkmh778e7tugfkjhb).
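That rule can be added from the CLI as well (both security group IDs are placeholders; port 27017 assumes MongoDB, use 5432 for PostgreSQL, etc.):

```shell
# Allow the ECS service's security group to reach the database port
aws ec2 authorize-security-group-ingress \
  --group-id sg-0rdsgroupexample \
  --protocol tcp --port 27017 \
  --source-group sg-0ecsgroupexample
```

Referencing a security group instead of an IP keeps the rule valid even when Fargate tasks are replaced and get new IPs.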